Deckey grounds her discussion of privacy rights in Madison’s definition, which does not treat property rights and privacy rights as separate entitlements. In Madison’s account, “any violation of property violates one’s privacy, privacy is not linked to identity per se but it is linked to one’s liberty to exist and exercise their free choice over their property” (pg. 28). Deckey acknowledges that under Madison’s definition, “the need for an independent privacy right, separate from the property right one holds in themselves, would be redundant” (pg. 30). I would like to offer a counterexample in which Madison’s definition might not be sufficient to address people’s privacy concerns.

Today’s dot-com companies thrive on intruding on people’s privacy, each with its own seemingly valid justification. I have worked on a Care-For-Depression algorithm with ByteDance: we detect users at high risk of suffering from depression and fill their for-you page with videos that promote self-care and suicide prevention. The purpose of this project is evidently positive: build an algorithm that tries to care for people’s emotions, especially those who frequently browse emotionally discouraging content that might worsen their symptoms. At the end of the day, ByteDance can’t earn a buck from such an algorithm and even has to spend more money, because it has to invite more mental-health content producers onto its platform. However, it is also truly a privacy violation: people’s deepest vulnerabilities are involuntarily exposed, and they are categorized and stigmatized because of these vulnerabilities. Privacy intrusions used in voting might distort election outcomes, but in this case, on the surface, the intrusion seems to have its benefits. Put under Madison’s account, it wouldn’t constitute a property violation, since it has not induced emotional or private harm, nor does it harm the person’s economic value. It is easy to argue that even though the person’s liberty is sacrificed to some extent, their personal data are used to serve them a good cause.
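To make the shape of the system concrete, here is a minimal sketch of how such a pipeline might look. Everything in it is my own illustrative assumption, not ByteDance’s actual implementation: the content tags, the `estimate_depression_risk` heuristic, the `0.5` threshold, and the feed-mixing step are all hypothetical stand-ins.

```python
from dataclasses import dataclass

# Hypothetical content tags a recommender might attach to watched videos.
DISCOURAGING_TAGS = {"self-harm", "hopelessness", "grief"}

@dataclass
class User:
    user_id: str
    watch_history: list[str]  # tags of the user's recently watched videos

def estimate_depression_risk(user: User) -> float:
    """Toy heuristic: the fraction of recent views tagged as emotionally
    discouraging. A production model would be far more involved."""
    if not user.watch_history:
        return 0.0
    hits = sum(1 for tag in user.watch_history if tag in DISCOURAGING_TAGS)
    return hits / len(user.watch_history)

def adjust_feed(user: User, feed: list[str],
                risk_threshold: float = 0.5) -> list[str]:
    """Silently prepend self-care content for high-risk users. This is
    the step the post objects to: the user never consents to being
    classified, and the altered feed embodies that classification."""
    if estimate_depression_risk(user) >= risk_threshold:
        return ["self-care-101", "suicide-prevention-resources"] + feed
    return feed

if __name__ == "__main__":
    user = User("u42", ["grief", "hopelessness", "cooking", "self-harm"])
    print(adjust_feed(user, ["dance-clip", "travel-vlog"]))
```

Note where the ethical weight sits in this sketch: the classification and the feed change both happen without any user-facing step, which is exactly the involuntary exposure described above.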

I would argue that this situation is problematically understood under Madison’s account, because there are certainly better ways to care for people’s mental health without a clear violation of their privacy. When the system detects negative emotions from the content a user most frequently browses, it can send a notification inviting the user to take a psychological quiz or to start a caring conversation with a chatbot. Whatever the methodology, the idea is that users can dismiss the notification at any point if they don’t consent to the care the system is trying to deliver. The system then doesn’t stigmatize people’s vulnerabilities but offers them a chance to seek help, returning autonomy to users instead of taking their personal data for granted because of a good cause. This might not be a complete solution, as I can see how such notifications could alarm people that they are being watched by the system. But my point is that even when personal data are intruded on for a good cause, it is still a privacy intrusion, and one that, in this case, can be avoided through other mediums of care.
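The consent-first alternative can be sketched as a small change to the same flow: detection still happens upstream, but the system only offers help and does nothing unless the user opts in. Again, the names (`CarePrompt`, `offer_care`, `handle_response`) and the prompt wording are hypothetical, just one way the opt-in step described above could be wired.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CarePrompt:
    user_id: str
    message: str
    dismissed: bool = False

def offer_care(user_id: str, risk_score: float,
               risk_threshold: float = 0.5) -> Optional[CarePrompt]:
    """Consent-first variant: the system only *offers* help
    (a quiz, a chatbot conversation); the feed is never touched here."""
    if risk_score < risk_threshold:
        return None
    return CarePrompt(
        user_id=user_id,
        message=("Would you like to take a short wellness quiz "
                 "or talk to our support chatbot?"),
    )

def handle_response(prompt: CarePrompt, accepted: bool) -> str:
    """The user can dismiss the prompt at any point; declining leaves
    both their feed and their classification without consequence."""
    if accepted:
        return "launch_wellness_quiz"
    prompt.dismissed = True
    return "no_action"  # autonomy preserved: nothing happens without consent

if __name__ == "__main__":
    prompt = offer_care("u42", risk_score=0.75)
    if prompt is not None:
        print(handle_response(prompt, accepted=False))  # -> no_action
```

The design difference from the first sketch is a single gate: no action follows the risk score until the user explicitly accepts, which is what it means for the system to render autonomy back to the user.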
