The Value of Privacy

-- By LindaShen - 29 Oct 2012


As Professor Moglen points out, one critical problem with today’s internet is that it compromises our privacy, offering additional “services” such as targeted advertising when it could just as well operate at zero marginal cost without offering such services.

I don't understand this statement of an idea supposedly mine. It sounds confused to me. I think we could use free software, which is made and distributed at zero marginal cost, to provide these services to ourselves and others in federated as opposed to centralized, privacy-invading ways. Those services provided in such a fashion aren't zero marginal cost, because bandwidth, storage, and other necessary components of service-provision have positive marginal cost. But the actual non-zero cost of providing free services to ourselves and others is small enough to be well worth doing in order to maintain privacy.

While internet users have a general understanding that their privacy is compromised through using products by the likes of Google and Facebook, there exists a question of what this privacy loss costs, both for individual users and for society. Relatedly, there exists a question of whether such privacy impositions create benefits other than the revenue gains for the companies who provide them. Most users would elect to maintain their privacy all else equal, but is there a benefit to giving up some privacy that significantly diminishes the net cost? If not, then why do people continue to behave in ways that facilitate the compromise of privacy?

Is this question, "Why are people not rational?" Given that we know that people aren't rational, why would one ask this particular series of questions?
 

Two Questions

This paper explores two questions associated with the issue above: (1) the empirical question of how much users (facially) value their privacy, as approximated by their pattern of behavior in a variety of situations, and (2) the normative question of how much users should value their privacy, taking into account factors such as possible user ignorance (either of consequences or of alternatives) and social externalities.

Empirical Question

One way to approximate the value that users assign to privacy is by extrapolating from other real-world scenarios where people face privacy compromises. A quick survey of non-internet-related examples suggests that many people are willing to at least partially compromise their privacy regularly. People often sign up for reward cards that allow tracking of their purchases in exchange for store discounts, use dressing rooms despite the possibility of being monitored by personnel, and do not unsubscribe from physical junk mail despite its inconvenience, possibly because they account for the probability that certain pieces of junk mail are items they actually wish to receive.
Having agreed that there's no such thing as a distinction between "Internet examples" and "non-Internet examples" of human behavior, what have we gotten from this supposedly "empirical" form of argument (which, so far as I can see, consists of assertions rather than measurements)?

Thus, people’s willingness to expose personal information on the internet seems consistent with behavior elsewhere.

But that wasn't the conclusion you were supposedly establishing, which was an "approximation" of "the value [people] assign to privacy."

Moreover, with regard to the internet, it is not simply that users are willing to share information with service-providers like Google and Facebook; it is that even while using such services, they frequently share beyond the minimum required. For instance, although Facebook provides customizable privacy filters, people often choose to share information with all “friends” (often hundreds), or “friends of friends” (often thousands).

We don't know that people are "choosing" to do these things because, as I've mentioned before, we have an actual empirical investigation, measuring what percentage of Facebook users achieve something different with their "privacy settings" than what they want, and establishing that for a very significant sample the percentage failing to get what they think they're "choosing" is 100.

While information sharing is a large part of the purpose of both Facebook in particular and the internet more generally, the sharing that occurs on social media sites may foster two related mentalities: (A) that one must “give to get” – i.e., that privacy is a good that one can give up in exchange for service by the host site or for information shared by others; and (B) that surrendering personal information to distant acquaintances or strangers is no big deal.

Neither of which would be relevant to the problem. The problem is that they're all sharing with one Big Friend, and what they're sharing with him isn't what they think they're sharing with anybody else. Because what they're sharing with him isn't only all the stuff everybody publishes, but all the data about what everybody LOOKS AT. What people think they are doing in social networking is fine, and the technology should help them to achieve what they think they're doing. What the technology ought NOT to do is to spy on them in ways they have no idea they are being spied on, involving extraction of data they don't realize they're giving up, and the conduct of trade and analysis of THAT information, over which they have absolutely no control.
 
(A) may arise from the fact that sharing information among friends and networks is a loosely reciprocal process: the more content we share, the more likely there will be feedback content (either in the form of commentary or of sharing by others) that makes the sharing process more worthwhile. (B) may arise from the fact that the internet has generally made personal facts about us more public, and is perhaps enhanced by a cultural exhibitionism / voyeurism that has developed in the process, or a groupthink that condones it. A third common mentality is that sharing information with a disembodied entity is less disconcerting than sharing information with identifiable individuals. These mentalities create a somewhat inhospitable environment for convincing individuals of the need to zealously guard their privacy.

Only if those were the only facts available for discussion. But if you were to describe the problem with greater completeness and accuracy, it would turn out that those mental reservations and defenses have little to do with the resulting conversation.
 

Normative Question

The cost of compromising privacy is fairly user-dependent. Some people have more to hide than others, and some care more about disclosure. (For instance, Professor Moglen pays for groceries with cash, while I use reward cards, allowing stores to track purchase patterns.) However, is this because I’m less informed about the risks of sharing my personal information? Am I more selfish for not internalizing the costs that my information-sharing imposes on others?
Yes, but that hasn't occurred to you yet as plausible, because you're thinking once again in discrete single-transaction terms. Suppose you asked instead, "How much of the information I give away results in reductions of other people's privacy besides my own?" At that point, you would be asking an ecological rather than a psychological question: not "how does this make me feel?" but "how many other people am I hurting?"

There are certainly potentially serious risks to sharing personal information. While most users understand the nuisance of targeted advertisements, they may not fully appreciate the impact of aggregation or prediction: insurance companies may learn individual facts and demographic behavioral patterns and use them to alter premiums, and the government may do much more. Additionally, information shared about us may be out of our control: through others' sharing, Google and Facebook may possess my conversations and images even if I personally use neither. Moreover, those who possess this data can make reliable predictions about individuals with my characteristics, as long as people with sufficiently similar characteristics opt not to safeguard their data. Beyond posing a commons problem, users who place a low subjective value on privacy can easily undermine the privacy of those who value it much more by failing to internalize the latter's costs, thereby depriving the latter group of choice.

You mean something like, "people who don't mind dirty streets may, by littering, cause distaste to more sensitive people?" or "people who like smoking and subject their children to an atmosphere permeated by cigarette smoke will permanently harm their children's health?" Even the former is an appropriate subject of public regulation, of course. But the latter is also a matter for mature, critical and unsparing self-judgment. Don't you think your "normative" analysis ought to deal more forthrightly with the actual normative issues?

On the flip side, privacy compromises may at least partially function to customize services we (like to) receive. For example, Amazon can use our purchase history to deliver desirable recommendations and discounts. And while Facebook photos may allow surveillance of our whereabouts, similar tracking enables tools like Google Flu Trends, one of many ways in which personal data aggregation can deliver social good. Ultimately, some privacy compromises are inevitable to enable the provision of wide-scale social services, from screening security threats to controlling disease. The goal is to minimize the intrusion after balancing the gravity of the social interest against the risk that information-gathering poses (including the risk that our information is used nefariously), which may not always align with individuals' incentives. Because it is difficult for individuals to assess the risk that their information will be used nefariously, and because it is not always in individuals' (perceived) self-interest to protect their data, they may engage in behavior that is risk-seeking on a more global scale, at least from the more informed user's perspective.

As I've indicated, I think the route to a better essay is a more factually complete and accurate description of the nature of the problem, as well as a normative discussion that deals with the actual normative issues, which are the result of the ecological rather than personal nature of the consequences of privacy destruction.

 