Computers, Privacy & the Constitution

UdiKarklinskyFirstPaper 8 - 12 May 2015 - Main.UdiKarklinsky
Complementing Notice with Periodic Disclosures

 

Introduction

Privacy policies and terms-of-use agreements ("notices") are too long, too time-consuming, and too complicated for most people, and therefore do not produce truly informed consent from those who click "agree". To make things worse, notices often require you to consent at an early stage to various collections and uses of data that will span a long period of time and are very hard to assess in advance. This essay suggests a framework, drawn from the behavioral-economic analysis of consumer protection, that I found helpful in thinking about these problems and, most importantly, in generating ideas for a solution. I am familiar with a classmate's interesting essay on notices, but I address the issue from a very different angle.
 

The Framework

 
Bar-Gill and Ferrari discuss "consumer mistakes": cases in which imperfect information and imperfect rationality lead consumers to misperceive the products they use. In certain cases this harms consumers, and the authors argue that the more damaging mistakes are those concerning the individual consumer's own use pattern, as opposed to mistakes about the product's attributes or about average use patterns, because the latter are easier to identify and correct quickly. As a solution, they suggest that where the seller has a long-term relationship with the consumer, and is therefore already voluntarily collecting individual use information, regulation should mandate that sellers disclose the consumer's own use pattern. For instance, credit card customers tend toward optimism and often fail to account for the probability that they personally will end up paying over-limit and late fees. Requiring credit card issuers to disclose each customer's individual fee-paying pattern could help gradually correct individual consumers' misperceptions.
 
This framework, I argue, can be applied to notices. In a sense, consumers' automatic consent to notices, and their continued "pay-with-data" exchanges, reflect a "consumer mistake" that stems from information asymmetries and imperfect rationality (optimism, neglect of small probabilities, and myopic behavior). To be clear, I do not argue that mistakes about overpaying a few dollars a month equal the loss of privacy in harm or magnitude; only that, from a pragmatic standpoint, this framing can be insightful and productive. Like credit card customers, consenting visitors in various online "pay-with-data" exchanges fail to grasp the long-term consequences of consenting to the initial "contract". Mechanisms designed to improve the effectiveness of notices can certainly raise awareness, but they may be inherently limited by their timing, which is usually at the beginning of the relationship. At that stage, even a perfectly comprehensible notice can only convey the "product's attributes": what data a certain website collects, for what purposes, and so on. Because of consumers' imperfect information and propensity toward optimism ("this wouldn't happen to me"), such general notices fail to get through.
 

Thinking About Solutions

 
Bar-Gill and Ferrari argue for mandating ongoing individual use-pattern disclosures where the seller has a long-term relationship with the consumer and is voluntarily collecting individual use information. Websites that present notices for the collection and use of data fit this description perfectly.
 
Alongside improved notices, there could be great benefit in an ongoing, individualized use-pattern disclosure mechanism that gives people a chance to gradually correct their "privacy mistakes." Ideally, a website's disclosure would provide each user with a periodic review of the data acquired from him specifically, together with a general explanation of how that data has been used. Such personalized disclosure could show people what information they have been giving up and enable a more informed reassessment of their personal risks.
 
In the age of Big Data, and given most people's limited technical capabilities, one could worry that such disclosures would still be too complicated for consumers, but in my opinion this depends on design. Throwing masses of raw data at consumers would probably be ineffective, but an automatic "summary" or "highlights" report could be very helpful. For example, a user might benefit from a brief periodic report explaining that an application possesses data about his whereabouts on X days over the last year, month, or week. An even more effective disclosure would highlight specific personal details that were collected and explain how they have been used. The more personalized the disclosure, the more likely it is to reach people, by demonstrating what personal information is exposed and making them think twice about whether it is worth it.
 
The big question is how such disclosures could become reality. Regulatory mandated disclosures could, in my opinion, be an effective solution for the use of data as well. It is important to note, however, that personal data privacy is less regulated than general consumer protection, so applying the idea here is somewhat more ambitious. Moreover, mandatory ongoing disclosures, even if thoughtfully designed by the regulator, might not be as effective as hoped: companies are likely to make them as dry as possible, and it would be difficult to require them to highlight individual risks effectively. In that regard, technical solutions that put the "disclosure" in the hands of an independent, more adequately incentivized third party might have advantages over regulatory mandates. Just as tosdr.org provides accessibility at the notice stage, others could assist on an ongoing basis, providing automatic periodic reports that identify the information you provide to a certain website and, more importantly, present the risks involved in a comprehensible manner. For instance, such software could offer simple automated explanations of the "worst-case scenarios" it deduces: "news website Y holds a list of all the articles you read this year, including this one about 'how to hide that you cheated on your wife.' This information has probably been sold to Z and W and could end up…". Although technical measures exist that let users see, in some circumstances, what data they have provided, in my research I did not find software that offers ongoing, potential-risk-oriented "disclosures" addressing exactly the informational limitations that are so prevalent among users.
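To make the idea concrete, here is a minimal sketch of what such a third-party periodic report generator might look like. It is purely illustrative: the data model (`CollectionEvent`), the category names, and the function `periodic_report` are my own assumptions rather than any existing tool, and a real implementation would first have to obtain the underlying collection events, for example through browser instrumentation.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class CollectionEvent:
    """One observed instance of a website collecting a piece of user data."""
    site: str
    category: str  # e.g. "location", "article-read"
    day: date
    detail: str

def periodic_report(events, site):
    """Build a short, individualized disclosure summary for one site."""
    mine = [e for e in events if e.site == site]
    location_days = {e.day for e in mine if e.category == "location"}
    by_category = Counter(e.category for e in mine)
    lines = [f"{site} collected {len(mine)} data points about you this period."]
    if location_days:
        lines.append(
            f"It knows your whereabouts on {len(location_days)} distinct days.")
    for category, count in by_category.most_common():
        lines.append(f"- {category}: {count} record(s)")
    return "\n".join(lines)
```

The same per-category tallies could then drive the "worst-case scenario" explanations suggested above, by attaching a template warning to each sensitive category.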
 