Law in Contemporary Society

The Expanding Bounds of Identifiability

The Tools

On 3/1/2012, Google’s new privacy policy went into effect. The policy applies uniformly to Google products like GMail, Calendar and Google Search, and unifies the 70+ policies that previously governed data sharing in the individual products. Google claims the change will create a more optimized, individually tailored internet experience. On its face, the unification promotes transparency by limiting the number of policies Google users must review to inform themselves of the rules to which they are subjected. However, the new policy also allows communication between the products, so that a user’s actions in one product can be reflected in the user’s experience of the others. For example, watching a video tour of a law school on YouTube may result in a Phoenix Law advertisement above the user’s GMail inbox. While this seemingly innocuous feature could marginally improve the user’s experience, it also gives rise to a problem increasingly characteristic of online services.

Individually, each of Google’s services collects only a limited amount of data by which those with access to the server logs can identify the user. When the services begin to share such data, users’ unique combinations of activity make them far more identifiable. Beyond being identifiable, users also generate a rich pattern of behavior from which anyone with access to the server logs can build a predictive algorithm for their future actions. Services like Facebook, whose superficial purpose is social networking, make no effort to hide the fact that the service collects user information and makes predictions about users’ desires and future activities. Google’s policy, however, states that Google may share “non-personally identifiable information” publicly and with its partners.
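
To make the aggregation point concrete, here is a minimal Python sketch, with entirely hypothetical log data and field names, of how merging per-service records tied to the same account turns several individually common attributes into one nearly unique combination. It illustrates the general mechanism only; it is not a description of Google’s actual systems.

    # Hypothetical per-service logs; every value here is invented for illustration.
    from collections import defaultdict

    search_log = {"u1": {"zip": "10027", "queries": ["law school rankings"]},
                  "u2": {"zip": "10027", "queries": ["bar trivia nyc"]}}
    video_log = {"u1": {"zip": "10027", "watched": ["law school campus tour"]},
                 "u3": {"zip": "94103", "watched": ["cooking videos"]}}

    def join_profiles(*logs):
        """Merge every service's records that share the same account id."""
        merged = defaultdict(dict)
        for log in logs:
            for uid, record in log.items():
                merged[uid].update(record)
        return merged

    profiles = join_profiles(search_log, video_log)
    # "u1" is now described by location, search history, and viewing history at once;
    # each attribute alone is common, but the combination may single out one person.
    print(profiles["u1"])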

However, as Google intertwines its available information, and as people’s surging Facebook activity feeds ever more data to Facebook’s servers, there is a growing pool of user information that can predict a user’s characteristics to a mind-boggling extent. Even those with access only to the publicly listed information on Facebook are able to extract unlisted information. In 2009, a group of MIT students designed an algorithm that predicted a male user’s unlisted sexual orientation from his Facebook profile. While they had obvious difficulty confirming the algorithm’s results, the program correctly identified the orientation of all ten users for whom the students had independent verification. With the amount of information that users currently provide to Google, the predictive algorithms Google could create have the potential to deduce much more about the user than the user ever intended to reveal.
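
Press accounts of the MIT project describe it as relying on the make-up of a user’s publicly visible friend list. Whether or not that description is exact, the Python sketch below illustrates that general homophily-based idea; it is an illustrative stand-in, not the students’ code, and the field name and 0.3 threshold are assumptions chosen only for the example.

    # Guess an unlisted trait from the fraction of friends who list it publicly.
    def predict_unlisted_trait(friend_profiles, trait, threshold=0.3):
        """Return a rough guess about `trait` based on friends who disclose it."""
        listed = [p for p in friend_profiles if trait in p]
        if not listed:
            return "no basis for prediction"
        share = sum(1 for p in listed if p[trait]) / len(listed)
        return "probably yes" if share >= threshold else "probably no"

    # Hypothetical public profile data gathered from a friend list.
    friends = [{"interested_in_men": True}, {"interested_in_men": False},
               {}, {"interested_in_men": True}]
    print(predict_unlisted_trait(friends, "interested_in_men"))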

The Legal Barriers

This type of subtle invasion of privacy is one the law will have trouble curbing. If we take Cohen’s functional approach to legal decisions and ask what such a law would have to do, we run into several problems. Much of the information provided to these sites’ servers is useless in its individual bytes, so there is no single characteristic that can be withheld to deplete the power of these invasive mechanisms. Further, almost every invasive mechanism confers a corresponding perceived benefit: Google’s Docs and Calendar features grant the servers access to the user’s whereabouts and interests, but they also let the user pull up his schedule and documents instantly from any terminal he chooses. Facebook’s “Check-In” feature lets the servers track users’ whereabouts and, when the check-in location is a store or restaurant, their tastes in products; it also lets users meet when they find themselves in the same vicinity. Because the loss of privacy is rarely felt immediately, many users may be unwilling to forfeit the benefits until the consequences hit home.

This issue will trouble the courts because it does not lend itself to a beloved bright-line rule. The extent to which privacy is invaded is the extent to which user information is impermissibly used to intrude on the user’s persona and communications, and this in turn varies directly with the capacity and creativity of the intruder in deducing a greater whole from the sum of the data the user sends to the server. Thus, the question for the court becomes whether the data is being used in an impermissible way, beyond the purposes for which the user voluntarily provided it. That question in turn invites transcendental nonsense, as terms such as “voluntariness” and “permissibility” could fall prey to purely legal operational definitions.

Lastly, there is a practical problem the courts will confront in attempting to resolve such disputes through legal channels. One of Holmes’ observations about the law describes it as a habit; the rules of our fathers and neighbors carry forward by momentum. It takes a gradual chipping away at the existing understanding of privacy to effect a change that recognizes such a right on the internet, a place without a physical location in which the right can be violated. The law is not a speedy tool of change. Only in 2010, in US v. Warshak, did a federal appeals court acknowledge a right to privacy in private e-mail, ruling that the government must obtain a search warrant before seizing it. Even then, the court grounded that right in the similarities between e-mail and traditional mail. If the right to privacy being violated by the data-gathering social tools of the 21st century is to be recognized at law, it will likely have to be analogized to another, established right to privacy. That is particularly difficult given how quickly technology and social tools evolve; the rights they violate may not resemble any particular set of established rights long enough for the law to protect them. Perhaps the efficient solution is technological rather than legal, but until some safeguard is implemented, the servers will continue to hone their predictive powers unchecked.


-- By KirillLevashov - 13 Feb 2012

