Law in Contemporary Society


Google knows roughly who you are, roughly what you care about, roughly who your friends are. What else will it know?


The Infrastructure

          We tell our search engines things that we would never tell our closest friends, believing that we are cloaked in anonymity. With Google’s expanding ability to collect, keep, and analyze data, this belief may be increasingly misguided. Google’s new privacy policy, which took effect on March 1, 2012, applies uniformly to products like Gmail, Calendar, and Google search. On its face, the unification promotes transparency by limiting the number of privacy policies that Google users must review to inform themselves of the rules to which they are subjected. However, the new policy also allows the services to share user data with each other. This creates a communal pool of user data that can be used to personalize the user’s experience, but also to identify key personal characteristics such as interests, location, and true identity (if the last of these has not already been explicitly provided to Gmail during the sign-up process).

          Google collects this data through channels embedded in the services it provides, free of charge, to its users. Google acts as a central hub to which millions of users connect like so many spokes, seeking to reach out to each other via Gmail or Google Docs (which require an account to access), or to sort through information using Google search (which does not require an account to access, though the allure of improved personalized search results may encourage users to log in). In exchange for these services, users forfeit control over the flow of their data, and implicitly permit Google to act as a centralized data repository. Because the services can now share data with each other under the new privacy policy, a rich tapestry of each user’s activity can be created and stored on Google’s servers.
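
          To make the pooling mechanism concrete, the sketch below models how per-service activity logs, once keyed to a single account, can be merged into one unified profile. It is a toy illustration only; the service names, fields, and data are hypothetical and do not describe Google's actual systems.

```python
from collections import defaultdict

def merge_service_logs(*service_logs):
    """Pool per-service event logs into one combined profile per account ID."""
    profiles = defaultdict(list)
    for log in service_logs:
        for event in log:
            profiles[event["account"]].append(
                {"service": event["service"], "data": event["data"]}
            )
    return profiles

# Hypothetical per-service logs, each individually unremarkable.
search_log = [{"account": "u1", "service": "search", "data": "cancer symptoms"}]
mail_log = [{"account": "u1", "service": "mail", "data": "appointment with Dr. Lee"}]

# Once joined on the account, one user yields a single activity trail
# spanning both services -- the "rich tapestry" described above.
print(merge_service_logs(search_log, mail_log)["u1"])
```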
 

The Problem

          The rapid rate of technological development makes it difficult to predict potential future uses of data at the time of its voluntary provision. Users may psychologically divorce their online personas from their flesh-and-blood selves, but the two are no longer separable. Being careless with information on Facebook can now get teachers and bartenders fired. In a telling statement about how Google may treat the data it collects, then-CEO Eric Schmidt quipped, “if you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place.” The troubling consequence of such a stance is that Google users are encouraged to restrict their online behavior to comport with an unarticulated and uncertain standard of propriety, at the risk of having their behavior used against them. In 2009, searching “airport security” or “homemade plastic” on Google could land you on a terrorist watch list.

          Landing on government watch lists was the result of using a single Google service incautiously. With the synergistic effect of data sharing between services, user-produced data could become more dangerous in the future, particularly if more parties were able to access it. This danger is illustrated by the example of a health insurance provider that is able to access Google data. If a user has been searching “cancer symptoms” and purchasing painkillers or energy supplements through Google’s one-click Checkout service, the insurance company may promptly raise the user’s insurance rates. In many cases, acquiring the identity of the user who produced the data would not be an issue; Google Checkout requires an account, which requests real-name identification. It is unlikely that Google is currently so invidious (or so indifferent to backlash) that it would allow an insurance company to acquire any user data. However, the data provided by users today is not going to disappear, and future uses remain potentially infinite.
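
          To see how little sophistication such an inference would require, consider the toy rule sketched below. The flagging logic, field names, and data are all invented for illustration; no insurer or Google service is known to work this way.

```python
def flag_for_review(profile):
    """Flag an account when pooled search and purchase records co-occur."""
    searches = {e["data"] for e in profile if e["service"] == "search"}
    purchases = {e["data"] for e in profile if e["service"] == "checkout"}
    searched_illness = any("cancer" in s or "symptom" in s for s in searches)
    bought_remedies = bool(purchases & {"painkillers", "energy supplements"})
    return searched_illness and bought_remedies

# Neither record alone proves anything; together they trigger the rule.
profile = [
    {"service": "search", "data": "cancer symptoms"},
    {"service": "checkout", "data": "painkillers"},
]
print(flag_for_review(profile))  # True -- enough, on this toy rule, to raise a premium
```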

          Short of abstaining from the internet entirely, there is no single solution to this problem. There are actions Google could take to mitigate the potential harm of user-provided data, but the impetus must come from external pressure by the users themselves. For example, a scrubbing process between the user terminal and the Google server could strip the data of all information unique to the user (such as IP address and username), preventing such data from ever being stored on the server. Google would then retain only aggregated data for marketing and research purposes. Another alternative is to treat usage data the way medical data is treated—require the user’s written consent prior to any release. Unfortunately, solutions of greater complexity face questions that do not lend themselves to bright-line resolutions.
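
          A minimal sketch of the scrubbing step proposed above: fields unique to the user are dropped before an event reaches storage. The field names are assumptions for illustration; a real intermediary would also have to catch identifiers embedded in the payload itself.

```python
# Fields assumed, for illustration, to uniquely identify the user.
IDENTIFYING_FIELDS = {"ip_address", "username", "account_id", "device_id"}

def scrub(event: dict) -> dict:
    """Return a copy of the event with user-unique fields removed,
    so only the anonymous remainder is ever written to the server."""
    return {k: v for k, v in event.items() if k not in IDENTIFYING_FIELDS}

raw = {"ip_address": "203.0.113.7", "username": "kirill", "query": "cancer symptoms"}
print(scrub(raw))  # {'query': 'cancer symptoms'} -- useful in aggregate, unlinked to a person
```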

 

The Questions

          People’s voluntary provision of data is protected by the First Amendment. If a user wants to use a Google service, the cost of doing so is the information that Google then knows about that user. As long as the exchange is voluntary, government regulation is likely to face stern backlash on constitutional grounds. However, if the legislature were to punish only impermissible uses of this voluntarily provided user data, the user’s First Amendment right would not be harmed. Yet such a solution would require an answer to the question: when does voluntarily provided data become impermissibly used? Furthermore, ownership of the data is entirely unclear. What actionable property rights are there in data that was voluntarily dropped into a repository, even if the user did not know he was providing data?

          The European Union has proposed a legislative solution: the creation of a statutory right to be forgotten. This would allow people to have their data deleted if there is no “legitimate ground” for retaining it. This puts some power back in the hands of users who have already provided data to the repository, but with a principle as fluid as “legitimate ground,” Google is likely to find a justification for retaining most of its data, sapping the right of its bite. However, if the legitimacy of the ground were subject to judicial review, the right could retain protective strength.
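
          A brief sketch of why that fluidity matters: under such a statute, the entire dispute collapses into a single predicate, the test for “legitimate ground.” Everything below is hypothetical; the point is that whoever supplies the predicate effectively decides what is forgotten.

```python
def handle_deletion_request(store, user_id, legitimate_ground):
    """Delete a user's records except those the holder can justify retaining."""
    records = store.get(user_id, [])
    retained = [r for r in records if legitimate_ground(r)]
    store[user_id] = retained
    return len(retained) < len(records)  # True only if something was actually deleted

# If the data holder supplies the predicate, nearly everything survives:
store = {"u1": [{"data": "search history"}, {"data": "purchase log"}]}
deleted = handle_deletion_request(store, "u1",
                                  lambda record: True)  # "improves the user experience"
print(deleted, store["u1"])  # False -- nothing was deleted
```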

          A successful solution cannot prevent Google from functioning—the free exchange of information is critical in our society, and the concept of restricting it in order to protect the user is unsettlingly Orwellian. If an insurance company cannot gain access to a user’s search history, searching symptoms on Google can even be empowering to patients who can then walk into a doctor’s office feeling better informed about their condition. A solution that begins to make progress will have to prevent the flow of information from being used against the users who create it. Is this possible to do, without shutting down the search and sharing services we treasure? And in a disparate, disconnected group of users, where will we get the unified political will?

