Law in Contemporary Society

Google knows roughly who you are, roughly what you care about, roughly who your friends are. What else will it know?

The Infrastructure

          We tell our search engines things that we would never tell our closest friends, assuming that we are cloaked in the web’s anonymity. With Google’s expanding ability to collect, keep, and analyze data, that anonymity may be evaporating. Google’s new privacy policy applies uniformly across products like Gmail, Calendar, and Google search. On its face, the unification promotes transparency by reducing the number of privacy policies that users must review to learn the rules to which they are subject. However, the new policy also allows the services to share user data with one another. This creates a communal pool of user data that can be used to personalize the user’s experience, but also to identify the user’s interests, concerns, and true identity (if the last of these has not already been explicitly provided to Gmail during the sign-up process).

          Google’s ability to collect this data rests on the array of services it provides, free of charge, to its users. It is a central hub to which millions of users connect like so many spokes, whether to reach one another via Gmail or Google Docs (which require an account) or to sort through information using Google search (which does not, though the pull of improved personalized results may encourage users to log in anyway). In exchange for these services, users are willing to forfeit control over the flow of their data and to let Google act as a centralized data repository. Because the services can now share data with one another under the new privacy policy, a rich tapestry of each user’s activity can be woven and stored on Google’s servers.

The Problem

          The rate and direction of technological development make it difficult to predict what voluntarily provided data could be used for in the future. Users may expect the online information they provide to remain divorced from their flesh-and-blood selves, but that is no longer the case. Being careless with information on Facebook can now get teachers fired. In a telling statement about how Google may treat the data it collects, then-CEO Eric Schmidt quipped, “If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place.” This is a troubling proposition for those of us who have browsed web pages out of sheer curiosity, and who now have to worry about how that search history could be interpreted, and what the consequences could be. In 2009, searching “airport security” or “homemade plastic” on Google could land you on a terrorist watch list.

          Landing on a government watch list was the consequence of using a single Google service incautiously. With the synergistic effect of data sharing between services, the data users produce could become even more dangerous in the future. Consider the potential uses of this data if an insurance company were able to get its hands on it. If a user has been searching “cancer symptoms” and purchasing painkillers or energy supplements through Google’s one-click Checkout service, the insurance company may promptly raise the user’s rates. Google Checkout, after all, requires an account, which requests real-name identification. Google may not currently be so invidious. But it is worth asking: what does the information I provide to Google say about me?

The Questions

          People’s voluntary provision of data is protected by the First Amendment freedom to exchange information. If a user wants to use a Google service, the price is the bit of information that Google then knows about that user. As long as this exchange is voluntary, any government regulation of it is likely to face stern backlash on constitutional grounds. That includes proposals like “Do Not Track” options, modeled on the “Do Not Call” lists for telemarketers. In the telemarketing context, however, the user never voluntarily initiated the exchange in the first place.

          The European Union has proposed, among other things, a right to be forgotten, which would allow people to have their data deleted if there is no “legitimate ground” for retaining it. This puts some power back in the hands of users who have already provided data to the repository, but with a principle as fluid as “legitimate ground,” Google is likely to find a justification for retaining most of its data. The proposal could one day evolve into a reasonably successful solution, but it will require the articulation of a standard that punishes only impermissible use of voluntarily provided data. That is a question with no simple answer: at what point does voluntarily provided data become impermissibly used? What property rights exist in data that a user dropped into the repository voluntarily, even if unknowingly?

          By the time users have their insurance rates raised for Googling “breathing problems,” it will be too late to prevent the abuse of user-provided data. But a successful solution cannot prevent Google from functioning: the exchange of information is critical in our society, and the notion of restricting it in order to protect the user is unsettlingly Orwellian. Today, searching symptoms on Google can even empower patients, who walk into the doctor’s office feeling better informed about their condition. A solution that makes real progress will have to prevent the flow of information from being used against the users who create it. Is that possible without shutting down the search and sharing services we treasure? And in a disparate, disconnected group of users, where will we find the unified political will?
