Computers, Privacy & the Constitution

Big Data and Predictive Policing

-- By WillPalmer - 15 May 2015



Section I: What Is Predictive Policing?

Predictive policing is a method of law enforcement that relies on data to predict when and where crimes are likely to occur. Also known as “proactive policing,” it serves a wide array of functions, from informing police departments’ general resource allocations to supplementing an officer’s reasonable suspicion that a suspect is engaging, or will imminently engage, in criminal activity. The use of big data to predict crime is not new. The city of Memphis, Tennessee, for instance, has used a program known as Blue CRUSH (Crime Reduction Utilizing Statistical History) to target likely areas of violent and property crime, resulting in a 25 percent reduction in crime from 2006 to 2013. Viktor Mayer-Schonberger & Kenneth Cukier, Big Data 158 (2013). The federal government has also rolled out two similar programs in recent years, but ultimately discontinued both due to privacy concerns. Andrew Guthrie Ferguson, Big Data and Predictive Reasonable Suspicion, 163 U. Pa. L. Rev. 327, 360-62 (2015).
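The core technique behind a hotspot program like Blue CRUSH can be illustrated in a few lines of code. The sketch below is a hypothetical simplification, not the actual Blue CRUSH system: it bins historical incident coordinates into grid cells and ranks cells by past frequency, on the (contestable) assumption that past concentration of crime predicts future concentration. All data and parameters here are invented for illustration.

<verbatim>
from collections import Counter

# Hypothetical illustration of hotspot-style prediction: bin historical
# incidents into grid cells and rank cells by past frequency. Real systems
# like Blue CRUSH are far more elaborate; this shows only the core idea
# that "statistical history" drives resource allocation.

CELL_SIZE = 0.01  # assumed grid resolution, in degrees of latitude/longitude

def cell(lat: float, lon: float) -> tuple:
    """Map a coordinate to a discrete grid cell."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

def hotspots(incidents: list, top_n: int = 3) -> list:
    """Rank grid cells by historical incident count."""
    counts = Counter(cell(lat, lon) for lat, lon in incidents)
    return counts.most_common(top_n)

# Fabricated sample data: (latitude, longitude) of past incidents
history = [(35.149, -90.049), (35.150, -90.048), (35.149, -90.050),
           (35.120, -90.010), (35.149, -90.049)]

for grid_cell, count in hotspots(history):
    print(f"cell {grid_cell}: {count} past incidents -> patrol priority")
</verbatim>

Even in this toy form, the design choice is visible: the system predicts places, not people, which is why hotspot policing raises fewer individualized-suspicion problems than the person-level predictions discussed below.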

Section II: Constitutional Issues

The use of big data as the basis for a police officer’s “reasonable suspicion” that a person is associated with criminal activity implicates the Fourth Amendment’s protection against unreasonable searches and seizures. The constitutionality of a police stop is judged by the standard of “reasonable suspicion.” Terry v. Ohio, 392 U.S. 1 (1968). This standard must be objective and based on specific and articulable facts, and is reviewed by courts under a “totality of circumstances” test, which weighs all relevant factors. Andrew Guthrie Ferguson, Big Data and Predictive Reasonable Suspicion, 163 U. Pa. L. Rev. 327, 339 (2015). Already, this analysis involves a staggering amount of information. For instance, the federal government’s Multi-State Anti-Terrorism Information Exchange (MATRIX) program, since discontinued, allowed an officer to see, with a single search, a person’s criminal history, credit information, driver’s license information, vehicle registration, arrests, utility connections, UCC filings, concealed weapons permits, FAA aircraft and pilot licenses, hunting and fishing licenses, professional licenses, and voter registration records. Id. at 361.
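What makes a system like MATRIX constitutionally significant is the aggregation itself: one query fans out across many otherwise separate record systems and returns a unified dossier. The sketch below illustrates only that architecture; the source names, fields, and data are invented for illustration and are not drawn from the actual, discontinued MATRIX system.

<verbatim>
# Hypothetical sketch of single-query record aggregation in the style of
# MATRIX. All sources and records here are fabricated; the point is that
# one search key unifies many otherwise separate databases.

SOURCES = {
    "criminal_history": {"W. Doe": ["2009 misdemeanor"]},
    "vehicle_registration": {"W. Doe": ["TN plate ABC-123"]},
    "hunting_licenses": {"W. Doe": ["2014 TN deer license"]},
    "voter_registration": {"W. Doe": ["Shelby County, TN"]},
}

def profile(name: str) -> dict:
    """Fan one query out across every source and merge the hits."""
    return {source: records[name]
            for source, records in SOURCES.items()
            if name in records}

# A single search returns the whole aggregated dossier.
for source, records in profile("W. Doe").items():
    print(f"{source}: {records}")
</verbatim>

No single record in such a system is sensitive on its own; the Fourth Amendment pressure comes from the merge step, which assembles a picture no individual officer could have gathered unaided.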

Granted, big data also has the power to curb certain abuses of police stops; by creating more reliable predictions of who is engaging in criminal activity, for instance, it could be used to curb racial profiling and to hold police accountable for their decision-making. Moreover, using publicly available data to determine whether a person is connected to a crime that has already occurred does not seem constitutionally suspect.

What is troubling, however, is the use of such data to stop and detain individuals based on a predicted, future crime -- i.e., where no crime has occurred and where the police officer has not observed any evidence of ongoing criminal activity. Today, “reasonable suspicion based on prediction remains the stuff of science fiction. Police have begun to predict areas of heightened criminal activity and may predict likely troublemakers . . . but predictive analytics cannot yet tell police whom to stop for a crime not yet committed.” Id. at 384-85 (emphasis added). Yet, barring some human intervention, it seems inevitable that at some point, the amount of information available on any given individual will suffice to justify an officer’s reasonable suspicion that a suspect is about to engage in criminal activity, even when that officer has not personally observed anything to justify that suspicion.

Section III: How Machine-Generated Data Exacerbates these Constitutional Issues

The question we should be asking ourselves is whether there are certain types of data or predictive tools that we want to categorically exclude from this analysis. Obviously, a police officer should be able to prevent a crime from occurring when they observe individuals acting in a manner that suggests imminent criminal activity; Terry v. Ohio held that an officer was justified in detaining and searching three men based on reasonable suspicion, derived from observations of their physical movements, that they were about to rob a store. Terry v. Ohio, 392 U.S. 1, 5-7 (1968). And certain information, such as a known criminal record, may be relevant to such a determination, within reason.

Yet new developments in technology are poised to remove the human element from the decision-making process. In fact, this is already beginning to happen. The U.S. Department of Homeland Security has developed a program known as FAST (Future Attribute Screening Technology), which uses a person’s “vital signs, body language, and other physiological patterns” to identify suspected terrorists. Viktor Mayer-Schonberger & Kenneth Cukier, Big Data 159 (2013). Though the program is reportedly only 70 percent accurate, that figure may already be high enough to support a reasonable suspicion, and advances in technology will only make such predictions more accurate.
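It is worth pausing on what “70 percent accurate” means at screening scale. The arithmetic below is a hypothetical illustration, not data from FAST: the 70 percent figure is the only number taken from the reported accuracy claim, and treating it as both the true-positive and true-negative rate, together with the assumed base rate and assumed screened population, are my own assumptions.

<verbatim>
# Hypothetical base-rate arithmetic. Only the 70% figure comes from the
# reported accuracy claim; interpreting it as sensitivity = specificity
# = 0.70, and the base rate and population below, are assumptions.

accuracy = 0.70          # assumed true-positive and true-negative rate
base_rate = 1 / 100_000  # assumed prevalence of actual attackers

screened = 1_000_000
actual = screened * base_rate                          # 10 true threats
flagged_true = actual * accuracy                       # 7 correctly flagged
flagged_false = (screened - actual) * (1 - accuracy)   # ~300,000 innocents

precision = flagged_true / (flagged_true + flagged_false)
print(f"people flagged: {flagged_true + flagged_false:,.0f}")
print(f"chance a flagged person is a real threat: {precision:.5%}")
</verbatim>

Under these assumptions, nearly every person the system flags is innocent, yet each flag could still be offered to a court as individualized “suspicion” -- which is precisely why the accuracy framing deserves scrutiny before it migrates into Fourth Amendment doctrine.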

Professor Moglen observed in class that, since the majority of human actions are unconscious, the majority of the information captured by devices in the twenty-first century will be about our unconscious. This raises two troubling questions: first, if a computer, looking at biometric data in conjunction with other data, can determine whether an individual is about to commit a crime with greater accuracy than a police officer can, can we justifiably automate the decision-making behind police stops? Second, assuming the answer to the first question is yes, can we then justify convictions for future crimes?

The first question implicates free will -- i.e., there is always the possibility, however small, that a person will deviate from their predicted behavior and not commit a crime, despite all predictive evidence to the contrary. The same can be said of human predictions, however: the police officer in Terry v. Ohio was reasonably certain the men were about to commit a crime, but it is always possible that some intervening event, including their own autonomous choice, could have prevented any crime from ever occurring. Many, if not most, people would concede the reasonableness of such a search, notwithstanding the possibility that a crime might never have occurred absent police intervention. Yet it seems less likely that people would be equally accepting of an automated process that predicted, based on unconscious behavior, that a person is going to commit a crime, even if that process is more accurate than the police officer’s judgment. If computers can record and process our unconscious behavior and make highly accurate correlative predictions based on it, are we comfortable handing that decision-making over to an automated process, even though (1) we cannot “know” (because of its sheer size) all the information on which the decision relies, and (2) neither we nor the computer can establish or understand the actual chains of causality behind such processes, only the high degree of correlation between certain factors and the commission of a crime?

The second question may seem absurd, because one cannot be punished for a crime one did not commit, but we already use predictive data to determine sentencing and parole. The use of big data in sentencing is merely a logical extension of the actuarial approach already embodied in the federal sentencing guidelines, which treat criminal history as a factor in determining the likelihood of recidivism. As for parole, more than half of all U.S. states’ parole boards already rely on predictions based on data analysis. Viktor Mayer-Schonberger & Kenneth Cukier, Big Data 158 (2013). Even though these predictive tools are merely one element informing parole boards’ decisions, it seems very likely that, absent some rule to the contrary, their role in the decision-making process will only grow in proportion to their predictive power.
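The actuarial logic referred to here can be made concrete. The following is a hypothetical toy risk score, not any instrument actually used by a sentencing court or parole board: a handful of weighted history factors pushed through a logistic function to yield a “probability of recidivism.” Every factor, weight, and input below is invented for illustration.

<verbatim>
import math

# Hypothetical toy actuarial risk score. The factors and weights are
# fabricated; real instruments are validated on historical outcome data
# but share this basic weighted-factor structure.

WEIGHTS = {
    "prior_convictions": 0.40,      # per prior conviction
    "age_at_release": -0.05,        # older releasees score lower
    "prior_parole_revocation": 0.80,
}
INTERCEPT = -1.0

def recidivism_risk(factors: dict) -> float:
    """Logistic transform of a weighted sum of history factors."""
    score = INTERCEPT + sum(WEIGHTS[k] * v for k, v in factors.items())
    return 1 / (1 + math.exp(-score))

risk = recidivism_risk({"prior_convictions": 3,
                        "age_at_release": 42,
                        "prior_parole_revocation": 1})
print(f"predicted recidivism probability: {risk:.0%}")
</verbatim>

The danger the paper identifies is visible in the structure itself: the output is a single, authoritative-looking number, exactly the kind of figure that risks crowding out the other elements of a parole board’s judgment as its predictive power improves.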

Section IV: Conclusion

On what basis can we reject predictive technology that is more accurate and less biased than any human police officer could ever be? Unlike a police officer’s judgment, predictive policing based on big data is not limited in what information it can consider. The practical, physical limitations on an officer’s judgment left room for the possibility that free will exists; the introduction of big data into the analysis leaves ever less room for it. To take a stand and say that certain data and predictive tools should be categorically excluded from these constitutional questions amounts to a moral objection, insofar as it asserts the primacy of free will in the absence of much evidence to support it.

Yet such a stand may be a necessary one. As Professor Moglen noted, we currently have the ability to segment children by intelligence by around age ten, and we could, with reasonable accuracy, assign people’s roles in society based on their intelligence and learning capacity early in life. Undoubtedly, most people would object to such a sweeping denial of children’s ability to make choices, to change, and to grow as individuals. Resisting predictive policing requires a similar moral objection, if we are to withstand the allure of predictive tools that become more accurate, and more reliable, every day.



