Law in the Internet Society

AndreaRuedasFirstEssay 2 - 11 Nov 2024 - Main.EbenMoglen

Who Does Surveillance Impact the Most?

Perhaps "who is most impacted" is not the analytically ideal framing. It seems to imply a comparison among impacts, but none is performed and you aren't actually in the impact-measuring business. The essence of your point, I think, is that software is used in complex social structures to automate decision-making, that these automated decisions can have absolutely important effects on individual lives, and that the more vulnerable people are, or the more their social contexts put them at the disposal of such large, complex systems (of employment, health care, social support, incarceration, etc.), the more likely they are to be mistreated as a consequence of those automated decisions.

This is undoubtedly true. We actually don't need too much Deleuze or Foucault to establish this point from the perspective of a US lawyer's training in social realism. It is also true, of course, when the same systems make non-automated decisions based on less information and more individual idiosyncrasy, bias, or hostility in the decision-maker. The focus in the existing draft shifts unsteadily around this insight, from surveillance to "algorithmic" decision-making. More clarity would be helpful: where are we concerned about the "improvement" resulting from more data collection, and where about the use of software rather than human attention to make decisions?

Another good route to improvement is to think about law a little more. When governmental systems make automated decisions, their responsibility to deliver procedural due process does not decrease. Termination of welfare benefits without a hearing in Goldberg v. Kelly violates the Due Process Clause whether the "algorithm" in use is one step long, or results from the output of some complex model fed with all available data about the recipient. If the courts are committed to the principle applied in Mathews v. Eldridge, weighing the risks of erroneous determinations and the degree of harm likely to be caused by error, including the degree of difficulty involved in challenging erroneous decisions, against the government's abiding interest in efficient decision-making, can we fashion constitutional law to protect vulnerable people better?

It is in this context—the search for better remedies—that the value of French theory reaches its minimum, and closer attention to legal detail is of maximum value. The present draft seems to me to rely heavily on broken reeds. How does consent to data collection function as a significant check on power when the consent is extracted by bureaucratic fiat as a condition of receiving healthcare, or school enrollment, or basic sustenance? How would environmental law function if workers or families could "give consent" to living and working in poisoned conditions? What is the value of a "right to correct" one or another detail in a dataset containing hundreds of thousands or millions of points about you, too large for you effectively to analyze without specialized skills and tools, and all subject to being ignored, amplified, reweighted, or automatically corrected by the operation of software you cannot see and do not have the necessary knowledge to debug?

The value of this draft is that it brings us in sight of these central questions. If we are agreed that one of law's fundamental purposes is to protect us against failures of due process in proportion precisely to our vulnerability to injustice, how should it go about doing so in this realm?

 



AndreaRuedasFirstEssay 1 - 20 Oct 2024 - Main.AndreaRuedas

Who Does Surveillance Impact the Most?

-- By AndreaRuedas - 20 Oct 2024

From photographic tracking to fraud alerts to risk assessment in prisons, technologies that use algorithms as decision-makers in life-altering situations are in ever wider use. Virginia Eubanks’ discussion of how racism and classism plague automated welfare systems demonstrates that surveillance systems and modern technology negatively infiltrate the lives of those most marginalized (2017). Her findings point to the intersectional oppressions which can be and are perpetuated by contemporary processes of surveillance, because those processes rest on prejudiced datasets that ultimately shape the systems embedded in everyday technologies. To understand how people are continuously marginalized by modern surveillance technologies, we must understand the intent behind the creation of discipline and control systems, the lack of agency given to living individuals in algorithmic decision processes, and the lack of access to corrective measures, along with the impact of each on current oppression.

The Progression of Systems of Power and Control

In the 18th century, there was a shift from sovereign to disciplinary power - one that focused on order, observation, and hierarchies (Foucault 198). While its first use was to control the spread of the plague, disciplinary systems became a mechanism of power centered “around the abnormal individual, to brand him and to alter him” (199). These were the beginnings of disciplinary institutions, such as mental asylums and prisons, which were based on targeting and reforming the “abnormal” while marking and continuing their exclusion (200). Surveillance mechanisms were created and used to discipline and exclude - biased towards those who did not fit the status quo.

In the present, we live in “societies of control” where disciplinary power interacts with control power - a control based on numerical classifications that is everywhere, fluid, and seemingly everlasting (Deleuze 4-6). This type of control uses current and past information to haunt a person forever because it creates dividuation, in which there is not one holistic, humane representation of a person but many digital versions, based as much on real yet singular moments as on false assumptions or biases in datasets (6). These systems of control permeate almost every aspect of our public and private lives, a reach facilitated by their use of technological algorithms that sort, classify, and judge. For example, prisons, a disciplinary invention that already targets the poor and disabled, the “abnormal,” combine with control systems such as risk assessments to create what Deleuze calls a “new monster,” one that exacerbates previous discrimination and results in racialized and classist mass incarceration.

As described by Solove, one of the main concerns with modern surveillance is its use of information processing for discipline and control - the very mechanisms that Deleuze and Foucault hypothesized were based on the subjugation of undesired, undervalued groups (2015). Solove identifies the main problems in data processing that place already marginalized identities at higher risk of harm: aggregation, which strips privacy and autonomy away from individuals; exclusion, which denies people knowledge of how their data are used and the ability to correct errors, already extremely difficult without the appropriate resources and mobility; and, last, secondary use, which distributes personal information without consent and combines with distortion to create potentially false representations of the self.

The Monster Blinds Us and Ties Our Hands

These four issues contribute to a Kafkaesque reality in which data collection and technological implementation are dangerous because of their known existence but mysterious usage. Collective ignorance about collection methods and applications leads to preemptive changes in our thoughts and behaviors - without knowing what is known about us and how it will be used, we are afraid to assert our civil rights and do not demand accountability. This curbing of our willingness to advocate for human and civil rights allows current and past injustices to continue.

A Case Study of the Automation of Welfare

Eubanks’ analysis of the demise of the public benefits system in the United States concurs with Foucault, Deleuze, and Solove: “We all inhabit this new regime of digital data, but we don’t all experience it in the same way” (Eubanks 5). The discipline and control systems outlined by Foucault and Deleuze impact marginalized people through the four modern mechanisms that Solove defined. Eubanks further claims that this ultimately impacts particular groups, such as poor people, people of color, and women, the most. Why? Because they encounter more data collection systems in their lives - international borders, welfare systems, and highly policed neighborhoods. Access to basic human needs such as housing, food, or healthcare becomes harder to attain for groups who exist at the margins of society and now must interact with systems designed to exclude.

The events in Indiana, in which the automation of welfare made it impossible to continue receiving benefits and barred recipients from fixing algorithmic mistakes, make it clear that low-income communities of color “are targeted by new tools of digital poverty management” (Eubanks 11). While we could try to excuse these impacts as unintended consequences, they are not, and they have never been unforeseeable. History tells a long tale of the surveillance of poor, disabled, and of-color bodies through non-technological and modern systems alike, which act as “forces for control, manipulation, and punishment” (9). The inaccessibility of automated systems only complicates the ability of marginalized people to advocate for themselves. In the Indiana welfare system, people could not find anyone to help them regain benefits and were left without medication, without food, and without housing, showing that when we give all the power to technology and not to people, we strip our society of the ability to protest and reverse its effects.

Sources Cited

Deleuze, Gilles. 1992. “Postscript on the Societies of Control.” October 59: 3-7.

Eubanks, Virginia. 2017. Introduction and Chapter 2 in Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.

Foucault, Michel. 1975. Chapter 3 in Discipline and Punish: The Birth of the Prison. Vintage.

Solove, Daniel J. 2015. “Why Privacy Matters Even if You Have ‘Nothing to Hide.’” The Chronicle of Higher Education.

