Computers, Privacy & the Constitution

 -- By GorbeaF - 29 Feb 2024

Society's evolution

 
Society has evolved drastically in recent years. It has grown a new programmable limb that captures and stores every action we take, transforming it into data, learning from it, and evolving in secret. These years are akin to the once-distant printing press revolution, when a single product overwhelmed society with new business models, opportunities, and threats. The analogy between a fifteenth-century revolution and software may seem far-fetched; yet the proliferation of ideas once revered now faces heightened control and surveillance by the limb and by those who leverage it. The limb has become the most crucial constituent of our digital society, and everyone and everything must evolve with it, helping it grow and expand. Our digital society's evolution consequently involves a cultural shift from privacy concern to privacy ignorance. Should the same evaluation apply to the evolution of policing in this new society?
 
It is curious how we condemn policing agencies that "infringe" on our privacy and then turn a blind eye to the international conglomerates that exploit that same privacy. Setting aside the differences in purpose, scope, and regulation, do we trust Meta more than the police? Do we know for certain that the PRISM program ended, or did it merely change designation? Suppose the following hypothetical: a new iPhone user must agree to specific terms and conditions before using the device. Upon startup, an option appears: accept terms under which everything is shared with company "X," or request a refund for the device. Why would I decline when I need the device to live comfortably in this age? Now suppose the same language appears, but company "X" is replaced with the NSA; do we request a refund? The core problem is identical in both hypotheticals; nothing has changed. Yet the human mind registers police surveillance as negative. Why?
 

A new challenge

 
Police agencies have leveraged, and will continue to leverage, the limb and every alteration to it, as they rightfully should; as software evolves, so should the police. Our evaluation of their actions, however, has not evolved: we distrust them. The newly available software capabilities will be a game changer for the police; Cellebrite Pathfinder and LEMA are just the tip of the iceberg. Yet the underlying problems (bias, discrimination, lack of accountability, and lack of transparency) persist. Nothing will make us trust the police without completely rooting out the problems that have festered for decades. The system is broken. Rather than allowing agencies to leverage the limb as it stands, we should push for a complete overhaul of all surveillance agencies, allowing the digital society to complete another phase of its cultural evolution and the agencies to evolve correctly.
 
Continuing on the current path keeps us at odds with our agencies. The underlying issues will fester in new dimensions once we train our models on existing police encounters and surveillance tactics; the snowball will keep growing. PredPol, the predictive-policing software abandoned after persistent findings of discrimination, exemplifies this perfectly.
 
 

The next chapter

 
The digital society has evolved to a stage where we do not care what we accept or how it affects us. The Social Dilemma by Jeff Orlowski portrays a largely true story; nonetheless, we do not care. This digital society will only keep evolving exponentially. Fortunately, there are still people alive who were not forged by smartphones; the rest of us find it annoying and time-wasting to scroll and click through privacy notices and terms and conditions. Privacy is nonexistent, save for a few notable hardware exceptions; we might as well accept that PRISM still has a backdoor into our Macs and Windows machines, and that surveillance agencies waste their time and resources only on superfluous paperwork.
 
Instead of devising ways to leverage the limb into safe, reliable, and transparent police agencies, we should focus our energy and resources on a peer-to-peer police agency system that eliminates the underlying issues.
 
Nothing is perfect, and neither would this be. The limb should be empowered with complete access to everything on the net (assuming it does not already have it). We should create different limbs, each trained on a different country's distinct and individualized culture, and connect them in a peer-to-peer network where they monitor one another to ensure adherence to whatever regulatory and ethical standards we decide to impose. Each could detect faults in the others and potentially address them. Decentralized decision-making would ultimately root out corruption of the limb. The heightened efficiency, transparency, and accountability would benefit our digital society: police interactions would be reviewed and analyzed by the corresponding software in each jurisdiction and reprimanded accordingly.
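The mutual-auditing arrangement described above can be sketched as a toy simulation. Everything here (the jurisdictions, the single "banned action" rule, the data) is a hypothetical illustration of decentralized peer review, not a description of any real system:

```python
from dataclasses import dataclass, field

# Toy sketch: each "limb" is an independently trained model for one
# jurisdiction, and every peer audits every other peer's logged
# decisions against a shared regulatory standard. No central authority.

@dataclass
class Limb:
    jurisdiction: str
    decisions: list = field(default_factory=list)  # (case, action) pairs

    def log(self, case: str, action: str) -> None:
        self.decisions.append((case, action))

    def audit(self, other: "Limb", banned: set) -> list:
        """Flag any decision by another limb that breaks the shared standard."""
        return [(other.jurisdiction, case, act)
                for case, act in other.decisions if act in banned]

BANNED = {"warrantless_search"}  # one shared ethical/regulatory rule

nyc, sf = Limb("NYC"), Limb("SF")
nyc.log("case-1", "routine_patrol")
nyc.log("case-2", "warrantless_search")  # out-of-policy decision
sf.log("case-3", "routine_patrol")

# Every peer audits every other peer.
peers = [nyc, sf]
flags = [f for a in peers for b in peers if a is not b
         for f in a.audit(b, BANNED)]
print(flags)  # the SF limb flags NYC's out-of-policy decision
```

The point of the sketch is only the topology: because every peer checks every other, no single compromised limb can suppress a finding about itself.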
 
This surveillance state could be a near-future reality; the limb expands each day into new sectors and industries. Unfortunately, I believe our response will be reactive rather than proactive. We need to care about our privacy, who controls it, and how it is used before it is too late. Accepting those terms and conditions may be one of the most consequential decisions we make.
 

Proactiveness

The peer-to-peer limb could reduce, and ultimately resolve, the public's mistrust of police agencies. Providing security services that hold law enforcement accountable could be the answer to our underlying dilemma. If we place our trust in international conglomerates, why could we not shift that same trust to an AI entity? How would it be different? Acceptance and misunderstanding would pose barriers that only extensive education could overcome.

The evolution of our digital society is directly tied to the evolution of our core values. Once those values begin to evolve, a domino effect will force the rest to follow, including the core values of police agencies. This cannot be prevented, so let us be proactive rather than reactive.

 

