Law in the Internet Society


A Bumbling Giant is Eating my Data: The "Hunger for Necessities" and the Justification for Dispossession

-- By EsmeraldaHernandez - 25 Oct 2024

Google AI Told Me It’s Okay to Eat One Rock a Day

In May 2024, Google announced that it would roll out AI Overviews, an AI-powered search feature that would give users "information [they could] trust in the blink of an eye." The feature generates a brief (and compulsory) AI-written response atop each search, summarizing the information found across the top results.

When the rollout happened, Google users immediately noticed inconsistencies in the responses. Users reported bizarre overviews suggesting that cheese would stick to pizza better with a bit of glue, or that eating at least one small rock per day is recommended for people. These answers were the result of AI 'hallucinations.' The glue-on-pizza suggestion, at least, appeared to be sourced from a twelve-year-old sarcastic Reddit post. It was evident that Google's AI Overview was, at the time, incapable of filtering out joke posts and sarcastic comments on the internet.

It is no mystery why Reddit posts were heavily referenced in AI Overview results. In early 2024, Reddit signed a $60 million contract with Google to share its content in order to train Google's artificial intelligence models. The truth is, nothing trains this so-called artificial intelligence quite like raw, human data, and no fountain of data is more plentiful than social media. This piece aims to understand why social media users seem to gloss over such blatant abuses of privacy, especially when this particular 'product' so consistently provides so little value.

The 'Parasite with the Mind of God' Just Got a Little Bolder

AI chatbots and search engines cannot produce real thoughts (obviously). They can, however, mimic human speech: speech derived from text scraped from the internet. As AI becomes more powerful, users grow increasingly aware of their unwilling role in training this world-consuming program, especially with GPT-4 and Google Gemini, which are trained on publicly available information scraped from the internet. Most recently, LinkedIn's updated User Agreement and Privacy Policy, taking effect on November 20, revealed that the company had been using user data to train its AI without the consent of its users. Similarly, a new privacy policy from X, taking effect on November 15, allows the site to share user data with third-party collaborators in order to train their AI models, and the policy does not make clear how to opt out of this data sharing. Equally concerning are the environmental costs that come with AI usage. Overall, this seems like a raw deal for users. So why do we keep agreeing to it?

Time is Money and We Need Money

Shoshana Zuboff discusses the dawn of the personal digital assistant in The Age of Surveillance Capitalism. There, Zuboff notes that Hal Varian, Google's chief economist, recognized that the "needs of second-modernity individuals [would] subvert any resistance to the rendition of personal experience as the quid pro quo for the promise of a less stressful and more effective life." Google Now's predictive search function allowed the "search engine [to] come to you," a parallel to today's promise that the new AI Overview feature will "do the work for you." That is, if all the AI that websites have crammed into their features does its job, users will no longer need to click on links, read a full post, or even do their own research. Users are handed this convenience and, in exchange, are fed whatever companies decide to feed them (and what companies choose to feed users is unusable garbage with tidbits of truth mixed in).

Zuboff writes (using the examples of china, textiles, and the ever-present Model T) that the luxuries of one generation become the necessities of the next, a fundamental feature of the evolution of capitalism. Personal assistants were seen as a need, according to Varian, who assumed that the middle class and the poor would ask themselves, "What do rich people have now?" Varian even made clear that the need for a digital assistant would be so visible that "everyone [would] expect to be tracked and monitored, since the advantages, in terms of convenience, safety and services, [would] be so great" (254-259).

Time is a commodity now more than ever. Before industrialization, life was less rigid, determined by day-to-day and seasonal needs. Capitalism brought the rigid work schedule and, with it, the need for punctuality. For many, life is planned to the hour, if not the minute, and members of the working class find themselves at the mercy of the hourly wage. Phrases like "waste of time" and "against the clock" are commonplace, and they mark the preciousness of this intangible resource.

Time is money, as they say, and the upper class already has plenty of both. For everyone else, saving the time that would be spent reading an article or searching for sources is worth the cost: a majority of people are content to have their social media data scraped to power these AI Overviews and "all-knowing" chatbots. The days of writing your own emails and doing your own research can be gone, just like that. Never mind that some of the answers will be wrong or plainly ridiculous; the "hunger for new necessities" justifies it all. As with the development of the personal assistant, people will be content with their social media data being used to train the AI beast, simply because the quick answers it provides save valuable time.




