Law in the Internet Society

TWikiGuestFirstEssay 37 - 09 Nov 2024 - Main.XuanyiLee

Xuanyi Lee L6160 Law in the Internet Society

‘POFMA’ and the Internet Society in Singapore

Introduction:

Despite its economic success, Singapore’s issues with media freedom highlight a glaring problem in the island nation’s rapid development. In 2024, Singapore ranked 126th out of 180 nations in the Reporters Without Borders (RSF) World Press Freedom Index. One legal tool instrumental to this ranking is the Protection from Online Falsehoods and Manipulation Act (POFMA).
 
History of POFMA
 
The concept of POFMA was first discussed in Singapore’s Parliament in 2017. Minister for Law K Shanmugam stated that misinformation was an issue in Singapore for which existing laws offered “limited remedies”. Shanmugam cited cases where misinformation had interfered with political and international affairs, stating that fake news must be assumed to be an “offensive weapon by foreign agencies and foreign governments” with the objective of getting “into the public mind, to destabilise the public, to psychologically weaken them”.
 
In 2019, POFMA was first tabled in Parliament. The stated objective of the Act was “to prevent the electronic communication in Singapore of false statements of fact, to suppress support for and counteract the effects of such communication, to safeguard against the use of online accounts for such communication and for information manipulation, to enable measures to be taken to enhance transparency of online political advertisements, and for related matters”.
 
Notably, the ruling People’s Action Party held a supermajority in Parliament. As such, following a straightforward voting process, POFMA passed by 72 votes to 8 – all eight opposition Members of Parliament voted against the Act.
 
Substance of POFMA

 
POFMA’s primary tool is the power to issue “directions” against online statements. The Act is administered by an office within the Infocomm Media Development Authority, a statutory board under Singapore’s Ministry of Digital Development and Information.

A “correction direction” does not necessarily require an online statement to be removed; instead, the direction states the adjustment that must be made to “correct” the false statement. In more “serious” cases, however, a “stop communication” or “disabling” direction may be issued. A stop communication direction instructs the statement-maker to remove access to the false statement within a specified time, while a disabling direction disables access to an entire online site in Singapore. Supposedly, directions can only be issued if “a false statement of fact has been or is being communicated in Singapore through the Internet; and it is in the public interest to issue the direction”. Non-compliance with POFMA directions results in fines and/or jail time for offenders.

POFMA orders are subject to judicial review if offenders wish to appeal. In The Online Citizen Pte Ltd v Attorney-General [2021] SGCA 96, the Singapore Court of Appeal established a five-step framework for determining whether a POFMA correction direction should be set aside on judicial review. First, the reviewing court must identify the false statement targeted by the direction. Second, the court must determine whether the minister’s interpretation of the statement is objectively reasonable. Third, the court must determine whether the statement is indeed a “statement of fact”, and not merely an opinion. Fourth, the court must determine whether the statement is indeed “false”. Finally, the court must consider whether the statement has been or is being communicated in Singapore. The Court also established that the burden of proof lies with the POFMA offender – adding another hurdle to avoiding liability under POFMA. Notably, the appeal process remains financially and administratively burdensome.

Assessing the Effects of POFMA

POFMA has been deployed against rival political parties such as the Singapore Democratic Party (SDP) and against small, independent alternative media outlets such as Jom Media. In these cases, the resource imbalance between the POFMA user and the POFMA offender is massive. A POFMA direction can therefore be a crushing blow to the operability and credibility of alternative media sources or opposition political parties.

One central concern about POFMA is its likely effect of stifling legitimate political discourse. The substantive language of POFMA and of its appeal process is arguably vague and broad. For instance, what counts as the “public interest” ranges from matters of public health and national security to the maintenance of public confidence in governmental institutions. Given how onerous it is to be issued with a POFMA direction, media actors in Singapore are likely to develop a culture of self-censorship in order to stay on the safe side of the Act. The threat of POFMA alone is likely to deter the publication of alternative, diverse viewpoints on a range of public issues, contributing to a one-track political discourse culture in Singapore.

Conclusion

Singapore’s implementation of POFMA highlights a difficult balancing act between safeguarding against misinformation and preserving the fundamental freedom of expression. The broad powers the government wields under POFMA, along with the difficulty of accessing judicial oversight, pose serious questions for media freedom in Singapore. The international condemnation and scrutiny surrounding POFMA underscore the importance of transparency and accountability in checking governmental power. POFMA also raises questions about the direction of Singapore’s meteoric development in the digital information age – as the country matures into a true first-world nation, it must decide how deeply it believes in democratic principles and media freedom.

Sources cited: https://rsf.org/en/country/singapore

https://web.archive.org/web/20180927125021/https://www.channelnewsasia.com/news/singapore/government--seriously-considering--how-to-deal-with-fake-news-sh-8712436

https://sso.agc.gov.sg/Act/POFMA2019?TransactionDate=20191001235959

https://www.channelnewsasia.com/singapore/falsehoods-freedom-speech-and-burden-proof-key-findings-apex-courts-landmark-pofma-judgment-2230541


TWikiGuestFirstEssay 36 - 24 Oct 2024 - Main.CliftonMartin
 
Clifton Martin L6160 Law in the Internet Society
 
Grindr: A Revolutionary App or A Disease to the LGBT Community?
 

Introduction:

Not long ago, a gay or bisexual man could consider himself lucky if he met anyone at a club or bar. In the LGBT community, there was no clear way for men to meet one another; today, however, phone apps have revolutionized dating for the general public. Grindr, a dating app meant to connect male-identifying members of the LGBT community, lets men locate other Grindr users who are nearby. According to the app’s creator, Joel Simkhai, Grindr is for “guys meeting guys” and is meant to help gay men establish relationships, whether that be friendship, dates, or sex. Despite the creator’s intentions, most men use Grindr for casual sex. Grindr’s culture of casual sex is therefore problematic: it reinforces an inaccurate, generalized view, commonly held outside the LGBT community, that queer men are more sexually promiscuous.
 
Origin, Function, and Use of Grindr:
 
Grindr is a smartphone application that uses GPS technology to locate other gay men nearby, wherever in the world the user happens to be. Since launching in 2009, the app has been downloaded over 10 million times, is available in 192 countries, and has more than 2.6 million users who have collectively exchanged more than 70 million chat messages. Over the past 15 years, Grindr has grown into the world’s largest social networking app for gay, bisexual, trans, and queer people. The app is not limited to men who are “out of the closet”; men who are questioning their sexuality and/or identify as “discreet” or “closeted” can use it as well.
 
Each Grindr user has a profile with personal information, focusing on physical features like their height, weight, ethnicity, and body type. A user’s profile also displays their relationship status, current HIV status, and their “tribe”. A tribe is a filter that lets users identify themselves with a specific group within the gay community like clean-cut, twink, bear, and geek. These preferences let users specify their searches and find their preferred type of man. These features let men easily find what they are looking for, but they also contribute to the app’s overtly sexual nature since the filtering is done primarily by physical preference.
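
To make that mechanic concrete, here is a purely illustrative Python sketch of proximity-plus-filter matching. It is not Grindr’s actual code; the profile fields, coordinates, and tribe labels are invented for the example.

```python
# Purely illustrative sketch of the proximity-plus-filters mechanic
# described above; this is NOT Grindr's actual code, and the profile
# fields, coordinates, and tribe labels are invented for the example.
from math import asin, cos, radians, sin, sqrt

def km_between(a, b):
    """Great-circle (haversine) distance in km between (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

profiles = [
    {"name": "A", "loc": (40.7128, -74.0060), "tribe": "geek"},
    {"name": "B", "loc": (40.7306, -73.9866), "tribe": "bear"},
    {"name": "C", "loc": (40.7580, -73.9855), "tribe": "geek"},
]

me = (40.7291, -73.9965)
# Apply the stated preference first, then rank what's left by distance.
matches = sorted(
    (p for p in profiles if p["tribe"] == "geek"),
    key=lambda p: km_between(me, p["loc"]),
)
print([(p["name"], round(km_between(me, p["loc"]), 1)) for p in matches])
```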
 
How Grindr Perpetuates Gay Stereotypes
 
Outside and even within the LGBT community, there is an inaccurate but established stereotype that queer men are more promiscuous and heavily active in today’s “hook-up culture,” which encourages and normalizes sexual encounters without long-term commitment or emotional attachment. Grindr and its users have created their own culture of hooking up. Individuals outside the LGBT community are already apt to believe that gay men have higher levels of casual sex – especially after the 1980s HIV/AIDS epidemic, which initiated a great deal of the gay sexual stereotypes that exist today.
 
However, not every gay or bisexual man is sexually active, let alone sexually promiscuous, which disproves the largest misconception behind these stereotypes. Yet the way gay, bisexual, and queer men actually use Grindr further entrenches this inaccurate stereotype, when the app’s societal influence could instead be used to shatter the myth. Grindr also has features that seem to inadvertently encourage casual sex among its users. The instant messaging feature, for example, helps create Grindr’s hook-up culture: in their messages, users can send pictures that tend to go beyond the typical selfie and are usually sexually explicit. The slang popularized by Grindr messaging has also helped form the app’s culture of casual sex, with lingo like “host”, which asks whether an individual can host his sex partner(s) at his home, or “safe”, a way to ask whether the person wants to use a condom or another safe-sex method. At the end of the day, the frequent and popular use of Grindr and its features has allowed a culture of hooking up to permeate the LGBT community and thrive. That the app is used mostly for casual sexual encounters inaccurately implies that homosexual men are more promiscuous.
 
The Issue and What the App Should Do:
 
While Grindr has created easy access for meeting gay men in the area, it has simultaneously made obtaining a long-term relationship more challenging. The possibility of a relationship typically seems promising, since the application provides such easy access to other men nearby. However, because Grindr is so widely used to find casual sex, a great deal of men have found that these meetings don’t really go anywhere and that the app is an inefficient means of finding a relationship, leaving those craving a long-term relationship extremely disappointed.
 
Grindr certainly has revolutionized physical interaction among gay men, allowing them to easily filter through and find sexual partners. Although Grindr serves to connect gay men with one another, its actual use goes beyond a networking outlet to an app with a thriving culture of casual sex. This reality further strengthens the belief, held both inside and outside the LGBT community, that homosexual men are hypersexual and promiscuous. To a certain extent, Grindr does benefit the gay community, as it truly does connect gay, bisexual, and queer men with one another. However, the negative social impact and stigma associated with the LGBT community persist in part because of Grindr’s use and popularity, and that makes the app problematic. Even though Simkhai can’t control all of Grindr’s consumers and their intentions, he can influence the impact the app creates for the rest of the public, and in doing so he should consider its implications for the LGBT community. There is a need to eliminate the established stereotypes about gay men that have existed for so long, rather than perpetuate them.
 
 
Sources: “About Grindr.” App - Privacy Policy, www.grindr.com/about/.
 
Beck, Julie. “The Rise of Dating-App Fatigue.” The Atlantic, Atlantic Media Company, 27 Oct. 2016, www.theatlantic.com/health/archive/2016/10/the-unbearable-exhaustion-of-dating-apps/505184/.
 
Engle, Clyde. “10 Things I Learned About Gay Hook-Up Culture From My Day On Grindr.” Elite Daily, Elite Daily, 17 Dec. 2018, www.elitedaily.com/dating/gay-hook-up-culture-grindr/1354315
 
Salemo, Robert. “Twenty Questions for Grindr Creator Joel Simkhai.” Xtra, 28 July 2011, www.dailyxtra.com/twenty-questions-for-grindr-creator-joel-simkhai-33729
 
Tadich, Paul. “The IPhone Revolutionized Gay Hookup Culture.” Motherboard, VICE, 27 June 2017, www.motherboard.vice.com/en_us/article/bj84b8/iphone-anniversary-grindr-gay-hookup-culture

TWikiGuestFirstEssay 35 - 22 Oct 2023 - Main.JasmineBovia
 
Jasmine Bovia Law and the Internet Society
 
PSRs, RAIs, and the Fight Against AI
 
Introduction:
 
Although Artificial Intelligence models have existed in some form since the 1950s, 2022 marked the beginning of what has become known as the “AI Boom”, a term used to describe the rapid expansion of Artificial Intelligence usage into the mainstream. This technological boom, spurred by the use of large-language models like ChatGPT and those from Meta Platforms, has become increasingly observable not only in the public sphere but in a number of professional fields such as journalism, medicine, and, notably, law. This paper examines the potentially negative consequences of AI usage on the legal sector, specifically the judiciary, and suggests some preliminary measures to limit, if not completely curb, the role AI plays in judgment.
 
AI and the Judiciary:
 
While the usage of Artificial Intelligence within the legal sphere generally has been met with rightful controversy, AI’s effect on the judiciary is especially troubling. According to the American Bar Association, numerous states have begun incorporating AI models into judicial practice as evaluation tools meant to aid in the generation of Pre-Sentence Reports (PSRs). Risk Assessment Tools are one specific class of AI model that rely on fact patterns and outcomes of previous cases to calculate metrics such as recidivism potential for criminal defendants. These metrics play an increasingly instrumental role in PSRs and, consequently, in the sentencing outcomes of criminal cases. Sentencing courts have become increasingly reliant on these AI models, to disastrous effect; already, the use of this software in PSR generation has been the subject of legal challenges on Due Process grounds. An investigative article published by ProPublica highlighted one of the glaring issues with state judiciaries’ use of AI tools in criminal cases. Although limited data currently exists on these AI models, studies are beginning to show that risk assessment tools perpetuate racial bias in their assessments. The recidivism-risk software COMPAS, developed by the for-profit company Equivant, is a telling example: Black defendants were almost twice as likely as white defendants to be wrongfully labeled as having a “high risk” of recidivism, while white defendants were much more likely than Black defendants to be incorrectly considered at “low risk” of reoffense. This is far from the only problem with models like COMPAS. Another issue with sentencing courts’ use of these tools is inherent to their very nature: artificial intelligence learns by constantly adapting its output to expanding data sets. These ever-evolving algorithms could mean skewed results for defendants as more data becomes available; the machine’s determination of a fair sentence for a defendant one day can, in theory, be completely different from its determination for a future defendant with an identical fact pattern. Further, the American Bar Association correctly posits that the use of computer-generated evaluations for matters such as recidivism risk removes the necessary human aspect of sentencing. Where human judges can see beyond fact patterns and take more nuanced views of the defendants in front of them, AI software can only see the numbers, producing distressingly clinical results. With these problems in mind, it is understandable why the use of AI tools within the judiciary remains controversial.
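
The disparity ProPublica reported can be stated precisely as a gap in false positive rates between racial groups. The following is a minimal Python sketch of that audit on invented toy records; the record layout is an assumption for illustration, not ProPublica’s actual data.

```python
# Minimal sketch of the kind of audit ProPublica ran: compare false
# positive rates by race. The records below are invented toy data, not
# the actual Broward County data set ProPublica analyzed.

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("black", True, False), ("black", True, False), ("black", False, False),
    ("black", False, True), ("black", True, True),
    ("white", False, False), ("white", False, False), ("white", True, False),
    ("white", False, True), ("white", True, True),
]

def false_positive_rate(group):
    """Share of non-reoffenders in the group wrongly labeled high-risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))
# A persistent gap between these two rates, on real data, is the
# disparity ProPublica reported for COMPAS (roughly 45% vs. 23%).
```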
 
Preliminary Measures:
 
Barring an absolute moratorium on the use of AI tools in the judiciary, which would be difficult to enforce in practice, there are mitigating measures that may be taken to minimize the negative impacts of risk assessment instruments (RAIs) on the sentencing process. For one, regulation could limit what factors go into determining matters like recidivism risk. Currently, tools like COMPAS utilize information relating to a defendant’s identity when calculating risk factors, including their race, sex, and age. To avoid integrating the same biases that plague the current sentencing process into RAI algorithms, developers should be explicitly required to exclude these demographics. Further, companies that develop RAIs should be required to publicize what considerations go into their pre-sentencing reports and risk assessments. The confidential nature of RAIs has already been the subject of legal challenge; in Loomis v. Wisconsin, a defendant raised arguments against the COMPAS software for, inter alia, not reporting what data went into the generation of his risk assessment, making it impossible to challenge the instrument’s accuracy and validity. His point was entirely valid: if pre-sentencing reports are to be made accessible to the parties of a case, why should other investigative tools, like the risk assessment algorithms that help generate such reports, not be made available and open to scrutiny and potential challenge on due process grounds? Lastly, software developers should be required to analyze the algorithmic outputs of the software they create, and to publish both their process and results. For there to be greater transparency and scrutiny in the judiciary’s use of AI, all stakeholders need to hold equal responsibility, and accountability, for the potential failings and shortcomings of risk assessment tools. Allowing developers to gain financially from the use of their algorithms in sentencing without any actual stake in the outcomes disincentivizes them from ensuring that their models are accurate, reliable, and nondiscriminatory. While ultimate responsibility for case outcomes should lie with the government, any party that has a stake in criminal cases should bear at least some accountability for the execution, or lack thereof, of justice. These solutions are only launching points for a longer conversation about the use of AI in the criminal justice system. There remains a larger discussion about the use of AI by police, as well as the privacy considerations that plague the integration of artificial intelligence in government as a whole. These preliminary regulations would, however, address the issue of AI in the judiciary pending more substantive changes. With the acceleration of the AI boom, the unregulated usage of these so-called “risk assessment tools” will only become more of a risk in and of itself.
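
As a concrete illustration of the first proposal, the sketch below strips protected demographic fields from a hypothetical defendant record before it reaches any scoring model; the field names are assumptions, not COMPAS’s actual inputs.

```python
# Sketch of the first proposal above: strip protected demographics from
# a defendant record before any risk model sees it. The field names are
# hypothetical, not COMPAS's actual inputs.

PROTECTED = {"race", "sex", "age"}

def scrub(record: dict) -> dict:
    """Return a copy of the record without protected attributes."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

defendant = {
    "race": "black", "sex": "male", "age": 24,
    "prior_convictions": 2, "charge_degree": "felony",
}
print(scrub(defendant))
# {'prior_convictions': 2, 'charge_degree': 'felony'}
# Caveat: facially neutral fields (zip code, priors) can still proxy for
# race, so exclusion alone does not guarantee a nondiscriminatory model.
```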
 
Sources: Hillman, Noel L. “The Use of Artificial Intelligence in Gauging the Risk of Recidivism.” American Bar Association, 1 Jan. 2019, www.americanbar.org/groups/judicial/publications/judges_journal/2019/winter/the-use-artificial-intelligence-gauging-risk-recidivism
 
Garrett, Brandon, and John Monahan. “Assessing Risk: The Use of Risk Assessment in Sentencing.” Bolch Judicial Institute at Duke Law, vol. 103, no. 2, 2019.
 
State v. Loomis, 371 Wis. 2d 235 (2016)
 
Angwin, Julia, et al. “Machine Bias.” ProPublica, 23 May 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
 
18 USC Sec. 3552(d)
 

TWikiGuestFirstEssay 34 - 13 Oct 2023 - Main.EdenEsemuede
 
You did say to make it bad.
 
Introduction

 
"What is a VPN?"

When the average consumer types this Google search, the first thing that pops up isn’t the Google dictionary result. In fact, it’s not even an option on the page. What does pop up is a link to an article written by NordVPN, better known as the sponsor of any YouTube video with over 10,000 views. In a world where multiple competitors have been offering the same product for years, a relatively unbiased definition should be simple enough to find. The fact that a popular brand’s attempt to sell you a VPN pops up before you even know what one is demonstrates a much larger problem. Rather than empowering people with the tools to take full control of their own privacy, companies like Nord, Express, and Surfshark jump to charge consumers high prices for much, much less privacy than they could easily get on their own. Over the course of this essay, I will discuss what VPNs can and should do, and then why many paid VPN services fail to offer the promised protections.

Body

First things first. A VPN, or virtual private network, is a tool that creates a secure connection between two networks, or between a computing device and a network. Typical configurations include remote-access (host-to-network) VPNs, site-to-site VPNs, and extranet-based site-to-site VPNs. Illustrations liken a VPN to a secure underground tunnel between your computer and the websites you want to reach, keeping your information more secret than it would be if it traveled through the open-air, aboveground internet.
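
The tunnel metaphor can be made concrete in a few lines. Below is a toy Python sketch, assuming the third-party cryptography package and an invented server name; it models only the encryption idea, not any real VPN protocol.

```python
# Toy model of the "tunnel" metaphor, not a real VPN protocol. It uses
# the third-party "cryptography" package; the server name is invented.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared by the client and the VPN server
tunnel = Fernet(key)

inner_request = b"GET https://example.com/account HTTP/1.1"
wrapped = tunnel.encrypt(inner_request)

# What an on-path observer (ISP, airport wifi) can log: only the VPN
# server's address and an opaque blob.
print("observer sees:", "vpn.example.net", wrapped[:16], b"...")

# The VPN server decrypts and forwards the real request -- meaning the
# operator can read exactly what the ISP could not.
print("vpn server sees:", tunnel.decrypt(wrapped))
```

Note what the last line implies: the VPN operator itself decrypts everything, which is exactly the trust problem discussed below.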

The part of the YouTube video you skip usually describes two main benefits of having or using a VPN. First and foremost, paid VPNs promise their users access to content they couldn’t otherwise receive based on their location. In ads targeted at Americans, plucky YouTubers usually show skits of themselves watching shows that are unavailable in certain countries. This makes VPNs a sensible one-time purchase for people traveling abroad: students doing a semester in China might purchase a VPN to stay up to date on their favorite TV shows and movies through the services they already pay for (Netflix, Hulu, etc.). It also implies that a VPN might be a good tool for people based in countries that block more content. However, this isn’t the main advertising “hook” VPNs use. Companies like ExpressVPN and NordVPN usually make claims about privacy. Typically, this involves a story about a hypothetical person walking into an airport, completing a bank transaction on the free public wifi, and getting their banking information stolen in an ARP spoofing attack. Very sad and scary, but ExpressVPN can help. Similarly, companies claim that a VPN can stop an Internet Service Provider from reading up on all of the sites you visit to sell your data and create targeted advertising. Purchasers, they claim, can rest easy knowing that their anonymous Reddit posts and weird 3 a.m. Google searches are safe from any prying eyes that might use them for nefarious or uncomfortable purposes. Sufficiently scared, and a little intrigued about what shows are available outside the US, a consumer may fork over $8 per month for security and a little convenience.

The problem with many of these claims is that, while potentially true, none of this stops the VPN company itself from doing all of the things a consumer is worried some anonymous “bad guy” might do. The personal data isn’t much more secure than it would be by simply staying on “padlocked” HTTPS sites, and if someone really wants that cartoon that releases a couple of months earlier in another country, they can find it more easily, safely, and cheaply through torrents than through a streamer’s website.

Sure, one could say that they’d rather roll the dice with a paid VPN service that they’ve researched before buying and trust. The problem is that, much like a simple, clear, and useful definition, unbiased research on which VPNs are best is hard to find. Companies buy review websites, and around ten minutes into a YouTube review search, you’ll start to find channels begging you NOT to buy a VPN. Even the largest VPN services, like ExpressVPN, have been bought by companies with a history of collaboration with ad-injection malware companies. By sending your data to a VPN company, you are simply trusting an anonymous bad guy with venture capital money, along with some of your own.

If this is true, why are so many companies allowed to make these misleading claims? The law, after all, should stop blatantly false advertisements from reaching mainstream audiences. [FTC’s role in false advertising: we have cases like Federal Trade Commission v. Bunte Bros., Inc. and, more recently, Static Control v. Lexmark that should protect us from puffed-up claims of a product’s worth.]

Conclusion

However, courts can be slow to adapt to new technology, so we may not see meaningful legislation on the claims or use of popular VPN services for some time. In summary, while companies claim that VPNs can give you access to better content and protect your data from harmful attacks and prying eyes, they aren’t worthwhile purchases for the safety-conscious consumer. Because the court system likely won’t kick in to stop VPNs that are not useful, and in some cases actively harmful, consumers should take matters into their own hands.

A few recommendations for better alternatives:

[Recommendations for secure browsers from Prof. Moglen]. [Article on how to torrent].

Sources: https://www.youtube.com/watch?v=WVDQEoe6ZWY (IS THIS A GOOD SOURCE?) https://en.wikipedia.org/wiki/Virtual_private_network https://www.nytimes.com/2021/10/06/technology/personaltech/are-vpns-worth-it.html


TWikiGuestFirstEssay 33 - 13 Oct 2023 - Main.LudovicoColetti
 
In the dystopian world of the TV show “Black Mirror,” the episode “Nosedive” describes a world where social media ratings determine one’s socioeconomic status and access to essential services. Using a mobile application, everyone constantly rates others on a five-point scale. Those with higher scores can access better services and exclusive clubs, while those with low scores are penalized in many ways. While this may seem like far-fetched fiction, today’s reality may not be too distant from this portrayal.

The first example that comes to mind is China’s Social Credit System (SCS), developed between 2014 and 2020. The SCS uses artificial intelligence “to develop comprehensive data-driven structures for management around algorithms that can produce real time reward-punishment structures for social-legal-economic and other behaviors” (Larry Catá Backer, Next Generation Law: Data-Driven Governance and Accountability-Based Regulatory Systems in the West, and Social Credit Regimes in China, 2018). In reality, the SCS does not rely on a universal score but rather on a series of blacklists and redlists managed at different levels (municipal, local, or national). Each authority can manage its own blacklist (e.g., of those who failed to pay fines or child support), and they all converge into the National Credit Information Sharing Platform. As Kevin Werbach mentions in his 2022 article “Orwell That Ends Well? Social Credit as Regulation for the Algorithmic Age”, this makes it possible that “grade A taxpayers receive customs fee waivers and low-interest loans, in addition to the ‘home’ benefits offered by the tax collection authority”. Prof. Werbach, however, believes that the Western depiction of the SCS is exaggeratedly negative, especially in a world where governments and corporations are already tracking our behavior extensively. He sees the Nosedive scenario as more closely resembling the ratings systems on Uber or eBay, expanded beyond the boundaries of one service.

As Yuval Noah Harari has noted, free-market capitalism and state-controlled communism can be regarded as distinct data-processing systems: the former decentralized, the latter centralized. It should come as no surprise, then, that Western versions of social credit experiments are being run mainly by private corporations, especially in the financial sector. Since the 2008 financial crisis, many “fintech” online lenders have been experimenting with new scoring models for establishing creditworthiness. These models are based on a person’s “social footprint”, revealed by elements such as his or her social circle or shopping habits: surprisingly, it appears that buying felt pads has a positive influence on how the algorithms forecast your financial behavior.

The risk of discrimination highlighted by these researchers became painfully real in the Netherlands. In 2013, the Dutch Tax Authorities employed a self-learning algorithm to detect child care benefits fraud. The risk indicators used by the system included having low income or belonging to an ethnic minority. As a result, thousands of families were wrongly characterized as fraudsters and suffered severe consequences. This led to the Dutch Government’s resignation and a 3.7 million euro fine on the Tax Administration from the Autoriteit Persoonsgegevens, the Dutch Data Protection Authority, for breaching several GDPR rules. In particular, the Authority found that the Tax Administration had no legal basis for processing the personal data used as risk indicators. Under the GDPR, personal data processing is allowed only if one of the legal bases listed in Article 6 applies.

In the hyper-regulated European Union, the GDPR has attempted to address these issues with Article 22, which allows individuals to opt out of “automated decision making, including profiling”, and to obtain human intervention whenever their personal information is used to make a decision that produces a legal effect (e.g., entering into a contract with that individual). Additionally, the proposed EU AI Act aims to place serious limitations on “AI systems providing social scoring of natural persons for general purposes by public authorities.” These limitations prohibit social scoring systems from leading to detrimental or unfair treatment in unrelated social contexts or based on unjustified or disproportionate criteria.


TWikiGuestFirstEssay 32 - 13 Oct 2023 - Main.LudovicoColetti
Line: 1 to 1
 
META TOPICPARENT name="FirstEssay"
Changed:
<
<
We post in our virtual canvas, thinking that we purposely and consciously decide the content of those posts, but our mental capacity does not drive those actions. The parasite induces them. The parasite gives us tools to share, to edit pictures, and to post, but the underlying reality is that we post the only thing we own: our time.
>
>
In the dystopian world of the TV show "Black Mirror," the episode "Nosedive" depicts a world where social media ratings determine one’s socioeconomic status and access to essential services. Using a mobile application, everyone constantly rates others on a five-point scale. Those with higher scores can access better services and exclusive clubs, while those with low scores are penalized in many ways. While this may seem like far-fetched fiction, today's reality may not be too distant from this portrayal.
 
Changed:
<
<
Nonetheless, our time is relative. We are drained of freedom of decision every time we click, swipe, and accept terms and conditions from the “free” services we use on the internet. I want to emphasize the quotation marks I placed around "free": we tend to think that we decide what we post and with whom we share and interact. The reality is different. We lie to ourselves when we say that we control our virtual personalities.
>
>
The first example that comes to mind is China’s Social Credit System (SCS), developed between 2014 and 2020. The SCS uses artificial intelligence "to develop comprehensive data-driven structures for management around algorithms that can produce real time reward-punishment structures for social-legal-economic and other behaviors" (Larry Cata Backer, Next generation law: data-driven governance and accountability-based regulatory systems in the west, and social credit regimes in China, 2018). In reality, the SCS does not rely on a universal score but rather on a series of blacklists and redlists managed at different levels (municipal, local, or national). Each authority can manage its own blacklist (e.g., of those who failed to pay fines or child support), and they all converge into the National Credit Information Sharing Platform. As Kevin Werbach notes in his 2022 article “Orwell that ends well? Social credit as regulation for the algorithmic age”, this makes it possible that "grade A taxpayers receive customs fee waivers and low-interest loans, in addition to the "home" benefits offered by the tax collection authority". Prof. Werbach, however, believes that the Western depiction of the SCS is exaggeratedly negative, especially in a world where governments and corporations already track our behavior extensively. He sees the Nosedive scenario as more closely resembling the rating systems of Uber or eBay, expanded beyond the boundaries of a single service.
 
Changed:
<
<
The parasite is the only one controlling who we are. We feed the parasite with our posts, even when we overthink their content. We give the parasite control over our capacity to decide. The parasite knows our steps, knows our fertility cycle (it even predicts it), knows our sleep cycle, and suggests what to eat, buy, and like. All these “suggestions” are inductions.

I have realized that even when we reflect on our virtual accounts, controlling and limiting our virtual content is not enough. In other words, we waste our time trying to curate the life we want to share. We are not curating or deciding. In the end, it is the parasite that grows. Its asphyxiating roots spread over our brains.

We are not curating for those who benefit from our engagement (social media platforms, stores, advertisers) or those who follow us and want a glimpse of our life. The paradox here is that every time we feed our virtual profiles, we deprive ourselves of the ability to keep things private. And with this, once again, we make the parasite bigger and stronger.

I ask myself: What is the purpose of keeping things private? And my answer is that privacy buys me time to reflect, think, and create. Privacy protects how a piece of information about you has been obtained. In Marmor's words: "it is about the how, not the what, that is known about you." This matters because "our ability to control [how] we present ourselves to others is inherently limited."

From that stance, privacy gives us time, and therefore protects time, to decide when to disclose or reveal something. The underlying issue with the parasite is that it is the curator of our profiles, and in that filtration journey we gave rise to, we have lost our ability to choose how others use our time (and life).

Feeding the profiles consumes time. We post because the likes, comments, and virtual interactions affirm or reinforce the virtual being we choose to share with our selected community. We believe we have control over what we share and with whom. Still, the reality is that every click diminishes freedom, extinguishes privacy, and deprives us of the only thing we own: time.

The outcome is our inability to reflect and pause, because our consciousness of time is limited by immediacy, neediness, and over-exposure. And the worst part is that the idea of being infinite humans, in the microcosmic stance, vanishes under the constant pressure of self-reinvention instead of self-expansion.

The parasite is playing the game of “letting us choose.” What we have to realize is that every choice makes the parasite stronger. The parasite is using us to increase profits, triggering our decisions in the way that serves the parasite’s ends. The verbs "to share" and "to post," which are the core of the interactions on the platforms, have withered integration, insertion, and social construction. We handed over our privacy in exchange for a fake sense of control. We gave our time, memories, and the idea of integrity in exchange for a false self-made identity that lacks authenticity and freedom—a view self-imposed by the parasite.

Humanity has dealt with the question of eternity and infinity since we first articulated ideas. To overcome the fact that our nature is limited by time, people used to write, paint, have children, and teach. By shifting the idea of “eternity” onto platforms that hold and “save” our memories, our approach to eternity has rotted. To illustrate this, I want to recall when Don Quixote found out, in his conversation with Sansón Carrasco, that his adventures were a topic of discussion among the students at the University of Salamanca. For him, being public, discussed, and remembered was an outcome, not a decisive purpose. He didn't act in order to be a topic. By the course of his actions, he became a character and, as a result, a subject of discussion. The adventures of the lions, the windmills, and the galley slaves were public, and some read those actions as insanity, others as genius.

Our desire for control shows that our aim to be remembered is vague, because we rely solely on feeding the parasite. If we aim to change this reality, we need to cut ties with the parasite: disable our social media accounts and the tracking that apps maintain over our lives. We need to stop posting on those platforms that instrumentalize our time to deprive us of individuality.

>
>
As noted by Yuval Noah Harari, free-market capitalism and state-controlled communism can be regarded as distinct data processing systems: the former is decentralized and the latter is centralized. It shouldn't come as a surprise, then, that Western experiments with social credit are being run mainly by private corporations, especially in the financial sector.

TWikiGuestFirstEssay 31 - 10 Jan 2022 - Main.NataliaNegret
Line: 1 to 1
 
META TOPICPARENT name="FirstEssay"
Changed:
<
<
>
>
We post in our virtual canvas, thinking that we purposely and consciously decide the content of those posts, but our mental capacity does not drive those actions. The parasite induces them. The parasite gives us tools to share, to edit pictures, and to post, but the underlying reality is that we post the only thing we own: our time.
 
Added:
>
>
Nonetheless, our time is relative. We are drained of freedom of decision every time we click, swipe, and accept terms and conditions from the “free” services we use on the internet. I want to emphasize the quotation marks I placed around "free": we tend to think that we decide what we post and with whom we share and interact. The reality is different. We lie to ourselves when we say that we control our virtual personalities.
 
Added:
>
>
The parasite is the only one controlling who we are. We feed the parasite with our posts, even when we overthink their content. We give the parasite control over our capacity to decide. The parasite knows our steps, knows our fertility cycle (it even predicts it), knows our sleep cycle, and suggests what to eat, buy, and like. All these “suggestions” are inductions.
 
Changed:
<
<

A Growing Need to Protect Privacy in an Era of Growing Willingness to Give it Up

>
>
I have realized that even when we reflect on our virtual accounts, controlling and limiting our virtual content is not enough. In other words, we waste our time trying to curate the life we want to share. We are not curating or deciding. In the end, it is the parasite that grows. Its asphyxiating roots spread over our brains.
 
Added:
>
>
We are not curating for those who benefit from our engagement (social media platforms, stores, advertisers) or those who follow us and want a glimpse of our life. The paradox here is that every time we feed our virtual profiles, we deprive ourselves of the ability to keep things private. And with this, once again, we make the parasite bigger and stronger.
 
Changed:
<
<

The Advent of Privacy Challenges

>
>
I ask myself: What is the purpose of keeping things private? And my answer is that privacy buys me time to reflect, think, and create. Privacy protects how a piece of information about you has been obtained. In Marmor's words: "it is about the how, not the what, that is known about you." This matters because "our ability to control [how] we present ourselves to others is inherently limited."
 
Changed:
<
<
Those of us born in the 90s remember the in-between: the shift from people carrying cellphones to people carrying cellphones that could connect to the internet; from being able to use a bulky computer in a stationary place to carrying around a laptop that let us take our work anywhere; from “social” meaning face-to-face meetings to social being a word that finds its place before “media.”
>
>
From that stance, privacy gives us time, and therefore protects time, to decide when to disclose or reveal something. The underlying issue with the parasite is that it is the curator of our profiles, and in that filtration journey we gave rise to, we have lost our ability to choose how others use our time (and life).
 
Changed:
<
<
We look at our current debates over privacy and think, “this is because of the internet revolution.” But in fact, a right to privacy has been alluded to since the very advent of our nation. The U.S. Constitution, as interpreted by the Supreme Court, recognizes a right to privacy in multiple amendments. Further, the first article addressing privacy was written by the future Justice Louis Brandeis (with Samuel Warren) in his 1890 Harvard Law Review article, stemming from the advent of photography and newspaper invasion into individuals’ homes. 1948 saw the U.N. Declaration of Human Rights address privacy, and soon after, in 1960, legal scholar William Prosser “outlined four torts that would allow someone whose privacy was violated…to sue the perpetrator for damages.” (1)
>
>
Feeding the profiles consumes time. We post because the likes, comments, and virtual interactions affirm or reinforce the virtual being we choose to share with our selected community. We believe we have control over what we share and with whom. Still, the reality is that every click diminishes freedom, extinguishes privacy, and deprives us of the only thing we own: time.
 
Added:
>
>
The outcome is our inability to reflect and pause, because our consciousness of time is limited by immediacy, neediness, and over-exposure. And the worst part is that the idea of being infinite humans, in the microcosmic stance, vanishes under the constant pressure of self-reinvention instead of self-expansion.
 
Added:
>
>
The parasite is playing the game of “letting us choose.” What we have to realize is that every choice makes the parasite stronger. The parasite is using us to increase profits, triggering our decisions in the way that serves the parasite’s ends. The verbs "to share" and "to post," which are the core of the interactions on the platforms, have withered integration, insertion, and social construction. We handed over our privacy in exchange for a fake sense of control. We gave our time, memories, and the idea of integrity in exchange for a false self-made identity that lacks authenticity and freedom—a view self-imposed by the parasite.
 
Added:
>
>
Humanity has dealt with the question of eternity and infinity since we first articulated ideas. To overcome the fact that our nature is limited by time, people used to write, paint, have children, and teach. By shifting the idea of “eternity” onto platforms that hold and “save” our memories, our approach to eternity has rotted. To illustrate this, I want to recall when Don Quixote found out, in his conversation with Sansón Carrasco, that his adventures were a topic of discussion among the students at the University of Salamanca. For him, being public, discussed, and remembered was an outcome, not a decisive purpose. He didn't act in order to be a topic. By the course of his actions, he became a character and, as a result, a subject of discussion. The adventures of the lions, the windmills, and the galley slaves were public, and some read those actions as insanity, others as genius.
 
Changed:
<
<

The Modern Issues

In the past, such concerns were largely driven by individuals not having control over the actions of others—of the press taking photos, of the government invading their homes. However, in today’s age the concern is individuals’ own ignorance of, or willingness to forgo, privacy in exchange for service. In an era of programmatic, targeted advertising, it’s easy to give up our names, ages, emails, and phone numbers for the convenience and range of services that make life easier, often with the added allure of such services being free.

Earlier this month, former Facebook employee Frances Haugen released files revealing the company’s internal research on the impact of Instagram on teenage girls. A key statistic highlighted in the media is that “32 percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse” (2). One response notes that children under thirteen aren’t even supposed to be making accounts, because data collection on children under that age goes against our country’s privacy laws. Yet, I know many of my classmates signed up for Facebook before they were thirteen with fake birthdays. Facebook also mentioned the possibility of creating “Instagram Kids.”

Similarly, humans invariably offer up their data. Sometimes this is due simply to being unaware of what they’re revealing by doing so (as with the military base that was revealed when soldiers decided to compete with each other, uploading their fitness tracker data in the process and creating a map of their exercise routes). In other cases, we do so for convenience, as with the FreeStyle Libre sensors that have been using AI to recommend personalized diets based on individuals’ glucose levels (4).

Attempts at Solving The Issue

Apple created a lot of buzz (and some very creative advertising campaigns) when it released a pop-up window that notifies users when an app is tracking their data, allowing users to prevent the app from doing so (3). Many small businesses and apps were upset by the change, arguing that this was how they allowed users to access their services for free. Facebook responded by saying that it was attempting to create a method of advertising that doesn’t rely on user data (3). But is it really that easy to dismantle a $350 billion digital industry? These companies have different views of how much they should roll back such advertising.

While Big Tech attempts to revamp its own privacy systems, can and should users do more to take privacy into their own hands? I’m positive that many people would rather use an app for free than pay to remove advertising (as evidenced by the numerous app-store complaints when apps roll out pay-for-no-ads versions of their products). There has been a growing industry of products that market themselves as shirking ads (for example Brave, the private web browser), but how many people choose to use such a service?

Furthermore, what is the state of media literacy in our country? One of the first ways we can protect young children who will undeniably sign up for these enticing social media services is to inform them about what they give up in exchange for access to endless streams of videos, 150-word posts, and their friends’ photos.

In the long run, I would argue that this education is a must if we’re to convince people to pay subscription fees in lieu of paying for such services with their data.

(1) https://safecomputing.umich.edu/privacy/history-of-privacy-timeline

(2) https://www.nytimes.com/2021/10/13/parenting/instagram-teen-girls-body-image.html

(3) https://www.nytimes.com/2021/09/16/technology/digital-privacy.html

(4) https://www.theguardian.com/lifeandstyle/2021/oct/05/intimate-data-can-a-person-who-tracks-their-steps-sleep-and-food-ever-truly-be-free

>
>
Our desire for control shows that our aim to be remembered is vague, because we rely solely on feeding the parasite. If we aim to change this reality, we need to cut ties with the parasite: disable our social media accounts and the tracking that apps maintain over our lives. We need to stop posting on those platforms that instrumentalize our time to deprive us of individuality.

TWikiGuestFirstEssay 30 - 01 Nov 2021 - Main.RochishaTogare
Line: 1 to 1
 
META TOPICPARENT name="FirstEssay"

Changed:
<
<

>
>

A Growing Need to Protect Privacy in an Era of Growing Willingness to Give it Up

 
Changed:
<
<

:

>
>

The Advent of Privacy Challenges

 
Added:
>
>
Those of us born in the 90s remember the in-between: the shift from people carrying cellphones to people carrying cellphones that could connect to the internet; from being able to use a bulky computer in a stationary place to carrying around a laptop that let us take our work anywhere; from “social” meaning face-to-face meetings to social being a word that finds its place before “media.”
 
Added:
>
>
We look at our current debates over privacy and think, “this is because of the internet revolution.” But in fact, a right to privacy has been alluded to since the very advent of our nation. The U.S. Constitution, as interpreted by the Supreme Court, recognizes a right to privacy in multiple amendments. Further, the first article addressing privacy was written by the future Justice Louis Brandeis (with Samuel Warren) in his 1890 Harvard Law Review article, stemming from the advent of photography and newspaper invasion into individuals’ homes. 1948 saw the U.N. Declaration of Human Rights address privacy, and soon after, in 1960, legal scholar William Prosser “outlined four torts that would allow someone whose privacy was violated…to sue the perpetrator for damages.” (1)
 
Deleted:
<
<

 
Added:
>
>

The Modern Issues

 
Added:
>
>
In the past, such concerns were largely driven by individuals not having control over the actions of others—of the press taking photos, of the government invading their homes. However, in today’s age the concern is individuals’ own ignorance of, or willingness to forgo, privacy in exchange for service. In an era of programmatic, targeted advertising, it’s easy to give up our names, ages, emails, and phone numbers for the convenience and range of services that make life easier, often with the added allure of such services being free.

Earlier this month, former Facebook employee Frances Haugen released files revealing the company’s internal research on the impact of Instagram on teenage girls. A key statistic highlighted in the media is that “32 percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse” (2). One response notes that children under thirteen aren’t even supposed to be making accounts, because data collection on children under that age goes against our country’s privacy laws. Yet, I know many of my classmates signed up for Facebook before they were thirteen with fake birthdays. Facebook also mentioned the possibility of creating “Instagram Kids.”

Similarly, humans invariably offer up their data. Sometimes this is due simply to being unaware of what they’re revealing by doing so (as with the military base that was revealed when soldiers decided to compete with each other, uploading their fitness tracker data in the process and creating a map of their exercise routes). In other cases, we do so for convenience, as with the FreeStyle Libre sensors that have been using AI to recommend personalized diets based on individuals’ glucose levels (4).

Attempts at Solving The Issue

Apple created a lot of buzz (and some very creative advertising campaigns) when it released a pop-up window that notifies users when an app is tracking their data, allowing users to prevent the app from doing so (3). Many small businesses and apps were upset by the change, arguing that this was how they allowed users to access their services for free. Facebook responded by saying that it was attempting to create a method of advertising that doesn’t rely on user data (3). But is it really that easy to dismantle a $350 billion digital industry? These companies have different views of how much they should roll back such advertising.

While Big Tech attempts to revamp its own privacy systems, can and should users do more to take privacy into their own hands? I’m positive that many people would rather use an app for free than pay to remove advertising (as evidenced by the numerous app-store complaints when apps roll out pay-for-no-ads versions of their products). There has been a growing industry of products that market themselves as shirking ads (for example Brave, the private web browser), but how many people choose to use such a service?

Furthermore, what is the state of media literacy in our country? One of the first ways we can protect young children who will undeniably sign up for these enticing social media services is to inform them about what they give up in exchange for access to endless streams of videos, 150-word posts, and their friends’ photos.

In the long run, I would argue that this education is a must if we’re to convince people to pay subscription fees in lieu of paying for such services with their data.

(1) https://safecomputing.umich.edu/privacy/history-of-privacy-timeline

(2) https://www.nytimes.com/2021/10/13/parenting/instagram-teen-girls-body-image.html

(3) https://www.nytimes.com/2021/09/16/technology/digital-privacy.html

(4) https://www.theguardian.com/lifeandstyle/2021/oct/05/intimate-data-can-a-person-who-tracks-their-steps-sleep-and-food-ever-truly-be-free


TWikiGuestFirstEssay 29 - 26 Oct 2021 - Main.KatharinaRogosch
Line: 1 to 1
 
META TOPICPARENT name="FirstEssay"
Deleted:
<
<
A Growing Concern For Privacy
 
Deleted:
<
<

What

 
Added:
>
>

 
Deleted:
<
<
In the modern world of technology, where internet mammoths such as Google and Facebook collect large amounts of personal data, the regulation of such collection is essential. The interconnected relationship between data and individuals’ privacy over their own data needs to be examined to understand whether the current framework can achieve its own aims. This requires a two-step analysis: first, an examination of the regulation of data privacy and whether the standards imposed actually result in said protection; and second, an evaluation as to whether privacy should be protected by other means than it currently is.
 
Changed:
<
<
To aid in this analysis, the European General Data Protection Regulation (hereinafter “GDPR”) will be examined, because it is one of the strictest data protection laws enacted worldwide, and an examination of such a strict privacy and data-protection standard should provide clarity as to whether adequate privacy protections have been achieved.
>
>

:

 
Deleted:
<
<

General Data Protection Regulation:

 
Deleted:
<
<
Within the European Union, data protection is secured and regulated by the General Data Protection Regulation. The GDPR aims “to give citizens and residents of the European Union and the European Economic Area control over their personal data, and to simplify the regulatory environment for international business by fully harmonizing the national data laws of its member states”. However, the GDPR does not only concern privacy; rather, its objectives relate to the “fundamental rights and freedoms of natural persons” surrounding the “processing and free movement of personal data”. Consequently, the GDPR also aims to address the rising power of Big Data practices and the “economic imbalance between [these companies] on one hand and consumers on the other”.
 
Changed:
<
<
The GDPR addresses the power imbalance between data controllers, who derive significant commercial benefit from the use of data, and users who bear significant harms associated with the usage of their own data. The legislation does this by placing explicit consent and anonymization techniques at the core of data processing. However, by focusing on these two specific aspects, the European legislators construct “structural regulatory spaces that fail to understand the ongoing challenges in delivering acceptable and effective regulation”. By exclusively concentrating on consent and anonymization techniques as a way to ensure data privacy and security, the GDPR fails to address not only the issues these concepts create but also how these should be implemented by app developers.
>
>

 
Deleted:
<
<
There are two issues created by the GDPR that consequently significantly affect individual users’ privacy and data. First, by using individuals’ consent as the gatekeeper to the legal processing of data, the GDPR places heavy emphasis on internet platforms themselves to fulfill the necessary GDPR standards. While simply obtaining users’ consent to the processing of their personal data does not make the processing of such data lawful, the fact that it is up to internet organizations themselves to implement adequate privacy standards says very little about the protection that such standards afford in reality. Second, the GDPR stipulates that when data is anonymized, explicit consent to the processing of the collected data is no longer required. At its core, by placing emphasis on anonymization techniques, the GDPR aims to reduce harmful forms of identification by preventing the singling out of natural persons and their personal information. However, as Narayanan and Shmatikov’s paper on de-anonymization of large datasets and Oberhaus’s article on anonymous browsing data underline, de-anonymization of large data sets is standard industry practice for a majority of internet platforms.

Is the GDPR the right standard for privacy protection?

As outlined above, there are several issues associated with using the GDPR as the standard for privacy protection, the two biggest being treating consent as the standard for privacy and the ability to de-anonymize data. Despite these issues, there are a number of benefits associated with using the GDPR as the standard for data protection, namely that it functions in what Professor Moglen, in his “The Union, May it Be Preserved” speech, described as a transactional sphere. While Professor Moglen sees this as a problematic quality of the GDPR, the fact that the GDPR treats users' consent to the collection and usage of their data as a “transaction” for which they receive the benefit of accessing internet platforms means that the regulation can easily be implemented by any type of corporation. The issue with the GDPR is that the standards of implementation are too lax, and when the GDPR took effect in 2018 the impact of de-anonymization technologies was not sufficiently considered. One could argue that if amendments tackling de-anonymization technologies were implemented into the GDPR, the current privacy issues would be adequately addressed. However, such an argument fails to address the fundamental power imbalance created by internet platforms such as Google, Yahoo, and Facebook, where individual users are not given a choice as to how their data is processed.

Instead of working within the confines of the GDPR as it currently exists, Professor Moglen argues that we need to challenge our basic assumption that privacy and our data are part of the “transaction”. This idea has merit: why should our own personal data be a transactional token by which our privacy is achieved? In this sense, Professor Moglen’s definition of privacy as “ecological” and “relational among people”, rather than an issue of individual consent, seems to provide a stricter standard of privacy protection. While an ecological conception of privacy could provide a much stricter standard of individuals’ data protection, the means of achieving such protection are less concrete. Namely, what standard of privacy is going to be the baseline against which all protection is measured (if an ecological protection of privacy is adopted akin to environmental protection)?

 



TWikiGuestFirstEssay 28 - 25 Oct 2021 - Main.RochishaTogare
Line: 1 to 1
 
META TOPICPARENT name="FirstEssay"
Added:
>
>
A Growing Concern For Privacy
 
Changed:
<
<

Does the GDPR adequately protect individuals' privacy?

>
>

What

 

In the modern world of technology, where internet mammoths such as Google and Facebook collect large amounts of personal data, the regulation of such collection is essential. The interconnected relationship between data and individuals’ privacy over their own data needs to be examined to understand whether the current framework can achieve its own aims. This requires a two-step analysis: first, an examination of the regulation of data privacy and whether the standards imposed actually result in said protection; and second, an evaluation as to whether privacy should be protected by other means than it currently is.


TWikiGuestFirstEssay 27 - 23 Oct 2021 - Main.KatharinaRogosch
Line: 1 to 1
 
META TOPICPARENT name="FirstEssay"
Changed:
<
<
The Matrix: A Non-Fiction
>
>
 
Deleted:
<
<
Surveillance is defined by the Merriam-Webster dictionary as a close watch kept over someone or something (as by a detective). Traditionally, it was an invasion of privacy with the goal of exposing illicit activities. Surveillance was always associated with feelings of fear, anxiety, stress, and distrust. In the past century, a new form of surveillance has emerged. As Shoshana Zuboff described it, surveillance capitalism depends on exploiting and controlling human nature. Companies like Google and Facebook extract information from us and employ it to re-design our behavior with the goal of profit maximization. Quite simply, these companies are using technology not only to invade our privacy, but also to gradually, slightly, imperceptibly change our own behavior and perception. The technologies being used target our conscious and subconscious behavior and emotions. They tap into our desires and manipulate us into reacting the way they want us to. We have willfully surrendered our free will and agreed to be manipulated and engaged rather than actively making free, undirected choices. Their goal is to automate us. Furthermore, surveillance capitalism is even being used by governments to control societies as a whole by affecting elections, suppressing opposition, and directing the population to adopt the government's way of thinking. An example of that would be Russia's use of Facebook and Twitter in 2018, creating accounts and spreading polarizing misinformation in order to manipulate Americans into casting their votes for Donald Trump.
 
Changed:
<
<
The Matrix is a science fiction movie about humans trapped in a simulation being controlled by machines, while other humans play with the switches. In the world of surveillance capitalism, is the Matrix still science fiction?
>
>

Does the GDPR adequately protect individuals' privacy?

 
Deleted:
<
<
Why do we "trust this device"?
 
Changed:
<
<
It is no longer a secret that these tools are being used to surveil us and modify our behavior in order to maximize company profits. Yet somehow, even as Mark Zuckerberg testified before the Senate after the Cambridge Analytica scandal, Facebook was still making billions. We do not fear this surveillance because the tools it uses are attractive objects, give us a false sense of control, and have embedded themselves into our existence. Fear is an emotional response largely motivated by what we perceive as threatening our existence. The tools that are surveilling us are purposefully designed in a way that attracts us to them. They exploit our innate attraction to beauty. The nature of tech today has made user experience and user interface design more important than ever before. The products are far more elegant than they used to be. They focus on colors, shapes, clicks, feel, and ease of use to make the product more appealing to the senses. We also perceive these elegant tools as harmless, immovable objects, incapable of threatening our existence.
>
>
In the modern world of technology, where internet mammoths such as Google and Facebook collect large amounts of personal data, the regulation of such collection is essential. The interconnected relationship between data and individuals’ privacy over their own data needs to be examined to understand whether the current framework can achieve its own aims. This requires a two-step analysis: first, an examination of the regulation of data privacy and whether the standards imposed actually result in said protection; and second, an evaluation as to whether privacy should be protected by other means than it currently is.
 
Changed:
<
<
Furthermore, we are told that these tools are there to serve us, giving us a sense of control. Meanwhile, we have become prey to these tools, which are designed to intentionally get us addicted, stripping us of actual control. Stanford University has a persuasive design lab whose purpose is to teach the art of persuasion to its engineers and product designers, including strategies such as placing ‘hot triggers’ in the path of motivated users. Such hot triggers include colourful icons that glow with a light pulse when notifications remain unread, a smartwatch poking you to ensure you don't miss an update, or a "next episode" box on Netflix. Even though we know it's time to go to bed, we don't turn off the TV; we let the next episodes play automatically. The timer shown before the next episode plays is there to give us a sense of control. We, the users of email, social media, health apps, and smartphones, are in a continuous state of distraction. Without knowing why, we find ourselves on social media, unintentionally jumping from one platform to the other.
>
>
To aid in this analysis, the European General Data Protection Regulation (hereinafter “GDPR”) will be examined, because it is one of the strictest data protection laws enacted worldwide, and an examination of such a strict privacy and data-protection standard should provide clarity as to whether adequate privacy protections have been achieved.
 
Deleted:
<
<
Finally, these tools are embedded into our daily lives, and we have relied on them so enormously that we are unable to envision an alternative. Google, Facebook, Apple, et al. want to render the choices they want us to make easier, and the choices they don’t want us to make harder, or nonexistent. These tools are the new norm, and we do not fear what we know, or think we know. We genuinely believe that we cannot function, keep track of our events, find a date, find a job, have a social life, listen to music, or stay healthy without these tools. We have thus surrendered to a fascist way of thinking in which we don't question things as long as they are working.
 
Changed:
<
<
Getting Out of the Matrix
>
>

General Data Protection Regulation:

 
Deleted:
<
<
We need to start by being aware of the reality of things. This attractive "object" has taken on the form of a physiological nervous system capable of creeping into our conscious and subconscious mind and manipulating our behavior. This "being" is a frightening threat to humans and to our freedom of thought, which should activate our defense mechanism and response. We must educate ourselves and those around us that there are alternatives: we can use technology that allows us to live freely. Most importantly, we must educate the generations that are growing up believing that this physiological nervous system is their security blanket. We must teach them how to code, so they can fight it from the inside.
 
Changed:
<
<
Once we are alert and aware, we must take actual control by pushing back instead of being pushed around. We refuse to be the submissive, passive, engaged victims of these tools. We can start by not swiping up for advertisements, turning off all notifications, not watching another episode, and gradually decreasing our interaction with them. Our time, attention, and freedom of choice are invaluable, and we must protect them. Get out of the Matrix.
>
>
Within the European Union, data protection is secured and regulated by the General Data Protection Regulation. The GDPR aims “to give citizens and residents of the European Union and the European Economic Area control over their personal data, and to simplify the regulatory environment for international business by fully harmonizing the national data laws of its member states”. However, the GDPR does not only concern privacy; rather, its objectives relate to the “fundamental rights and freedoms of natural persons” surrounding the “processing and free movement of personal data”. Consequently, the GDPR also aims to address the rising power of Big Data practices and the “economic imbalance between [these companies] on one hand and consumers on the other”.

The GDPR addresses the power imbalance between data controllers, who derive significant commercial benefit from the use of data, and users who bear significant harms associated with the usage of their own data. The legislation does this by placing explicit consent and anonymization techniques at the core of data processing. However, by focusing on these two specific aspects, the European legislators construct “structural regulatory spaces that fail to understand the ongoing challenges in delivering acceptable and effective regulation”. By exclusively concentrating on consent and anonymization techniques as a way to ensure data privacy and security, the GDPR fails to address not only the issues these concepts create but also how these should be implemented by app developers.

There are two issues created by the GDPR that consequently significantly affect individual users’ privacy and data. First, by using individuals’ consent as the gatekeeper to the legal processing of data, the GDPR places heavy emphasis on internet platforms themselves to fulfill the necessary GDPR standards. While simply obtaining users’ consent to the processing of their personal data does not make the processing of such data lawful, the fact that it is up to internet organizations themselves to implement adequate privacy standards says very little about the protection that such standards afford in reality. Second, the GDPR stipulates that when data is anonymized, explicit consent to the processing of the collected data is no longer required. At its core, by placing emphasis on anonymization techniques, the GDPR aims to reduce harmful forms of identification by preventing the singling out of natural persons and their personal information. However, as Narayanan and Shmatikov’s paper on de-anonymization of large datasets and Oberhaus’s article on anonymous browsing data underline, de-anonymization of large data sets is standard industry practice for a majority of internet platforms.
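The de-anonymization point can be illustrated with a toy linkage attack: join an "anonymized" dataset to a public one on shared quasi-identifiers. All records below are fabricated, and the join is deliberately simple; Narayanan and Shmatikov's actual technique matches sparse, noisy records (ratings, timestamps) rather than exact fields.

<verbatim>
# Toy linkage attack. All records are fabricated; real attacks join on
# many noisy quasi-identifiers rather than exact matches.

anonymized = [  # name removed, quasi-identifiers kept
    {"zip": "10027", "birth_year": 1990, "sex": "F", "diagnosis": "X"},
    {"zip": "10002", "birth_year": 1985, "sex": "M", "diagnosis": "Y"},
]
public = [  # e.g., a voter roll or social-media profile
    {"name": "Alice", "zip": "10027", "birth_year": 1990, "sex": "F"},
    {"name": "Bob", "zip": "10002", "birth_year": 1985, "sex": "M"},
]

QUASI = ("zip", "birth_year", "sex")

def reidentify(anon_rows, public_rows):
    """Join the two datasets on the quasi-identifier columns."""
    return [(p["name"], a["diagnosis"])
            for a in anon_rows
            for p in public_rows
            if all(a[q] == p[q] for q in QUASI)]

print(reidentify(anonymized, public))
# [('Alice', 'X'), ('Bob', 'Y')] -- "anonymized" data re-identified
</verbatim>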

Is the GDPR the right standard for privacy protection?

As outlined above, there are several issues associated with using the GDPR as the standard for privacy protection, the two biggest being treating consent as the standard for privacy and the ability to de-anonymize data. Despite these issues, there are a number of benefits associated with using the GDPR as the standard for data protection, namely that it functions in what Professor Moglen, in his “The Union, May it Be Preserved” speech, described as a transactional sphere. While Professor Moglen sees this as a problematic quality of the GDPR, the fact that the GDPR treats users' consent to the collection and usage of their data as a “transaction” for which they receive the benefit of accessing internet platforms means that the regulation can easily be implemented by any type of corporation. The issue with the GDPR is that the standards of implementation are too lax, and when the GDPR took effect in 2018 the impact of de-anonymization technologies was not sufficiently considered. One could argue that if amendments tackling de-anonymization technologies were implemented into the GDPR, the current privacy issues would be adequately addressed. However, such an argument fails to address the fundamental power imbalance created by internet platforms such as Google, Yahoo, and Facebook, where individual users are not given a choice as to how their data is processed.

Instead of working within the confines of the GDPR as it currently exists, Professor Moglen argues that we need to challenge our basic assumption that privacy and our data are part of the “transaction”. This idea has merit: why should our own personal data be a transactional token by which our privacy is achieved? In this sense, Professor Moglen’s definition of privacy as “ecological” and “relational among people”, rather than an issue of individual consent, seems to provide a stricter standard of privacy protection. While an ecological conception of privacy could provide a much stricter standard of individuals’ data protection, the means of achieving such protection are less concrete. Namely, what standard of privacy is going to be the baseline against which all protection is measured (if an ecological protection of privacy is adopted akin to environmental protection)?


TWikiGuestFirstEssay 26 - 22 Oct 2021 - Main.NathalieNoura
Line: 1 to 1
 
META TOPICPARENT name="FirstEssay"
The Matrix: A Non-Fiction
Added:
>
>
Surveillance is defined by the Merriam-Webster dictionary as a close watch kept over someone or something (as by a detective). Traditionally, it was an invasion of privacy with the goal of exposing illicit activities. Surveillance was always associated with feelings of fear, anxiety, stress, and distrust. In the past century, a new form of surveillance has emerged. As Shoshana Zuboff described it, surveillance capitalism depends on exploiting and controlling human nature. Companies like Google and Facebook extract information from us and employ it to re-design our behavior with the goal of profit maximization. Quite simply, these companies are using technology not only to invade our privacy, but also to gradually, slightly, imperceptibly change our own behavior and perception. The technologies being used target our conscious and subconscious behavior and emotions. They tap into our desires and manipulate us into reacting the way they want us to. We have willfully surrendered our free will and agreed to be manipulated and engaged rather than actively making free, undirected choices. Their goal is to automate us. Furthermore, surveillance capitalism is even being used by governments to control societies as a whole by affecting elections, suppressing opposition, and directing the population to adopt the government's way of thinking. An example of that would be Russia's use of Facebook and Twitter in 2018, creating accounts and spreading polarizing misinformation in order to manipulate Americans into casting their votes for Donald Trump.

TWikiGuestFirstEssay 25 - 22 Oct 2021 - Main.NathalieNoura
Line: 1 to 1
 
META TOPICPARENT name="FirstEssay"
The Matrix: A Non-Fiction

Surveillance is defined by the Merriam-Webster dictionary as a close watch kept over someone or something (as by a detective). Traditionally, it was an invasion of privacy with the goal of exposing illicit activities. Surveillance was always associated with feelings of fear, anxiety, stress, and distrust.

TWikiGuestFirstEssay 24 - 22 Oct 2021 - Main.NathalieNoura
Line: 1 to 1
 
META TOPICPARENT name="FirstEssay"
The Matrix: A Non-Fiction

Surveillance is defined by the Merriam-Webster dictionary as a close watch kept over someone or something (as by a detective). Traditionally, it was an invasion of privacy with the goal of exposing illicit activities. Surveillance was always associated with feelings of fear, anxiety, stress, and distrust. In the past century, a new form of surveillance has emerged. As Shoshana Zuboff described it, surveillance capitalism depends on exploiting and controlling human nature. Companies like Google and Facebook extract information from us and employ it to re-design our behavior with the goal of profit maximization. Quite simply, these companies are using technology not only to invade our privacy, but also to gradually, slightly, imperceptibly change our own behavior and perception.
Changed:
<
<
The technologies being used target our conscious and subconscious behavior and emotions. They tap into our desires and manipulate us into reacting the way they want us to. We have willfully surrendered our free will and agreed to be manipulated and engaged rather than actively making free, undirected choices. Their goal is to automate us. Surveillance capitalism is even being used by governments to control societies as a whole by affecting elections, suppressing opposition, and directing the population to adopt the government's way of thinking.
>
>
The technologies being used target our conscious and subconscious behavior and emotions. They tap into our desires and manipulate us into reacting the way they want us to. We have willfully surrendered our free will and agreed to be manipulated and engaged rather than actively making free, undirected choices. Their goal is to automate us. Furthermore, surveillance capitalism is even being used by governments to control societies as a whole by affecting elections, suppressing opposition, and directing the population to adopt the government's way of thinking. An example of that would be Russia's use of Facebook and Twitter in 2018, creating accounts and spreading polarizing misinformation in order to manipulate Americans into casting their votes for Donald Trump.
The Matrix is a science fiction movie about humans trapped in a simulation being controlled by machines, while other humans play with the switches. In the world of surveillance capitalism, is the Matrix still science fiction?

Why do we "trust this device"?

Changed:
<
<
It is no longer a secret that these tools are being used to surveil us and modify our behavior in order to maximize company profits. Yet somehow, even as Mark Zuckerberg testified before the Senate after the Cambridge Analytica scandal, Facebook was still making billions. So why is this type of surveillance not associated with fear and distrust? We do not fear it because the tools are attractive objects, give us a false sense of control, and have embedded themselves into our existence. Fear is an emotional response largely motivated by what we perceive as threatening our existence. This response can then be tempered by a conscious realization of the situation. The platforms surveilling our every move and mood are purposefully designed in a way that attracts us to them, as humans are attracted to beauty. They are neither violent nor ugly. We perceive them as immovable objects, incapable of threatening our existence. Meanwhile, this "object" is taking on a physiological nervous system of its own, capable of creeping into our conscious and subconscious mind. This nervous system is learning how to manipulate the human mind and take advantage of our insecurities. No wonder Google, Facebook, Apple, ByteDance, and all other companies pour millions of dollars into product design. Furthermore, this nervous system is marketed as a set of tools that are there to serve us and make our lives easier and more efficient, giving us a false sense of control. Meanwhile, we have become prey to this instrument. We, the users of email, social media, health apps, and smartphones in general, are in a continuous state of distraction. Without knowing why, we find ourselves on social media and unintentionally jump from one platform to the other, and before we know it more than an hour has gone by. We cannot stop. Every day a new tool emerges and we dive right into it. An app for monitoring our steps and calorie count goes on the market; we buy into it, and soon enough we are no longer able to monitor ourselves and our health without that app. We are clearly not in control. Finally, these tools are embedded into our daily lives, and we have relied on them so enormously that we are unable to envision an alternative. These tools are the new norm, and we do not fear what we know, or think we know. We start believing that we cannot function, keep track of our events, find a date, find a job, have a social life, listen to music, or stay healthy without these tools. We have surrendered to a fascist way of thinking in which we don't question things if the trains run on time.
>
>
It is no longer a secret that these tools are being used to surveil us and modify our behavior in order to maximize company profits. Yet somehow, even as Mark Zuckerberg testified before the Senate after the Cambridge Analytica scandal, Facebook was still making billions. We do not fear this surveillance because the tools it uses are attractive objects, give us a false sense of control, and have embedded themselves into our existence. Fear is an emotional response largely motivated by what we perceive as threatening our existence. The tools that are surveilling us are purposefully designed in a way that attracts us to them. They exploit our innate attraction to beauty. The nature of tech today has made user experience and user interface design more important than ever before. The products are far more elegant than they used to be. They focus on colors, shapes, clicks, feel, and ease of use to make the product more appealing to the senses. We also perceive these elegant tools as harmless, immovable objects, incapable of threatening our existence.
 
Changed:
<
<
Push Back

We need to start seeing things the way they really are. The internet is a physiological nervous system. It is a being of its own that can be used by companies and governments to surveil, control, and alter our behavior. This is a real, frightening, violent threat to human beings and our freedom of thought. As such, we need to treat the beautiful tools which are at our disposal, but not under our control, as a hostile agent with a mind of its own. We must activate our defense mechanism. We cannot put the genie back in the bottle, nor should we; technology in itself is neutral. First, we must fear it and treat it as if we were being followed around the clock by a being that wants to destroy us. It should be that dramatic, in order to activate our defense mechanism. Since we are not going to get rid of this physiological nervous system, we must internalize the fear and control it. We are not its submissive, passive, engaged victims. We can start by taking baby steps, like not swiping up, turning off all notifications, and decreasing our interaction with it. An even better option would be going cold turkey: getting rid of all of the tools that are surveilling us for profit or control, and replacing them with different ones. Personally, I'm a baby-steps kind of person. Second, we must educate ourselves and those around us that there are other alternatives: tools that do not collect, store, and process data. Finally and most importantly, we must educate the generations which are growing up believing that this physiological nervous system is their security blanket. We must teach them how to code! They will understand this creature, how it works, and fight it with its own tools.
>
>
Furthermore, we are told that these tools are there to serve us, giving us a sense of control. Meanwhile, we have become prey to these tools, which are designed to intentionally get us addicted, stripping us of actual control. Stanford University has a persuasive design lab whose purpose is to teach the art of persuasion to its engineers and product designers, including strategies such as placing ‘hot triggers’ in the path of motivated users. Such hot triggers include colourful icons that glow with a light pulse when notifications remain unread, a smartwatch poking you to ensure you don't miss an update, or a "next episode" box on Netflix. Even though we know it's time to go to bed, we don't turn off the TV; we let the next episodes play automatically. The timer shown before the next episode plays is there to give us a sense of control. We, the users of email, social media, health apps, and smartphones, are in a continuous state of distraction. Without knowing why, we find ourselves on social media, unintentionally jumping from one platform to the other.

Finally, these tools are embedded into our daily lives, and we have relied on them so enormously that we are unable to envision an alternative. Google, Facebook, Apple, et al. want to render the choices they want us to make easier, and the choices they don’t want us to make harder, or nonexistent. These tools are the new norm, and we do not fear what we know, or think we know. We genuinely believe that we cannot function, keep track of our events, find a date, find a job, have a social life, listen to music, or stay healthy without these tools. We have thus surrendered to a fascist way of thinking in which we don't question things as long as they are working.

Getting Out of the Matrix

We need to start by being aware of the reality of things. This attractive "object" has taken on the form of a physiological nervous system capable of creeping into our conscious and subconscious mind and manipulating our behavior. This "being" is a frightening threat to humans and to our freedom of thought, which should activate our defense mechanism and response. We must educate ourselves and those around us that there are alternatives: we can use technology that allows us to live freely. Most importantly, we must educate the generations that are growing up believing that this physiological nervous system is their security blanket. We must teach them how to code, so they can fight it from the inside.

Once we are alert and aware, we must take actual control by pushing back instead of being pushed around. We refuse to be the submissive, passive, engaged victims of these tools. We can start by not swiping up for advertisements, turning off all notifications, not watching another episode, and gradually decreasing our interaction with them. Our time, attention, and freedom of choice are invaluable, and we must protect them. Get out of the Matrix.


TWikiGuestFirstEssay 23 - 22 Oct 2021 - Main.NathalieNoura
Line: 1 to 1
 
META TOPICPARENT name="FirstEssay"
The Matrix: A Non-Fiction

Surveillance is defined by the Merriam-Webster dictionary as a close watch kept over someone or something (as by a detective). Traditionally, it was an invasion of privacy with the goal of exposing illicit activities. Surveillance was always associated with feelings of fear, anxiety, stress, and distrust. In the past century, a new form of surveillance has emerged. As Shoshana Zuboff described it, surveillance capitalism depends on exploiting and controlling human nature. Companies like Google and Facebook extract information from us and employ it to re-design our behavior with the goal of profit maximization. Quite simply, these companies are using technology not only to invade our privacy, but also to gradually, slightly, imperceptibly change our own behavior and perception. The technologies being used target our conscious and subconscious behavior and emotions. They tap into our desires and manipulate us into reacting the way they want us to. We have willfully surrendered our free will and agreed to be manipulated and engaged rather than actively making free, undirected choices. Their goal is to automate us. Surveillance capitalism is even being used by governments to control societies as a whole by affecting elections, suppressing opposition, and directing the population to adopt the government's way of thinking.
Changed:
<
<
The Matrix is a “science fiction” movie about humans trapped in a simulation controlled by machines, while other humans play with the switches. In the world of surveillance capitalism, is The Matrix still science fiction?
>
>
The Matrix is a science fiction movie about humans trapped in a simulation controlled by machines, while other humans play with the switches. In the world of surveillance capitalism, is The Matrix still science fiction?
 
Changed:
<
<
Why are we not scared of surveillance? It is no longer a secret that these tools are being used to surveil us and modify our behavior in order to maximize company profits. Yet even as Mark Zuckerberg testified before the Senate after the Cambridge Analytica scandal, Facebook was still making billions. So why do we not fear this surveillance? We do not fear it because the tools are attractive, give us a false sense of control, and have embedded themselves into our existence. Fear is an emotional response largely motivated by what we perceive as threatening our existence, and that response can be tempered by a conscious realization of the situation. The platforms surveilling our every move and mood are purposefully designed to attract us, as humans are attracted to beauty. They are neither violent nor ugly. We perceive them as objects, not as beings threatening our existence. All the while, this object is taking on the form of an interconnected physiological nervous system. Google, Facebook, Apple, ByteDance, and the other companies pour millions of dollars into product design. Moreover, they market themselves as tools that are there to serve us, giving us a false sense of control. Meanwhile we have become prey to the instrument. Users of email, social media, health apps, and smartphones in general are in a continuous state of distraction. Without knowing why, we find ourselves on social media, unintentionally jumping from one platform to another. Finally, these tools are embedded into our daily lives, and we have come to rely on them so heavily that we are unable to envision an alternative. We feel we would be isolated from society, incapable of monitoring our health, lost, unable to find a date, and so on. We have surrendered to a fascist way of thinking in which no one questions things if the trains run on time. We got used to it, so why change it.
>
>
Why do we "trust this device"? It is no longer a secret that these tools are being used to surveil us and modify our behavior in order to maximize company profits. Yet somehow, even when Mark Zuckerberg is testifying before the Senate post the Cambridge Analytica scandal, Facebook was still making billions. So why is this type of surveillance not associated with fear and distrust ? We do not fear it because the tools are attractive objects, give us a false of control, and have embedded themselves into our existence. Fear is an emotional response largely motivated by what we perceive threatens our existence. This response can then be tempered by a conscious realization of the situation. These platforms surveilling our every move and mood are purposefully designed in a way that attracts us to it, like humans are attracted to beauty. It is not violent nor ugly. We perceive it as an immovable object, incapable of threatening our existence. Meanwhile this "object" is taking on a physiological nervous system capable of its own capable of creeping into our conscious and subconscious mind.This nervous system is learning how to manipulate the human mind and take advantage of our insecurities. No wonder Google, Facebook, Apple, ByteDance? and all other companies pour millions of dollars in product design. Furthermore, this nervous system is marketed as tools that are there to serve us and make our lives easier and more efficient, giving us a false sense of control. Meanwhile we have become preys to this instrument. We, the users of email, social media, health apps, and smartphones in general are in a continuous state of distraction. Without knowing why, we find ourselves on social media and unintentionally jump from one platform to the other, and before you know it more than an hour has gone by.We cannot stop. Everyday a new tool emerges and we dive right into it. An app for monitoring our steps and calorie count goes on the market. We buy into it, soon enough we are no longer able to monitor ourselves and our health without that app. We are clearly not in control. Finally, these tools are embedded into our daily lives and we have relied on them enormously that we are unable to envision an alternative.These tools are the new norm and we do not fear what we know, or think we know. We start believing that we cannot function, keep track of our events, find a date, find a job, have a social life, listen to music, stay healthy without these tools. We have surrendered into a fascist way of thinking in that don't question things if the trains run on time.
 
Deleted:
<
<
The Way Out We need to start seeing things the way they really are. The internet is a physiological nervous system. It is a being of its own that can be used by companies and governments to surveil, control, and alter our behavior. This is a real, frightening, violent threat to human beings and to our freedom of thought. As such, we need to treat these beautiful tools, which are at our disposal but not under our control, as a hostile agent with a structure and mind of its own, which activates our defense mechanism. The idea is not to put the genie back in the bottle, nor to completely shun these tools. Rather, the idea is to
Added:
>
>
Push Back We need to start seeing things the way they really are. The internet is a physiological nervous system. It is a being of its own that can be used by companies and governments to surveil, control, and alter our behavior. This is a real, frightening, violent threat to human beings and to our freedom of thought. As such, we need to treat these beautiful tools, which are at our disposal but not under our control, as a hostile agent with a mind of its own. We must activate our defense mechanism. We cannot put the genie back in the bottle, nor should we; technology in itself is neutral. First, we must fear it and treat it as if we were being followed around the clock by a being that wants to destroy us. It should be that dramatic in order to activate our defense mechanism. Since we are not going to get rid of this physiological nervous system, we must internalize the fear and control it. We are not its submissive, passive, engaged victims. We can start by taking baby steps: not swiping up, turning off all notifications, decreasing our interaction with it. An even better option would be going cold turkey: getting rid of all the tools that surveil us for profit or control and replacing them with different ones. Personally, I'm a baby-steps kind of person. Second, we must educate ourselves and those around us that there are alternatives: tools that do not collect, store, and process our data. Finally, and most importantly, we must educate the generations that are growing up believing this physiological nervous system is their security blanket. We must teach them how to code! They will understand this creature, how it works, and fight it with its own tools.

TWikiGuestFirstEssay 22 - 22 Oct 2021 - Main.NathalieNoura
Line: 1 to 1
 
META TOPICPARENT name="FirstEssay"
Changed:
<
<
As we learn to adapt to the internet society that captures almost all parts of our existence, the fair distribution of power and control in this new age remains strikingly imbalanced. Not only do a handful of mega corporations in the West monopolize this new continuum of our existence, but they also effectively prohibit billions of people from accessing this unprecedented source of knowledge and intellect. Trillions of dollars concentrated in a few companies are spent to solidify the perception that the “internet” itself is Google, Facebook, Twitter, and a few others. People are constantly kept in the dark by these companies and effectively prohibited from learning about the alternatives. Their data is continuously harvested for reasons unknown. As such, the most vital question that needs to be resolved, as we are further connected to each other by the minute, is how we are to democratize the internet. Traditional colonialism may be for the most part dead, but the few privileged classes in the West still have an immensely tight grip on the rest of humanity. Christopher Wylie’s “Mindf*ck: Cambridge Analytica and the Plot to Break America” clearly illustrated how data collected by Facebook, in exchange for free access to its platform, allowed private companies to carry out massive psychological campaigns on entire societies. They first use smaller datasets to test their influence on smaller societies located in one of the former colonies, where their actions can go completely unchecked. And Wylie shows that the immense data that Facebook harvests is then used in the Western societies that allow these companies to safely exist in the first place. This is not the first time in recent history that a source of power has been concentrated in the hands of the few. However, this is certainly the first time in history that a few companies have the power to determine how we think and act. In this regard, the power concentration of today is a lot scarier than the monopoly Standard Oil had in the early 20th century. A few companies in the world have dominion over our minds. And this is simply unacceptable. It is not possible to be free in the age of the internet when the central mechanism by which we are connected to one another is owned and manipulated by a few entities. The mission is clear: we need to break these monopolies and intervene in this market failure. But how can we achieve this mountain of a task? The challenge lies in the fact that we live in broken societies; we are controlled by crooked politicians who worship money and power. The monopolies of our current age have immense financial resources, certainly enough to influence our political agendas. Assuming that “We the People” can define our destiny as a society, we need to take immediate action in democratizing this next medium of human existence. First and foremost, we cannot simply allow hundreds of years of inequalities to persist in this new age. We have to accept knowledge as a basic human right and provide all the tools and basic knowledge necessary to allow people to make the best use of what the web offers them. This does not mean that we should destroy all differences in wealth and power overnight; it means that the next generation of humans must be allowed equal opportunities in their access to knowledge and intellectual growth.
Even though it goes against long-established patent rights, it is in the government’s, and the people’s, best interest to nationalize certain patents and allow free access to their use. Most groundbreaking scientific patents are the result of billions of dollars of government investment, meaning they were directly funded by the taxpayers. We cannot allow risks to be collectivized while profits are privatized. It is the right of the people to seize control of what belongs to them and allow the new generation to truly write their own destinies. A seizure and emancipation of patents would allow cheap reproduction of hardware and software around the world. This would mean that hundreds of millions of people could get their hands on machines that would allow them to truly connect with the web. However, such a scenario on its own does not guarantee that people would be truly freed. The bright and able minds of the world have to come together to write user-friendly online manuals, and perhaps digital classes, to teach ordinary people of all ages how to best utilize the machines they have. Governments are for the most part incredibly inept at equipping their people with the knowledge necessary to bring about social mobility. As such, even though we can perhaps trust our power as the people to free some patents, we cannot rely solely on the government to educate people in how to use computers for attaining knowledge. I believe this will necessarily be a collective effort of dedicated idealists around the world. Another issue that needs to be resolved in democratizing the internet is the monopolies in existence. Will they simply wither away if people know that they can use safe alternatives? I do not believe so; human beings are strange creatures after all, and the “sexy” services and hardware provided by the 21st-century monopolies are highly attractive to them. They do not even realize that they are being kept in the dark; their minds are controlled by people unknown to us, and they keep living in an exquisite Truman Show. As such, do we proceed with breaking Facebook and Google into smaller pieces? And even if we can, how are we to deal with the Chinese internet giants? Does humanity need to live in completely segregated online worlds? Or can we unilaterally break down the CCP’s monopoly over the Chinese internet? The problems are relatively easy to identify, but the solutions seem to be a lot more complicated.
>
>
The Matrix: A Non-Fiction Surveillance is defined by the Merriam-Webster dictionary as a close watch kept over someone or something (as by a detective). Traditionally, it was an invasion of privacy with the goal of exposing illicit activities, and it was always associated with feelings of fear, anxiety, stress, and distrust. In recent decades, a new form of surveillance has emerged. As Shoshana Zuboff describes it, surveillance capitalism depends on exploiting and controlling human nature. Companies like Google and Facebook extract information from us and employ it to re-design our behavior with the goal of profit maximization. Quite simply, these companies are using technology not only to invade our privacy, but also to gradually, slightly, imperceptibly change our behavior and perception. The technologies being used target our conscious and subconscious behavior and emotions. They tap into our desires and manipulate us into reacting the way they want us to. We have willfully surrendered our free will and agreed to be manipulated and engaged rather than actively making free, undirected choices. Their goal is to automate us. Surveillance capitalism is even used by governments to control societies as a whole: affecting elections, suppressing opposition, and directing the population to adopt the government's way of thinking.

The Matrix is a “science fiction” movie about humans trapped in a simulation controlled by machines, while other humans play with the switches. In the world of surveillance capitalism, is The Matrix still science fiction?

Why are we not scared of surveillance? It is no longer a secret that these tools are being used to surveil us and modify our behavior in order to maximize company profits. Yet even as Mark Zuckerberg testified before the Senate after the Cambridge Analytica scandal, Facebook was still making billions. So why do we not fear this surveillance? We do not fear it because the tools are attractive, give us a false sense of control, and have embedded themselves into our existence. Fear is an emotional response largely motivated by what we perceive as threatening our existence, and that response can be tempered by a conscious realization of the situation. The platforms surveilling our every move and mood are purposefully designed to attract us, as humans are attracted to beauty. They are neither violent nor ugly. We perceive them as objects, not as beings threatening our existence. All the while, this object is taking on the form of an interconnected physiological nervous system. Google, Facebook, Apple, ByteDance, and the other companies pour millions of dollars into product design. Moreover, they market themselves as tools that are there to serve us, giving us a false sense of control. Meanwhile we have become prey to the instrument. Users of email, social media, health apps, and smartphones in general are in a continuous state of distraction. Without knowing why, we find ourselves on social media, unintentionally jumping from one platform to another. Finally, these tools are embedded into our daily lives, and we have come to rely on them so heavily that we are unable to envision an alternative. We feel we would be isolated from society, incapable of monitoring our health, lost, unable to find a date, and so on. We have surrendered to a fascist way of thinking in which no one questions things if the trains run on time. We got used to it, so why change it.

The Way Out We need to start seeing things the way they really are. The internet is a physiological nervous system. It is a being of its own that can be used by companies and governments to surveil, control, and alter our behavior. This is a real, frightening, violent threat to human beings and to our freedom of thought. As such, we need to treat these beautiful tools, which are at our disposal but not under our control, as a hostile agent with a structure and mind of its own, which activates our defense mechanism. The idea is not to put the genie back in the bottle, nor to completely shun these tools. Rather, the idea is to


TWikiGuestFirstEssay 21 - 17 Oct 2021 - Main.NuriCemAlbayrak
Line: 1 to 1
Changed:
<
<
META TOPICPARENT name="WebPreferences"
Hegemony on the Internet and How Majoritarianism Can Exacerbate It

A Bloomberg article narrates the story of Anasuya Sengupta, an Indian activist who wanted to create a Wikipedia page for a prominent British-Nigerian human-rights activist, Bisi Adeleye-Fayemi. Adeleye-Fayemi is well known in activist circles for having helped end the Liberian Civil War. But she did not exist on Wikipedia, “which meant that as far as many people were concerned, she didn’t exist at all.” Sengupta decided to write a Wikipedia page on Adeleye-Fayemi. She cited several articles from the Nigerian press and clicked “publish.” But a few minutes later, a Wikipedia editor deleted her entry on the grounds that it was trifling. Sengupta eventually convinced Wikipedia editors to include the entry, but only after a former chair of the Wikimedia Foundation—who happened to be sitting next to her at a conference—intervened on her behalf.

This isn’t a one-off incident. North Americans and Europeans make up less than a quarter of the population on the internet, but they control most of its information. For example, most content on the internet relating to Africa is written by North-American and European men. Thenmozhi Soundarajan, a prominent Dalit activist based out of New York—who is engaged in publishing Dalit history on Wikipedia—talks about having to use a white male username on Wikipedia to have more articles approved. This is the Western hegemony of the internet, where white men act as gatekeepers of online knowledge. As Thenmozhi puts it, this hegemonic gatekeeping is an impediment “to reclaim[ing] the agency of a mass of people who have historically remained peripheral in the consciousness of the academia and the state, and to bring forth their stories of resistance, resilience, and heroism. In the words of Woodson, ‘If a race has no history, if it has no worth-while tradition, it becomes a negligible factor in the thought of the world, and it stands in danger of being exterminated.’” Thenmozhi’s organization Equality Labs is one of many online groups led by women of color from marginalized communities across the world that challenge this hegemony by sharing their knowledge and histories with the online community. But in India, they face another form of hegemony, one more violent and better organized: the Indian right wing.

The Indian right wing is organized, tech-savvy, and determined to maintain its hegemonic control over South-Asian society. With over 560 million users—not counting Indians living abroad—India is the second-largest online market in the world. In the run-up to India’s 2014 general election, the Bharatiya Janata Party (BJP)—then the primary opposition party—and its leader Narendra Modi saw India’s internet penetration as a propaganda opportunity. They created an army of online volunteers who used social media to change Indians’ perception of Modi from a political pariah—responsible for a genocide in his home state of Gujarat—to a shrewd technocrat and a messiah in waiting. This social-media strategy helped the BJP win by a landslide. What began as an election strategy has grown “into a sophisticated machine that includes a huge ‘troll army’ of paid and voluntary supporters who help spread the party’s message on platforms like Facebook, WhatsApp, and Twitter, instantly reaching millions of people.” This troll army has normalized bigotry and hate towards minorities and Dalits. It has weaponized social media to spread disinformation and hate.

In early 2020, Facebook’s employees in charge of policing hate speech grew concerned about the Facebook posts of an Indian politician named T. Raja Singh, who had called for Rohingya refugees to be shot, called Muslims traitors, and threatened to raze mosques. This clearly violated Facebook’s claimed community standards. In fact, Facebook has taken down numerous white-supremacist pages on the grounds that their posts could lead to violence in the real world. No one doubted that Singh was trying to instigate violent attacks on Muslims who were protesting India’s new racist citizenship laws, which Modi’s government had enacted in 2019. But the company’s top public-policy executive, Ankhi Das—whose job involves lobbying the Indian government on behalf of Facebook—opposed applying the hate-speech rules to Singh and other Hindu-nationalist groups. She claimed that curbing hate speech from Hindu-nationalist groups and BJP politicians would hurt Facebook’s business prospects in India. A few months later, these hate posts helped instigate a pogrom in Delhi in which Muslims were killed, raped, tortured, had their property set on fire, and were left homeless. While Das and Facebook were clear collaborators in the Delhi massacre, Das was not wrong: India’s government has a vindictive attitude towards companies and organizations that do not toe its line.

This combination of market power and political power gives India’s right wing the ability to silence speech it doesn’t agree with. Sangapali Aruna, who runs an organization that leverages technology to empower Dalits, was in a conversation with Twitter’s CEO, Jack Dorsey. She was talking about women’s safety on Twitter following an incident in which she was the victim of doxing. At the end of the conversation, Dorsey stood with the women activists for a picture. They handed him a poster which read “Smash Brahmanical Patriarchy.” When the picture went online, the backlash from Hindu nationalists and supporters of the caste system was swift and overwhelming. Fearing a loss of market share in India and action from the Indian government, Twitter apologized for the picture.

The Indian right wing has understood the power of the internet in propagating ideas. It is focused as much on using its brute strength to censor people on the internet as it is on disseminating ideas of Hindu supremacy. Quora, a platform once used by techies to ask and answer questions, has evolved into a forum for Hindu-nationalist discussion, where Hindu nationalists can shape people’s perceptions of the issues that matter to them. Many of the “answers” are outright lies. While Wikipedia has blacklisted a few Hindu-nationalist English “news” websites, the Hindu right wing is trying to gain a hold on Wikipedia in vernacular Indian languages, where the most prominent scholars tend to be upper-caste Hindus who have an affinity for Hindu nationalism and can censor information that threatens caste hierarchies and Hindu dominance.

As we try to re-democratize the internet, we need to be aware of the social imbalances that exclude people from enjoying the freedoms of the internet, and of the political majoritarianism that can threaten to capture a nascent internet democracy.

>
>
META TOPICPARENT name="FirstEssay"
As we learn to adapt to the internet society that captures almost all parts of our existence, the fair distribution of power and control in this new age remains strikingly imbalanced. Not only do a handful of mega corporations in the West monopolize this new continuum of our existence, but they also effectively prohibit billions of people from accessing this unprecedented source of knowledge and intellect. Trillions of dollars concentrated in a few companies are spent to solidify the perception that the “internet” itself is Google, Facebook, Twitter, and a few others. People are constantly kept in the dark by these companies and effectively prohibited from learning about the alternatives. Their data is continuously harvested for reasons unknown. As such, the most vital question that needs to be resolved, as we are further connected to each other by the minute, is how we are to democratize the internet. Traditional colonialism may be for the most part dead, but the few privileged classes in the West still have an immensely tight grip on the rest of humanity. Christopher Wylie’s “Mindf*ck: Cambridge Analytica and the Plot to Break America” clearly illustrated how data collected by Facebook, in exchange for free access to its platform, allowed private companies to carry out massive psychological campaigns on entire societies. They first use smaller datasets to test their influence on smaller societies located in one of the former colonies, where their actions can go completely unchecked. And Wylie shows that the immense data that Facebook harvests is then used in the Western societies that allow these companies to safely exist in the first place. This is not the first time in recent history that a source of power has been concentrated in the hands of the few. However, this is certainly the first time in history that a few companies have the power to determine how we think and act. In this regard, the power concentration of today is a lot scarier than the monopoly Standard Oil had in the early 20th century. A few companies in the world have dominion over our minds. And this is simply unacceptable. It is not possible to be free in the age of the internet when the central mechanism by which we are connected to one another is owned and manipulated by a few entities. The mission is clear: we need to break these monopolies and intervene in this market failure. But how can we achieve this mountain of a task? The challenge lies in the fact that we live in broken societies; we are controlled by crooked politicians who worship money and power. The monopolies of our current age have immense financial resources, certainly enough to influence our political agendas. Assuming that “We the People” can define our destiny as a society, we need to take immediate action in democratizing this next medium of human existence. First and foremost, we cannot simply allow hundreds of years of inequalities to persist in this new age. We have to accept knowledge as a basic human right and provide all the tools and basic knowledge necessary to allow people to make the best use of what the web offers them. This does not mean that we should destroy all differences in wealth and power overnight; it means that the next generation of humans must be allowed equal opportunities in their access to knowledge and intellectual growth.
Even though it goes against long-established patent rights, it is in the government’s, and the people’s, best interest to nationalize certain patents and allow free access to their use. Most groundbreaking scientific patents are the result of billions of dollars of government investment, meaning they were directly funded by the taxpayers. We cannot allow risks to be collectivized while profits are privatized. It is the right of the people to seize control of what belongs to them and allow the new generation to truly write their own destinies. A seizure and emancipation of patents would allow cheap reproduction of hardware and software around the world. This would mean that hundreds of millions of people could get their hands on machines that would allow them to truly connect with the web. However, such a scenario on its own does not guarantee that people would be truly freed. The bright and able minds of the world have to come together to write user-friendly online manuals, and perhaps digital classes, to teach ordinary people of all ages how to best utilize the machines they have. Governments are for the most part incredibly inept at equipping their people with the knowledge necessary to bring about social mobility. As such, even though we can perhaps trust our power as the people to free some patents, we cannot rely solely on the government to educate people in how to use computers for attaining knowledge. I believe this will necessarily be a collective effort of dedicated idealists around the world. Another issue that needs to be resolved in democratizing the internet is the monopolies in existence. Will they simply wither away if people know that they can use safe alternatives? I do not believe so; human beings are strange creatures after all, and the “sexy” services and hardware provided by the 21st-century monopolies are highly attractive to them. They do not even realize that they are being kept in the dark; their minds are controlled by people unknown to us, and they keep living in an exquisite Truman Show. As such, do we proceed with breaking Facebook and Google into smaller pieces? And even if we can, how are we to deal with the Chinese internet giants? Does humanity need to live in completely segregated online worlds? Or can we unilaterally break down the CCP’s monopoly over the Chinese internet? The problems are relatively easy to identify, but the solutions seem to be a lot more complicated.

TWikiGuestFirstEssay 20 - 10 Oct 2020 - Main.ConradNoronha
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
Changed:
<
<
A New Journalism
  • Moving Beyond Institutionalism
  • Lessons from Free Software
  • Blunting the Tools of Surveillance Capitalism
    • Moving beyond the social media platforms
    • Enabling secure news capture
  • Conclusion
>
>
Hegemony on the Internet and How Majoritarianism Can Exacerbate It
 
Added:
>
>
A Bloomberg article narrates the story of Anasuya Sengupta, an Indian activist who wanted to create a Wikipedia page for a prominent British-Nigerian human-rights activist, Bisi Adeleye-Fayemi. Adeleye-Fayemi is well known in activist circles for having helped end the Liberian Civil War. But she did not exist on Wikipedia, “which meant that as far as many people were concerned, she didn’t exist at all.” Sengupta decided to write a Wikipedia page on Adeleye-Fayemi. She cited several articles from the Nigerian press and clicked “publish.” But a few minutes later, a Wikipedia editor deleted her entry on the grounds that it was trifling. Sengupta eventually convinced Wikipedia editors to include the entry, but only after a former chair of the Wikimedia Foundation—who happened to be sitting next to her at a conference—intervened on her behalf.
 
Changed:
<
<

A New Journalism

>
>
This isn’t a one-off incident. North Americans and Europeans make up less than a quarter of the population on the internet, but they control most of its information. For example, most content on the internet relating to Africa is written by North-American and European men. Thenmozhi Soundarajan, a prominent Dalit activist based out of New York—who is engaged in publishing Dalit history on Wikipedia—talks about having to use a white male username on Wikipedia to have more articles approved. This is the Western hegemony of the internet, where white men act as gatekeepers of online knowledge. As Thenmozhi puts it, this hegemonic gatekeeping is an impediment “to reclaim[ing] the agency of a mass of people who have historically remained peripheral in the consciousness of the academia and the state, and to bring forth their stories of resistance, resilience, and heroism. In the words of Woodson, ‘If a race has no history, if it has no worth-while tradition, it becomes a negligible factor in the thought of the world, and it stands in danger of being exterminated.’” Thenmozhi’s organization Equality Labs is one of many online groups led by women of color from marginalized communities across the world that challenge this hegemony by sharing their knowledge and histories with the online community. But in India, they face another form of hegemony, one more violent and better organized: the Indian right wing.
 
Added:
>
>
The Indian right wing is organized, tech-savvy, and determined to maintain its hegemonic control over South-Asian society. With over 560 million users—not counting Indians living abroad—India is the second-largest online market in the world. In the run-up to India’s 2014 general election, the Bharatiya Janata Party (BJP)—then the primary opposition party—and its leader Narendra Modi saw India’s internet penetration as a propaganda opportunity. They created an army of online volunteers who used social media to change Indians’ perception of Modi from a political pariah—responsible for a genocide in his home state of Gujarat—to a shrewd technocrat and a messiah in waiting. This social-media strategy helped the BJP win by a landslide. What began as an election strategy has grown “into a sophisticated machine that includes a huge ‘troll army’ of paid and voluntary supporters who help spread the party’s message on platforms like Facebook, WhatsApp, and Twitter, instantly reaching millions of people.” This troll army has normalized bigotry and hate towards minorities and Dalits. It has weaponized social media to spread disinformation and hate.
 
Changed:
<
<
It is time to free journalism. I refer not to journalism the institution, that venerable fourth estate whose wreckage lies all around us. Rather I speak of journalism as an endeavor, an iterative process of collective knowledge gathering, synthesis, and distribution.
>
>
In early 2020, Facebook’s employees in charge of policing hate speech grew concerned about the Facebook posts of an Indian politician named T. Raja Singh, who had called for Rohingya refugees to be shot, called Muslims traitors, and threatened to raze mosques. This clearly violated Facebook’s claimed community standards. In fact, Facebook has taken down numerous white-supremacist pages on the grounds that their posts could lead to violence in the real world. No one doubted that Singh was trying to instigate violent attacks on Muslims who were protesting India’s new racist citizenship laws, which Modi’s government had enacted in 2019. But the company’s top public-policy executive, Ankhi Das—whose job involves lobbying the Indian government on behalf of Facebook—opposed applying the hate-speech rules to Singh and other Hindu-nationalist groups. She claimed that curbing hate speech from Hindu-nationalist groups and BJP politicians would hurt Facebook’s business prospects in India. A few months later, these hate posts helped instigate a pogrom in Delhi in which Muslims were killed, raped, tortured, had their property set on fire, and were left homeless. While Das and Facebook were clear collaborators in the Delhi massacre, Das was not wrong: India’s government has a vindictive attitude towards companies and organizations that do not toe its line.
 
Changed:
<
<
The Net has both necessitated and facilitated this reconception of what “journalism” means. Every day, an amalgam of professionals, ordinary citizens, and activists collectively creates the news. They do so via disparate methods and platforms. The results can be powerful—as evidenced by ongoing protests sparked by George Floyd’s murder—but also dizzying, chaotic, and fractured. Journalism’s future as an instrument in service of human learning depends on our ability to harness a new, networked press that is simultaneously egalitarian, prodigious, and distracted.
>
>
This combination of market power and political power gives India’s right wing the ability to silence speech it doesn’t agree with. Sangapali Aruna, who runs an organization that leverages technology to empower Dalits, was in a conversation with Twitter’s CEO, Jack Dorsey. She was talking about women’s safety on Twitter following an incident in which she was the victim of doxing. At the end of the conversation, Dorsey stood with the women activists for a picture. They handed him a poster which read “Smash Brahmanical Patriarchy.” When the picture went online, the backlash from Hindu nationalists and supporters of the caste system was swift and overwhelming. Fearing a loss of market share in India and action from the Indian government, Twitter apologized for the picture.
 
Changed:
<
<
The first step is to identify the restraints. They exist in propertarian views of news rooted in an obsolete, ad-based business model. But the ends of journalism are also thwarted, ironically, by the very tools that enable the networked press. Smartphones that document police brutality catalogue their users’ every move. News updates are fed through Facebook and Twitter, where they are either drowned out or reduced to a lure for the Parasite, dangling in news feeds only so long as they snap up human attention.
>
>
The Indian right wing has understood the power of the internet in propagating ideas. It is focused as much on using its brute strength to censor people on the internet as it is on disseminating ideas of Hindu supremacy. Quora, a platform once used by techies to ask and answer questions, has evolved into a forum for Hindu-nationalist discussion, where Hindu nationalists can shape people’s perceptions of the issues that matter to them. Many of the “answers” are outright lies. While Wikipedia has blacklisted a few Hindu-nationalist English “news” websites, the Hindu right wing is trying to gain a hold on Wikipedia in vernacular Indian languages, where the most prominent scholars tend to be upper-caste Hindus who have an affinity for Hindu nationalism and can censor information that threatens caste hierarchies and Hindu dominance.
 
Changed:
<
<
We must forge a networked press that resists the Parasite, rather than mooring us to it. But we also need to reimagine the concept of what journalism means in the age of the Net.

Moving Beyond Institutionalism

I have been part of journalism the institution. I saw the crumbling up close. The old model of for-profit journalism required ad revenue, which required eyeballs. So we chased them—we spent hours pushing stories on social media and perfecting our headline SEO. We still tried to do good work reporting the news. But the stakes were crystal clear. At one reporting job, my salary was tied directly to how many page views my stories got.

I do not mean to say that professional journalists are obsolete. There will always be a need for those trained in the art of storytelling and investigative reporting. Organizations like ProPublica show that new models of non-profit journalism can produce superior content. We must go further, however, and move beyond institutionalism to a more dynamic view of what journalism can be. The journalistic creation of knowledge no longer ends once the nightly news clicks off or the newspaper goes to print. Journalism is constantly becoming, and all of us on the Net can have a role in making it so.

Lessons from Free Software

Free software offers an analogue for what journalism can become. Free software produces a better product because subsequent users are empowered to tinker with and improve a program’s source code. Likewise, “free journalism”—enabled through creative commons licensing and collaborative platforms like wikis—can generate more dynamic reportage. News outlets and bloggers are empowered to borrow, organize, and add to information produced on the Net. The results can be powerful. Consider the Tunisian blogger collective Nawaat, which in 2010 curated hundreds of otherwise censored videos during that country’s uprising. Or Global Voices, whose volunteers translate citizen-sourced articles from around the world into more than 50 languages.

Free journalism invites readers into the process of news creation. Imagine new, hybrid journalism platforms that house professional investigative and accountability journalism, while also offering dynamic spaces for citizens to engage in collaborative news creation by posting their own content, or that of others on the Net. Paid or volunteer editors could help sort and verify crowdsourced content to ensure it serves journalistic goals. Most importantly, these sites can be run cheaply, potentially allowing them to be sustained through user contributions rather than through advertising or paywalls.

Blunting the Tools of Surveillance Capitalism

Moving beyond the social media platforms

It is premature to advocate that the networked press immediately disassociate from the tech platforms. Facebook and Twitter remain useful newsgathering tools. We should, however, cease to treat social media platforms as a locus for journalism. The process of knowledge creation requires continuity—the ability to locate, link together, and preserve information culled from disparate sources. From a technical standpoint, Facebook and Twitter are ill-equipped for this. Nor is their attention imperative aligned with the mandate of journalism to serve the public. We must instead build new, collaborative spaces where communities can gather to engage in collaborative journalism. This can include creative commons repositories where citizens can upload media for anybody to use.

The law may also help level the playing field between journalistic gathering spaces and the tech giants. Despite their precise curation of user news feeds, social media platforms (unlike news outlets) are not considered publishers for the purposes of tort liability. Nonetheless, any Section 230 reform should be approached carefully, since imposing liability for crowdsourced content may hinder the hybrid journalistic models described above.

Enabling secure news capture

Likewise, an immediate retreat from smartphones as a tool for newsgathering is unlikely. However, law and technology can help prevent those in power from using smartphones to surveil or incriminate newsgatherers who seek to expose corruption or abuse. Open-source encryption apps like Signal already exist to enable newsgatherers to securely capture and transmit media. These apps can only do so much, however. Politically, we must continue to push for legislation that safeguards encryption technology and limits how technology firms can track, store, and use mobile data.

Conclusion

I propose an idealized vision for the future of journalism. I am not naïve; this is neither the journalism we have, nor one that we can create overnight. Law and technology can create the conditions in which news content can be securely captured and widely shared. We can equip individuals from a young age with the technical skills to contribute to our networked press. But we must also undertake a more fundamental reconception of journalism as a collectively owned and pursued public good—a process of knowledge sharing that every human can both benefit from and contribute to.

>
>
As we try to re-democratize the internet, we need to be aware of the social imbalances that exclude people from enjoying the freedoms of the internet, and of the political majoritarianism that can threaten to capture a nascent internet democracy.

TWikiGuestFirstEssay 19 - 09 Oct 2020 - Main.JohnClayton
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
Added:
>
>
A New Journalism
  • Moving Beyond Institutionalism
  • Lessons from Free Software
  • Blunting the Tools of Surveillance Capitalism
    • Moving beyond the social media platforms
    • Enabling secure news capture
  • Conclusion
 
Deleted:
<
<
The day our internet best friend betrayed us
 
Changed:
<
<
Introduction
>
>

A New Journalism

 
Deleted:
<
<
The rise of the internet best friend
 
Changed:
<
<
The apple, or the loss of innocence
>
>
It is time to free journalism. I refer not to journalism the institution, that venerable fourth estate whose wreckage lies all around us. Rather I speak of journalism as an endeavor, an iterative process of collective knowledge gathering, synthesis, and distribution.
 
Changed:
<
<
The tumor
>
>
The Net has both necessitated and facilitated this reconception of what “journalism” means. Every day, an amalgam of professionals, ordinary citizens, and activists collectively creates the news. They do so via disparate methods and platforms. The results can be powerful—as evidenced by ongoing protests sparked by George Floyd’s murder—but also dizzying, chaotic, and fractured. Journalism’s future as an instrument in service of human learning depends on our ability to harness a new, networked press that is simultaneously egalitarian, prodigious, and distracted.
 
Added:
>
>
The first step is to identify the restraints. They exist in propertarian views of news rooted in an obsolete, ad-based business model. But the ends of journalism are also thwarted, ironically, by the very tools that enable the networked press. Smartphones that document police brutality catalogue their users’ every move. News updates are fed through Facebook and Twitter, where they are either drowned out or reduced to a lure for the Parasite, dangling in news feeds only so long as they snap up human attention.
 
Changed:
<
<

The day our internet best friend betrayed us

>
>
We must forge a networked press that resists the Parasite, rather than mooring us to it. But we also need to reimagine the concept of what journalism means in the age of the Net.
 
Changed:
<
<

Introduction

>
>

Moving Beyond Institutionalism

 
Changed:
<
<
Fear, suspicion, and aversion to pain are the evolutionary instincts that have driven our survival as a species. In today’s world, where technological advances, including the internet, have fundamentally changed the way we live, these instincts are no longer adequate to ensure the survival of the human race as we know it. Changes in our minds are occurring without our knowledge or detection, because things that should inspire fear are now pleasurable, and things that do violence to our bodies are now painless. This essay seeks to reveal but one such instance of this painless violence, in the form of the internet best friend, and, hopefully, to offer a moment of clarity.
>
>
I have been part of journalism the institution. I saw the crumbling up close. The old model of for-profit journalism required ad revenue, which required eyeballs. So we chased them—we spent hours pushing stories on social media and perfecting our headline SEO. We still tried to do good work reporting the news. But the stakes were crystal clear. At one reporting job, my salary was tied directly to how many page views my stories got.
 
Added:
>
>
I do not mean to say that professional journalists are obsolete. There will always be a need for those trained in the art of storytelling and investigative reporting. Organizations like ProPublica show that new models of non-profit journalism can produce superior content. We must go further, however, and move beyond institutionalism to a more dynamic view of what journalism can be. The journalistic creation of knowledge no longer ends once the nightly news clicks off or the newspaper goes to print. Journalism is constantly becoming, and all of us on the Net can have a role in making it so.
 
Deleted:
<
<

The rise of the internet best friend

 
Changed:
<
<
The internet best friend, like the internet, began innocently. With the launch of YouTube in late 2005 under the banner ‘Broadcast Yourself’, a ground-up, egalitarian community formed where tech-savvy youngsters could make videos about almost anything. This inclusive, fertile, womblike interface lent itself to much experimentation by users and became the birthplace of the video blog, or ‘vlog’ as we now know it, which effectively transformed the wildly popular early-2000s blog into video form. These videos, usually long-form, documented the everyday life of the vlogger and could contain anything from exciting adventures to what they were eating for breakfast.
>
>

Lessons from Free Software

 
Changed:
<
<
The vlog was thus born a splice of public and private. On one hand, it was a raw, confessional, intimate, and incredibly detailed record of everyday life, through the sheer detail that could be captured with a single swoop of the camera. And yet it was also performative and entertaining, designed to capture attention. In the abstract, the prospect of watching someone go about their everyday business could not seem more boring - and yet the vlog form flourished. From a human-need perspective, the vlog satisfied viewers’ voyeuristic desires as well as their innate yearning for human connection. For perhaps the first time, ordinary users of the internet could peer into a stranger’s life and observe not only their trials and tribulations but also the intimate details of their homes. Indeed, to follow a vlogger was to go through a process of metamorphosis whereby the viewer transforms from momentary voyeur, to consistent voyeur, to developing a one-way, but very real, human connection with the vlogger as they experience the minutiae of everyday life, growing up, falling in love, breaking up, getting married, and so on ‘together’. In teenage-girl parlance, this relationship could only be described as that of a best friend, and thus the internet best friend (IBF) was born.
>
>
Free software offers an analogue for what journalism can become. Free software produces a better product because subsequent users are empowered to tinker with and improve a program’s source code. Likewise, “free journalism”—enabled through creative commons licensing and collaborative platforms like wikis—can generate more dynamic reportage. News outlets and bloggers are empowered to borrow, organize, and add to information produced on the Net. The results can be powerful. Consider the Tunisian blogger collective Nawaat, which in 2010 curated hundreds of otherwise censored videos during that country’s uprising. Or Global Voices, whose volunteers translate citizen-sourced articles from around the world into more than 50 languages.
 
Added:
>
>
Free journalism invites readers into the process of news creation. Imagine new, hybrid journalism platforms that house professional investigative and accountability journalism, while also offering dynamic spaces for citizens to engage in collaborative news creation by posting their own content, or that of others on the Net. Paid or volunteer editors could help sort and verify crowdsourced content to ensure it serves journalistic goals. Most importantly, these sites can be run cheaply, potentially allowing them to be sustained through user contributions rather than through advertising or paywalls.
 
Changed:
<
<

The apple, or the loss of innocence

>
>

Blunting the Tools of Surveillance Capitalism

 
Changed:
<
<
The loss of innocence occurs when the vlogger becomes a tool for the amplification and normalisation of the panopticon through which the State subjugates its citizens. The act of surveillance on the vlogger is obvious. The vlogger’s work is the work of being watched. Although the vlogger believes that they are not subject to constant, comprehensive surveillance because they choose what content they publish, surveillance becomes total through the vlogger’s continuous need to produce new content that reveals more and more about their existence.
>
>

Moving beyond the social media platforms

It is premature to advocate that the networked press immediately disassociate from the tech platforms. Facebook and Twitter remain useful newsgathering tools. We should, however, cease to treat social media platforms as a locus for journalism. The process of knowledge creation requires continuity—the ability to locate, link together, and preserve information culled from disparate sources. From a technical standpoint, Facebook and Twitter are ill-equipped for this. Nor is their attention imperative aligned with the mandate of journalism to serve the public. We must instead build new, collaborative spaces where communities can gather to engage in collaborative journalism. This can include creative commons repositories where citizens can upload media for anybody to use.
 
Changed:
<
<
The act of surveillance on the viewer manifests in two ways. The first is through advertising, a product of the viewer watching while being watched. Indeed, the IBF is now considered one of the most versatile and effective advertisements ever made. Having taken on the role of the trusted ‘best friend’, the IBF is able to leverage this relationship of loyalty and trust to become an ultimate source of word-of-mouth advice. Each second of the IBF’s video is an advertisement, either for a product that can be found and bought or for the IBF’s general lifestyle, which seems within reach if you just purchase X items. Data on an IBF’s viewer engagement is powerful in that it tracks viewers’ purchases through discount codes and has the potential to predict what is happening, or will happen, in viewers’ lives by analysing the IBFs that the viewer engages with. This phenomenon feeds into the damaging effects of surveillance capitalism and instrumentarian power (Shoshana Zuboff). The second act of surveillance occurs when viewers normalise the existence of surveillance itself. The process begins with the viewer observing the IBF sharing intimate details of their life, which carries a key behavioural message: that it is now permissible and normal for people at large to know the everyday details of our lives. This act of normalisation then amplifies surveillance, as viewers emulate this behaviour and become producers themselves, engaging in the production of overly revealing, everyday content. As Foucault wrote, “the panopticon… has a role of amplification… its aim is to strengthen the social forces – to increase production, to develop the economy, spread education, raise the level of public morality; to increase and multiply.” All at once there is more production of content, more surveillance of that content, and amplification of surveillance on existing content, multiplying surveillance throughout society. The final stage comes when the sheer prevalence of surveillance leads the viewer to adopt the notion that surveillance is merely a side effect of modern life and need not be challenged or removed.
>
>
The law may also help level the playing field between journalistic gathering spaces and the tech giants. Despite their precise curation of user news feeds, social media platforms (unlike news outlets) are not considered publishers for the purposes of tort liability. Nonetheless, any Section 230 reform should be approached carefully, since imposing liability for crowdsourced content may hinder the hybrid journalistic models described above.
 
Added:
>
>

Enabling secure news capture

Likewise, an immediate retreat from smartphones as a tool for newsgathering is unlikely. However, law and technology can help prevent those in power from using smartphones to surveil or incriminate newsgatherers who seek to expose corruption or abuse. Open-source encryption apps like Signal already exist to enable newsgatherers to securely capture and transmit media. These apps can only do so much, however. Politically, we must continue to push for legislation that safeguards encryption technology and limits how technology firms can track, store, and use mobile data.
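To illustrate the underlying idea in the simplest possible terms, here is a minimal sketch in Python using the Fernet recipe from the third-party cryptography package (pip install cryptography). This is symmetric, authenticated encryption only; it is emphatically not Signal's protocol, which adds key agreement, forward secrecy, and per-message keys, and the filenames here are hypothetical.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # must reach the recipient out of band
    cipher = Fernet(key)

    # Encrypt hypothetical captured footage before it leaves the device.
    with open("footage.mp4", "rb") as f:
        token = cipher.encrypt(f.read())  # AES-CBC plus an HMAC under the hood

    with open("footage.mp4.enc", "wb") as f:
        f.write(token)  # this ciphertext is safe to transmit or store

    # The recipient, holding the same key, recovers the plaintext:
    # plaintext = Fernet(key).decrypt(token)

Even this toy example shows why key distribution, not encryption itself, is the hard part that apps like Signal solve for newsgatherers.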
 
Changed:
<
<

The tumor

>
>

Conclusion

 
Deleted:
<
<
The IBF is thus a collection of paradoxes. It is an advertisement, but not; a reality, but also a fantasy; a voyeuristic experience, but also an exercise in surveillance. Viewers are so distracted by what the IBF purports to be, and by the sheer pleasure of consumption, that the silent changes within us go undetected. By the time we realise it, the tumor has already formed. Our apathy towards surveillance will be so deeply ingrained in our tissues that it will be impossible to remove. All that remains is for mankind to be subjugated by the state, divided into little cells flooded with light.
Added:
>
>
I propose an idealized vision for the future of journalism. I am not naïve; this is neither the journalism we have, nor one that we can create overnight. Law and technology can create the conditions in which news content can be securely captured and widely shared. We can equip individuals from a young age with the technical skills to contribute to our networked press. But we must also undertake a more fundamental reconception of journalism as a collectively owned and pursued public good—a process of knowledge sharing that every human can both benefit from and contribute to.

TWikiGuestFirstEssay 18 - 09 Oct 2020 - Main.JulieLi
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
Added:
>
>
The day our internet best friend betrayed us
 
Changed:
<
<
Polarization and the Division of Society
>
>
Introduction
 
Changed:
<
<
Social Media and the Masses
>
>
The rise of the internet best friend
 
Changed:
<
<
Process of Polarization and Potential for Progress
>
>
The apple, or the loss of innocence
 
Added:
>
>
The tumor
 
Changed:
<
<

Polarization and the Division of Society

>
>

The day our internet best friend betrayed us

 
Deleted:
<
<
People used to rely on news outlets to know what was happening around the world; now, most of us get our news from social media. We used to read articles to determine 'what' was happening, and then think for ourselves about 'why' it happened and 'how' we should feel about it. Now, with convenience at our fingertips, we are in the midst of a reversal. The politics we are partial to already define for us 'what' has happened. This is a product of the increasing polarization and division in our society. To a large degree we already know what we will believe and what we will not accept, establishing a dangerous dichotomy of thought. The convenience of social media - the content of which is continually shaped by unseen forces and algorithms that prey on our technological footprints - has fed into this dichotomy. Most now follow the news to better understand the 'how'--how should we feel? How should we react? What fits the narrative of the rhetoric we have already accepted? And because we share articles that fit our beliefs and connect with others who hold views similar to our own, social media makes it easy to bolster this mindset, finding support for our biases rather than allowing new information to broaden our insights. The news we take in becomes recycled through our previous biases. Here lies the ultimate danger of social media without due regulation: the guided polarization of digital news only exacerbates the existing divisions in our society.
 
Added:
>
>

Introduction

 
Changed:
<
<

Social Media and the Masses

>
>
Fear, suspicion, and aversion to pain are the evolutionary instincts that have driven our survival as a species. In today's world, where technological advances including the internet have fundamentally changed the way we live, these instincts are no longer adequate to ensure the survival of the human race as we know it. Changes in our minds are occurring without our knowledge or detection, because things that should inspire fear are now pleasurable, and things that do violence to our bodies are now painless. This essay seeks to reveal one such instance of this painless violence, in the form of the internet best friend, and hopefully to offer a moment of clarity.
 
Deleted:
<
<
The idea of our influences directing us toward belief and action is not new. Le Bon, a polymath known for his work on crowd psychology, makes the case that since the dawn of time we have been under the influence of religious, political, and social illusions (see The Crowd: A Study of the Popular Mind). Because the masses have always been under these influences, he argues, we are conditioned to seek out an illusion to cling to under any and all circumstances. He noted that while philosophers of the 19th century worked greatly to destroy these illusions, they were not able to provide the masses with any ideal that could effectively sway them. Accordingly, the masses now flock to whichever rhetorician whets their appetites. Le Bon may have written his seminal work at the turn of the 20th century, but his words seem appropriate now more than ever. Social media has become a universal outlet through which we grasp onto our illusions and refuse to expose ourselves to viewpoints that differ from our own. Living in this new digital age, we thereby narrow our visions of reality and widen the divisions we have from one another and, perhaps, from truth. Truth itself has become fragmented, relying on the whims of the reader. All the while most of us remain clueless about the puppeteers behind the curtains.
 
Added:
>
>

The rise of the internet best friend

 
Changed:
<
<
It is natural, in the Lockean framework of epistemology, for our experiences to dictate our way of thinking, but the problem with polarization in social media today is that it leaves little to no room for genuine discourse. What social media offers us is steady, consistent affirmation from peers who think as we do. Social media is intrinsically designed to connect us with others who will encourage our way of thinking, even if our logic is flawed or our news misguided. In other words, social media has become home to such a convenient supply of assurance from those who already agree with us that productive speculation or healthy self-doubt becomes a foreign process. Many people then become so encouraged by their opinions that they begin to confuse them for facts. In order to bridge the gaps in our society, we must, at the very least, understand the diverse makeup of our communal struggle for survival.
>
>
The internet best friend, like the internet, began innocently. With the launch of YouTube in late 2005 under the banner ‘Broadcast Yourself’, a ground-up, egalitarian community formed where tech-savvy youngsters could make videos about almost anything. This inclusive, fertile, womblike interface lent itself to much experimentation by users and became the birthplace of the video blog, or ‘vlog’, as we now know it, which effectively transformed the wildly popular early-2000s blog into video form. These videos, usually longer in nature, documented the everyday life of the vlogger and could contain anything from exciting adventures to what they were eating for breakfast.
 
Added:
>
>
The vlog was thus born a splice of public and private. On one hand it was a raw, confessional, intimate and incredibly detailed record of everyday life, through the sheer detail that could be captured with a single sweep of the camera. And yet it was also performative and entertaining, designed to capture attention. In the abstract, the prospect of watching someone go about their everyday business could not seem more boring - and yet the vlog form flourished. From a human-need perspective, the vlog satisfied viewers’ voyeuristic desires as well as their innate yearning for human connection. For perhaps the first time, ordinary users of the internet could peer into a stranger’s life and observe not only their trials and tribulations but also the intimate details of their homes. Indeed, to follow a vlogger was to go through a process of metamorphosis whereby the viewer transforms from momentary voyeur, to consistent voyeur, to holding a one-way but very real human connection with the vlogger, experiencing the minutiae of everyday life, growing up, falling in love, breaking up, getting married and so on ‘together’. In teenage-girl parlance, this relationship could only be described as that of a best friend, and thus the internet best friend (IBF) was born.
 
Deleted:
<
<

Process of Polarization and Potential for Progress

 
Changed:
<
<
Social media and similar digital mediums largely influence our thinking through targeted advertisements. Every time we swallow the mental pill on Facebook, Reddit, and the like, the databases on those sites store our personal and private data to their advantage, keeping close track of what we search for and what our interests are. This misuse of our privacy, and the self-selective filter bubbles social media creates for us, work to keep the masses addicted. We connect with others whose beliefs align with our own, we 'like' and share their posts, and without a second thought allow behemoth companies to track our personal information and internet-consumption tendencies. Social media works by continually offering us exposure to our interests; unfortunately, this is the problem. Since we are more likely to accept ideas that align with our pre-existing beliefs, and thus to continue scrolling down our feeds, the posts that appear first on our accounts are the news sources that play to our existing confirmation biases. Under such a system, what should we expect except a widening of the rifts that divide us?
>
>

The apple, or the loss of innocence

 
Added:
>
>
The loss of innocence occurs when the vlogger becomes a tool for the amplification and normalisation of the panopticon through which the State subjugates its citizens. The act of surveillance on the vlogger is obvious: the vlogger's work is the work of being watched. Although the vlogger believes they are not subject to constant, comprehensive surveillance because they choose what content to publish, surveillance becomes total through the vlogger's continuous need to produce new content revealing more and more of their existence.
 
Changed:
<
<
If we want our society to progress toward unity, we must depolarize our social media. To do this, we must begin by introducing legislation and regulation that prevent companies from providing overly filtered access to misguided illusions. It is not enough to fault the masses alone. If we read articles from more varied news sources, share them with friends who hold our current viewpoints, and create further connections to others with entirely different perspectives, we may begin to undo the process of polarized information that has so heavily influenced our social media and damaged our society. But to be truly successful, we must target the unseen as much as the obvious.
>
>
The act of surveillance on the viewer manifests in two different ways. The first is through advertising, a product of the viewer watching while being watched. Indeed, the IBF is now considered one of the most versatile and effective advertisements ever made. Having taken on the role of the trusted ‘best friend’, the IBF can leverage this relationship of loyalty and trust to become an ultimate source of word-of-mouth advice. Each second of the IBF’s video is an advertisement, either for a product that can be found and bought or for the IBF’s general lifestyle, which seems within reach if you just purchase X items. Data on an IBF’s viewer engagement is powerful: it tracks viewers’ purchases through discount codes, and analysing which IBFs a viewer engages with can predict what is happening, or will happen, in that viewer’s life. This phenomenon feeds the damaging effects of surveillance capitalism and instrumentarian power (Shoshana Zuboff).

The second act of surveillance occurs when viewers normalise the existence of surveillance itself. The process begins with the viewer observing the IBF sharing intimate details of their life, which carries a key behavioural message: that it is now permissible and normal for people at large to know the everyday details of our lives. This normalisation then amplifies surveillance, as viewers emulate the behaviour and become producers themselves, engaging in the production of overly revealing, everyday content. As Foucault wrote, “the panopticon… has a role of amplification… its aim is to strengthen the social forces – to increase production, to develop the economy, spread education, raise the level of public morality; to increase and multiply.” All at once there is more production of content, more surveillance of that content, and amplification of surveillance of existing content, multiplying surveillance throughout society. The final stage is reached when the sheer prevalence of surveillance leads the viewer to adopt the notion that surveillance is merely a side effect of modern life and need not be challenged or removed.
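The discount-code mechanism is simple enough to sketch. The Python fragment below uses hypothetical data rather than any platform's real pipeline; it shows how a brand's order log, keyed by the codes an IBF hands out, turns purchases into a behavioral profile of that IBF's audience.

# Illustrative sketch (hypothetical data, not any platform's real
# pipeline): discount codes link each purchase back to the influencer
# who drove it, turning an order log into an audience profile.
from collections import defaultdict

# code -> influencer, as issued by the brand
CODE_TO_INFLUENCER = {"EMMA15": "emma_vlogs", "JAY10": "jay_daily"}

orders = [
    {"customer": "c123", "item": "ring light", "code": "EMMA15"},
    {"customer": "c123", "item": "face serum", "code": "EMMA15"},
    {"customer": "c456", "item": "protein bar", "code": "JAY10"},
]

audience_purchases = defaultdict(list)
for order in orders:
    influencer = CODE_TO_INFLUENCER.get(order["code"])
    if influencer:
        audience_purchases[influencer].append((order["customer"], order["item"]))

# Each influencer's entry is now a profile of what their viewers buy.
for influencer, purchases in audience_purchases.items():
    print(influencer, purchases)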

The tumor

The IBF is thus a collection of paradoxes. It is an advertisement, but not; a reality, but also a fantasy; a voyeuristic experience, but also an exercise in surveillance. Viewers are so distracted by what the IBF purports to be, and by the sheer pleasure of consumption, that the silent changes within us go undetected. By the time we realise, the tumor has already formed. Our apathy towards surveillance will be so deeply ingrained in our tissues that it will be impossible to remove. All that remains is for mankind to be subjugated by the state, divided into little cells flooded with light.


TWikiGuestFirstEssay 17 - 09 Oct 2020 - Main.KjSalameh
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
Deleted:
<
<
People used to rely on news outlets to know what was happening around the world; now, most of us get our news from social media. Because we share articles that fit our beliefs and connect with others who hold views similar to our own, the news we get becomes recycled through our previous biases. Here lies the ultimate danger of social media: our news has become so polarized that we lack the exposure necessary for society to advance.
 
Deleted:
<
<
The idea of our influences directing us toward belief and action is not new. Gustave Le Bon, one of the great thinkers on crowd psychology, makes the case that since the dawn of time we have been under the influence of religious, political, and social illusions. Because the masses have always been under these influences, he argues, we are conditioned to seek out an illusion to cling to under any and all circumstances. Philosophers of the 19th century, he states, worked greatly to destroy these illusions but were unable to provide the masses with any ideal that could sway them. Due to this, the masses now flock to whichever rhetorician whets their appetites. Social media, it seems, has become a universal outlet through which we grasp onto our illusions, as Le Bon describes, refusing to expose ourselves to viewpoints that differ from our own and thereby narrowing our visions of reality and widening the divisions we have from one another and, perhaps, from truth.
Added:
>
>
Polarization and the Division of Society

Social Media and the Masses

Process of Polarization and Potential for Progress

Polarization and the Division of Society

People used to rely on news outlets to know what was happening around the world; now, most of us get our news from social media. We used to read articles to determine 'what' was happening, and then think for ourselves about 'why' it happened and 'how' we should feel about it. Now, with convenience at our fingertips, we are in the midst of a reversal. The politics we are partial to already define for us 'what' has happened. This is a product of the increasing polarization and division in our society. To a large degree we already know what we will believe and what we will not accept, establishing a dangerous dichotomy of thought. The convenience of social media - the content of which is continually shaped by unseen forces and algorithms that prey on our technological footprints - has fed into this dichotomy. Most now follow the news to better understand the 'how'--how should we feel? How should we react? What fits the narrative of the rhetoric we have already accepted? And because we share articles that fit our beliefs and connect with others who hold views similar to our own, social media makes it easy to bolster this mindset, finding support for our biases rather than allowing new information to broaden our insights. The news we take in becomes recycled through our previous biases. Here lies the ultimate danger of social media without due regulation: the guided polarization of digital news only exacerbates the existing divisions in our society.

Social Media and the Masses

The idea of our influences directing us toward belief and action is not new. Le Bon, a polymath known for his work on crowd psychology, makes the case that since the dawn of time we have been under the influence of religious, political, and social illusions (see The Crowd: A Study of the Popular Mind). Because the masses have always been under these influences, he argues, we are conditioned to seek out an illusion to cling to under any and all circumstances. He noted that while philosophers of the 19th century worked greatly to destroy these illusions, they were not able to provide the masses with any ideal that could effectively sway them. Accordingly, the masses now flock to whichever rhetorician whets their appetites. Le Bon may have written his seminal work at the turn of the 20th century, but his words seem appropriate now more than ever. Social media has become a universal outlet through which we grasp onto our illusions and refuse to expose ourselves to viewpoints that differ from our own. Living in this new digital age, we thereby narrow our visions of reality and widen the divisions we have from one another and, perhaps, from truth. Truth itself has become fragmented, relying on the whims of the reader. All the while most of us remain clueless about the puppeteers behind the curtains.

It is natural, in the Lockean framework of epistemology, for our experiences to dictate our way of thinking, but the problem with polarization in social media today is that it leaves little to no room for genuine discourse. What social media offers us is steady, consistent affirmation from peers who think as we do. Social media is intrinsically designed to connect us with others who will encourage our way of thinking, even if our logic is flawed or our news misguided. In other words, social media has become home to such a convenient supply of assurance from those who already agree with us that productive speculation or healthy self-doubt becomes a foreign process. Many people then become so encouraged by their opinions that they begin to confuse them for facts. In order to bridge the gaps in our society, we must, at the very least, understand the diverse makeup of our communal struggle for survival.

Process of Polarization and Potential for Progress

Social media and similar digital mediums largely influence our thinking through targeted advertisements. Every time we swallow the mental pill on Facebook, Reddit, and the like, the databases on those sites store our personal and private data to their advantage, keeping close track of what we search for and what our interests are. This misuse of our privacy, and the self-selective filter bubbles social media creates for us, work to keep the masses addicted. We connect with others whose beliefs align with our own, we 'like' and share their posts, and without a second thought allow behemoth companies to track our personal information and internet-consumption tendencies. Social media works by continually offering us exposure to our interests; unfortunately, this is the problem. Since we are more likely to accept ideas that align with our pre-existing beliefs, and thus to continue scrolling down our feeds, the posts that appear first on our accounts are the news sources that play to our existing confirmation biases. Under such a system, what should we expect except a widening of the rifts that divide us?
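A toy ranking function makes the dynamic visible. In the Python sketch below, an illustration rather than any platform's actual algorithm, posts are scored purely by overlap with what the user has already liked, so belief-confirming items necessarily surface first and disconfirming ones sink.

# Toy feed ranker (an illustration, not any platform's actual
# algorithm): scoring posts by overlap with a user's past likes
# guarantees that belief-confirming content floats to the top.
liked_topics = {"gun rights", "tax cuts"}  # inferred from past clicks

posts = [
    {"id": 1, "topics": {"gun rights", "crime"}},
    {"id": 2, "topics": {"climate policy"}},
    {"id": 3, "topics": {"tax cuts", "gun rights"}},
]

def engagement_score(post):
    # Predicted engagement is just overlap with existing interests,
    # so disconfirming posts (no overlap) score zero and sink.
    return len(post["topics"] & liked_topics)

feed = sorted(posts, key=engagement_score, reverse=True)
print([post["id"] for post in feed])  # -> [3, 1, 2]

Real ranking systems are vastly more elaborate, but any objective that maximizes predicted engagement shares this basic bias toward the already believed.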

If we want our society to progress toward unity, we must depolarize our social media. To do this, we must begin by introducing legislation and regulation that prevent companies from providing overly filtered access to misguided illusions. It is not enough to fault the masses alone. If we read articles from more varied news sources, share them with friends who hold our current viewpoints, and create further connections to others with entirely different perspectives, we may begin to undo the process of polarized information that has so heavily influenced our social media and damaged our society. But to be truly successful, we must target the unseen as much as the obvious.


TWikiGuestFirstEssay 16 - 09 Oct 2020 - Main.KjSalameh
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
Deleted:
<
<
1st draft of the 1st essay
Added:
>
>
People used to rely on news outlets to know what was happening around the world; now, most of us get our news from social media. Because we share articles that fit our beliefs and connect with others who hold views similar to our own, the news we get becomes recycled through our previous biases. Here lies the ultimate danger of social media: our news has become so polarized that we lack the exposure necessary for society to advance.

The idea of our influences directing us toward belief and action is not new. Gustave Le Bon, one of the great thinkers on crowd psychology, makes the case that since the dawn of time we have been under the influence of religious, political, and social illusions. Because the masses have always been under these influences, he argues, we are conditioned to seek out an illusion to cling to under any and all circumstances. Philosophers of the 19th century, he states, worked greatly to destroy these illusions but were unable to provide the masses with any ideal that could sway them. Due to this, the masses now flock to whichever rhetorician whets their appetites. Social media, it seems, has become a universal outlet through which we grasp onto our illusions, as Le Bon describes, refusing to expose ourselves to viewpoints that differ from our own and thereby narrowing our visions of reality and widening the divisions we have from one another and, perhaps, from truth.


TWikiGuestFirstEssay 15 - 07 Oct 2020 - Main.ClaireCaton
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
Deleted:
<
<
The Internet Society’s Nuclear Option

In class, we have discussed the importance of privacy and the risks of surveillance in an era of increasingly sophisticated behavior recording, prediction, and manipulation. As a society, we are becoming increasingly entrenched in a burgeoning ecosystem of surveillance capitalism.

Many agree that a fundamental redirect is in order; the broadly unregulated, widespread capture of behavioral data should be restricted or even prohibited worldwide. Ideally, we might even eliminate all previously collected behavioral information.

However, as I reflect upon the current state of the Internet Society, I cannot ignore the nonzero possibility that the war to preserve the privacy of behavioral data and prevent sophisticated behavioral influence has already been lost.

Within Google's servers alone lie my proudest academic works, intimate secrets from my darkest moments, my tasks for the day, my plans for the year, a scatterplot of my social footprint, an extensive record of my movements, and contact information for every human I know. Facebook, Amazon, and Bank of America hold powerful data profiles of me as well. Add to these the datasets compiled by the U.S. government and other state entities.

I write this as a relatively well informed, well educated, and concerned citizen. My dismal tale of ignorant surrender and subsequent inaction is all too common. Around the globe various corporate and government entities hold massive troves of personal information regarding billions of humans.

Unfortunately, the deletion of this behavioral data strikes me as a functional impossibility. Such valuable digital information will not be destroyed by force. Considering the power of the parties who hold it and the existential threat that deletion would present, they will not cooperate either. We must also consider the general lack of support for such action at this time and the logistical difficulties inherent in such an effort. Accordingly, I assume that the behavioral data that has been collected will remain indefinitely.

Next, I consider the possibility that we can limit the capture of behavioral data to its present state.

Even if I completely unplug today, I have already leaked extensive information. The power of this data in combination with present-day tools is evident in societal changes as fundamental as declining sex drive and the swaying of national elections.

With such immense value, behavioral-data-driven tools will continue to advance even in the absence of new data collection.

The best-case scenario appears to be an incremental slowdown of behavioral data collection over several years with significant dissent by parties that are unmoved by widespread concern and have sufficient leverage to withstand external pressures (e.g. Communist Party of China).

Considering these dynamics, I am concerned that a data-collection slowdown may be insufficient to eliminate threats of social control. Accordingly, it seems prudent to consider an alternate plan of action in case of continued progression into a surveillance-centric ecosystem.

Society’s current path is one in which the Parasite with the Mind of God is under construction…or simply undergoing perpetual renovations. Theorists such as Ray Kurzweil and Nick Bostrom believe that society is en route to creating superintelligent artificial intelligence, a digital system that is capable of outperforming humanity in all intellectual endeavors. Such a machine strikes me as the natural conclusion of a society in a feedback loop of data capture for observation, analysis, and influence.

Bostrom further claims that superintelligent A.I. “is the last invention that man need ever make” as it may execute any further self-enhancements and will be sufficiently intelligent to thwart attempts at intervention.

If we continue on this path, we must decide who should be in control of this ultimate project and what procedures will guide the decision-making process.

At present, the frontrunners in the race for big data and sophisticated machine learning seem to be Big Tech and national governments. Neither group embodies the goals or procedures that I want guiding such a project of ultimate importance.

Both are shrouded in secrecy and exist within competitive spaces that cultivate racing behavior. “Move fast and break things.” “It's better to ask for forgiveness than to request permission.” As these tools become more powerful and their societal impact more drastic, such behavior becomes increasingly dangerous.

To avoid a future shaped by today’s likely candidates and their inherent flaws, I advocate the establishment of a socialized multinational AI research project that is subject to public input and oversight and is less constrained by capitalist and political forces. A unified global public project strikes me as the best opportunity to cultivate sufficient resources to surpass the efforts of Big Tech and national governments.

Even if such a project were initiated imminently, the hour is late and the competition is fierce. Thus, drastic action must be considered. Legislation granting data portability rights could be extremely helpful, allowing individuals to obtain their personal data from service providers and, in turn, share that information with the socialized project. Similarly, legislation that protects adversarial interoperability in the software industry could catalyze transitions away from predatory products upon which the public has become dependent. If necessary to achieve competitive dominance, further data collection on a consensual basis may be pursued.

While the collection and processing of behavioral information is inherently risky, an international socialized model may greatly reduce the risks of our present private and national models.

I do not advocate any surrender in the fight for privacy. I simply support the development of contingency plans. An arms race is afoot in both the private and public sector with many convinced that surveillance is the key to future dominance. In humanity’s failure to denuclearize, I see an inability of modern society to relinquish powerful tools of control and I fear that digital surveillance may be similarly destined to proliferate.

Added:
>
>
1st draft of the 1st essay

TWikiGuestFirstEssay 14 - 11 Jan 2020 - Main.JieLin
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
The Internet Society’s Nuclear Option

TWikiGuestFirstEssay 13 - 12 Oct 2019 - Main.AndrewIwanicki
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
Changed:
<
<
Even before I walk into the apartment where I am babysitting, the family is watching me. They're not home, but they see me on the “Ring” and text, “I see the nanny let you in.” Suddenly they appear on their video Alexa, without warning and without my answering, to explain the bedtime procedures for their 3-year-old. At bedtime she wants to listen to music. Almost immediately her parents have turned it on from their phones. Sitting at a concert 60 blocks south, they ignore Billy Joel, instead watching and listening to their daughter and me.
>
>
The Internet Society’s Nuclear Option
 
Changed:
<
<
Constant parental surveillance started with my generation. Friends got busted for lying about their whereabouts when their parents tracked their phones. Sneak in after curfew? Good luck. Your phone, the “Ring,” and the cameras inside are the nosiest neighbors. For concerned parents, the gadgets of the internet age allow for a type of helicoptering like never before.
>
>
 
Changed:
<
<
What if we told these concerned parents that with a few lines of Python anyone can watch? Or that there are websites listing webcams still set to their default passwords (or no passwords at all) that anyone on the internet can access?
>
>
In class, we have discussed the importance of privacy and the risks of surveillance in an era of increasingly sophisticated behavior recording, prediction, and manipulation. As a society, we are becoming increasingly entrenched in a burgeoning ecosystem of surveillance capitalism.
 
Added:
>
>
Many agree that a fundamental redirect is in order; the broadly unregulated, widespread capture of behavioral data should be restricted or even prohibited worldwide. Ideally, we might even eliminate all previously collected behavioral information.
 
Changed:
<
<

Hacking is Easy

>
>
However, as I reflect upon the current state of the Internet Society, I cannot ignore the nonzero possibility that the war to preserve the privacy of behavioral data and prevent sophisticated behavioral influence has already been lost.
 
Changed:
<
<
Accessing someone's unsecured webcam isn't difficult, and sites like Shodan and Insecam make it easier. Bots randomly scan for unsecured devices, something that can be done across the entire internet in a matter of hours. A quick search on Shodan turns up a slew of web servers that use the username and password admin/admin or that can be accessed with a password found by googling “manufacturer default credentials.” These default credentials are conveniently assembled in ispyconnect.com's “user guide.” Still other cameras can be accessed through known vulnerabilities, such as the flaw in cameras running the Boa web server that allows anyone to reset the admin password. In 2015, the security firm Rapid7 tested nine popular baby monitors for security. Eight of the nine got an F, the ninth a D minus. Despite the reporting on this in 2015, nothing has changed.
>
>
Within Google's servers alone lie my proudest academic works, intimate secrets from my darkest moments, my tasks for the day, my plans for the year, a scatterplot of my social footprint, an extensive record of my movements, and contact information for every human I know. Facebook, Amazon, and Bank of America hold powerful data profiles of me as well. Add to these the datasets compiled by the U.S. government and other state entities.
 
Changed:
<
<
There have been accounts of mothers catching hackers hijacking the cameras. One mother noticed her baby monitor moving without anyone controlling it; she realized it was scanning the room and landing on her bed, while everyone who was supposed to have control was in the same room, not touching the device. Others reported their baby monitors talking. One particularly disturbing case involved a hacker yelling at babies through baby cams.
>
>
I write this as a relatively well informed, well educated, and concerned citizen. My dismal tale of ignorant surrender and subsequent inaction is all too common. Around the globe various corporate and government entities hold massive troves of personal information regarding billions of humans.
 
Changed:
<
<
If peeping Toms on the internet are watching through baby monitors, what comes next? Surely those who lived in Stalin's Soviet Union would find it foolish to bring into your home a device that anyone can access. Even if you aren't worried about your own government, there is nothing stopping other countries from peeping too. This can allow for more targeted advertising, election campaigning, and perfect price discrimination. Even if governments or companies aren't themselves watching, the dangers of the commodification of personal information are real.
>
>
Unfortunately, the deletion of this behavioral data strikes me as a functional impossibility. Such valuable digital information will not be destroyed by force. Considering the power of the parties who hold it and the existential threat that deletion would present, they will not cooperate either. We must also consider the general lack of support for such action at this time and the logistical difficulties inherent in such an effort. Accordingly, I assume that the behavioral data that has been collected will remain indefinitely.
 
Changed:
<
<
The dangers of these insecure devices go beyond concerns about creeps, or the hypothetical, 1984-style concerns about governments or companies watching: they can bring down the internet. In 2016, the DNS provider Dyn was attacked by the Mirai botnet, which took down sites including Netflix, Twitter, and Spotify, largely using IoT devices (such as baby monitors) infected with malware that gave the attackers complete control. Further, a baby monitor can grant a hacker access to the home network and to information on its computers.
>
>
Next, I consider the possibility that we can limit the capture of behavioral data to its present state.
 
Added:
>
>
Even if I completely unplug today, I have already leaked extensive information. The power of this data in combination with present-day tools is evident in societal changes as fundamental as declining sex drive and the swaying of national elections.
 
Changed:
<
<

The Law

>
>
With such immense value, behavioral-data-driven tools will continue to advance even in the absence of new data collection.
 
Changed:
<
<
As is common with the law and the internet, the law hasn't caught up with baby monitors. Some have noted that the right to privacy should apply here; what is more of a violation of privacy than someone watching you in your bedroom? But seemingly natural applications of existing laws don't go far enough to solve the problem. Applying peeping Tom laws to those watching over baby monitors could prosecute some people and give some justice to victims, but avoiding prosecution wouldn't be hard, and it wouldn't solve the problem. Security experts have proposed other solutions, including regulation of baby monitors, allowing victims to sue the baby monitor companies, and hacking back.
>
>
The best-case scenario appears to be an incremental slowdown of behavioral data collection over several years with significant dissent by parties that are unmoved by widespread concern and have sufficient leverage to withstand external pressures (e.g. Communist Party of China).
 
Changed:
<
<
Security experts have called on the government to get involved by regulating IoT devices. Mikko Hypponen, chief research officer at F-Secure, for example, compared leaking WiFi passwords to devices catching on fire: it shouldn't happen, and the government should make sure it doesn't. Experts have proposed civil and criminal penalties for creating insecure devices, and laws requiring buyers to change the default password before the device can be used. Others, however, believe regulation would be useless because U.S. regulations won't affect other countries.
>
>
Considering these dynamics, I am concerned that a data-collection slowdown may be insufficient to eliminate threats of social control. Accordingly, it seems prudent to consider an alternate plan of action in case of continued progression into a surveillance-centric ecosystem.
 
Changed:
<
<
Some have proposed allowing victims of baby monitor hacks to sue the manufacturers or sellers of the monitors. The Mirai attack shows how widespread the hacking of these devices is and suggests the possibility of a class action suit. If companies were hit with hefty fines, they would be incentivized to send shoddy security for IoT devices the way of lead paint.
>
>
Society’s current path is one in which the Parasite with the Mind of God is under construction…or simply undergoing perpetual renovations. Theorists such as Ray Kurzweil and Nick Bostrom believe that society is en route to creating superintelligent artificial intelligence, a digital system that is capable of outperforming humanity in all intellectual endeavors. Such a machine strikes me as the natural conclusion of a society in a feedback loop of data capture for observation, analysis, and influence.
 
Changed:
<
<
Still others have proposed a more radical solution: hacking back. Rob Graham, a security researcher and hacker, suggested the NSA launch a proactive strike to knock compromised IoT devices offline. Graham sees this as a solution to U.S. legislation being useless overseas. While that may be true, there are likely constitutional concerns with the NSA hacking into people's devices to knock them offline.
>
>
Bostrom further claims that superintelligent A.I. “is the last invention that man need ever make” as it may execute any further self-enhancements and will be sufficiently intelligent to thwart attempts at intervention.
 
Added:
>
>
If we continue on this path, we must decide who should be in control of this ultimate project and what procedures will guide the decision-making process.
 
Changed:
<
<

Conclusion

>
>
At present, the frontrunners in the race for big data and sophisticated machine learning seem to be Big Tech and national governments. Neither group embodies the goals or procedures that I want guiding such a project of ultimate importance.
 
Deleted:
<
<
This paper discussed the security concerns of hackers accessing baby monitors and what this could mean for the commodification of personal data, for access by companies and governments, and for widespread attacks. Other concerns with baby monitors go beyond the scope of this paper: children growing up constantly surveilled, and the ethics of spying on your babysitter, to name a couple. Parents have begun to worry about sharing their children's lives on Instagram, and a class action suit is currently proceeding against Disney for scraping data from children's video games. It is time for parents to become concerned about the safety devices they bring into their homes.
Added:
>
>
Both are shrouded in secrecy and exist within competitive spaces that cultivate racing behavior. “Move fast and break things.” “It's better to ask for forgiveness than to request permission.” As these tools become more powerful and their societal impact more drastic, such behavior becomes increasingly dangerous.

To avoid a future shaped by today’s likely candidates and their inherent flaws, I advocate the establishment of a socialized multinational AI research project that is subject to public input and oversight and is less constrained by capitalist and political forces. A unified global public project strikes me as the best opportunity to cultivate sufficient resources to surpass the efforts of Big Tech and national governments.

Even if such a project were initiated imminently, the hour is late and the competition is fierce. Thus, drastic action must be considered. Legislation granting data portability rights could be extremely helpful, allowing individuals to obtain their personal data from service providers and, in turn, share that information with the socialized project. Similarly, legislation that protects adversarial interoperability in the software industry could catalyze transitions away from predatory products upon which the public has become dependent. If necessary to achieve competitive dominance, further data collection on a consensual basis may be pursued.

While the collection and processing of behavioral information is inherently risky, an international socialized model may greatly reduce the risks of our present private and national models.

I do not advocate any surrender in the fight for privacy. I simply support the development of contingency plans. An arms race is afoot in both the private and public sector with many convinced that surveillance is the key to future dominance. In humanity’s failure to denuclearize, I see an inability of modern society to relinquish powerful tools of control and I fear that digital surveillance may be similarly destined to proliferate.


TWikiGuestFirstEssay 12 - 08 Oct 2019 - Main.AyeletBentley
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
Changed:
<
<
Even before I walk into the apartment where I am babysitting, the family is watching me. They're not home, but they see me on the “Ring” and text, “I see the nanny let you in.” Suddenly they appear on their video Alexa, without warning and without my answering, to explain the bedtime procedures for their 3-year-old. At bedtime she wants to listen to “Uncle Moishe.” Almost immediately her parents have turned it on from their phones. Sitting at a concert 60 blocks south, they ignore Billy Joel, instead watching and listening to their daughter and me.
>
>
Even before I walk into the apartment where I am babysitting, the family is watching me. They're not home, but they see me on the “Ring” and text, “I see the nanny let you in.” Suddenly they appear on their video Alexa, without warning and without my answering, to explain the bedtime procedures for their 3-year-old. At bedtime she wants to listen to music. Almost immediately her parents have turned it on from their phones. Sitting at a concert 60 blocks south, they ignore Billy Joel, instead watching and listening to their daughter and me.
Constant parental surveillance started with my generation. Friends got busted for lying about their whereabouts when their parents tracked their phones. Sneak in after curfew? Good luck. Your phone, the “Ring,” and the cameras inside are the nosiest neighbors. For concerned parents, the gadgets of the internet age allow for a type of helicoptering like never before.

TWikiGuestFirstEssay 11 - 08 Oct 2019 - Main.AyeletBentley
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
Added:
>
>
Even before I walk into the apartment where I am babysitting, the family is watching me. They're not home, but they see me on the “Ring” and text, “I see the nanny let you in.” Suddenly they appear on their video Alexa, without warning and without my answering, to explain the bedtime procedures for their 3-year-old. At bedtime she wants to listen to “Uncle Moishe.” Almost immediately her parents have turned it on from their phones. Sitting at a concert 60 blocks south, they ignore Billy Joel, instead watching and listening to their daughter and me.

Constant parental surveillance started with my generation. Friends got busted for lying about their whereabouts when their parents tracked their phones. Sneak in after curfew? Good luck. Your phone, the “Ring,” and the cameras inside are the nosiest neighbors. For concerned parents, the gadgets of the internet age allow for a type of helicoptering like never before.

What if we told these concerned parents that with a few lines of Python anyone can watch? Or that there are websites listing webcams still set to their default passwords (or no passwords at all) that anyone on the internet can access?

Hacking is Easy

Accessing someone's unsecured webcam isn't difficult, and sites like Shodan and Insecam make it easier. Bots randomly scan for unsecured devices, something that can be done across the entire internet in a matter of hours. A quick search on Shodan turns up a slew of web servers that use the username and password admin/admin or that can be accessed with a password found by googling “manufacturer default credentials.” These default credentials are conveniently assembled in ispyconnect.com's “user guide.” Still other cameras can be accessed through known vulnerabilities, such as the flaw in cameras running the Boa web server that allows anyone to reset the admin password. In 2015, the security firm Rapid7 tested nine popular baby monitors for security. Eight of the nine got an F, the ninth a D minus. Despite the reporting on this in 2015, nothing has changed.
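As a concrete, and deliberately defensive, illustration, the short Python sketch below checks whether a camera you own still answers to factory credentials over HTTP Basic Auth. The address and credential list are hypothetical examples, and it should be run only against devices you own.

# Defensive sketch: test whether YOUR OWN camera still accepts factory
# credentials over HTTP Basic Auth. The address and credential list are
# hypothetical examples; run this only against devices you own.
import requests

CAMERA_URL = "http://192.168.1.50/"  # hypothetical address of your own camera
FACTORY_CREDENTIALS = [("admin", "admin"), ("admin", ""), ("admin", "1234")]

for username, password in FACTORY_CREDENTIALS:
    try:
        response = requests.get(CAMERA_URL, auth=(username, password), timeout=5)
    except requests.RequestException:
        print("Camera unreachable; nothing to test.")
        break
    if response.status_code == 200:
        print(f"Factory credentials still active: {username!r}/{password!r}")
        break
else:
    print("None of the factory credentials on this list were accepted.")

If the first message prints, the camera is effectively one search-engine query away from being someone else's; changing the default password closes the simplest door.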

There have been accounts of mothers catching hackers hijacking the cameras. One mother noticed her baby monitor moving without anyone controlling it; she realized it was scanning the room and landing on her bed, while everyone who was supposed to have control was in the same room, not touching the device. Others reported their baby monitors talking. One particularly disturbing case involved a hacker yelling at babies through baby cams.

If peeping Toms on the internet are watching through baby monitors, what comes next? Surely those who lived in Stalin's Soviet Union would find it foolish to bring into your home a device that anyone can access. Even if you aren't worried about your own government, there is nothing stopping other countries from peeping too. This can allow for more targeted advertising, election campaigning, and perfect price discrimination. Even if governments or companies aren't themselves watching, the dangers of the commodification of personal information are real.

The dangers of these insecure devices go beyond concerns about creeps, or the hypothetical, 1984-style concerns about governments or companies watching: they can bring down the internet. In 2016, the DNS provider Dyn was attacked by the Mirai botnet, which took down sites including Netflix, Twitter, and Spotify, largely using IoT devices (such as baby monitors) infected with malware that gave the attackers complete control. Further, a baby monitor can grant a hacker access to the home network and to information on its computers.

The Law

As is common with the law and the internet, the law hasn't caught up with baby monitors. Some have noted that the right to privacy should apply here; what is more of a violation of privacy than someone watching you in your bedroom? But seemingly natural applications of existing laws don't go far enough to solve the problem. Applying peeping Tom laws to those watching over baby monitors could prosecute some people and give some justice to victims, but avoiding prosecution wouldn't be hard, and it wouldn't solve the problem. Security experts have proposed other solutions, including regulation of baby monitors, allowing victims to sue the baby monitor companies, and hacking back.

Security experts have called on the government to get involved by regulating IoT devices. Mikko Hypponen, chief research officer at F-Secure, for example, compared leaking WiFi passwords to devices catching on fire: it shouldn't happen, and the government should make sure it doesn't. Experts have proposed civil and criminal penalties for creating insecure devices, and laws requiring buyers to change the default password before the device can be used. Others, however, believe regulation would be useless because U.S. regulations won't affect other countries.

Some have proposed allowing victims of baby monitor hacks to sue the manufacturers or sellers of the monitors. The Mirai attack shows how widespread the hacking of these devices is and suggests the possibility of a class action suit. If companies were hit with hefty fines, they would be incentivized to send shoddy security for IoT devices the way of lead paint.

Still others have proposed a more radical solution: hacking back. Rob Graham, a security researcher and hacker, suggested the NSA launch a proactive strike to knock compromised IoT devices offline. Graham sees this as a solution to U.S. legislation being useless overseas. While that may be true, there are likely constitutional concerns with the NSA hacking into people's devices to knock them offline.

Conclusion

This paper discussed the security concerns of hackers accessing baby monitors and what this could mean for the commodification of personal data, for access by companies and governments, and for widespread attacks. Other concerns with baby monitors go beyond the scope of this paper: children growing up constantly surveilled, and the ethics of spying on your babysitter, to name a couple. Parents have begun to worry about sharing their children's lives on Instagram, and a class action suit is currently proceeding against Disney for scraping data from children's video games. It is time for parents to become concerned about the safety devices they bring into their homes.


TWikiGuestFirstEssay 10 - 07 Oct 2019 - Main.EungyungEileenChoi
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"

TWikiGuestFirstEssay 9 - 05 Oct 2019 - Main.KerimAksoy
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"

TWikiGuestFirstEssay 8 - 03 Dec 2017 - Main.LizzethMerchan
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
Deleted:
<
<
Cuba Disconnected

An island nation only 90 miles off Florida's coast, Cuba is among the least connected countries in the world. Seeking to control the minds of its people, the Castro regime has sought to choke off all flow of information from abroad. Free access to the internet, the world's largest source of information, is incompatible with the totalitarian model. In 2015, approximately 5 percent of the population had access to the global internet at home, although the Cuban government put that number closer to 35 percent. There is one source of permitted media in Cuba – the State.

Internet access in Cuba has been hampered by poor connectivity, prohibitive pricing, censorship, and systematic content blocking. For years, the internet was banned and ownership of most computer software and hardware was prohibited. In 2009, the Obama administration began allowing telecommunications companies to do business in Cuba. In 2013, Venezuela financed and activated a fiber-optic cable between the two countries. These developments have created a dual internet system on the island – the global internet, which is inaccessible to most Cubans, and a domestic intranet, which is cheaper and highly censored. The government's intranet is more readily available and features restricted content, including the EcuRed encyclopedia, a Wikipedia-like website that presents the government's version of the world and its history. Until recently, the global internet was available only at state internet parlors and hotels, where connecting was slow and prohibitively expensive. In July 2015, however, the state-owned telecom monopoly, ETECSA (Empresa de Telecomunicaciones de Cuba S.A.), began deploying WiFi hot spots. Today, there are more than 240 public WiFi spots scattered throughout the country. Although the hot spots have improved accessibility, service is slow and the price to connect remains unaffordable – $1.50 per hour in a country where the average monthly salary is $25.

Cuban Solutions

Despite the government's efforts to restrict and control internet access, Cubans haven't exactly been sitting around letting the digital revolution pass them by. The population has responded to the media blockade with innovative solutions that range from hacking to the creation of an underground internet system. One of the most notable innovations is el paquete semanal, the “Weekly Package” – a flash drive loaded with a week's worth of foreign entertainment, including movies, music videos, and Netflix shows, that is distributed throughout the island. The process is coordinated entirely by an informal market of data traffickers based both in Havana and in the United States. Another, perhaps more sophisticated, invention is a bootleg internet system referred to as the “Streetnet” – a home-grown network created and maintained by Cubans on the island out of black-market computers, routers, nano-modems, and concealed cables. Connecting to the Streetnet gives thousands of Cubans access to one another for online gaming, messaging, and media sharing.

Although it is undeniable that the Cuban people have been creative and resourceful in bypassing their government's resistance to internet access, their success is meaningfully limited. The content of the weekly paquete, for example, is primarily recreational – the information disseminated is not political or religious in any way. Perhaps this is why the Cuban government has tolerated the circulation of such prohibited content: as long as the distributed material remains free of ideologically threatening information, the regime allows Cubans a taste of illicit foreign culture. The existence of these bootleg inventions is convenient for the government because it allows the masses to feel empowered while simultaneously silencing and appeasing them. In their quest to consume and share information, Cuba's digital revolutionists have attained only a restricted sense of “freedom,” one granted and controlled by their government.


TWikiGuestFirstEssay 7 - 02 Dec 2017 - Main.LizzethMerchan
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
Changed:
<
<
The Internet and Capitalist Gain: The Cost of Lunch
>
>
Cuba Disconnected
 
Deleted:
<
<
It was fall 1990. I was a freshman at Syracuse and my high school girlfriend was a freshman at Dartmouth. In one of her letters, she described a system called “ELM” that would allow us to write to each other through our universities' personal computers. I was intrigued and slightly skeptical (I had always thought of myself as the more tech-savvy one in the relationship). The next day, an assistant in Syracuse's personal computer (PC) lab demoed the “ELM” system, and I sent my girlfriend an email message - my first experience with the Internet.
 
Changed:
<
<
Columbia Professor Eben Moglen would have us believe that the Internet's architects designed it with the altruistic goal of reaching, and then availing education to, every human on earth, and that corporations such as Google and Facebook have despoiled this ideal in pursuit of “capitalist gain.” That might be true. But it is also true that, without the pursuit of “gain,” the Internet would never have experienced such a colossal expansion in global usage. This “gain” is a quid pro quo – a cost of corporate contribution to the Internet's growth.
>
>
An island nation only 90 miles off Florida's coast, Cuba is among the least connected countries in the world. Seeking to control the minds of its people, the Castro regime has sought to choke off all flow of information from abroad. Free access to the internet, the world's largest source of information, is incompatible with the totalitarian model. In 2015, approximately 5 percent of the population had access to the global internet at home, although the Cuban government put that number closer to 35 percent. There is one source of permitted media in Cuba – the State.
 
Changed:
<
<
The Internet is the worldwide, public network of interconnected computer networks. The modern-day Internet is commonly thought to have descended from the ARPAnet, a network developed by the U.S. Department of Defense's Advanced Research Projects Agency (ARPA, later DARPA). The U.S. government created the agency in February 1958, after being caught off guard by the Soviet Union's launch of an intercontinental ballistic missile and of the world's first unmanned satellites, Sputnik 1 and 2. In 1962, amid fears of what might happen if the Soviet Union attacked the nation's telephone system, a scientist from M.I.T. proposed, as a solution, a “galactic network” of computers that could talk to each other (later realized as the ARPAnet).
>
>
Internet access in Cuba has been hampered by poor connectivity, prohibitive pricing, censorship, and systematic content blocking. For years, the internet was banned and ownership of most computer software and hardware was prohibited. In 2009, the Obama administration began allowing telecommunications companies to do business in Cuba. In 2013, Venezuela financed and activated a fiber-optic cable between Cuba and Venezuela. These developments have created a dual internet system on the island – the global internet, which is inaccessible to most Cubans, and the government's own intranet, which is cheaper and highly censored. The intranet is more readily available and features restricted content, including the EcuRed encyclopedia, a Wikipedia-like website that presents the government's version of the world and its history. Until recently, the global internet was available only at state internet parlors and hotels, where connecting was slow and prohibitively expensive. In July 2015, however, the state-owned telecom monopoly, ETECSA (Empresa de Telecomunicaciones de Cuba S.A.), began deploying WiFi hot spots. Today, there are more than 240 public WiFi spots scattered throughout the country. Although the hot spots have improved accessibility, service is slow and the price to connect remains unaffordable – $1.50 per hour in a country where the average monthly salary is $25, meaning an entire month's wages buys fewer than 17 hours online.
 
Changed:
<
<
Although a network in name, the Internet is a creature of the computer. During the early computing age, computers were incredibly expensive to produce and operate. An early computer, the Electronic Numerical Integrator and Computer (ENIAC), cost $500,000 (roughly $6.8 million in today's money), weighed 30 tons, covered nearly 2,000 square feet, and contained almost 18,000 vacuum tubes. The pursuit of "gain" motivated "for-profit" corporations to produce smaller, faster, and more affordable computers, with more memory and user-friendly software.
>
>
Cuban Solutions
 
Changed:
<
<
In 1948, Bell Laboratories introduced the transistor, an electronic device that carries and amplifies electrical current but is much smaller than the vacuum tube. Ten years later, scientists at Texas Instruments and Fairchild Semiconductor invented the integrated circuit, which incorporated a computer's electrical parts into a single silicon chip.

In 1971, an Intel engineer developed the microprocessor, one of the most significant advancements in computer technology. Before this invention, computers needed a separate integrated circuit for each function (hence the need for such large machines). Microprocessors were the size of a thumbnail yet could run a computer's programs and manage its data. Intel's first microprocessor, the 4004, had the same computing power as the massive ENIAC.

These innovations led to the birth of the small, relatively inexpensive "microcomputer," now known as the "personal computer." In 1974, a corporation called Micro Instrumentation and Telemetry Systems (MITS) introduced the Altair, a mail-order, build-it-yourself PC kit. In 1975, MITS hired two young programmers – Bill Gates and Paul Allen – to adapt the BASIC programming language for the Altair. In April 1975, the pair formed Microsoft, the company responsible for the hugely popular Windows operating systems. By some estimates, Windows runs more than 90% of all PCs.

Over the next two years, two engineers in Silicon Valley – Steve Wozniak and Steve Jobs – built the Apple I and Apple II PCs, which offered more memory and a cheaper microprocessor than the Altair, along with a monitor and a keyboard. Innovations like the "graphical user interface," which let users select icons on a screen instead of typing complicated commands, and the computer mouse made PCs more user-friendly. Bell Laboratories, Texas Instruments, Fairchild Semiconductor, Intel, MITS, Microsoft, and Apple were all "for-profit" corporations. These corporations and their inventions spearheaded the PC revolution.

Soon, other "for-profit" corporations – Xerox, Tandy, Commodore, and IBM among them – entered the PC market. PCs, networked over the global telephony infrastructure, created the Internet we have today. Innovations in personal computing facilitated the Internet's expansion to 201 countries and to 3.8 billion people – 51.7% of the human population. There might have been an Internet without PCs, but it would have been uninteresting, and probably confined to the research community and computer scientists.

In my Syracuse finance classes, professors inculcated the axiom that "for-profit" corporations exist for the sole purpose of maximizing shareholder wealth. According to Columbia alumnus Milton Friedman, "[t]here is . . . only one social responsibility of business – to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, engages in open and free competition, without deception or fraud."

Moreover, shareholder wealth maximization is the law. For decades, Delaware courts, which dominate U.S. corporate jurisprudence, have espoused the tenet that the maximization of shareholder wealth must be every “for-profit” corporation’s ultimate objective. In essence, a corporation that pursues “capitalist gain” is merely following its legal mandate and complying with contractual obligations to its shareholders. As Senator Franken reminded us, “it is literally malfeasance for a corporation not to do everything it legally can to maximize its profits.”

No doubt, the U.S. government, through DARPA, funded some research and technological developments that made the ARPAnet, and eventually the Internet, possible. However, in many cases, this funding was provided to private “for-profit” corporations, like Xerox. It was the desire for “capitalist gain” that led these corporations to develop the technology products commissioned by DARPA.

Having advanced the Internet architects’ goal of reaching every human on earth, it should not come as a surprise that “for-profit” corporations would then seek to exploit the Internet for “gain.” These corporations are simply following market and legal expectations. The “gain” sought is a cost of the corporations’ facilitation of the Internet’s expansion. As Friedman stated, “[t]here’s no such thing as a free lunch.”

>
>
Despite the government's efforts to restrict and control internet access, Cubans have not simply let the digital revolution pass them by. The population has responded to the media blockade with innovative solutions that range from hacking to the creation of an underground internet system. One of the most notable innovations is el paquete semanal, the "Weekly Package" – a flash drive loaded with a week's worth of foreign entertainment, including movies, music videos, and Netflix shows, that is distributed throughout the island. The process is coordinated entirely by an informal market of data traffickers based in both Havana and the United States. Another, perhaps more sophisticated, invention is a bootleg internet system referred to as the "Streetnet" – a home-grown network created and maintained by Cubans on the island out of black-market computers, routers, nano-modems, and concealed cables. Connecting to the Streetnet gives thousands of Cubans access to one another for online gaming, messaging, and media sharing.
Added:
>
>
Although it is undeniable that the Cuban people have been creative and resourceful in circumventing their government's restrictions on internet access, their success is meaningfully limited. The content of the weekly paquete, for example, is primarily recreational – the information disseminated is neither political nor religious. Perhaps this is why the Cuban government has tolerated the circulation of such prohibited content: as long as the distributed material remains free of ideologically threatening information, the regime allows Cubans a taste of illicit foreign culture. These bootleg inventions are convenient for the government because they let the masses feel empowered while simultaneously silencing and appeasing them. In their quest to consume and share information, Cuba's digital revolutionaries have attained only a restricted sense of "freedom," one granted and controlled by their government.

TWikiGuestFirstEssay 6 - 11 Nov 2017 - Main.TravisMcMillian
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
Changed:
<
<
Maya Uchima – What to Learn From the European Union's Recent Reforms on Data Privacy
>
>
The Internet and Capitalist Gain: The Cost of Lunch
 
Changed:
<
<
The Privacy Infringement Problem in the US

It has become more and more apparent in today's society that the concept of privacy has been eroded, redefined, and curtailed as the power of corporations has grown. Consumers must actively and aggressively opt out of having private information logged and stored by websites, and oftentimes they are given no option at all to prevent companies from collecting their data. Consider EPIC's lawsuit against Google, which alleges that Google has been tracking in-store purchases by gathering information from credit card transactions and using that data to target ads to each consumer. Not only can purchases (online and off) reveal one's tastes and interests, but internet searches and the viewing trends logged by a cable box also provide valuable data for profitable marketing strategies. There is an argument that these targeted ads serve only to make life easier, more convenient, and tailored. Nevertheless, when the consumer is given no choice, the discomfort of this ruthless invasion of private life far outweighs the possible benefit of learning about a sale at a preferred shoe store. It feels as though the fight for privacy has succumbed to the allure of blind trust in these mega-corporations.
>
>
It was fall 1990. I was a freshman at Syracuse and my high school girlfriend was a freshman at Dartmouth. In one of her letters, she described a system called "ELM" that would allow us to write to each other through our universities' personal computers. I was intrigued and slightly skeptical (I had always thought of myself as the more tech-savvy one in the relationship). The next day, an assistant in Syracuse's personal computer (PC) lab demoed the "ELM" system, and I sent my girlfriend an email message – my first experience with the Internet.
 
Changed:
<
<
Insufficient Protections

The US is not without protections for the consumer. The Fourth Amendment broadly outlines the right against unreasonable searches and seizures, setting a foundation for the consumer's right to protect his data and his online choices. There are also the federal wiretap laws, the Electronic Communications Privacy Act, and, most importantly, the FTC Act of 1914, which seeks to protect consumers from unfair or deceptive business practices. The FTC is empowered to pursue a corporation for questionable behavior, but unless the FTC deems the behavior worthy of an investigation, the private consumer is left with scant recourse. Other regulations tend to be too narrow, covering, say, only medical data disclosure or only financial data protection. So what can the US do to begin providing more coverage for the consumer?
>
>
Columbia Professor Eben Moglen would have us believe that the Internet's architects designed it with the altruistic goal of reaching, and then availing education to, every human on earth, and that corporations such as Google and Facebook have depredated this ideal in pursuit of "capitalist gain." That might be true. But it is also true that, without the pursuit of "gain," the Internet would never have experienced such a colossal expansion in global usage. This "gain" is a quid pro quo – the cost of corporate contribution to the Internet's growth.
 
Changed:
<
<
Possible Pointers in the EU

The EU's recent policies may shed some light on possible next steps. Regulation (EU) 2016/679, better known as the GDPR, will go into effect across the member states of the EU (including the UK) in May 2018. It aims to strengthen supervision and protection of consumer data. The new rules apply to both "controllers" and "processors" of data, who work in conjunction to carry out any activity involving personal data. The regulation sets out higher penalties for breaches and stricter compliance obligations, including the keeping of detailed activity logs. It also defines "personal data" more broadly, now including IP addresses, where before only personally identifiable information (names, social security numbers, etc.) was recognized. The EU also issued Directive (EU) 2016/680, the Law Enforcement Directive, last year. Directives, although not immediately binding legislation the way regulations are, act as general guidelines for member states, which in turn create internal policies to comply with the directive's overarching goal. The directive states that data can be used only in the course of preventing or investigating crimes, and it defines the limitations and scope of what constitutes a crime more clearly. Administrative agencies will provide independent supervision over law enforcement actions, and certain remedies will be available for infringements of privacy that are unfair or disproportionate.
>
>
The Internet is the world-wide, public network of interconnected computer networks. The modern-day Internet is commonly thought to have descended from the ARPAnet, a network developed by the U.S. Department of Defense's Advanced Research Projects Agency (ARPA, later DARPA). The U.S. government created the agency in February 1958, after being caught off guard by the Soviet Union's launch of an intercontinental ballistic missile and of the world's first artificial satellites, Sputnik 1 and 2. In 1962, amid fears of what might happen if the Soviet Union attacked the nation's telephone system, a scientist at M.I.T. proposed, as a solution, a "galactic network" of computers that could talk to each other – the idea that became the ARPAnet.

Although a network in name, the Internet is a creature of the computer. During the early computing age, computers were incredibly expensive to produce and operate. An early computer, the Electronic Numerical Integrator and Computer (ENIAC), cost $500,000 (roughly $6.8 million in today's money), weighed 30 tons, covered nearly 2,000 square feet, and contained almost 18,000 vacuum tubes. The pursuit of "gain" motivated "for-profit" corporations to produce smaller, faster, and more affordable computers, with more memory and user-friendly software.

In 1948, Bell Laboratories introduced the transistor, an electronic device that carries and amplifies electrical current but is much smaller than the vacuum tube. Ten years later, scientists at Texas Instruments and Fairchild Semiconductor invented the integrated circuit, which incorporated a computer's electrical parts into a single silicon chip.

In 1971, an Intel engineer developed the microprocessor, one of the most significant advancements in computer technology. Before this invention, computers needed a separate integrated circuit for each function (hence the need for such large machines). Microprocessors were the size of a thumbnail yet could run a computer's programs and manage its data. Intel's first microprocessor, the 4004, had the same computing power as the massive ENIAC.

These innovations led to the birth of the small, relatively inexpensive "microcomputer," now known as the "personal computer." In 1974, a corporation called Micro Instrumentation and Telemetry Systems (MITS) introduced the Altair, a mail-order, build-it-yourself PC kit. In 1975, MITS hired two young programmers – Bill Gates and Paul Allen – to adapt the BASIC programming language for the Altair. In April 1975, the pair formed Microsoft, the company responsible for the hugely popular Windows operating systems. By some estimates, Windows runs more than 90% of all PCs.

Over the next two years, two engineers in Silicon Valley – Steve Wozniak and Steve Jobs – built the Apple I and Apple II PCs, which offered more memory and a cheaper microprocessor than the Altair, along with a monitor and a keyboard. Innovations like the "graphical user interface," which let users select icons on a screen instead of typing complicated commands, and the computer mouse made PCs more user-friendly. Bell Laboratories, Texas Instruments, Fairchild Semiconductor, Intel, MITS, Microsoft, and Apple were all "for-profit" corporations. These corporations and their inventions spearheaded the PC revolution.

Soon, other "for-profit" corporations – Xerox, Tandy, Commodore, and IBM among them – entered the PC market. PCs, networked over the global telephony infrastructure, created the Internet we have today. Innovations in personal computing facilitated the Internet's expansion to 201 countries and to 3.8 billion people – 51.7% of the human population. There might have been an Internet without PCs, but it would have been uninteresting, and probably confined to the research community and computer scientists.

In my Syracuse finance classes, professors inculcated the axiom that "for-profit" corporations exist for the sole purpose of maximizing shareholder wealth. According to Columbia alumnus Milton Friedman, "[t]here is . . . only one social responsibility of business – to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, engages in open and free competition, without deception or fraud."

Moreover, shareholder wealth maximization is the law. For decades, Delaware courts, which dominate U.S. corporate jurisprudence, have espoused the tenet that the maximization of shareholder wealth must be every “for-profit” corporation’s ultimate objective. In essence, a corporation that pursues “capitalist gain” is merely following its legal mandate and complying with contractual obligations to its shareholders. As Senator Franken reminded us, “it is literally malfeasance for a corporation not to do everything it legally can to maximize its profits.”

No doubt, the U.S. government, through DARPA, funded some research and technological developments that made the ARPAnet, and eventually the Internet, possible. However, in many cases, this funding was provided to private “for-profit” corporations, like Xerox. It was the desire for “capitalist gain” that led these corporations to develop the technology products commissioned by DARPA.

Having advanced the Internet architects’ goal of reaching every human on earth, it should not come as a surprise that “for-profit” corporations would then seek to exploit the Internet for “gain.” These corporations are simply following market and legal expectations. The “gain” sought is a cost of the corporations’ facilitation of the Internet’s expansion. As Friedman stated, “[t]here’s no such thing as a free lunch.”

 
Deleted:
<
<
Not a Perfect System

The EU's continued interest in protecting consumers most likely stems from a stronger belief that privacy is a fundamental human right – a value not quite shared yet in the US. There are many theories for why Europeans in general tend to shield their private lives more than citizens of the US. One of the most prominent holds that the trauma of the Holocaust, when Nazi officials used school and bank records to find the names and addresses of Jewish people, entrenched the felt necessity of protections for personal information. However, the EU system is not perfect. Its policies work mainly because of a heightened sense of trust among citizens in their member states' governments. The US government has struggled to maintain even a semblance of respect for privacy, and with the 2013 revelation that the NSA was using PRISM to monitor and track data from internet transactions, the people's distrust of the government has skyrocketed. To call for US citizens to suddenly embrace government regulation and surveillance as guardians of their data against corporations would be too large a gap to bridge, and would in fact create other problems, as the government and its subsidiaries have proven dubious and secretive about maintaining boundaries with citizens. The key takeaway from the EU reforms is the shift in mentality toward viewing privacy as a fundamental right to be protected at all costs. The EU has instituted independent bodies to oversee uses of data and has ensured steep remedies for breaches. These steps will not end the problems of private data infringement, but they may begin the deterrence process.

TWikiGuestFirstEssay 5 - 10 Nov 2017 - Main.MayaUchima
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"
Changed:
<
<

Hacking as Spectacle

>
>
Maya Uchima – What to Learn From the European Union's Recent Reforms on Data Privacy
 
Added:
>
>
The Privacy Infringement Problem in the US

It has become more and more apparent in today's society that the concept of privacy has been eroded, redefined, and curtailed as the power of corporations has grown. Consumers must actively and aggressively opt out of having private information logged and stored by websites, and oftentimes they are given no option at all to prevent companies from collecting their data. Consider EPIC's lawsuit against Google, which alleges that Google has been tracking in-store purchases by gathering information from credit card transactions and using that data to target ads to each consumer. Not only can purchases (online and off) reveal one's tastes and interests, but internet searches and the viewing trends logged by a cable box also provide valuable data for profitable marketing strategies. There is an argument that these targeted ads serve only to make life easier, more convenient, and tailored. Nevertheless, when the consumer is given no choice, the discomfort of this ruthless invasion of private life far outweighs the possible benefit of learning about a sale at a preferred shoe store. It feels as though the fight for privacy has succumbed to the allure of blind trust in these mega-corporations.
 
Added:
>
>
Insufficient Protections

The US is not without protections for the consumer. The Fourth Amendment broadly outlines the right against unreasonable searches and seizures, setting a foundation for the consumer's right to protect his data and his online choices. There are also the federal wiretap laws, the Electronic Communications Privacy Act, and, most importantly, the FTC Act of 1914, which seeks to protect consumers from unfair or deceptive business practices. The FTC is empowered to pursue a corporation for questionable behavior, but unless the FTC deems the behavior worthy of an investigation, the private consumer is left with scant recourse. Other regulations tend to be too narrow, covering, say, only medical data disclosure or only financial data protection. So what can the US do to begin providing more coverage for the consumer?
 
Changed:
<
<

Introduction

>
>
Possible Pointers in the EU

The EU's recent policies may shed some light on possible next steps. Regulation (EU) 2016/679, better known as the GDPR, will go into effect across the member states of the EU (including the UK) in May 2018. It aims to strengthen supervision and protection of consumer data. The new rules apply to both "controllers" and "processors" of data, who work in conjunction to carry out any activity involving personal data. The regulation sets out higher penalties for breaches and stricter compliance obligations, including the keeping of detailed activity logs. It also defines "personal data" more broadly, now including IP addresses, where before only personally identifiable information (names, social security numbers, etc.) was recognized. The EU also issued Directive (EU) 2016/680, the Law Enforcement Directive, last year. Directives, although not immediately binding legislation the way regulations are, act as general guidelines for member states, which in turn create internal policies to comply with the directive's overarching goal. The directive states that data can be used only in the course of preventing or investigating crimes, and it defines the limitations and scope of what constitutes a crime more clearly. Administrative agencies will provide independent supervision over law enforcement actions, and certain remedies will be available for infringements of privacy that are unfair or disproportionate.
 
Deleted:
<
<
At present, a large portion of consumers are unaware of the extent to which software companies "sell" or give away their personal information when they accept the user agreements found in most major "free" programs like Google Mail or Apple's iCloud. Therefore, the question we must address is not how to reach sophisticated users of technology, but how to mobilize the masses of casual users. From this perspective, the exposure of software deficiencies as spectacle through hacking may be an effective means of undermining programs structured to take advantage of their unwitting users. Although the practice of "hacking" means and functions in many different ways for many different people, over the years the term "hacker" has become increasingly demonized. As discussed in class, hacking can be characterized as the ability to use creative means to make power move, or shift, in directions it was not originally intended to go. In a more positive light, hacking can also be described as a means of expressing dissatisfaction with a current system – a form of civil disobedience. Applying the philosophy of Henry David Thoreau's essay on civil disobedience to consumer software, users should not permit systems to overrule or atrophy their consciences, and they have a duty not to let such acquiescence make them agents of injustice. Most importantly, hacking as spectacle can provide an effective means of advertising the necessity of open-source software, which disseminates information more transparently. It is not enough merely to undermine the system; hacking as spectacle should illustrate how hostile many programs and systems are to both privacy and autonomy.

The significance of spectacle

Hacking as spectacle can serve multiple purposes: it can create national dialogues on questions of surveillance and privacy, and it may also provide solutions or reforms that remedy the privacy issues identified. Hackers have long played a vital role in improving both software and hardware. In open-source software development, for instance, hackers are indispensable for innovation and for their ability to continually improve and repurpose code. Even developers of proprietary or copyrighted software hire "white hat" hackers to test the security and functionality of websites and new software.

Moving forward, hackers will play an increasingly important role in bringing to light deficiencies in "privacy" protocols, website surveillance, and other security mechanisms that are purposely hidden from the majority of technology users. Two recent examples demonstrate how strategic hacking and internet leaks can bring the question of privacy front and center in national and global debate. In the first, Edward Snowden, a former NSA contractor, leaked documents outlining numerous global surveillance programs run by the NSA with the cooperation of telecommunication companies and European governments. Through these leaks, Snowden not only exposed the practices of the NSA but, more importantly, sparked an international dialogue on internet security, privacy, and government surveillance. On a smaller scale, the celebrity nude photo leaks from Apple's iCloud this fall similarly sparked public concern over privacy and the security of cloud computing, particularly its use to store sensitive or private information. While the iCloud leak did not have the same national security implications or backlash as Mr. Snowden's disclosures, it was an effective means of demonstrating the deficiencies of broad-based cloud computing to the general public.

As these examples illustrate, hacking as spectacle can be an efficient means of effecting change because the efforts have low marginal costs. Those low marginal costs mean that, in theory, the practice cannot be "outspent" by capitalism. Like open-source software, the practice of hacking as spectacle will, because of its low marginal costs, outperform the efforts of capitalism, since it is constantly improved through collaboration; for the same reason, information about the practice will also spread more effectively. With this in mind, if attacks on sites and programs like iCloud, Gmail, or Facebook's servers became pervasive enough, fewer consumers would use them. If that happened, these programs would likely change to address users' concerns, or free and open-source alternatives would be created to satisfy the new demand for more secure applications.

Redefining and repossessing rights

Under these circumstances, hacking can help expose the way privacy has turned from a right whose violation the government must justify into one that individuals must affirmatively defend. As Edward Snowden stated in an interview at the New Yorker Festival, people say they have nothing to hide, and when that happens, the model of responsibility for how rights work is inverted: "When you say, 'I have nothing to hide,' you're saying, 'I don't care about this right.' You're saying, 'I don't have this right, because I've got to the point where I have to justify it.' The way rights work is, the government has to justify its intrusion into your rights." Hacking can also surface costs that mass consumers of so-called "free" applications have long failed to recognize, and it can reintroduce questions of how privacy should function in the Internet age. This dialogue will grow ever more important as the "Facebook" generation – which places a premium on volunteering its location, activities, relationships, spending habits, and job experience to the general public – moves into positions of influence with a skewed sense of privacy. The movement from the era of the written word to the era of technology is an ongoing trial, one that hacking as spectacle may improve down the road, although no one can truly be sure what social results will come of this haphazard experiment with social media and internet surveillance.

-- WyattLittles - 16 Oct 2014

 
Added:
>
>
Not a Perfect System

The EU's continued interest in protecting consumers most likely stems from a stronger belief that privacy is a fundamental human right – a value not quite shared yet in the US. There are many theories for why Europeans in general tend to shield their private lives more than citizens of the US. One of the most prominent holds that the trauma of the Holocaust, when Nazi officials used school and bank records to find the names and addresses of Jewish people, entrenched the felt necessity of protections for personal information. However, the EU system is not perfect. Its policies work mainly because of a heightened sense of trust among citizens in their member states' governments. The US government has struggled to maintain even a semblance of respect for privacy, and with the 2013 revelation that the NSA was using PRISM to monitor and track data from internet transactions, the people's distrust of the government has skyrocketed. To call for US citizens to suddenly embrace government regulation and surveillance as guardians of their data against corporations would be too large a gap to bridge, and would in fact create other problems, as the government and its subsidiaries have proven dubious and secretive about maintaining boundaries with citizens. The key takeaway from the EU reforms is the shift in mentality toward viewing privacy as a fundamental right to be protected at all costs. The EU has instituted independent bodies to oversee uses of data and has ensured steep remedies for breaches. These steps will not end the problems of private data infringement, but they may begin the deterrence process.

TWikiGuestFirstEssay 4 - 10 Nov 2017 - Main.DylanHunzeker
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"

Hacking as Spectacle


TWikiGuestFirstEssay 3 - 26 Oct 2015 - Main.LianchenLiu
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"

Hacking as Spectacle


TWikiGuestFirstEssay 2 - 20 Oct 2014 - Main.WyattLittles
Line: 1 to 1
 
META TOPICPARENT name="WebPreferences"

Hacking as Spectacle


TWikiGuestFirstEssay 1 - 16 Oct 2014 - Main.WyattLittles
Line: 1 to 1
Added:
>
>
META TOPICPARENT name="WebPreferences"

Hacking as Spectacle

Introduction

At present, a large portion of consumers are unaware of the extent to which software companies "sell" or give away their personal information when they accept the user agreements found in most major "free" programs like Google Mail or Apple's iCloud. Therefore, the question we must address is not how to reach sophisticated users of technology, but how to mobilize the masses of casual users. From this perspective, the exposure of software deficiencies as spectacle through hacking may be an effective means of undermining programs structured to take advantage of their unwitting users. Although the practice of "hacking" means and functions in many different ways for many different people, over the years the term "hacker" has become increasingly demonized. As discussed in class, hacking can be characterized as the ability to use creative means to make power move, or shift, in directions it was not originally intended to go. In a more positive light, hacking can also be described as a means of expressing dissatisfaction with a current system – a form of civil disobedience. Applying the philosophy of Henry David Thoreau's essay on civil disobedience to consumer software, users should not permit systems to overrule or atrophy their consciences, and they have a duty not to let such acquiescence make them agents of injustice. Most importantly, hacking as spectacle can provide an effective means of advertising the necessity of open-source software, which disseminates information more transparently. It is not enough merely to undermine the system; hacking as spectacle should illustrate how hostile many programs and systems are to both privacy and autonomy.

The significance of spectacle

Hacking as spectacle can serve multiple purposes: it can create national dialogues on questions of surveillance and privacy, and it may also provide solutions or reforms that remedy the privacy issues identified. Hackers have long played a vital role in improving both software and hardware. In open-source software development, for instance, hackers are indispensable for innovation and for their ability to continually improve and repurpose code. Even developers of proprietary or copyrighted software hire "white hat" hackers to test the security and functionality of websites and new software.

Moving forward, hackers will play an increasingly important role in bringing to light deficiencies in "privacy" protocols, website surveillance, and other security mechanisms that are purposely hidden from the majority of technology users. Two recent examples demonstrate how strategic hacking and internet leaks can bring the question of privacy front and center in national and global debate. In the first, Edward Snowden, a former NSA contractor, leaked documents outlining numerous global surveillance programs run by the NSA with the cooperation of telecommunication companies and European governments. Through these leaks, Snowden not only exposed the practices of the NSA but, more importantly, sparked an international dialogue on internet security, privacy, and government surveillance. On a smaller scale, the celebrity nude photo leaks from Apple's iCloud this fall similarly sparked public concern over privacy and the security of cloud computing, particularly its use to store sensitive or private information. While the iCloud leak did not have the same national security implications or backlash as Mr. Snowden's disclosures, it was an effective means of demonstrating the deficiencies of broad-based cloud computing to the general public.

As these examples illustrate, hacking as spectacle can be an efficient means of effecting change because the efforts have low marginal costs. Those low marginal costs mean that, in theory, the practice cannot be "outspent" by capitalism. Like open-source software, the practice of hacking as spectacle will, because of its low marginal costs, outperform the efforts of capitalism, since it is constantly improved through collaboration; for the same reason, information about the practice will also spread more effectively. With this in mind, if attacks on sites and programs like iCloud, Gmail, or Facebook's servers became pervasive enough, fewer consumers would use them. If that happened, these programs would likely change to address users' concerns, or free and open-source alternatives would be created to satisfy the new demand for more secure applications.

Redefining and repossessing rights

Under these circumstances, hacking can help expose the way privacy has turned from a right whose violation the government must justify into one that individuals must affirmatively defend. As Edward Snowden stated in an interview at the New Yorker Festival, people say they have nothing to hide, and when that happens, the model of responsibility for how rights work is inverted: "When you say, 'I have nothing to hide,' you're saying, 'I don't care about this right.' You're saying, 'I don't have this right, because I've got to the point where I have to justify it.' The way rights work is, the government has to justify its intrusion into your rights." Hacking can also surface costs that mass consumers of so-called "free" applications have long failed to recognize, and it can reintroduce questions of how privacy should function in the Internet age. This dialogue will grow ever more important as the "Facebook" generation – which places a premium on volunteering its location, activities, relationships, spending habits, and job experience to the general public – moves into positions of influence with a skewed sense of privacy. The movement from the era of the written word to the era of technology is an ongoing trial, one that hacking as spectacle may improve down the road, although no one can truly be sure what social results will come of this haphazard experiment with social media and internet surveillance.

-- WyattLittles - 16 Oct 2014

 

Revision 37r37 - 09 Nov 2024 - 22:01:54 - XuanyiLee
Revision 36r36 - 24 Oct 2024 - 21:42:47 - CliftonMartin
Revision 35r35 - 22 Oct 2023 - 21:00:34 - JasmineBovia
Revision 34r34 - 13 Oct 2023 - 15:59:18 - EdenEsemuede
Revision 33r33 - 13 Oct 2023 - 02:30:33 - LudovicoColetti
Revision 32r32 - 13 Oct 2023 - 01:14:33 - LudovicoColetti
Revision 31r31 - 10 Jan 2022 - 04:15:10 - NataliaNegret
Revision 30r30 - 01 Nov 2021 - 09:57:43 - RochishaTogare
Revision 29r29 - 26 Oct 2021 - 04:50:01 - KatharinaRogosch
Revision 28r28 - 25 Oct 2021 - 20:57:22 - RochishaTogare
Revision 27r27 - 23 Oct 2021 - 00:22:22 - KatharinaRogosch
Revision 26r26 - 22 Oct 2021 - 20:48:47 - NathalieNoura
Revision 25r25 - 22 Oct 2021 - 20:34:23 - NathalieNoura
Revision 24r24 - 22 Oct 2021 - 18:43:18 - NathalieNoura
Revision 23r23 - 22 Oct 2021 - 16:06:34 - NathalieNoura
Revision 22r22 - 22 Oct 2021 - 00:59:17 - NathalieNoura
Revision 21r21 - 17 Oct 2021 - 20:52:07 - NuriCemAlbayrak
Revision 20r20 - 10 Oct 2020 - 13:34:47 - ConradNoronha
Revision 19r19 - 09 Oct 2020 - 22:03:29 - JohnClayton
Revision 18r18 - 09 Oct 2020 - 20:33:43 - JulieLi
Revision 17r17 - 09 Oct 2020 - 02:48:03 - KjSalameh
Revision 16r16 - 09 Oct 2020 - 00:35:10 - KjSalameh
Revision 15r15 - 07 Oct 2020 - 19:31:03 - ClaireCaton
Revision 14r14 - 11 Jan 2020 - 18:38:04 - JieLin
Revision 13r13 - 12 Oct 2019 - 00:11:36 - AndrewIwanicki
Revision 12r12 - 08 Oct 2019 - 12:52:49 - AyeletBentley
Revision 11r11 - 08 Oct 2019 - 00:54:27 - AyeletBentley
Revision 10r10 - 07 Oct 2019 - 04:22:41 - EungyungEileenChoi
Revision 9r9 - 05 Oct 2019 - 20:15:09 - KerimAksoy
Revision 8r8 - 03 Dec 2017 - 01:11:26 - LizzethMerchan
Revision 7r7 - 02 Dec 2017 - 23:21:36 - LizzethMerchan
Revision 6r6 - 11 Nov 2017 - 03:01:07 - TravisMcMillian
Revision 5r5 - 10 Nov 2017 - 18:58:18 - MayaUchima
Revision 4r4 - 10 Nov 2017 - 05:05:52 - DylanHunzeker
Revision 3r3 - 26 Oct 2015 - 02:02:28 - LianchenLiu
Revision 2r2 - 20 Oct 2014 - 01:46:41 - WyattLittles
Revision 1r1 - 16 Oct 2014 - 01:26:45 - WyattLittles