Law in the Internet Society
The Real Bias Built In At Facebook - Zeynep Tufekci

https://www.nytimes.com/2016/05/19/opinion/the-real-bias-built-in-at-facebook.html?rref=collection%2Fcolumn%2Fzeynep-tufekci

I believe that hidden algorithmic processes are the epitome of Richard Stallman's idea that the inability to read, understand, and change computer programs results in user enslavement. Algorithms function behind the scenes: there is no point of contact between the average user of the Google search bar and the algorithms at work on it. How is a user to understand, rewrite, or stop an algorithm if there is no way of accessing it in the first place?

There are ways to "mess with," or deliberately alter, the search results and trending topics one would normally see within the world of social media; an experiment conducted in a Social Networks course at Princeton University demonstrates this (https://freedom-to-tinker.com/2017/09/19/what-our-students-found-when-they-tried-to-break-their-bubbles/). For the most part, however, search results are dictated by inherently biased algorithms that have been observing you and sorting you into a box of vague yet very identifiable categories.
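
To make concrete what that sorting might look like, here is a minimal sketch of profile-based personalization. Everything in it is hypothetical: the topics, the click counts, and the scoring rule are invented for illustration, and real ranking systems use far richer signals than this.

    from collections import Counter

    # The platform's running record of which topics a user has clicked on
    # (hypothetical data standing in for the "box" the user has been sorted into).
    click_history = Counter({"sports": 12, "celebrity": 8, "politics": 1})

    # Candidate stories, each labeled with the topics it covers.
    stories = [
        {"title": "Local election results", "topics": ["politics"]},
        {"title": "Championship recap", "topics": ["sports"]},
        {"title": "Red-carpet roundup", "topics": ["celebrity"]},
    ]

    def score(story, history):
        """Score a story by how well it matches the user's inferred interests."""
        return sum(history[topic] for topic in story["topics"])

    # Rank what the user sees by fit with the profile, not by civic importance.
    for story in sorted(stories, key=lambda s: score(s, click_history), reverse=True):
        print(score(story, click_history), story["title"])

Under this toy rule the election story ranks last for this user no matter how newsworthy it is, which is the sense in which the profile, not the user, decides what gets seen.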

I know that I am impacted by these algorithms even though my interaction with them is limited, and that I am being steered by a powerful force that I somewhat understand but certainly cannot control. What I have learned to do (partly for my sanity, but mostly to ensure I am critical of every source I encounter and its biases) is diversify the ways in which I interact with information. In other words, I consult a variety of sources for even the most basic searches, especially those conducted online, in an effort to side-step the algorithms. This way, in a very limited capacity, I control my own information intake.

If you are reading this and thinking that this is a naive solution, or that it is something you have been doing for years, I am glad to hear it. In my experience, though, I cannot confidently say that this is the prevailing habit when it comes to accessing information, even in 2017.

I worked for the New York Public Library for about a year and a half, and I left repulsed by the way the system handles books and information and treats access to them. Within the Harlem Network of libraries where I worked, little to no attention was given to helping patrons navigate a basic research process. Questions about conducting "safe" online research, as mandated by schools in the area, were ignored, and the workshops and tutoring sessions I ran with teens were eventually phased out because they were "time consuming" and required "too much prep."

This anecdote speaks to a broader societal issue: the assumption that thinking critically about the information we encounter online is intuitive, and that we will simply know the difference between a "fake" fact or story and a real one. But it is 2017, and the proliferation of misinformation is a worldwide trend. We are already late in thinking about the impact of algorithms and of circulating sources and stories that may not be entirely true, and to some degree we have already let the facts slip away from us.

A quote from the article linked above:

"For example, in August 2014, my analysis found that Facebook’s newsfeed algorithm largely buried news of protests over the killing of Michael Brown by a police officer in Ferguson, Mo., probably because the story was certainly not “like”-able and even hard to comment on. Without likes or comments, the algorithm showed Ferguson posts to fewer people, generating even fewer likes in a spiral of algorithmic silence. The story seemed to break through only after many people expressed outrage on the algorithmically unfiltered Twitter platform, finally forcing the news to national prominence."

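The spiral Tufekci describes, in which fewer likes lead to fewer impressions and therefore fewer likes still, can be made concrete with a toy simulation. The amplification rule below (roughly three new impressions per like, capped at ten times the starting audience) is an invented assumption, not Facebook's actual newsfeed algorithm, which is unpublished.

    import random

    def simulate_reach(like_probability, rounds=6, initial_reach=1000, seed=0):
        """Toy feedback loop: next round's audience scales with this round's likes."""
        rng = random.Random(seed)
        reach = initial_reach
        history = []
        for _ in range(rounds):
            # Each person shown the post likes it independently with the given probability.
            likes = sum(rng.random() < like_probability for _ in range(reach))
            history.append(reach)
            # Assumed amplification rule: show the post to about three people per like.
            reach = min(3 * likes, 10 * initial_reach)
        return history

    # A "like"-able story versus one that, like the Ferguson posts, is hard to like.
    print("likeable:", simulate_reach(0.5))
    print("hard to like:", simulate_reach(0.2))

Under these assumptions the likeable story's audience grows until it hits the cap, while the hard-to-like story's audience shrinks round after round toward zero: the "spiral of algorithmic silence" in miniature.
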
-- MadihaZahrahChoksi - 09 Oct 2017

 
