Law in the Internet Society

LudovicoColettiFirstEssay 3 - 08 Jan 2024 - Main.EbenMoglen

Social Credit Systems: Dystopian Visions and Real-world Challenges

In Europe, the GDPR has attempted to address the issues that may arise from the use of social scoring systems (and other systems meant to “profile” individuals) by introducing Article 22, which allows individuals to opt out of “automated decision making” and to obtain human intervention whenever their personal information is used to make a decision that produces a legal effect concerning them (e.g., entering into a contract with that individual).

I believe that such a rule is a good start, but simply relying on human intervention to review a decision already made by an algorithm is not sufficient. The scope of the rule should be expanded to establish a genuine right to be assessed as a human being, not on the basis of predictions of our future behavior derived from our online activity. In light of the examples mentioned above, such rules are particularly necessary in contexts such as the provision of banking services or dealings with public administration, but they should eventually extend to every situation in which the fundamental rights and freedoms of individuals are at risk.


A much-improved draft. I don't have further suggestions.


LudovicoColettiFirstEssay 2 - 04 Dec 2023 - Main.LudovicoColetti

Social Credit Systems: Dystopian Visions and Real-world Challenges

In the dystopian world of the TV show "Black Mirror," the episode "Nosedive" depicts a society where social media ratings determine one’s socioeconomic status and access to essential services. Using a mobile application, everyone constantly rates everyone else on a five-point scale. Those with higher scores can access better services and exclusive clubs, while those with low scores are penalized in many ways. While this may seem like far-fetched fiction, today’s reality may not be too distant from this portrayal.
 
Real-world examples

The first example that comes to mind is China’s Social Credit System (SCS), developed between 2014 and 2020. The SCS uses artificial intelligence "to develop comprehensive data-driven structures for management around algorithms that can produce real time reward-punishment structures for social-legal-economic and other behaviors" (see Larry Cata Backer). The SCS relies on a series of blacklists and redlists managed at different levels (municipal, local, or national). Each authority can manage its own blacklist (e.g., of those who failed to pay fines or child support), and they all converge into the National Credit Information Sharing Platform. As Kevin Werbach notes, this makes it possible that "grade A taxpayers receive customs fee waivers and low-interest loans, in addition to the home benefits offered by the tax collection authority". However, Prof. Werbach believes that the Western depiction of the SCS is excessively negative, especially in a world where governments and corporations already track our behavior extensively. He cites Yuval Noah Harari’s idea that free-market capitalism and state-controlled communism can be regarded as distinct data processing systems: the former decentralized, the latter centralized.

Starting from this assumption, it should not come as a surprise that Western versions of social credit experiments are being carried out mainly by private corporations, especially in the financial sector. Since the 2008 financial crisis, many "fintech" online lenders have been experimenting with new scoring models for establishing creditworthiness.

Historically, banks have used scoring models to compute a person's credit score based on past financial behavior and additional factors with predictive value. This practice has also been regulated, notably by the Fair Credit Reporting Act and the Equal Credit Opportunity Act (ECOA), the latter prohibiting credit discrimination on the basis of race, color, religion, national origin, sex, marital status, and age.

But the new models are based on a person's "social footprint," revealed by elements such as their social circle or shopping habits: surprisingly, it appears that buying felt pads has a positive influence on how the algorithms forecast your financial behavior. Such information is often collected with the individual’s consent, but in other circumstances the analysis is performed covertly, as happened to American Express cardholders whose credit limits were reduced after they shopped in the same places as other customers who did not pay their bills on time.

Concerns and Implications

As outlined in this 2016 article by Nizan Packin and Yafit Lev-Aretz, these practices cause privacy harms at two levels: direct, to the loan seeker, and derivative, to the loan seeker's contacts, as “social credit systems inherently implicate the information of third parties, who never agreed their information could be collected, evaluated, or analyzed”. They also foster social segregation, reduce social mobility, and increase the risk of arbitrary decisions based on incorrect data. For example, the use of social credit systems can nullify the above-mentioned limits set forth in the ECOA, as attributes like gender and race are easily detectable by the algorithm because “they are typically explicitly or implicitly encoded in rich data sets”.
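
To make the "implicitly encoded" point concrete, here is a minimal, purely illustrative Python sketch. The group labels, the shopping-habit feature, and the 80/20 split are invented for illustration and are not drawn from the sources above; the only point is that a scorer which never sees a protected attribute can still recover it from an innocuous proxy.

```python
# Illustrative sketch (hypothetical data): a protected attribute that is never
# given to the scorer can still be inferred from a correlated "proxy" feature.
import random

random.seed(0)

people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])  # protected attribute, hidden from the scorer
    # Assumed correlation: group A shops at store X 80% of the time, group B 20%.
    shops_at_x = random.random() < (0.8 if group == "A" else 0.2)
    people.append((group, shops_at_x))

# A rule that only looks at the proxy recovers the protected attribute far better
# than chance, which is how a model can sidestep ECOA-style prohibitions in practice.
correct = sum((group == "A") == shops_at_x for group, shops_at_x in people)
print(f"Group inferred from shopping habit alone: {correct / len(people):.0%} accuracy")
```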

Turning our gaze to Europe, we see that the risk of discrimination highlighted above has already become painfully real. In 2013, the Dutch Tax Authorities employed a self-learning algorithm to detect child care benefits fraud. The algorithm trained itself to use risk indicators such as having a low income or belonging to an ethnic minority. As a result, thousands of families were wrongly labeled as fraudsters and suffered severe consequences. This led to the Dutch Government’s resignation and a 3.7 million euro fine imposed on the Tax Administration by the Dutch Data Protection Authority.

 
Conclusion

 
Social credit systems are a product of Surveillance Capitalism, a system that “unilaterally claims human experience as free raw material for translation into behavioral data” (S. Zuboff, The Age of Surveillance Capitalism). The same principles of extracting “behavioral surplus,” pioneered by companies like Google and Facebook over the last 20 years to create more effective advertising, are now being applied in contexts with ever greater impact on our rights and freedoms, such as access to credit, and their reach will keep growing unless a line is drawn.
 
In Europe, the GDPR has attempted to address the issues that may arise from the use of social scoring systems (and other systems meant to “profile” individuals) by introducing Article 22, which allows individuals to opt out of “automated decision making” and to obtain human intervention whenever their personal information is used to make a decision that produces a legal effect concerning them (e.g., entering into a contract with that individual).
 
I believe that such a rule is a good start, but simply relying on human intervention to review a decision already made by an algorithm is not sufficient. The scope of the rule should be expanded to establish a genuine right to be assessed as a human being, not on the basis of predictions of our future behavior derived from our online activity. In light of the examples mentioned above, such rules are particularly necessary in contexts such as the provision of banking services or dealings with public administration, but they should eventually extend to every situation in which the fundamental rights and freedoms of individuals are at risk.

LudovicoColettiFirstEssay 1 - 13 Oct 2023 - Main.LudovicoColetti
Social Credit Systems

In the dystopian world of the TV show "Black Mirror," the episode "Nosedive" depicts a society where social media ratings determine one’s socioeconomic status and access to essential services. Using a mobile application, everyone constantly rates everyone else on a five-point scale. Those with higher scores can access better services and exclusive clubs, while those with low scores are penalized in many ways. While this may seem like far-fetched fiction, today’s reality may not be too distant from this portrayal.

The first example that comes to mind is China’s Social Credit System (SCS), developed between 2014 and 2020. The SCS uses artificial intelligence "to develop comprehensive data-driven structures for management around algorithms that can produce real time reward-punishment structures for social-legal-economic and other behaviors" (Larry Cata Backer, Next generation law: data-driven governance and accountability-based regulatory systems in the west, and social credit regimes in China, 2018). In reality, the SCS does not rely on a universal score but rather on a series of blacklists and redlists managed at different levels (municipal, local, or national). Each authority can manage its own blacklist (e.g., of those who failed to pay fines or child support), and they all converge into the National Credit Information Sharing Platform. As Kevin Werbach notes in his 2022 article “Orwell that ends well? Social credit as regulation for the algorithmic age,” this makes it possible that "grade A taxpayers receive customs fee waivers and low-interest loans, in addition to the home benefits offered by the tax collection authority". However, Prof. Werbach believes that the Western depiction of the SCS is excessively negative, especially in a world where governments and corporations already track our behavior extensively. He sees the Nosedive scenario as more closely resembling the rating systems of Uber or eBay, expanded beyond the boundaries of a single service.

He cites the idea brought forward by Yuval Noah Harari that free-market capitalism and state-controlled communism can be regarded as distinct data processing systems: the former is decentralized and the latter is centralized.

Starting from this assumption, it should not come as a surprise that Western versions of social credit experiments are being carried out mainly by private corporations, especially in the financial sector. Since the 2008 financial crisis, many "fintech" online lenders have been experimenting with new scoring models for establishing creditworthiness. Historically, banks have used scoring models to compute a person's credit score based on past financial behavior and additional factors with predictive value. This practice has also been regulated, notably by the Fair Credit Reporting Act and the Equal Credit Opportunity Act (ECOA), the latter prohibiting credit discrimination on the basis of race, color, religion, national origin, sex, marital status, and age.

But the new models are based on a person's "social footprint," revealed by elements such as their social circle or shopping habits: surprisingly, it appears that buying felt pads has a positive influence on how the algorithms forecast your financial behavior. Such information is often collected with the individual’s consent. As outlined in the 2016 article “On social credit and the right to be unnetworked” by Nizan Geslevich Packin and Yafit Lev-Aretz, these practices cause privacy harms at two levels: direct, to the loan seeker, and derivative, to the loan seeker's contacts, as “social credit systems inherently implicate the information of third parties, who never agreed their information could be collected, evaluated, or analyzed”. They also foster social segregation, reduce social mobility, and increase the risk of arbitrary decisions based on incorrect data. For example, the use of social credit systems can nullify the above-mentioned limits set forth in the ECOA, as attributes like gender and race are easily detectable by the algorithm because “they are typically explicitly or implicitly encoded in rich data sets”. The authors believe that the solution should be the introduction of a right to be unnetworked, i.e., to opt out of being socially scored.

Turning our gaze to Europe, we see that the risk of discrimination highlighted above has already become painfully real. In 2013, the Dutch Tax Authorities employed a self-learning algorithm to detect child care benefits fraud. The algorithm trained itself to use risk indicators such as having a low income or belonging to an ethnic minority. As a result, thousands of families were wrongly labeled as fraudsters and suffered severe consequences. This led to the Dutch Government’s resignation and a 3.7 million euro fine imposed on the Tax Administration by the Autoriteit Persoonsgegevens, the Dutch Data Protection Authority, for breaching several GDPR rules. In particular, the Authority found that the Tax Administration had no legal basis for processing the personal data used as risk indicators (under the GDPR, personal data may be processed only if one of the legal bases listed in Article 6 applies).

In the hyper-regulated European Union, the GDPR has attempted to address the issues that may arise from the use of social scoring systems (and other systems meant to “profile” individuals) by introducing Article 22, which allows individuals to opt out of automated decision making, including profiling, and to obtain human intervention whenever their personal information is used to make a decision that produces a legal effect concerning them (e.g., entering into a contract with that individual). Additionally, the proposed EU AI Act aims to place serious limitations on "AI systems providing social scoring of natural persons for general purposes by public authorities." These limitations prohibit social scoring systems from leading to detrimental or unfair treatment in unrelated social contexts or based on unjustified or disproportionate criteria.

The extent and effect of these limitations are yet to be tested, but it seems clear that a thorough reflection on the risks of social scoring systems must begin as soon as possible, before reality overtakes fiction.

