Law in the Internet Society

New Technologies in the Workplace and Legislation

 
-- By IppeiKawase - 5 Jan 2021
 

Introduction

New technologies, ranging from relatively simple software to more complex computer programs, are increasingly used in the workplace to support employers’ decision-making. With their introduction, new legal questions have arisen, and legislators have attempted to address them by adopting new legislation. This essay examines two such laws – the EU’s GDPR and an Illinois statute – in the contexts of recruitment and termination, and clarifies the challenges and issues such legislation presents.
 

GDPR

This section focuses on Article 22 of the General Data Protection Regulation (GDPR), which regulates “automated individual decision-making,” and discusses issues with the provision through the example of recent litigation.
 

The Content of the Law

GDPR Article 22 aims to protect people from unfair automated decision-making. It provides: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Regarding the term “based solely,” the European Data Protection Board (EDPB) explains that if a human reviews and takes account of other factors in making the final decision, that decision is not “based solely” on automated processing. The EDPB further explains that an employer cannot avoid Article 22 by “fabricating human involvement.”
 

Uber Case

In 2020, a UK-based union filed legal action in the Netherlands against Uber over the use of an algorithm to dismiss four Uber drivers. The plaintiff claims that the dismissals violate Article 22, arguing that Uber’s decisions were based exclusively on automated processing of the drivers’ personal data, with no significant human intervention, and pointing to the fact that the messages Uber sent the drivers consist largely of standardized and very general text. In response, Uber argues that the drivers’ activities were assessed by its employees.
 

The Issues of the Law

Although the EDPB notes that “fabricating human involvement” cannot be used to escape Article 22, the standard for judging whether an employer has fabricated human involvement is unclear. Since some degree of human involvement with a computer program is almost always present, including with the algorithm in the Uber case, an employer could plausibly explain such involvement after the fact even where there was no human involvement of the kind the EDPB envisions. In addition, the argument about the messages Uber sent is not very persuasive, because an employer can involve its employees in the decision to dismiss and still send a message whose content is identical to one generated automatically by the algorithm.
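The evidentiary weakness of the standardized messages can be illustrated with a minimal, purely hypothetical sketch (none of the names or thresholds come from Uber’s actual system): a fully automated pipeline and a pipeline with nominal human sign-off can emit byte-identical dismissal notices, so the message text alone cannot show that a decision was “based solely” on automated processing.

```python
# Hypothetical illustration only: the same standardized dismissal message
# can come from a fully automated pipeline or from one with a nominal
# human sign-off, so message text is weak evidence under GDPR Article 22.
from typing import Optional

TEMPLATE = ("We detected irregular activity on your account. "
            "Your access has been deactivated.")

def automated_decision(fraud_score: float) -> Optional[str]:
    """Fully automated: deactivate whenever the score crosses a threshold."""
    return TEMPLATE if fraud_score > 0.9 else None

def reviewed_decision(fraud_score: float, reviewer_approves: bool) -> Optional[str]:
    """Nominal human involvement: a reviewer signs off on the same flag."""
    if fraud_score > 0.9 and reviewer_approves:
        return TEMPLATE
    return None

# Both paths emit identical text; the recipient cannot tell from the
# message which process produced the decision.
assert automated_decision(0.95) == reviewed_decision(0.95, reviewer_approves=True)
```

The sketch is deliberately trivial: the legal question is about the process behind the message, which the message itself does not reveal.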
 

Illinois Statute

While the GDPR does not specifically focus on the workplace, the Illinois legislature has recently passed a law regulating “artificial intelligence analysis” in job interviews. This section analyzes issues with this new legislation.
 

The Content of the Law

The law is called the Artificial Intelligence Video Interview Act, which became effective on January 1, 2020. It requires an employer that “asks applicants to record video interviews and uses an artificial intelligence analysis of the applicant-submitted videos. . .” (1) to “Notify each applicant before the interview that artificial intelligence may be used to analyze the applicant’s video interview and consider the applicant’s fitness for the position”; (2) to “Provide each applicant with information before the interview explaining how the artificial intelligence works and what general types of characteristics it uses to evaluate applicants”; and (3) to “Obtain, before the interview, consent from the applicant to be evaluated by the artificial intelligence program as described in the information provided.”
 

The Issues of the Law

The first and most critical problem is that the law defines neither “artificial intelligence” nor “artificial intelligence analysis.” Because the concept of “artificial intelligence” is itself controversial and has no commonly accepted meaning in either legal or general usage, the scope of the law is ambiguous: employers cannot easily tell whether they, or their specific practices, fall within it. In addition, although the law imposes the three requirements of notification, explanation, and consent, the precise meaning of each term is unclear. For example, even assuming a minimal or core meaning of “artificial intelligence,” it is uncertain whether the law covers employers who use “artificial intelligence” to generate or improve interview questions, or to enhance recruiting processes in ways not directly connected with a specific candidate but that nonetheless affect each candidate. Furthermore, the concrete, practical means of obtaining consent from candidates cannot easily be determined. Although an oral answer of “I agree” from a candidate would seem to satisfy the third requirement literally, it remains questionable whether such an answer substantially represents the candidate’s understanding of, and consent to, the use of “artificial intelligence” in the interview.
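The thinness of the statute’s literal requirements can be made concrete with a hypothetical sketch (the record fields and names are illustrative, not drawn from the Act): an employer’s compliance with notice, explanation, and consent can be reduced to a few recorded flags, which is precisely why a bare “I agree” may satisfy the text without demonstrating real understanding.

```python
# Hypothetical sketch: one way an employer might document the Act's three
# requirements -- notice, explanation, and consent -- before an
# AI-analyzed video interview. Field names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class InterviewConsentRecord:
    applicant_id: str
    notified_ai_use: bool    # (1) notice that AI may analyze the video
    explanation_text: str    # (2) how the AI works, traits it evaluates
    consent_given: bool      # (3) the applicant's affirmative consent
    recorded_at: datetime

    def permits_interview(self) -> bool:
        """All three requirements must be satisfied before the interview."""
        return (self.notified_ai_use
                and bool(self.explanation_text)
                and self.consent_given)

record = InterviewConsentRecord(
    applicant_id="A-1001",
    notified_ai_use=True,
    explanation_text="The program scores word choice and facial expression.",
    consent_given=True,  # literally satisfied even by a bare oral "I agree"
    recorded_at=datetime.now(timezone.utc),
)
```

Nothing in such a record shows that the applicant understood the explanation, which is the gap the essay identifies between literal and substantive compliance.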
 
>

The Significance of the Law

The significance of the Illinois law is that it raises questions, in legislative form, about a new kind of technology and its use in the workplace. However, while the law covers a wide range of employers (affecting positions based in Illinois), its language remains ambiguous, and it contains no enforcement provisions such as administrative requests, directions, or penalties. In practice, therefore, its disadvantages, such as the administrative burdens it places on employers, seem to outweigh the value of the questions it raises. Since laws addressing discrimination and personal information already exist, the Illinois legislature should clarify the purpose of the new law and adopt concrete measures necessary to achieve that purpose. More specifically, the legislature need not use the term “artificial intelligence” at all; it can and should instead use clear, specific language describing the particular technologies, or activities using them, that it aims to regulate. Even then, the legislature remains free to call the statute an “artificial intelligence law” as a popular name if it wishes.
 
