Law in the Internet Society

"SAFE" for Kids?

-- By SidneyLee - 25 Oct 2024

Introduction

Data privacy and internet addiction are issues that affect social media users of all ages, but their effects on children are of particular concern. Social media platforms profit significantly from adolescent users: a 2023 study estimates that in 2022, six major social media platforms generated almost $11 billion in advertising revenue attributable to users aged 0-17 (1). Critics claim that these platforms use manipulative algorithms to sustain engagement while collecting personal information for advertisers. In some cases these platforms have violated the federal Children’s Online Privacy Protection Act of 1998 (COPPA), which prohibits collecting personal data from users under 13 years old without parental consent (2).

In June 2024, two bills addressing social media use by minors were signed into law in New York (3). The Stop Addictive Feeds Exploitation (SAFE) for Kids Act targets the addictive algorithms of social media, and the Child Data Protection Act (CDPA) prohibits sites from using the personal data of users under 18 without consent (4).

Is the new state legislation on children’s online safety a good development? Children’s privacy and online safety are important concerns, and the law should seek to protect minors on the internet. However, given the obstacles to enforcement and the potential for even greater data collection, it is difficult to say that these laws will have their intended effect.

SAFE for Kids Act and CDPA

SAFE and CDPA are fairly narrowly tailored in their application, and both provide ways of opting into or out of their requirements. The acts apply only to users under 18, and the restrictions they impose may be lifted with “verifiable parental consent.” SAFE prohibits “addictive feeds”: feeds that recommend media based on a user’s previous interactions with the platform or device and that are designed to prolong social media usage (4). SAFE also prohibits social media operators from sending notifications between 12 AM and 6 AM. CDPA prohibits operators of platforms “targeted to minors,” and operators with “actual knowledge” that a user is a minor, from processing the personal data of minors, with certain exceptions (e.g., where collecting the data is strictly necessary to the website’s purpose). Additionally, an operator must obtain informed consent from minors 13 or older before collecting their data, and from a parent for minors under 13.

Details of the regulations remain to be seen, as certain standards must be given fuller meaning before social media platforms can comply. SAFE will not go into effect until 180 days after the New York Office of the Attorney General (OAG) finalizes regulations on its implementation, and the CDPA will not go into effect until June 20, 2025 (5). Both acts give the OAG sole authority to bring actions to enjoin violations, seek damages, or seek civil penalties of up to $5,000 (4). The OAG still has to define what “commercially reasonable and technically feasible methods” of determining age and “verifiable parental consent” entail. How much of a burden falls on social media operators will depend on how narrowly the OAG construes these terms.

Criticisms of the Acts

Predictably, there has been strong pushback from technology industry trade groups. NetChoice, a trade association of online businesses, has claimed that both acts are unconstitutional (5). NetChoice has already launched successful challenges against similar legislation in other states: in 2023, the organization prevailed in the District Court for the Northern District of California in its challenge to the California Age-Appropriate Design Code Act (CAADCA), which barred websites from using children’s personal information in a way “materially detrimental” to their physical health, mental health, or well-being (5). Regarding SAFE and CDPA, NetChoice claims that the acts infringe on free speech rights, undermine parents’ rights, and, by forcing websites to censor content unless users provide an ID to verify their age, give the government yet another way to track what sites people visit (6).

The legislature has anticipated a free speech challenge by focusing not on content potentially harmful to minors, but on the potential for social media to be addictive through mechanisms such as algorithmic feeds and nighttime notifications. While regulating content would open the new laws to First Amendment suits, regulating the addictive capacity of social media is not so clearly a free speech violation. A possible argument is that SAFE interferes with platforms’ right to arrange content as they see fit. Another line-drawing issue is the difference between “addictive” and merely “appealing”: all businesses appeal to customers in some way to encourage repeated use of their goods or services. Consumer protection laws typically guard against physical and financial harm, both of which are arguably missing when it comes to social media use.

Conclusion

Legislation aimed at protecting minors from social media addiction is desirable, provided it does not censor content or expression. Prohibiting nighttime notifications and user-tailored algorithmic feeds for children does not prevent them from accessing social media or searching for content that interests them. Social media operators must already offer non-personalized feeds to comply with the EU’s Digital Services Act, which requires platforms to provide recommender systems not based on profiling (7). The new legislation from New York and similar laws in other states could present an opportunity for operators to develop more creative feed systems and diversify the content that users see, which offers benefits beyond addressing social media’s addictive nature.

There are still obstacles ahead for enforcement: depending on the standards set for “commercially reasonable and technically feasible methods” and “verifiable parental consent,” the new laws may have no bite, or may inadvertently affect some adult users as well. It is also unclear what kind of information a site will use to determine age, and whether doing so will require further data collection. If that additional collection counts as “strictly necessary” for the site, the law indirectly encourages processing users’ personal information. There is no perfect outcome, but the effort to protect children’s online safety is an important one.

This is a responsible summary of recent legislation. Its purpose was to "give parents rights." The rights are worthless, the laws are toothless, and no actual effort to get technology to support learning and healthy social development for young people will result from it. But parental consent (as useless as any other form of bilateral consent in dealing with environmental dangers) will become another ritual of online privacy destruction.

I think the best route to improvement is to cut back the policy rhetoric and color outside the lines a little bit more. If the whole exercise is absurd, explain the attraction of the absurdity. If it is not absurd, which part of it would you choose to put your practice behind, and in what way?

r2 - 12 Nov 2024 - 15:45:58 - EbenMoglen