Computers, Privacy & the Constitution

"Deepfake" or Cheap Take?—Keep Vermont Weird.

-- By MichaelMacKay - 01 May 2025

The Grass is Always Greener...

In Montpelier this spring, state legislators have held public hearings on a Senate bill that would restrict AI-generated content in elections, but a measure aimed at “election purity” would hurt democratic participation in campaigning for public office. Last year, neighboring New Hampshire faced an out-of-state robocall from an AI-generated President Biden discouraging primary voters from the polls—the inspiration for Vermont’s S.23. Twenty-one states have now enacted legislation against so-called “deepfakes” in elections, and last year all three of Vermont’s neighbors passed similar laws to curb AI-generated content. However, Massachusetts, New Hampshire, and New York also tend to view democratic participation differently than the Green Mountain State (e.g., each restores voting rights to felons only upon release from prison, whereas Vermont imposes no felon disenfranchisement at all). Vermont’s approach to “synthetic media” is therefore problematic, even if unoriginal, and like most states, its legislature should decline to adopt such a law, because: (1) casting political speech as presumptively dubious likely violates the First Amendment, (2) the required disclosures would inevitably cause confusion online, and (3) AI gives less-resourced campaigns tools that only well-funded parties could otherwise afford.

The First Amendment Issue

First, there is the freedom-of-speech problem: the law requires self-disclosure whenever anyone uses AI “with the intent… to influence the outcome of an election.” FIRE has already filed an objection on First Amendment grounds, citing Buckley v. Valeo, arguing that Vermont’s proposed “Subchapter 4. Use of Synthetic Media in Elections” would not survive strict scrutiny as a content-based measure. That objection has some support: U.S. District Judge Mendez recently granted a preliminary injunction against California’s similarly worded restriction on “synthetic media” in elections. Industry groups like TechNet raise no objection—so long as their cloud providers and online services are not liable. But considering how quickly Meta folded its fact-checking operations after the 2024 general election, it is hard to say whether such companies’ support actually concerns “election purity.” Rather, a firm like Reddit would simply prefer to avoid obligations under Vermont’s S.23, which would chill anonymous speech, protected under McIntyre v. Ohio Elections Commission and part of the national discourse since at least The Federalist Papers. Ultimately, if candidates choose to communicate through a dice-throwing machine, their political speech should bear no burdens beyond those the courts have already upheld.

The Enforcement Issue

Second, even if Vermont could compel such disclosure within 90 days of an election, the maze of online sharing would confound enforcement. S.23 says that any “deceptive and fraudulent synthetic media” used in elections must carry a prominent disclosure indicating that it was developed in whole or in part with AI. But what if a video editor in Vermont merely used AI to master some audio from a campaign rally and shared the results on Facebook? That would require a disclaimer “in a size easily readable by the average viewer,” and if shared in an audio-only format, “in a clearly spoken manner.” But how clearly, and why, presumably, only in English? Technically speaking, should a dinosaur now appear in that recording from the campaign rally, could the state even discern its origin? Even if the original authors of AI-enhanced content diligently complied with the bill—scrutinizing existing contracts with vendors already using AI—the difficulty of establishing a digital paper trail to track changes makes even entry-level penalties of $1,000 look onerous and prone to finger-pointing. The easy case is the misrepresentation already prohibited under existing law (like impersonation of a former president); the harder case is routing through a labyrinth of lighter touches to find the agenda behind an AI edit.

The Green Issue

Above all, elections are expensive, and S.23 would likely disarm minority points of view by removing relatively cheap and effective campaign tools. According to OpenSecrets, a nonprofit organization promoting transparency in elections, the average successful campaign for the House of Representatives spent $2.79M in the last round of midterms, so what is “synthetic” to some with deep pockets may be an “equalizing” force to most without. Last year, candidates increasingly turned to joint fundraising committees after McCutcheon struck down aggregate contribution limits, and with the evenly split FEC allowing those entities to run ads without allocating costs, laws like Vermont’s S.23 would only make it harder for less well-funded insurgents to compete after Citizens United. The Vermont Green Party, for instance, could use AI to reach voters whom other parties also target, and advocates for ending Vermont’s homelessness crisis could leverage the same tools to coordinate action on behalf of harder-to-reach constituencies.

The Unafraid Approach

Across the country, in a smaller state, a candidate ran for mayor of Cheyenne as an AI avatar. His campaign last year was unsuccessful, but his positive message about technology succeeded in other respects (outlasting a now-failed piece of legislation resembling S.23). In the end, no matter how Vermonters—whether activists, campaign managers, or other volunteers—choose to use “AI,” they are still humans, and there are already laws on the books for people. As with any new technology, there are unknowns, but with this bill, fear of the unknown seems to have led state senators to restrict AI-generated content “90 days” before an election, as California had also attempted back in 2019. Fear is no friend of the freedom of speech, and Vermont’s House should reject the bill that the state Senate passed in late March. Surely, if there is an “uncanny valley” in the Green Mountains, state lawmakers should keep Vermont weird.

I think this is rather a heavy load of words for a proposed state law that is almost certainly not going anywhere.

Surely the real question is why one kind of software tool is rationally discriminable from another. Is it okay to use a photo editor to modify photographs but not an image generator? If not, at what point would photo editing pass from acceptable to unacceptable? Is the intention to deceive the relevant point, in which case these are false-advertising statutes? Or could an information graphic conveying accurate information, produced by an image generator, be forbidden because of the nature of the software used to make it?

It seems evident that tools are neither the issue nor an appropriate subject of regulation of speech, political or otherwise. Why not make that argument out with due economy of words, and move from it to some larger idea?



r4 - 08 May 2025 - 18:25:37 - EbenMoglen