For political consultants operating in multiple states, a myriad of new regulations around artificial intelligence (AI) could impact your work this cycle. But with new bills being introduced in state legislatures every day, it can be difficult to determine what activity is currently regulated.
Over forty states are currently considering legislation that seeks to regulate AI, and eleven states have already enacted laws regulating its use. While all of this legislation shares the overarching goal of greater AI regulation, there are categorical differences in the structure of these rules and the ways they have been implemented.
Whether you want to identify the guardrails for your own campaign advertising, or simply want to understand how to hit back at a campaign that might be using AI against your clients, we’re here to lay out the various rules of the road:
Category 1
State Regulation (Disclaimers on AI misrepresentations)
Several states, including California, require any political ad to include a disclaimer if the ad contains a visual misrepresentation that conveys a “fundamentally different understanding or impression” than reality. Importantly, these disclaimer rules kick in only when there is actual malice, meaning the AI content was created with the intent to deceive voters.
For example, imagine an ad that shows Ron DeSantis “speaking” at a rally and saying words he never said, or images of Gavin Newsom depicting him at a meeting that never actually occurred. Just as each political ad must include the name (and sometimes other information, depending on the campaign) of the ad’s sponsor, the AI disclaimer requirement can be satisfied simply by including the required disclosure in the ad.
Category 2
State Regulation (Disclaimers on any AI use)
Similar to Category 1 but broader in scope, a few states require disclaimers on any ad that incorporates AI, whether or not the ad is intended to misrepresent reality. Michigan and Washington, for example, require a disclaimer whenever AI is used to create a visual representation, whether of your own candidate or an opposing one.
Category 3
State Regulation (Banning AI use)
The last category of state regulation seeks to ban the use of AI technology in political ads outright. States such as Minnesota, Georgia, Texas, and Indiana have sought to prohibit AI-generated visual misrepresentations made without the consent of the person being misrepresented (and given the low likelihood of obtaining such consent, these laws operate as effective bans). Note that the breadth of the ban differs by state; in Texas, for example, only video deepfakes are prohibited, while other states’ laws apply to text or audio manipulations as well. Criminal penalties are available as an enforcement mechanism. While no notable legal challenges have been filed yet, some of these regulations could arguably run afoul of First Amendment free speech protections.
Federal Regulation
On the federal level, which affects only federal races for Congress or the presidency, very little action has been taken, and little more is expected during 2024. In Congress, some legislation – both partisan and bipartisan – has been introduced, but none has yet gained traction.
Late last year, the Federal Election Commission (FEC) took public comments for a potential rulemaking on how it might regulate AI, though the comment period has since concluded. Even if the FEC decides to introduce new rules this summer, notice-and-comment regulatory requirements mean nothing is expected to be in place this cycle.
Yet the Federal Communications Commission (FCC) has recently taken some action, clarifying rules under the Telephone Consumer Protection Act (“TCPA”). Specifically, the FCC has stated that AI-generated voices will be considered “artificial” voice messages within the meaning of the TCPA, and hence subject to all of the FCC’s rules pertaining to artificial voice calls.
Privacy Protections
In addition to straightforward AI regulations, some states have privacy protections that may shield candidates from misrepresentation. Thirty-nine states generally prohibit using a person’s name, image, or likeness without their permission, but some are now going even further. The ELVIS Act, recently introduced in Tennessee, would make it a misdemeanor to distribute a person’s voice or likeness to the public without their permission, and for the first time explicitly addresses the use of AI-generated video and audio.
Platform-specific Rules
Laws are not the only source of authority regulating the AI content of political ads. Some tech platforms have adopted their own rules. For example, Meta (covering both Facebook and Instagram) and Google require clear and conspicuous disclaimers on any political ad that contains an image, video, or even audio that has been digitally created or otherwise modified. Meta has announced its intention to reject ads that contain undisclosed digital alteration and to levy penalties against repeat offenders.
As state laws and federal regulations continue to evolve, navigating the AI landscape will only become trickier. It’s essential that political consultants operating in multiple states remain up-to-date on developments in AI regulation, which will require knowing the law — and the right lawyer.
Nicole Kelly and Andrew Pardue are associates at Holtzman Vogel. Steve Roberts is a partner at the firm.