The feds’ approach to regulating the use of artificial intelligence in campaign advertising is increasingly coalescing around disclosure requirements.
The question is whether political advertisers will see federal regulation come into force before the end of the cycle.
While the FEC is weighing whether to prohibit “deliberately deceptive Artificial Intelligence campaign advertisements,” Congress and the Federal Communications Commission (FCC) are aiming lower: a simple disclosure requirement for campaigns and groups.
Alaska Sen. Lisa Murkowski (R) and Minnesota Sen. Amy Klobuchar (D) are pushing the “AI Transparency in Elections Act” in the Senate, which would require disclaimers on campaign ads “with images, audio, or video that are substantially generated by artificial intelligence (AI),” according to a release.
Klobuchar stated that disclaimers were needed to “improve transparency in our elections so that whether you are a Republican or a Democrat, voters will know if the political ads they see are made using this technology.”
Similar legislation is expected to be introduced in the House shortly.
Meanwhile, on Wednesday, Jessica Rosenworcel, chairwoman of the FCC, released a proposal that would require both candidate campaigns and groups to disclose “when there is AI-generated content in political ads on radio and TV.”
It’s the first step the FCC is taking toward regulating AI in campaign advertising. “If adopted, this proposal would launch a proceeding during which the Commission would take public comment on the proposed rules,” the announcement states.
Rosenworcel stated: “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue. … As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used.”
While the proposal is only a first step toward a disclosure requirement, it marks the FCC’s second move to regulate AI in political advertising this cycle, Holtzman Vogel attorneys Steve Roberts, Nicole Kelly and Andrew Pardue pointed out in a recent piece for C&E.
Earlier this year, in response to a New Hampshire robocall using an AI-generated version of President Biden’s voice, the Commission issued “clarifying rules under the Telephone Consumer Protection Act (“TCPA”)” to state that AI-generated voices “will be considered ‘artificial’ messages within the meaning of the TCPA, and hence subject to all of the FCC’s rules pertaining to artificial voice calls.”
Now, as practitioners track these federal regulatory efforts, they’ll also have to consider that “over forty states are currently considering legislation that seeks to regulate AI,” while “eleven states have already enacted laws regulating the use of AI,” according to Roberts, Kelly and Pardue.
In an interview with C&E on Wednesday, Roberts said there is “no reasonable possibility of this being in place during the 2024 cycle.”
“At this phase, the Chair still needs to win a vote of the other FCC Commissioners to even advance this proposal to a formal ‘notice of proposed rulemaking,’ after which the FCC must publish the proposed rule in the Federal Register, hold open a 60-day comment period, analyze those comments, and publish a final rule which can become effective no sooner than 30 days after publication,” he noted.
“Hopefully, her fellow FCC Commissioners will understand that the lead federal political regulator already has a proposed rule on whether, and how, to require disclaimers for ads using AI – the Federal Election Commission is currently reviewing comments to a proposed rule from 2023.”