Consultants Reckon With the Rise of the ‘Deep-Not-So-Fake’
A Fannie Mae ad featured an AI-cloned version of President Donald Trump’s voice.
A digital video from the National Republican Senatorial Committee used an AI-generated version of Chuck Schumer to verbalize a quote the Senate minority leader gave to Punchbowl News.
And in Massachusetts, a Republican gubernatorial candidate posted an Instagram video of a fake radio ad that included an AI-generated version of Gov. Maura Healey’s voice.
Political consultants are reckoning with the latest trend in artificial intelligence-assisted political advertising: using the emerging technology to mimic the voices and likenesses of supporters, rivals and critics – in many cases without the individual’s permission.
At the Reed Awards in Charleston, S.C., last week, a panel of Democrats weighed in on that trend, with some warning that it has opened up a new frontier for misinformation in the world of political communications.
While some states have moved to create disclosure rules around political ads that use AI, they argue, the technology remains largely unregulated, giving campaigns and committees the ability to create and distribute ethically dubious or misleading content.
In the case of the Fannie Mae ad, the Trump administration had granted permission to use AI to clone the president’s voice. The NRSC’s video featuring Schumer’s voice and image, however, was created without the New York Democrat’s permission. And while that video featured the words “AI GENERATED” in the bottom right-hand corner, Megan Sullivan, a partner at GtP Media, said that it wasn’t enough.
“I think if there was some sort of regulation or ability to put an AI-generated label on something like that, they wouldn’t make it,” Sullivan said during a panel discussion at the Reeds. “They wouldn’t stand behind it. They’re happy to say it’s true. He really said it in print. But if they had to put the label on it, they wouldn’t do it.”
Nicolas Magalhaes, the vice president of video creative at SBDigital, said that he’s “extremely worried about losing the credibility of your own voice as sort of an authorial tool that you have.” The NRSC’s Schumer video, he said, was particularly problematic because it used his likeness and voice without his permission.
“We have been clipping what people said and remixing it ad nauseam for a very, very long time in human history,” Magalhaes said during the same panel discussion. “But I think that – at least they actually spoke those words.”
“I think there’s something being lost there,” he added. “I don’t quite know how we implement rules that actually circumvent that problem.”
There’s been widespread concern across politics and the wider media environment over the proliferation of so-called “deepfakes”: AI-generated images, video or audio. But using AI to verbalize a real quote from a political supporter or rival falls somewhere in between – the words are authentic, even if the voice is not.
Reed Elman, the senior vice president of campaigns at HGCreative, dubbed it the “deep-not-so-fake” – and predicted that there will be more to come in the months and years ahead.
“I think we’ll certainly see more of those synthetic voiceovers, where they were using real quotes,” Elman said at the Reed Awards panel.
