How Political Pros Are Thinking About AI Right Now
A recent survey commissioned by the American Association of Political Consultants found that most political consultants are already using artificial intelligence as part of their work.
But despite the technology's growing adoption, many political pros remain wary of using AI for creative content, fearing it could hasten the spread of misinformation, disinformation and bias.
C&E recently sat down for a panel discussion with three practitioners to talk about the role AI is already playing in creative and content generation. Those industry veterans are already finding ways to put the tech to use, but they’re also approaching it with caution. Each stressed the need for transparency when using AI for content creation and raised questions about just how far the politics industry should go when it comes to AI.
Still, they said, the technology has clear uses, allowing ad makers to experiment with new messaging techniques and streamlining processes that would ordinarily take creatives days or weeks to complete.
Here are some of their thoughts:
Rebecca Pearcey, partner, Bryson Gillette:
“There’s a couple of different ways I’ve started using it in ad creation. One is using an AI voiceover before I finalize who I’m going to get to VO the ad. And then forgetting to tell the client that it’s AI and they’re just like ‘what is this robot?’
“It’s so much cheaper and it’s so much easier. And so that’s like the main way this cycle so far – so in the last few months – that we’ve really started to use it.
“There was a comment earlier about justifiable wariness, which is where I am with a lot of AI in ad making. I don’t want to put us out of a job. I don’t want to put my production guys out of a job or VO artists. And so it’s hard.”
“You’ve got to be very clear about what is AI and what’s not…you’ve got to be upfront about what this is so people can tell that it is AI.”
Tate Holcombe, creative director, GoBig Media:
“We did a spot just recently for the New Jersey gubernatorial race, which just concluded last night, where we cloned our opponent’s voice and had him read a horrible op-ed that he wrote 10 years ago. And you know, we cloned his voice. We clearly said this is an AI recording of him reading this op-ed, it was published ‘dot dot dot dot dot.’
“So it’s his own words. And we set off a bit of a firestorm and we weren’t totally prepared for that. But at the same time, man, the earned media was great, because everybody was talking about that ad…it got attention.
“And I think that’s one of those use cases where, one, it was a 100-point font at the bottom of the thing that’s like ‘this is an AI generated voice, here’s the link actually talking about it.’ ”
Glenn Greenstein, founder and creative director, MeanGreen Media:
“At MeanGreen, we work with strategists, consultants and agencies to bring the message from the campaign forward, and so there are several layers before it gets to us in the decision-making process.
“But when it comes to us – the script, a concept – now it’s up to us to figure out how to execute that; how to bring that to the screen. And all of a sudden, there is an important layer of decision-making that starts at the creative shop. It doesn’t mean that we make the final decision.
“But we are asking the question on a creative kickoff call, ‘hey, should we use AI for this? Well, why would we? Well, we could do in a day what would take a week before and make it look like 3D, Cinema 4D generated animation in a couple of days.’
“We weren’t trying to deepfake anyone. We were, you know, essentially doing what we would have done years ago with Photoshop and After Effects and Cinema 4D, and instead doing it with AI as a tool. Great. But when it comes to deepfakes, which I’m sure will be part of the ask as we move forward and as the gloves come off…there will be more of that I’m sure.”