Developing the right creative and targeting it to the right voters will follow a different process in 2024 — one guided by the unseen hand of generative artificial intelligence.
Eli Kaplan, founding partner at Democratic digital shop Rising Tide Interactive, offered C&E his take on what to expect from AI when it comes to Big Tech moderation, creative testing and targeting.
C&E: Will Big Tech successfully moderate AI in political advertising?
Kaplan: I have so little faith in the big tech companies to be able to catch the stuff as it happens that it’s sort of on campaigns to have an early warning detection system, so that if there is disinformation powered by artificial intelligence that poses a threat to their campaign, they’re the ones that are going to get in front of it and communicate that really proactively.
I don’t think you can spot it in real time. I think the question is, ‘Can you get to it at a point before it explodes [and] becomes a real problem?’
C&E: Will AI panels meant to mimic different voters’ responses be used for messaging or overall creative testing this cycle?
Kaplan: How you train these things to think makes a huge difference. So I think the $100 million question when it comes to this sort of thing is, ‘Can you statistically show that these types of bots will behave the same way as the electorate in general?’ If that is the case, then I think this could be pretty powerful. I think the risk is that it doesn’t at all: the bots behave like a segment of the electorate with value systems that are very similar to the people who created them.
And that really causes you to go down the wrong path because you might end up making spots that are really effective with a cohort of the electorate, but actually cause backlash with the group that’s actually much more important.
C&E: Can generative AI create better messaging than human practitioners?
Kaplan: I think it can, [but] I also think it can be pretty awful. So can humans, though. Your bias [is] to think about messages that work on you when, in fact, the people that we really need to be persuading and thinking about have very different cultures and value systems.
There might be a piece of content that a human writes that you might think is not good, but actually, when you test it, it is pretty good. The same is true for artificial intelligence, and the testing that I’ve reviewed suggests that these types of messages are kind of all over the map.
Some are really effective, others aren’t. I think that’s sort of where the landscape is heading anyway, which is just to create a lot of different messages, test them and home in on the ones that are really good. I also think this binary between human-created messages and AI-created messages is kind of a false choice. The way that I see most people using this stuff this cycle is leaning on AI to do a first draft of a lot of stuff and then having human editors go in there and [make] a lot of changes.
C&E: Give us your thoughts on modeling and targeting for 2024.
Kaplan: We’re usually trained to think of things as: you vote for this candidate, you vote for this other candidate [or] you don’t vote. This [2024] is the hold-your-nose election. We’ve done this sort of thing in the past, but I think this is the first time that you’re going to get this level of treatment of it in the presidential race, and it’s going to be very different audiences than you’ve ever seen before.
And I think the ability to use a lot of different data points to define these audiences is gonna be more important than ever. Because it’s not just about getting people who are very left wing and very right wing to not vote for somebody further left or further right.
The dynamics are much more complicated this time around. I think you’re going to have a lot of potentially moderate Republicans who would defect to a third-party [candidate] instead of voting for Trump or Biden, in a way that we just haven’t really seen in a lot of these models that have been developed in statewide races.