Generative AI has, among other things, the power to create voiceover for campaign ads, which could help reduce firms’ production costs and improve efficiency.
Major emphasis here on could.
In August, Democratic digital firm Trilogy Interactive conducted an experiment on four different voices — human female, human male, AI female, and AI male — to see which one would be the most persuasive for a scripted 15-second ad on prescription drug costs.
The results, which they released this week, showed the spot moved independent voters by 20 points when paired with a human female voiceover. C&E spoke with Larry Huynh, a partner at Trilogy Interactive and president of the AAPC, about the experiment.
C&E: What was the impetus for this test?
Huynh: It came out of numerous conversations over the years. Initially the idea was really about gender differences, but as we started pulling together ideas and structuring it, we realized more and more people are having questions about whether they can use AI voices. So we felt it was important for us, and potentially for clients and other practitioners in the space, to understand the impact: whether it's worth doing, whether it's effective or not.
C&E: How did the AI voices perform?
Huynh: I had very strong feelings going into the test, and those assumptions and my hypotheses were actually wrong. I was most impressed with the female AI voice. It felt the most emotive, it felt the most human. The male AI voice felt the most robotic. But the female AI voice was a complete dud. And if you bill it hourly, it’d probably be, at least initially in this case, more expensive than hiring a human voiceover artist.
C&E: Would you caution media consultants against using AI voices?
Huynh: I think it's reasonable to test things. It's reasonable to be curious: make no assumptions about what you think works, especially on something that's not as vetted as AI in any form, right? It's important to test and not assume that cutting corners, while it may save you time, is actually serving the interest of your clients.
C&E: Is this something you are going to continue to test, or do you feel like you’ve reached the conclusion on it?
Huynh: I think there are definitely more areas to test: getting a better feel for what may have driven that one voiceover to really underperform versus the others, how it became a value-destroying voiceover. Which is [something] we should all be concerned about.
You can spend a lot of time on the visuals and a lot of time on the script writing. And then if you shortchange that process with the voiceover, it really destroys the impact. That is a concern.
C&E: Are there voices that are better at reaching certain groups?
Huynh: It often will come down to the ad and the content of the ad and working hard to understand how certain subject matters or certain styles of the ad [resonate]. Is an angry ad more appropriate for a certain type of voiceover? Is a more emotive ad more appropriate with another type of voiceover? We’re certainly curious about [these things]. There are also, obviously, issues of race and ethnicity and vocal inflection and accents and how that impacts performance as well. Are there voices that are just universally good? Again, I think it often will come down to the ad and the content of the ad.
C&E: Do you think creative testing needs to evolve as we’re now firmly in this generative AI era?
Huynh: Testing always needs to continue to evolve. This test is one form of testing using panels. Certainly, I've used it in a lot of creative testing and it's valuable. The more you're able to test, the better you can get a sense of how to optimize your persuasive impact on voters.