With six weeks to go before the 2016 election, Hillary Clinton’s team had what it thought was a knockout punch against Donald Trump.
In one of the most memorable TV ads of the cycle, audio of Trump’s lewd comments about women was set to dramatic images of young women looking in the mirror.
The ad struck an emotional chord and earned rave reviews from the political media. And according to Democratic insiders, it tested very well in focus groups: no one could imagine themselves condoning such comments.
The ad seemed like a home run, except for one thing: a randomized controlled trial run by Democrats, I’m told, showed the “Mirrors” ad actually backfired against Clinton. She lost ground among voters who saw the ad. Meanwhile, Trump gained ground from hard-edged late ads on immigration.
The “Mirrors”-ad episode highlights all the ways traditional ad testing in campaigns is broken. It’s 2019, but ad testing remains stuck in 1989.
The gold standard for measuring the persuasive effects of advertising isn’t focus groups, traditional survey message testing, or counting likes or clicks on Facebook. It’s the randomized controlled trial: one group of survey respondents is exposed to the advertising, another group is not, and you measure the difference on a ballot test or another bottom-line measure of support.
Ultimately, we don’t care whether an ad was liked; focus groups will always tell you they don’t like negative ads. We care only about whether it moved voters on a ballot test. The only way to know that is to measure how voter support actually changes, not to ask whether someone says they’d be more likely to support a candidate after seeing an ad.
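To make that concrete, here is a minimal sketch of the arithmetic behind such a test. The counts, sample sizes, and the plain two-proportion z-test below are illustrative assumptions, not a description of any firm’s actual methodology.

```python
import math

# Hypothetical results from a randomized creative test: respondents are
# randomly assigned to see the ad (treatment) or not (control), then both
# groups answer the same ballot question. All counts are invented.
treated_n, treated_support = 1200, 600   # saw the ad; 50.0% support
control_n, control_support = 1200, 540   # did not see the ad; 45.0% support

p_t = treated_support / treated_n
p_c = control_support / control_n
lift = p_t - p_c   # the persuasive effect; negative lift = a backfiring ad

# Two-proportion z-test: is the lift distinguishable from sampling noise?
p_pool = (treated_support + control_support) / (treated_n + control_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / treated_n + 1 / control_n))
z = lift / se

print(f"treatment: {p_t:.1%}  control: {p_c:.1%}")
print(f"lift: {lift:+.1%}  (z = {z:.2f}; |z| > 1.96 is significant at 95%)")
```

A spot like “Mirrors” would show up here as a negative lift, support in the exposed group falling below support in the control group, no matter how well the ad was liked.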
The growing reach of online survey panels, along with new ways to survey people on mobile devices, puts scientific creative testing within reach for advocacy organizations and campaigns at all levels, not just those that can afford to run a six- or seven-figure campaign with digital “lift” studies.
And it’s why my firm has built a self-serve platform to allow clients to run these ad tests themselves. Creative testing is the missing link in the message and audience optimization puzzle. If 2016 was the cycle of slicing and dicing audiences on Facebook, 2020 will be the election of scientifically optimized creative.
When we discuss media optimization, it’s usually in the context of targeting your TV ad to key voters using set-top box data matched to voter models. This can increase the effective reach of your media to key voters by 10 or 15 percent.
But while campaigns are rightly optimizing their media placement, creative remains as gut-driven as ever, even though the possible persuasive lift of a great ad is much, much higher than 10 or 15 percent.
How can organizations most effectively leverage this kind of ad testing to ensure their ad dollars aren’t wasted on creative that doesn’t move the needle? Here are some ways to make sure TV and digital advertisers get the most out of new creative testing tools in 2020.
Testing should be part of the plan from the get-go.
Think through in advance the questions you want answered about your advertising, whether it’s message, messenger, or creative elements like music or voiceover. Craft your shoot days with an eye toward capturing all the elements you’d like to test. Then decide by asking a representative panel of voters in your state or district, not through time-consuming email or text chains between the team, the candidate, and her or his spouse.
Decide on the right metric for judging whether your ad worked, and stick to it.
For a campaign, this will be easy: a ballot test, your image, or your opponent’s image. For issue or ballot campaigns, this can be trickier, but it should ultimately tie in closely to your message and the core attitude you’re trying to change.
Different people on the team will have their own favorites and will try to cherry-pick data to support the ad they wanted to run all along. You can avoid this second-guessing by deciding up front what the most important success metrics are.
Test multiple creative variations.
Testing one potential spot and finding that it fell flat doesn’t leave you with many places to go. Testing a range of creative approaches helps you find both the gold mines and the land mines. Ads rarely backfire, but when they do, it helps to have an alternative ready that will actually move voters.
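One practical way to structure this is to test every variation against a single shared control cell, so each spot is judged on the same ballot-test yardstick. The sketch below extends the earlier z-test to several arms; the ad names and counts are hypothetical.

```python
import math

# Hypothetical multi-arm creative test: several ad variations share one
# control cell, so every spot is scored on the same ballot question.
control_n, control_support = 1000, 450   # 45.0% support with no ad exposure
arms = {
    "positive bio spot":  (1000, 487),
    "contrast on issues": (1000, 472),
    "harsh attack ad":    (1000, 431),   # a backfire shows up as negative lift
}

p_c = control_support / control_n
for name, (n, support) in arms.items():
    lift = support / n - p_c
    p_pool = (support + control_support) / (n + control_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n + 1 / control_n))
    print(f"{name:>20}: lift {lift:+.1%}  (z = {lift / se:.2f})")
```

With several arms, remember that some lifts will look good by chance alone; the more variations you test, the more skeptically you should treat a single marginally significant result.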
Have the humility to test your unfinished work and commit to change as a result of testing.
This is the hardest part. Creating TV spots (or having them made for you) entails having strong opinions about what works best, and it can be gut-wrenching to change course.
The temptation may be to set the test results aside and go with your gut. But if you’re not going to ultimately change what you do as a result of testing, why even try to improve what you’re doing at all? Testing should have a clear and defined outcome, whether it’s choosing which ad to run, choosing which group should be targeted by an ad, or fine-tuning your creative based on quantitative data and qualitative open-ended feedback from hundreds of survey respondents.
Campaigns have kludged together numerous ways to assess the persuasive impact of advertising over the years, most of them limited by technological or methodological barriers. Before the advent of large-scale online survey research, it simply wasn’t possible to collect rapid, objective feedback on video creative. That’s changing fast.
Campaigns should assume their opponents will have all the tools of creative testing at their disposal in 2020, and act accordingly.
Patrick Ruffini is partner & co-founder at Echelon Insights.