The rise of COVID-19 and the many different approaches to managing it have shed new light on the importance of experimental science.
For instance, scientists have tried to tease apart the roles of protests and expanded economic reopening in the latest surge of coronavirus infections.
Certainly, indoor activities are much more likely to spread the virus than outdoor activities. And according to a June working paper from the National Bureau of Economic Research, there is no evidence yet that the wave of Black Lives Matter protests across the U.S., which began after George Floyd’s killing by police in late May, sparked COVID-19 outbreaks.
The NBER researchers were able to look at not only how cases of COVID-19 were trending in various cities but also how the protests affected the behavior of those who didn’t participate.
They found a relative increase in stay-at-home behavior among those not attending protests after the protests began in a city, an effect that was strongest by the third day of protests.
In the paper, the authors suggested that this increase in staying at home may have offset the decrease in social distancing among those participating in the protests.
Still, the NBER authors said that while there was little effect on the spread of COVID-19 for counties as a whole, it’s possible the protests increased spread among those who actually attended them. That’s not exactly a clear picture.
But what if we could take thousands of people, randomly assign them to join mass protests, head out for a night at the bar, or stay home, and then track how many get sick? That simple, if impossible, act — randomly assigning people to behavioral treatment and control groups — slices through all the confounding complexity and leaves us with the answer we’re looking for:
How many additional infections did bar-hopping actually cause, compared with protesting?
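To see why randomization is so powerful, here’s a toy simulation in Python. The infection risks are invented for illustration (assumptions, not estimates of anything real), but the mechanics are the point: because the activity is assigned at random, a simple difference in infection rates recovers the causal effect.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical per-person infection risks for each activity.
# These numbers are invented for illustration, not real estimates.
RISK = {"stay_home": 0.01, "protest": 0.02, "bar_hopping": 0.05}

n = 3000
# Random assignment severs the link between the activity a person does
# and their other risk factors, removing the confounding.
arm = rng.choice(list(RISK), size=n)
infected = rng.random(n) < np.array([RISK[a] for a in arm])

# With random assignment, simple differences in infection rates are
# unbiased estimates of each activity's causal effect vs. staying home.
base = infected[arm == "stay_home"].mean()
for a in ("protest", "bar_hopping"):
    print(f"{a}: {infected[arm == a].mean() - base:+.3f} vs. stay_home")
```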
But this kind of complexity is everywhere we look, including in campaigns. Will “Issue A” or “Issue B” cause more people to vote for the Republican or Democratic candidate? We can ask voters with survey research. But we know people are bad at predicting their own behavior, and they often treat that kind of question as a chance to express preexisting support for, or opposition to, the candidate. In fact, asking voters which message is effective will often make a bad message look good, or a good one look bad.
This is exactly the challenge many in the pro-life movement have faced. How do we know whether, and with whom, a pro-life electoral message works?
The solution, as I see it, is to randomly assign voters to receive either one of the political messages or a placebo — just like in our imaginary protests-versus-bar-hopping example.
But in this real-world version, voters recruited from the voter file take an online survey that is identical for everyone, except that one group sees a non-political “placebo” and each of the other groups sees just one of the messages. By comparing vote preferences in the treatment groups to those in the placebo-control group, we can identify the impact of each message on vote preference.
That’s the core of what political behavioral scientists do to test the effectiveness of messages. We run randomized message trials, the same methodology as the gold standard for testing medical treatments, except we’re studying messages rather than, say, remdesivir.
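Analytically, such a trial is simple by design. Here’s a minimal sketch in Python (my illustration, not a production pipeline), assuming each respondent’s vote preference is recorded as 1 for the Republican candidate and 0 otherwise:

```python
import numpy as np
from scipy import stats

def message_effect(treated, placebo):
    """Estimate a message's effect on vote preference.

    `treated` and `placebo` are 0/1 arrays: 1 means the respondent
    prefers the Republican candidate. Returns the difference in
    proportions and a 95% confidence interval.
    """
    p_t, p_c = treated.mean(), placebo.mean()
    se = np.sqrt(p_t * (1 - p_t) / len(treated) +
                 p_c * (1 - p_c) / len(placebo))
    z = stats.norm.ppf(0.975)
    effect = p_t - p_c
    return effect, (effect - z * se, effect + z * se)

# Example with made-up survey responses:
rng = np.random.default_rng(0)
placebo_group = rng.random(500) < 0.45   # 45% GOP preference at baseline
message_group = rng.random(500) < 0.50   # 50% after seeing the message
print(message_effect(message_group, placebo_group))
```

The whole trick is in the randomization; once assignment to message or placebo is random, the arithmetic afterward is just a difference in proportions.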
Recently, my firm conducted a series of message trials in five swing congressional districts in Texas for Pro-Life America. Each race is tracking close to its 2018 result, and we found that pro-life messages can shift a large number of votes to the Republican candidate.
In each district — TX-07, TX-21, TX-22, TX-24, and TX-32 — we find that messages attacking the Democratic candidate for their positions on abortion policy improve the prospects of the Republican candidate.
Overall, the ads participants viewed increased the Republican two-party vote share by anywhere from roughly 1 point in TX-32 to over 9 points in TX-24. These impacts are measured across all voters, not limited to particular subgroups, and in three of the five districts we see broad and substantial movement.
Even more important, by using machine learning techniques to predict vote preferences for every voter in the district — for both the placebo-control group (baseline) and the treatment groups (exposed to a message) — we’re able to identify the voters most likely to be moved by the pro-life message.
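One common way to implement this (a simplified sketch; the code below is my illustration, not our production system) is a two-model “uplift” estimate: fit one vote-preference model on the placebo group and one on the treated group, score every voter with both, and rank voters by the difference in predicted probabilities.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)

# Synthetic stand-ins for voter-file covariates (age, turnout history, etc.).
n = 4000
X = rng.normal(size=(n, 5))
treated = rng.integers(0, 2, size=n).astype(bool)   # message vs. placebo
# Synthetic outcome: the message "works" mainly when the first covariate is high.
p_gop = 0.40 + 0.10 * (treated & (X[:, 0] > 0.5))
votes_gop = rng.random(n) < p_gop

# Two-model ("T-learner") uplift: one vote-preference model per arm.
model_placebo = GradientBoostingClassifier().fit(X[~treated], votes_gop[~treated])
model_treated = GradientBoostingClassifier().fit(X[treated], votes_gop[treated])

# A voter's estimated persuasion effect is the treated prediction
# minus the placebo (baseline) prediction.
lift = (model_treated.predict_proba(X)[:, 1]
        - model_placebo.predict_proba(X)[:, 1])

# Target the top decile: the voters the model says are most persuadable.
persuadable = np.argsort(-lift)[: n // 10]
print(f"Mean estimated lift among the top decile: {lift[persuadable].mean():+.3f}")
```

Ranking the voter file this way is what lets a campaign spend only where the message actually moves votes, rather than blanketing the district.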
Among these most persuadable voters, we find the pro-life messages increase the Republican two-party vote share by anywhere from nearly 14 points in TX-32 to almost 19 points in TX-22. That translates into vote gains of about 4,500 and 18,500, respectively, enough to turn a defeat into a victory in some districts.
Elections are won on the margin, and every dollar saved can mean the difference between a close loss and a close win. We know these messages work, but not to the same degree in every district or with every voter. Testing tells us which messages are effective, and focusing on just the most persuadable voters yields huge efficiencies.
Adam B. Schaeffer, Ph.D., is founder and Chief Behavioral Scientist at Evolving Strategies, a data and analytics firm dedicated to understanding human behavior through the creative application of randomized-controlled experiments.