When polling failed to predict former House Majority Leader Eric Cantor’s primary loss in June, you could chalk it up to the fact that the race was only polled twice publicly, with the last survey done a week out.
The 2014 midterm polls didn’t have that problem. The competitive Senate races and many gubernatorial races were each polled numerous times in the final week. In many states, the gap between the final polls and the final results was nearly double digits. Take the shocking near defeat of Sen. Mark Warner (D-Va.), who led GOP challenger Ed Gillespie by an average of 9.7 points in public polling. Warner won reelection by less than a point.
Discussions of this issue tend to ask how we squeeze more blood from the stone of traditional polling. But this is the wrong question to be asking. If anything, what the election results show is that campaigns and media outlets need to look at the quantitative and qualitative tools at their disposal to assess what’s really happening on the ground, beyond just traditional polls. Here are the trends I’m looking out for after Election Day’s wake-up call:
Merging polling and analytics to forecast turnout
The jury’s still out on overall voter turnout this year, but early evidence suggests that it may have been lower nationwide. In recent election cycles, age and turnout propensity have been highly correlated with the likelihood of voting Republican. This makes answering the turnout question much more urgent than it’s been in the past, as slight changes in turnout can powerfully shape electoral outcomes.
Polls offer at best a blunt instrument for judging turnout. In 2012, the likely voter screens may have been too tight. In 2014, they were possibly too loose. Rather than relying solely on self-reported vote history, which can lead to vast overestimates of a voter’s likelihood to cast a ballot, the industry has rightly been moving towards listed samples from voter files. These listed samples should further rely on turnout scores developed by modeling firms, rather than cruder measures like whether someone voted in 2010 and 2012. Even in a low-turnout environment, we’ve found that 10 to 15 percent of this year’s voters will have sat out the previous midterm, depending on the state, so it’s important to look at the fuller picture that a hybrid approach of traditional polling and analytics can provide.
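To make the hybrid idea concrete, here is a minimal sketch of the difference between a binary past-vote screen and a screen weighted by modeled turnout scores. All names, respondents, and scores below are hypothetical illustrations, not data or methodology from any real polling firm.

```python
# Hypothetical sketch: weighting respondents by modeled turnout scores
# instead of a binary "voted in the last midterm" screen.
# Every number here is invented for illustration.

respondents = [
    # (candidate preference, modeled turnout score, voted in last midterm?)
    ("R", 0.90, True),
    ("D", 0.85, True),
    ("D", 0.40, False),  # a drop-off voter a binary screen would exclude
    ("R", 0.15, False),
    ("D", 0.70, False),  # part of the 10-15% who sat out the previous midterm
]

def ballot_margin_binary(sample):
    """Classic screen: count only respondents who voted in the last midterm."""
    voters = [pref for pref, _, voted in sample if voted]
    return (voters.count("D") - voters.count("R")) / len(voters)

def ballot_margin_scored(sample):
    """Hybrid screen: weight every respondent by their turnout probability."""
    total = sum(score for _, score, _ in sample)
    dem = sum(score for pref, score, _ in sample if pref == "D")
    rep = sum(score for pref, score, _ in sample if pref == "R")
    return (dem - rep) / total

print(f"binary screen margin (D-R):  {ballot_margin_binary(respondents):+.2f}")
print(f"scored screen margin (D-R):  {ballot_margin_scored(respondents):+.2f}")
```

In this toy sample the binary screen shows a tied race, while the scored screen, which lets low-propensity voters count in proportion to their modeled likelihood of turning out, shifts the margin meaningfully.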
Forecasting models can’t just be glorified polling aggregators
This year, numerous media outlets and analysts got in the game of building forecasting models projecting the statistical likelihood that Republicans would take the Senate. Because they relied heavily on polls, however, they fell victim to the consequences of “garbage in, garbage out.” In many states, the Senate results looked more like an average of the final polls combined with Mitt Romney’s margin over President Obama in these red states than like the final polls on their own. While many models take non-polling factors into account, the Nov. 4 results argue for giving more weight to factors like the partisanship of a state, likely voter turnout and candidate quality.
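One simple way to express that observation: shrink the poll average toward a state-level “fundamentals” prior, such as the prior presidential margin. The function and weights below are an invented sketch of the idea, not a reconstruction of any outlet’s actual model.

```python
# Illustrative blend of a poll average with a state partisanship prior.
# The 70/30 weighting is made up purely for demonstration.

def blended_margin(poll_avg, presidential_margin, poll_weight=0.7):
    """Shrink the poll average toward the state's partisan baseline.

    Margins are in points; positive = Democratic lead, negative = Republican.
    """
    return poll_weight * poll_avg + (1 - poll_weight) * presidential_margin

# Hypothetical red-state race: final polls show D+2,
# but the state went Republican by 14 in the last presidential race.
polls_say = 2.0
state_lean = -14.0
print(blended_margin(polls_say, state_lean))  # 0.7*2 + 0.3*(-14) = -2.8
```

Even a crude blend like this pulls the projection toward the Republican candidate in a red state where the polls alone say otherwise, which is roughly the pattern the 2014 results exhibited.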
Apply probability beyond the polling models
Armed with a more nuanced picture of the electorate enhanced by scoring, it’ll be possible to do more than simply present a candidate with a single topline ballot number. Using scoring, we can present a ballot test result for each of many different turnout scenarios. In the same way that models present a range of possible outcomes (for example, the percentage chance the GOP would win 50, 52, or 54 Senate seats), pollsters should think more probabilistically about how they communicate their results.
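A scenario-based ballot test could be sketched as follows: reweight the same scored sample under several turnout assumptions and report the margin for each. The sample, the 0.6 propensity cutoff, and the scenario multipliers are all hypothetical values chosen for illustration.

```python
# Hypothetical sketch: one poll sample, several turnout scenarios.
# Each scenario scales low-propensity voters' weights up or down.

sample = [
    # (candidate preference, modeled turnout score)
    ("R", 0.9), ("R", 0.8), ("D", 0.9),
    ("D", 0.5), ("D", 0.3), ("R", 0.2),
]

def margin(sample, drop_off_factor, cutoff=0.6):
    """D-minus-R margin when low-propensity turnout is scaled by drop_off_factor."""
    weighted = [
        (pref, score * (drop_off_factor if score < cutoff else 1.0))
        for pref, score in sample
    ]
    total = sum(w for _, w in weighted)
    dem = sum(w for pref, w in weighted if pref == "D")
    rep = sum(w for pref, w in weighted if pref == "R")
    return (dem - rep) / total

for label, factor in [("low turnout", 0.5), ("baseline", 1.0), ("high turnout", 1.5)]:
    print(f"{label:>12}: D-R margin {margin(sample, factor):+.1%}")
```

In this toy sample the race swings from a solid Republican lead in the low-turnout scenario to a narrow Democratic edge in the high-turnout scenario, which is exactly the kind of range, rather than a single topline number, that a probabilistic presentation would communicate.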
Cultivate digital data sources
Digital data isn’t perfect yet, and sentiment analysis has a long way to go before being taken seriously as a forecasting tool. But applied with the right analytical filter, observational digital data can tell us things polling cannot. Data released by Google on the relative search frequency for terms related to voting and voter registration, compared to 2010 and 2012, may have been the canary in the coal mine for this year’s reported lower turnout. Armed with these insights, pollsters could’ve adjusted their likely voter models using more precise modeling scores to arrive at a more accurate picture of the likely electorate.
Draw insights from voter psychology
According to research by political scientist Justin Wolfers and Microsoft’s David Rothschild, asking voters which candidate they expect to win (rather than who they actually favor) can significantly increase the accuracy of electoral forecasting models. Factoring this question into The Upshot’s election model increased the number of Senate races called correctly from 31 to 33 and governors’ races from 30 to 33. Looking at voters’ expectations of how their neighbors will behave can provide an indicator of which way late-breaking undecideds may eventually move. While we should proceed very cautiously with any claim that the undecideds all broke one way at the end, thereby absolving public polling of its failures, there are important insights about voter psychology that can be gleaned if we ask the right questions.
No one discipline—polling, data, digital analysis, competitive intelligence, behavioral psychology—has all the answers when it comes to forecasting electoral or business outcomes. It’s the ability to synthesize and draw actionable conclusions from all of them that’ll give candidates and marketers the edge in years to come.
Patrick Ruffini is a co-founder and partner at Echelon Insights, a next-generation research firm that works with political, advocacy, and corporate clients.