Much of the post-election polling analysis has centered on “who got it right” and “who got it wrong” by focusing on the final public polls released in the days before Election Day.
The conventional wisdom has already formed around the following conclusions: Gallup and Rasmussen missed the mark; the big media pollsters and Democratic campaign pollsters were right; online polls and Public Policy Polling (PPP) were surprisingly accurate; and the Republican campaign pollsters didn’t see it coming.
While I largely agree with these conclusions and appreciate the need for this kind of analysis, a laser focus on evaluating only the handful of final public polls is a disservice to the polling community. The final polls are not much more than a beauty pageant conducted by public pollsters to show the world they are capable of getting it right. The fact is, there were thousands of campaign polls released in 2012. (Mark Blumenthal of Pollster.com estimated there were more than 1,800 polls on the presidential race alone.)
But the great majority of these polls were conducted well before the final week. And it was the polls released in the weeks and months before the end of the campaign that actually mattered most. These early polls—conducted in August, September and early October—had a huge impact on the race because they were the ones used by the media and the campaigns to build the election’s narrative.
Without these polls, the media would not have been able to say that the Democratic National Convention was a huge success or that President Obama had a rough night during his first debate with Mitt Romney in Denver.
But since there is no Election Day in September—with no exit polls or election returns to check against—the accuracy and validity of these critically important early polls are largely ignored.
However, we can apply one of the big polling lessons of 2012 to evaluate these earlier polls: ignore the polling averages at your peril. The forecasters and the aggregators got it right, and the pollsters who deviated strongly from the final polling averages got it wrong. That same principle can be applied to polls conducted throughout the cycle.
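To make that principle concrete, here is a minimal sketch of what checking a single poll against the average of contemporaneous polls might look like. The poll names, the margins, and the five-point threshold below are hypothetical placeholders chosen for illustration, not real 2012 data or anyone's actual methodology.

```python
# Sketch: flag a poll as a likely outlier when its margin strays far from
# the average of contemporaneous polls. All names, numbers, and the
# threshold are hypothetical.

from statistics import mean

# Hypothetical margins (Democrat minus Republican, in points) from polls
# fielded in roughly the same window.
contemporaneous_polls = {
    "Poll A": 15,
    "Poll B": 12,
    "Poll C": 11,
    "Poll D": 13,
}

new_poll_name, new_poll_margin = "Poll E", 0   # the poll being evaluated
OUTLIER_THRESHOLD = 5                          # points; an arbitrary illustrative cutoff

average_margin = mean(contemporaneous_polls.values())
deviation = abs(new_poll_margin - average_margin)

if deviation > OUTLIER_THRESHOLD:
    print(f"{new_poll_name} is {deviation:.1f} points from the "
          f"{average_margin:.1f}-point average: treat it as a likely outlier.")
else:
    print(f"{new_poll_name} is within {OUTLIER_THRESHOLD} points of the average.")
```

The point is not the particular cutoff but the habit: before a surprising result drives a story, compare it to everything else in the field at the same time.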
Take this example: a few weeks before Election Day, a GfK poll on behalf of the Associated Press was released stating the gender gap was gone. The public release of that poll focused on the finding that Romney had “erased President Barack Obama’s 16-point advantage among women.” But that same day, the Washington Post/ABC tracking poll showed Obama up 15 points with women, and most other polls at the time showed the President up double-digits with women.
The exit polls showed the president won women by 11 points. Obama’s lead with women had never been erased; the GfK poll was an outlier.
The vanishing gender gap finding was counter to the narrative of the campaign at the time, and it became the centerpiece of the reporting on that poll because it was different. Other news outlets highlighted the finding because, after all, who wants to report on another poll saying the exact same thing as the rest? The problem was that the finding ignored mounds of other evidence from other pollsters saying the opposite—the gender gap remained in full effect.
Focusing on outlier results is not a mistake unique to this poll; it is repeated over and over by pollsters and analysts throughout every cycle. It is driven by the fact that unique data is more interesting, even if it is wrong. When Pew released a poll showing Romney up four points after the Denver debate (among the biggest leads shown by any poll outside of Gallup), it received considerably more ink from reporters than other polls from that period showing the race tied, simply because it was a more extreme result.
When Republican polling firms put out polls showing Rep. Todd Akin in Missouri and Republican Richard Mourdock in Indiana leading their races, reporters took notice because it seemed at odds with what was happening in those races. Throughout the cycle, Gallup got a ton of coverage in part because they are Gallup but also because they were showing such a different result from the rest.
So here’s a recommendation for you as you evaluate the polling narratives of 2014 and 2016: if a poll comes out that looks like an outlier and smells like an outlier, then it’s probably an outlier.

Nick Gourevitch is senior vice president and director of research at Global Strategy Group, a Democratic polling firm.