Is it time to pull the plug on traditional polling?

We’ve seen a lot of polling angst since 2012, when many organizations, Gallup top among them, were off the mark. There’s been more angst still in the weeks since the 2015 British general election was comprehensively called incorrectly, bringing the issues of public opinion polling and election prediction back to prominence. The public and professionals are right to be concerned.


So what explains the problems and the promise of political polling and prediction? Two things: polling can give us good predictions of preferences (whom someone is going to vote for), but it cannot give us good predictions of turnout (whether someone is going to vote at all).


Traditional polling does a decent job with the first of these. But the way it goes about the second is unscientific. In fact, you might even call it pure make-believe.


Humans have direct access to their personal preferences at any given moment. Ask me who I’m going to vote for (if I vote), and my self-prediction will be reasonably accurate. A good poll should get us very close to the real distribution of voter preferences at the time it was taken. If we’re close to the election (and no big bombs drop), that prediction should hold up well. Traditional polling, however, does a terrible job of predicting turnout.


Polling firms decide whether a survey respondent will vote based on a series of questions referred to as a “likely voter screen.” Based on things like how interested you say you are in the election, how closely you say you’ve been following it, how likely you say you are to turn out, or whether you say you’ve voted in past elections, the polling company will classify you as a “likely voter” or a non-voter.


The problem is that people don’t have direct access to their future voting behavior the way they do to their current voting preferences. When we ask about voting, we’re asking about an action, not a preference: a concrete behavior rather than a state of mind. And people are just really bad at self-prediction. It’s been shown time and again, from exercise to finances to voting: self-reported intention is a terrible way to predict future behavior.


We know these screens are unreliable. And from year to year, election to election, we have no way of knowing how they bias the results of polls. The bias in these “likely voter” screens produces a make-believe electorate that only sometimes corresponds to the actual electorate.


For instance, Gallup conducted extensive research after 2012 to improve its election predictions. And yet in 2013, its voter screen produced less accurate results for the New Jersey governor’s race but more accurate results in Virginia.


These “likely voter” screens are measuring something, but it certainly isn’t the likelihood of voting. The screens are unscientific, and they introduce a potentially large and entirely unpredictable source of error into the predictions.


The good news is we don’t need “likely voter” screens. We already have vote history for most voters in the nation, and that vote history does a good job of “predicting” future turnout.


We’re still vulnerable to being wrong, because, again, no one can really predict the future. But with vote history, our expectations about a voter’s future turnout are actually based on past behavior instead of a battery of self-reported states of mind and self-predictions that have a tenuous connection to the behavior of interest—casting a ballot in an election.


Instead of relying on traditional sampling and voter screens, we can turn to predicted probabilities of turnout (and of vote preferences, for that matter). All this means is that we conduct statistical analysis using information about a voter’s past behavior and other characteristics to estimate how likely it is that a citizen will turn out to vote. Is a given voter 10 percent likely to turn out, 50 percent, or perhaps 95 percent?
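
To make this concrete, here is a minimal sketch of one way such a turnout model could be built, using logistic regression on voter-file history. The file name and feature columns are illustrative assumptions, not a description of any particular firm’s model:

```python
# A minimal, hypothetical sketch of a turnout model: estimate each voter's
# probability of casting a ballot from vote history on the voter file.
# The file name and feature columns are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical voter-file extract: one row per registered voter.
voters = pd.read_csv("voter_file.csv")

# Past behavior and basic characteristics as predictors.
features = ["voted_2010_general", "voted_2012_primary", "voted_2012_general", "age"]

# Train on an election whose turnout we already observed, so the model
# learns how history maps onto actual participation.
model = LogisticRegression()
model.fit(voters[features], voters["voted_2013_general"])

# Predicted probability that each voter turns out next time.
voters["turnout_prob"] = model.predict_proba(voters[features])[:, 1]
print(voters["turnout_prob"].describe())
```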


Using past turnout history, demographic, census, and commercial data, we can generate predicted vote probabilities for past elections that are quite accurate (on average). Then we use those probabilities to create our predictions of future elections. If the supporters of each candidate turn out in the same proportions they did in the past, then the prediction will be accurate.
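
In arithmetic terms, the prediction is just a turnout-weighted average: each voter’s probability of preferring a candidate, weighted by that voter’s probability of showing up. A toy sketch with made-up numbers:

```python
import pandas as pd

# Hypothetical per-voter probabilities (made-up numbers).
voters = pd.DataFrame({
    "turnout_prob":     [0.10, 0.50, 0.95, 0.80],  # chance of voting
    "pref_candidate_a": [0.90, 0.40, 0.30, 0.70],  # chance of preferring A
})

# Expected vote share for A: turnout-weighted average of preference.
share_a = ((voters["turnout_prob"] * voters["pref_candidate_a"]).sum()
           / voters["turnout_prob"].sum())
print(f"Predicted vote share for A: {share_a:.1%}")  # about 48.3% here
```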


We have a great example of how modeling vote preference and turnout probabilities can get us very close to the actual results, even in a difficult-to-predict race and with far-from-perfect survey data: former Rep. Eric Cantor versus current Rep. Dave Brat in the 2014 Republican primary in Virginia’s 7th Congressional District.


A poll conducted by the Daily Caller just before the race had Cantor winning by a double-digit margin, albeit only just. Evolving Strategies conducted some data analysis and modeling for the Brat campaign in the general election, and we had access to voter-ID data from the primary: a jumble of rolling automated and volunteer survey data collected during the weeks before the election.


We used modeled vote preference probabilities and average primary turnout probabilities from recent elections to predict the vote share: 53.8 percent for Brat and 46.2 percent for Cantor. The actual results were 55.5 to 44.5 percent.


We missed Brat’s real margin by about 3.4 points, just by using “found” data. The Daily Caller poll, by contrast, was wildly off, particularly among the respondents its screen classified as likely voters, who went 58 percent for Cantor. It missed Brat’s margin by about 27 points.

Meanwhile, Cantor’s own pollster had the incumbent up by 34 points heading into Brat’s surprise win.


We predicted a future election based on then-current vote preferences combined with turnout probabilities produced from previous elections. If turnout probabilities move up or down proportionally for everyone, then it doesn’t matter: we’ll just be wrong about total turnout, not about the margin of victory (setting aside error in the vote preference probabilities). But if turnout shifts are connected to vote preference, we’ll be off.
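
The reason a proportional, across-the-board shift washes out: scaling every turnout probability by the same factor multiplies both the numerator and the denominator of the turnout-weighted share, so the factor cancels. A toy illustration (all numbers made up):

```python
# Toy illustration: a proportional turnout shift cancels out of the predicted
# share, while a preference-correlated shift does not. Numbers are made up.
turnout = [0.2, 0.6, 0.9]   # per-voter turnout probabilities
pref_a  = [0.8, 0.5, 0.3]   # per-voter probability of preferring candidate A

def share_a(turnout, pref_a):
    return sum(t * p for t, p in zip(turnout, pref_a)) / sum(turnout)

base   = share_a(turnout, pref_a)
scaled = share_a([0.5 * t for t in turnout], pref_a)  # everyone's turnout halved
skewed = share_a([t * (0.7 + 0.6 * p) for t, p in zip(turnout, pref_a)], pref_a)
# ^ turnout rises with support for A: a preference-correlated shift

print(f"baseline {base:.3f}, proportional {scaled:.3f}, correlated {skewed:.3f}")
# The proportional shift matches the baseline exactly; the correlated shift moves it.
```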


Again, we can’t really predict the future, but we can give our best estimate of what it will look like based on known facts related to the behavior at hand: voting. We can say what the future will look like if voters behave as they did in the past, combined with current data on vote preferences.


Using this as our baseline prediction, we can then look at plausible deviations: what if Republicans turn out at higher rates and Democrats at lower rates, or vice versa? This gives us what one might call a “margin of reality.”
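
One simple way to put a number on such a “margin of reality” is to re-run the prediction under a grid of preference-correlated turnout swings and report the range of resulting margins. A sketch, with assumed swing sizes and toy data:

```python
# Hypothetical "margin of reality": recompute the predicted margin under a
# range of differential turnout scenarios instead of a single point estimate.
turnout = [0.2, 0.6, 0.9]
pref_a  = [0.8, 0.5, 0.3]

def margin(turnout, pref_a):
    share = sum(t * p for t, p in zip(turnout, pref_a)) / sum(turnout)
    return 2 * share - 1  # share_A minus share_B in a two-way race

margins = []
for swing in (-0.10, -0.05, 0.0, 0.05, 0.10):  # assumed plausible swings
    # Boost turnout for A-leaning voters and depress it for B-leaning voters
    # (or vice versa for negative swings), clamped to valid probabilities.
    adjusted = [min(1.0, max(0.0, t + swing * (2 * p - 1)))
                for t, p in zip(turnout, pref_a)]
    margins.append(margin(adjusted, pref_a))

print(f"margin of reality: {min(margins):+.3f} to {max(margins):+.3f}")
```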


Instead of a “margin of reality” based on relevant facts, however, too many pollsters still report a margin of error as if it encapsulated their uncertainty about the results. But that margin refers to an electorate arbitrarily constructed through a process with a massive source of potential error, one that isn’t even considered in the calculations.


The conversation about political polling is largely focused on side issues like online-versus-telephone surveys. The bigger issue is how to predict the likely electorate, and here the traditional firms have doubled down on the indefensible.


Polling and prediction must be based on the voter file. Traditional polling — with its “likely voter” screens — has flatlined. It’s time to pull the plug.


Adam B. Schaeffer, Ph.D., is the Director of Research and co-founder of Evolving Strategies, a data and analytics firm dedicated to understanding human behavior through the creative application of randomized-controlled experiments.

Originally published in Campaigns & Elections, June 1, 2015.
