2015 British Elections: Lessons Learned

Following the outcome of the 2015 British Elections, the polling industry is once again under fire for missing the mark by a rather notable margin. According to the BBC, of the 92 polls gathered over the six-week campaign, 17 showed an exact tie while the remaining 75 had one party leading the other by margins of one to six percentage points. Of those 75, over half (56%) predicted a Labour victory. However, on Election Day, Conservative Party leader David Cameron went on to defeat Labour’s Ed Miliband by a substantial 7 points. Additionally, despite forecasts of a neck-and-neck outcome, the Conservatives picked up a significant number of seats in the House of Commons (28), gaining an absolute majority.

So what happened? Have the ever-evolving ways voters in the developed world communicate made accurate opinion research impossible? In a word, no. Now, I’m not suggesting all is well in the polling universe. Clearly it’s not. I’d be the first to admit that the opinion research industry faces mounting technical and methodological challenges. We live in a world that continues to migrate away from voice to text-based communication. Not only does that present us with the need for cell phone inclusion, but it makes it harder to keep people on the phone, period. Pair those challenges with the difficulty of developing reliable internet-based samples, not to mention ongoing shifts in demographics, and it all combines to make polling more difficult by the day. But none of that is what derailed polling in the UK’s recent election.

Here’s the bottom line. If pollsters knew the exact demographic and geographic make-up of an electorate prior to Election Day, they would accurately predict results close to 100% of the time. At the very least, we’d always be within the margin of error. Obviously, no one has the magic eight ball that tells us this. Thus, pollsters are left to rely on history and their own assumptions to build a sample frame they think is most likely to mirror the electorate of the yet-to-be-held election. It’s easy to see how those assumptions can get off track.
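
To make the “within the margin of error” point concrete, here is a minimal simulation sketch. It assumes a hypothetical 52/48 electorate and a 1,000-person simple random sample; both numbers are illustrative and not drawn from any actual poll. When the composition of the electorate is known, the estimate lands inside the standard margin of error roughly 95% of the time.

```python
# Illustrative sketch only: how often does a simple random sample land inside
# the standard margin of error when the true electorate split is known?
import math
import random

TRUE_SUPPORT = 0.52   # hypothetical true share for the leading party (assumed)
SAMPLE_SIZE = 1_000   # a typical national telephone poll (assumed)
TRIALS = 10_000       # number of simulated polls

# Standard +/- margin of error for a simple random sample of this size.
moe = 1.96 * math.sqrt(TRUE_SUPPORT * (1 - TRUE_SUPPORT) / SAMPLE_SIZE)

hits = 0
for _ in range(TRIALS):
    supporters = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    estimate = supporters / SAMPLE_SIZE
    if abs(estimate - TRUE_SUPPORT) <= moe:
        hits += 1

print(f"margin of error: +/-{moe:.1%}")                 # about +/-3.1 points
print(f"polls inside the margin: {hits / TRIALS:.1%}")  # about 95%
```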

Pollsters make these assumptions in three ways: in the sample frame, in data weighting, and in turnout modeling. When pollsters significantly weight and model data according to their assumptions, they’re often guilty of casting a small net in the hope of catching exactly the right fish to mirror the proportional makeup of the whole pond. The trouble comes when we layer one weight on another and another and so on, and then model off that. At that point, maybe all the pollster’s assumptions are right, or maybe they’ve turned their sample of fish into fisherman’s stew.
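
To see why layering weight upon weight is risky, here is a rough sketch, using invented adjustment factors, of how each additional weighting layer erodes a poll’s effective sample size (via the standard Kish approximation). The specific layers and multipliers below are assumptions for illustration only, not anyone’s actual weighting scheme.

```python
# Sketch: layered weighting adjustments shrink a poll's effective sample size.
import random

def effective_sample_size(weights):
    """Kish approximation: n_eff = (sum of weights)^2 / sum of squared weights."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

random.seed(0)
n = 1_000
weights = [1.0] * n  # raw sample: every respondent counts equally
print(f"raw sample: effective n ~ {effective_sample_size(weights):.0f}")

# Each "layer" multiplies respondents' weights by an assumed correction factor;
# the layer names and the 0.4-2.5 range are placeholders for illustration.
for layer in ("region", "age", "past vote", "turnout model"):
    weights = [w * random.uniform(0.4, 2.5) for w in weights]
    print(f"after {layer} weighting: effective n ~ {effective_sample_size(weights):.0f}")
```

The point of the sketch: the nominal sample size never changes, but every extra layer of adjustment makes the weights more uneven, and the poll behaves as if it were built on a smaller and smaller sample.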

Over the past five years, both Republicans and Democrats have racked up an impressive string of big misses. In 2010, many Republican firms grossly underestimated turnout in Republican primaries all over the country, which left the accuracy of their polls shattered when primary election results showed the highest GOP turnout since 1974. Following this miss, it was many Democrat firms who got it wrong in the 2010 general elections, where, despite the onslaught of Republican polling pointing to a wave, Democrat polling showed a much closer outcome right up to the end. Then came 2012, when many Republican firms assumed there was no way the Democrats could rev up the Obama engine to replicate the Democrat turnout of 2008. Of course, that’s exactly what happened. Lastly, in 2014, Democrats were the culprits as left-leaning firms pointed to a wash election of minimal gains while, in reality, Republicans significantly expanded their majority in the House and flipped the Senate in another Republican wave election.

The UK’s flop is unique in that the Tories and Labour, not to mention the media, ALL got it wrong. The key ingredient in all five elections is the same: bad assumptions about what the electorate would look like on Election Day.

Now, if you’re looking for more proof, consider this: internet-based polling firm Survey Monkey got the UK election right. I know what you’re thinking. Survey Monkey? Can’t be. Their methodology is far too simplistic and unrefined to be accurate. Yes, but maybe that’s the point, at least to some degree. Survey Monkey is a low-cost, internet-based survey platform that allows organizations and businesses to cheaply and easily survey their membership or clients. As it turns out, Survey Monkey has quite a few users in the UK. They simply took the email addresses of everyone living in the United Kingdom who had answered one of their surveys and asked them whether they intended to vote and, if so, for whom. With such a simple methodology, there is no question that a number of people in their sample were either not going to vote or not even qualified to vote. However, Survey Monkey got it right despite those problems for two reasons: they had a massive 18,000-person sample, and they took the voters at their word.
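
As a back-of-the-envelope illustration of what an 18,000-person sample buys, here is a quick comparison, assuming the worst-case 50/50 split, of the sampling margin of error for a typical 1,000-person poll versus a sample the size Survey Monkey reported. Sheer size does not remove selection bias, but it does make random sampling noise a much smaller part of the story.

```python
# Back-of-the-envelope comparison of sampling margins of error by sample size.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Standard sampling margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 18_000):
    print(f"n = {n:>6,}: +/-{margin_of_error(n):.1%}")

# n =  1,000: +/-3.1%
# n = 18,000: +/-0.7%
```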

And what about the “shy Tory” theory for the UK miss? That seems like a reasonable explanation, but it doesn’t explain Survey Monkey’s success. In the end, the “shy Tory” theory is perfect for those looking to quickly assign blame without further investigation because it’s nearly impossible to prove or disprove. But again, if the “shy Tory” theory were true, it certainly should have shown up in a survey with a sample size of 18,000.

All this leads me to an uncomfortable truth. In the political realm, the polling industry never wants to believe news that is too good or too bad. The only thing worse than telling a client they are losing badly is telling them they are winning big. The risk to a pollster’s professional reputation increases exponentially with the size of the ballot margin. The most comfortable result is a ballot within the margin of error. For that reason, the temptation to “correct” the data to bring it within a “safer” range is immense.

So where does the industry go from here? First, we must continue to wrestle with the various technical challenges and find the best ways to address them. Better cell-phone samples, shorter questionnaires, and more robust internet samples are a good start. It also means casting a broader net, allowing our sample frame to float in a way that doesn’t squeeze the electorate into our preconceived assumptions.

Second, to get back on track, pollsters must do a better job of actually listening to the voters. Any one survey could easily contain a significant anomaly that makes it inaccurate. But if I see two or three surveys with the same “error,” maybe it’s not an anomaly. Maybe it’s a trend.
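
As a rough illustration of that point (the numbers are assumed, not taken from any specific survey): under pure sampling noise, a single 1,000-person poll overshoots a candidate by two or more points about one time in ten, but three independent polls all doing so in the same direction is roughly a one-in-a-thousand event. Repeated “errors” in the same direction are far more likely to be signal than noise.

```python
# Sketch: how likely is the same 2+ point "error" across several independent
# polls if it is really just sampling noise? All figures below are assumed.
import math

def prob_overshoot(points, n=1_000, p=0.5):
    """P(a single poll overshoots the true share by at least `points` pts, noise only)."""
    se = math.sqrt(p * (1 - p) / n)           # standard error of the estimate
    z = (points / 100) / se
    return 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail normal probability

single = prob_overshoot(2)
print(f"one poll 2+ points high by chance:          {single:.1%}")       # about 10%
print(f"three independent polls all 2+ points high: {single ** 3:.2%}")  # about 0.1%
```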

Too often we brush off various findings, labeling them errors or anomalies if they don’t fit with our notion of what the data should look like. When we do this, we once again allow our assumptions to get in the way, and we miss possibly the most important finding of all. The point being, if the numbers point to a clear sentiment that doesn’t fit with my assumptions, I may need to change my assumptions. There’s only one assumption we should embrace: assume most of the voters are telling the truth. Most of the time, they are.