
Friday, 24 October 2014

General Election 2010: Did the opinion polls flatter to deceive?

In the run-up to the UK general election few people would have predicted a Conservative–Liberal Democrat coalition government – and fewer still that the Lib Dems would actually lose seats despite their popularity in the polls. Martin Boon and John Curtice examine the ‘systematic bias’ that led pollsters to overestimate the party’s support.

The research industry may have good reason to be grateful to the UK general election exit poll, although few would have suggested such a thing when the traditional 10pm release of its results pointed to a hung parliament and – most surprising – that Nick Clegg’s supposedly surging Liberal Democrats would win fewer seats than in 2005.

The disbelief among commentators and pundits before the actual results started to come in was evident. Yet as the night drew on and it became apparent that the exit poll was, after all, as close to the bull’s-eye as could realistically be expected, those who had been critical found themselves uttering plaudits instead.

The accuracy of the exit poll can only be seen as vindication of its research methods, something for which the entire research industry should be grateful. But it also had the advantage of relieving some of the pressure on the pre-election pollsters, whose own performance is perhaps more deserving of a place in the dock. A record total of nine polls based wholly or mostly on interviewing conducted in the final few days of the campaign were published during its final hours. Their success at anticipating the eventual outcome can only be regarded as ‘mixed’.

In the following graph we compare the average of the estimates produced by the final polls with the actual outcome at each of the last six general elections. It shows that in 2010 the polls clearly avoided repeating their 1992 Waterloo, when they seriously underestimated Conservative support and overestimated Labour’s strength. Indeed, although the polls were yet again inclined to underestimate Conservative support, they actually managed to underestimate Labour’s support for the first time since 1983.

Poll results

This graph shows the difference between the average (to the nearest integer) of the estimated vote share for each party in the final campaign polls, and the actual election outcome in Great Britain (again to the nearest integer).

However, while the polls might have avoided past errors, they appear to have fallen foul of a new one. The exit poll caused such surprise because its projection for the Liberal Democrats was at variance with the predictions of the final polls, which had suggested that the much-vaunted surge in favour of Nick Clegg’s party had carried through to polling day. It was on this point that the polls were wrong, significantly overestimating Liberal Democrat support for the first time in recent polling history.

Of the various polls, the best prediction came from ICM, whose final poll had an average error of 1.25 points. (Average error is defined as the average of the absolute differences between the estimated percentage for each party and the actual result.) In contrast, two other polls, both of which incorrectly suggested the Liberal Democrats would win more votes than Labour, had an average error of no less than 3.25 points. But even ICM’s poll overstated the Liberal Democrats’ eventual tally by 2 points. And the fact that every single poll overestimated Liberal Democrat support implies systematic bias rather than mere sampling error. In order to understand why this apparent bias occurred, ICM re-interviewed after the election a large proportion of those whom the company had interviewed during the last two weeks of the campaign, including as many as 1,200 of those who had participated in its final prediction poll.
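The ‘average error’ measure used here is straightforward to compute. A minimal sketch in Python, using invented poll figures rather than the actual 2010 numbers:

```python
def average_error(poll, actual):
    """Mean absolute difference, in points, between a poll's estimated
    share for each party and the actual result."""
    return sum(abs(poll[p] - actual[p]) for p in actual) / len(actual)

# Actual GB shares to the nearest point; the poll figures are hypothetical.
actual = {"Con": 37, "Lab": 29, "LD": 24}
poll_a = {"Con": 36, "Lab": 28, "LD": 26}   # invented final-poll estimates

print(round(average_error(poll_a, actual), 2))   # mean of |1|, |1|, |2|
```

Note that the differences must be taken in absolute terms; otherwise an overestimate of one party would simply cancel an underestimate of another.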

Our preliminary analysis of these recall interviews suggests the following:

  1. Only a small part of the bias can be accounted for by a late swing away from the Liberal Democrats. Among those who actually did vote, those who said they were going to vote Liberal Democrat were only a little less likely than Conservative and Labour supporters to vote as they had indicated. As many as 87% of those who expressed an intention to vote Liberal Democrat actually did so, while the equivalent figures for the Conservatives and Labour were 95% and 93% respectively. Those who switched to the Liberal Democrats at the last minute almost equalled those who defected.
  2. Differential turnout, of which nearly all polls tried to take account, seems in practice to have been relatively unimportant. Liberal Democrat supporters were no more likely to stay at home than their Labour counterparts.
  3. An important role was played by the ‘Shy Tory’ syndrome: differential failure to declare a voting intention by those who in the event voted for one party rather than another, perhaps because they felt their choice of party was currently unfashionable. The inquest into the 1992 debacle suggested that a reluctance by those who eventually voted Conservative to declare their intentions in advance helped explain the failure of the polls on that occasion. This time it seems to have been voting Labour that was regarded as unfashionable. In 2010 no fewer than one in five of those who actually voted failed to declare their voting intention when interviewed by ICM for its final poll – and they were nearly twice as likely to vote Labour as Liberal Democrat. Although ICM’s final poll prediction (unlike many others) included an adjustment that took into account evidence that Labour voters were apparently particularly reluctant to declare their intentions, that adjustment may not have been sufficient to take full account of what actually happened.
  4. Like a number of other pollsters, ICM weighted its poll data to take account of how people said they voted in 2005. Although the weighting adopted seems likely to have helped avoid what would otherwise have been another overestimate of Labour support, it may also have had the effect of upweighting Liberal Democrat support too much.

This suggests two lessons for the future. The ‘Shy Tory’ adjustment was quite clearly the correct course of action – without it ICM’s final prediction would have been notably worse. Moreover, it is now evident that the practice can identify key ‘Shy Labour’ as well as ‘Shy Tory’ voters. What may be necessary is to increase the size of the adjustment; at present ICM assumes that half of those who fail to declare a vote intention will in practice vote for the party they backed the last time around.
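The refuser adjustment described above can be sketched as follows. This is an illustration of the general idea – reallocating a fraction of those who decline to state an intention back to the party they recall backing last time – not ICM’s actual procedure, and all counts are invented:

```python
def adjusted_shares(declared, refusers_by_recall, reallocate=0.5):
    """declared: party -> count of declared voting intentions.
    refusers_by_recall: party -> count of refusers grouped by the party
    they recall voting for last time. A fraction `reallocate` of each
    refuser group is credited to that recalled party before shares
    are calculated."""
    totals = dict(declared)
    for party, n in refusers_by_recall.items():
        totals[party] = totals.get(party, 0) + reallocate * n
    grand = sum(totals.values())
    return {p: round(100 * v / grand, 1) for p, v in totals.items()}

declared = {"Con": 360, "Lab": 250, "LD": 240}   # invented counts
refusers = {"Con": 40, "Lab": 60, "LD": 20}      # refusers by recalled vote

print(adjusted_shares(declared, refusers))
```

Increasing the `reallocate` parameter beyond 0.5 is the kind of larger adjustment contemplated in the text: it pushes the published shares further towards the refusers’ recalled past vote.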

Meanwhile, weighting samples by recall of past voting seems to have played a role in the more accurate estimate of Labour’s strength. However, it might be wise to consider taking somewhat less account of the level of Liberal Democrat recall in future. Lib Dem recall will be high on our list of investigations in the coming months.
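Past-vote weighting of the kind discussed here can be sketched as follows. Each respondent group is weighted so that the sample’s recalled 2005 vote matches the 2005 result; the target shares are the 2005 GB figures to the nearest point, and the sample counts are invented:

```python
# Approximate 2005 GB vote shares, to the nearest point.
target_2005 = {"Con": 33, "Lab": 36, "LD": 23, "Other": 8}

# Invented counts of how respondents in a sample recall voting in 2005.
recall_in_sample = {"Con": 300, "Lab": 330, "LD": 280, "Other": 90}

n = sum(recall_in_sample.values())
# Weight = (target share) / (share of the sample recalling that vote).
# A weight above 1 upweights a group; below 1 downweights it.
weights = {p: (target_2005[p] / 100) / (recall_in_sample[p] / n)
           for p in target_2005}

print({p: round(w, 3) for p, w in weights.items()})
```

In this invented sample, Liberal Democrat recall is higher than the 2005 result, so those respondents are downweighted; if false recall inflates Lib Dem recall, ‘taking less account’ of it would mean capping or shrinking these weights for that group.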

Martin Boon is research director at ICM while John Curtice is professor of politics at the University of Strathclyde.


Readers' comments (6)

  • What were the unweighted results for party shares?

    The day after the Election, I put a post up on research-live speculating that there might have been a 'narrative effect' leading to pollsters finding reasons to weight up the LibDem share. Still interested to know what the unweighted results were.


  • It is a possible explanation, but not the only one consistent with the evidence. We know that there are consistent problems with false recall of past voting – and the article admits as much: "Lib Dem recall will be high on our list of investigations". Yet the evidence that the overstatement of the Lib Dem vote was a sampling problem rather than a measurement problem (differential turnout, late swing or whatever) rests entirely on taking the Lib Dem recall in the post-election poll at face value. If, instead, the problem was that Lib Dems were more likely to exaggerate their likelihood of turning out before the election, and also more likely to overclaim on their turnout after the election, then you would see exactly the same results but would need a completely different solution to correct it.


  • I feel like Jeremy Paxman now...what were the unweighted results?


  • The authors have been alerted to your request. Hopefully we'll be able to get an answer for you soon.

  • I am not sure I agree with the conclusions. There are so many factors at play here that have not seemingly been taken into account.

    Having conducted internal polling for the Liberal Democrats from October 2009 to the final Saturday before the election, I can say explicitly that our pre-campaign polls were, overall, accurate to within 0.1% of the final results. We essentially used ICM's methodology with an added twist for local incumbency factors.

    These polls - like others - went wildly off compared to the results after the 'Clegg effect' had kicked in. The main conclusion that can be drawn is that voting intentions were decided prior to the campaign - with the polls merely reflecting a flirtatious exuberance with Clegg during the campaign.

    In other words, the polling methodology that we used was very good when freed from the added noise generated by a novel situation. And for all the razzamatazz, perhaps, as has been argued before, it is the parties that need to ask why, for all the cash they spend in four heady weeks, they can barely shift the public's real mood in that period.


  • Here's a thought: since 1997 a number of Lab and LD voters have been motivated mostly by preventing a Con MP from being elected in their seat (so-called tactical voting). In the run-up to the 2010 election it seems many of these voters were also out of sorts with the Lab govt and/or saw the LDs (possibly as a result of the 'Clegg effect') as the only salvation from a Con govt. This was reflected in the polls prior to the election.

    However, when these voters got to the ballot box the realities of the First Past the Post (FPTP) system kicked in, and they realised their best anti-Con vote was to vote Lab - and so, in practice, this is what they did.

    For this reason I would now counsel against those predicting a LD collapse at any future General Election. Whilst a lot of the anti-Con voter bloc will be annoyed at the LDs going into coalition with the Cons (and a loss of 12-14% since the 2010 election reflects this), the reality for anti-Con voters may be that under FPTP it is still better to support the best anti-Con option, which in a number of seats will indisputably be the LDs. So whilst these voters currently say they will vote Lab, the realities of being in the ballot box under FPTP may well see them vote LD.


  • That's really thinking out of the box. Thanks!

