General Election 2010: Did the opinion polls flatter to deceive?
In the run-up to the UK general election few people would have predicted a Conservative–Liberal Democrat coalition government – and fewer still that the Lib Dems would actually lose seats despite their popularity in the polls. Martin Boon and John Curtice examine the ‘systematic bias’ that led pollsters to overestimate the party’s support.
The research industry may have good reason to be grateful to the UK general election exit poll, although few would have suggested such a thing when the traditional 10pm release of its results pointed to a hung parliament and – most surprisingly – that Nick Clegg’s supposedly surging Liberal Democrats would win fewer seats than in 2005.
The disbelief among commentators and pundits before the actual results started to come in was evident. Yet as the night drew on and it became apparent that the exit poll was, after all, as close to the bull’s-eye as could realistically be expected, those who had been critical found themselves uttering plaudits instead.
The accuracy of the exit poll can only be seen as vindication of its research methods, something for which the entire research industry should be grateful. But it also had the advantage of relieving some of the pressure on the pre-election pollsters, whose own performance is perhaps more deserving of a place in the dock. A record total of nine polls based wholly or mostly on interviewing conducted in the final few days of the campaign were published during its final hours. Their success at anticipating the eventual outcome can only be regarded as ‘mixed’.
In the following graph we compare the average of the estimates produced by the final polls with the actual outcome at each of the last six general elections. It shows that in 2010 the polls clearly avoided repeating their 1992 Waterloo, when they seriously underestimated Conservative support and overestimated Labour’s strength. Indeed, although the polls were yet again inclined to underestimate Conservative support, they actually managed to underestimate Labour’s support for the first time since 1983.
However, while the polls might have avoided past errors, they appear to have fallen foul of a new one. The exit poll caused such surprise because its projection for the Liberal Democrats was at variance with the predictions of the final polls, which had suggested that the much-vaunted surge in favour of Nick Clegg’s party had carried through to polling day. It was on this point that the polls were wrong, significantly overestimating Liberal Democrat support for the first time in recent polling history.
Of the various polls, the best prediction came from ICM, whose final poll had an average error of 1.25%. (Average error is defined as the average of the absolute differences between the estimated percentage for each party and the actual result.) In contrast, two other polls, both of which incorrectly suggested the Liberal Democrats would win more votes than Labour, had an average error of no less than 3.25%. But even ICM’s poll overstated the Liberal Democrats’ eventual tally by 2 points. And the fact that every single poll overestimated Liberal Democrat support implies systematic bias rather than mere sampling error. In order to understand why this apparent bias occurred, ICM re-interviewed after the election a large proportion of those whom the company had interviewed during the last two weeks of the campaign, including as many as 1,200 of those who had participated in its final prediction poll.
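The average-error calculation just defined can be illustrated in a few lines. The figures below are purely illustrative and are not any pollster’s actual numbers:

```python
# Average error: the mean of the absolute differences between each
# party's estimated vote share and its actual share.
# Illustrative figures only, not any published poll or result.
poll   = {"Con": 36, "Lab": 28, "LD": 26, "Other": 10}
result = {"Con": 37, "Lab": 30, "LD": 24, "Other": 9}

avg_error = sum(abs(poll[p] - result[p]) for p in poll) / len(poll)
print(avg_error)  # (1 + 2 + 2 + 1) / 4 = 1.5
```

Note that with this measure an overestimate and an underestimate of equal size contribute equally, so a poll can have a modest average error even when every party’s figure is slightly off.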
Our preliminary analysis of these recall interviews suggests the following:
- Only a small part of the bias can be accounted for by a late swing away from the Liberal Democrats. Among those who actually did vote, those who said they were going to vote Liberal Democrat were only a little less likely than Conservative and Labour supporters to vote as they had indicated. As many as 87% of those who expressed an intention to vote Liberal Democrat actually did so, while the equivalent figures for the Conservatives and Labour were 95% and 93% respectively. Those who switched to the Liberal Democrats at the last minute almost equalled those who defected.
- Differential turnout, of which nearly all polls tried to take account, seems in practice to have been relatively unimportant. Liberal Democrat supporters were no more likely to stay at home than their Labour counterparts.
- An important role was played by the ‘Shy Tory syndrome’: differential failure to declare a voting intention by those who in the event voted for one party rather than another, perhaps because they felt that their choice of party was currently unfashionable. The inquest into the 1992 debacle suggested that a reluctance by those who eventually voted Conservative to declare their intentions in advance helped explain the failure of the polls on that occasion. This time it seems to have been voting Labour that was regarded as unfashionable. In 2010 no fewer than one in five of those who actually voted failed to declare their voting intention when interviewed by ICM for its final poll – and they were nearly twice as likely to vote Labour as Liberal Democrat. Although ICM’s final poll prediction (unlike many others) included an adjustment that took into account evidence that Labour voters were apparently particularly reluctant to declare their intentions, that adjustment may not have been sufficient to take full account of what actually happened.
- Like a number of other pollsters, ICM weighted its poll data to take account of how people said they voted in 2005. Although the weighting adopted seems likely to have helped avoid what would otherwise have been another overestimate of Labour support, it may also have had the effect of upweighting Liberal Democrat support too much.
This suggests two lessons for the future. The ‘Shy Tory’ adjustment was quite clearly the correct course of action – without it ICM’s final prediction would have been notably worse. Moreover, it is now evident that the practice can capture ‘Shy Labour’ as well as ‘Shy Tory’ voters. What may be necessary is to increase the size of the adjustment; at present ICM assumes that half of those who fail to declare a vote intention will in practice vote for the party they backed the last time around.
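The half-reallocation rule described above can be sketched as follows. This is a minimal illustration of the general idea, assuming refusers are reallocated to their recalled past vote at a fixed fraction; the function name, input data and the simple counts-to-shares arithmetic are ours, not ICM’s published method:

```python
def adjust_shares(declared, refusers_by_past_vote, reallocate_fraction=0.5):
    """Sketch of a 'shy voter' adjustment.

    declared: raw counts of declared vote intentions by party.
    refusers_by_past_vote: counts of non-declarers, keyed by the party
        they say they voted for last time.
    reallocate_fraction: share of each refuser group assumed to vote
        for their past party again (0.5 mirrors the rule in the text).
    Returns adjusted percentage shares.
    """
    adjusted = dict(declared)
    for party, n in refusers_by_past_vote.items():
        adjusted[party] = adjusted.get(party, 0) + reallocate_fraction * n
    total = sum(adjusted.values())
    return {p: 100 * v / total for p, v in adjusted.items()}

# Illustrative counts: refusers skew towards past Labour voters,
# so the adjustment raises Labour's share relative to the raw figures.
shares = adjust_shares({"Con": 360, "Lab": 250, "LD": 240},
                       {"Con": 40, "Lab": 80, "LD": 30})
```

Raising `reallocate_fraction` above 0.5 is one way to express the lesson drawn in the text: assume that more, not just half, of the silent respondents revert to their previous party.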
Meanwhile, weighting samples by recall of past voting seems to have played a role in the more accurate estimate of Labour’s strength. However, it might be wise to consider taking somewhat less account of the level of Liberal Democrat recall in future. Lib Dem recall will be high on our list of investigations in the coming months.
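The past-vote weighting discussed here can be sketched in its simplest form: give each respondent a weight so that the sample’s recalled vote distribution matches a chosen target. The function name and the target shares below are illustrative assumptions, not ICM’s actual weighting scheme:

```python
from collections import Counter

def past_vote_weights(sample_recall, target_shares):
    """Sketch of past-vote weighting.

    sample_recall: one recalled past vote per respondent.
    target_shares: desired proportion for each recall category
        (e.g. something close to the actual previous-election result).
    Returns one weight per respondent, so that weighted recall
    matches the targets.
    """
    counts = Counter(sample_recall)
    n = len(sample_recall)
    return [target_shares[r] * n / counts[r] for r in sample_recall]

# Illustrative sample of 100 respondents with over-recalled Labour vote.
sample = ["Con"] * 30 + ["Lab"] * 40 + ["LD"] * 30
targets = {"Con": 0.33, "Lab": 0.36, "LD": 0.31}
weights = past_vote_weights(sample, targets)
```

The concern raised in the text maps directly onto the targets: if the target share assumed for Liberal Democrat recall is set too high, every respondent recalling a Lib Dem vote is upweighted, and the party’s current support is inflated with them.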
Martin Boon is research director at ICM while John Curtice is professor of politics at the University of Strathclyde.