OPINION
6 July 2010
In the run-up to the UK general election few people would have predicted a Conservative–Liberal Democrat coalition government – and fewer still that the Lib Dems would actually lose seats despite their popularity in the polls. Martin Boon and John Curtice examine the ‘systematic bias’ that led pollsters to overestimate the party’s support.
The research industry may have good reason to be grateful to the UK general election exit poll, although few would have suggested such a thing when the traditional 10pm release of its results pointed to a hung parliament and – most surprising – that Nick Clegg’s supposedly surging Liberal Democrats would win fewer seats than in 2005.
The disbelief among commentators and pundits before the actual results started to come in was evident. Yet as the night drew on and it became apparent that the exit poll was, after all, as close to the bull’s-eye as could realistically be expected, those who had been critical found themselves uttering plaudits instead.
The accuracy of the exit poll can only be seen as vindication of its research methods, something for which the entire research industry should be grateful. But it also had the advantage of relieving some of the pressure on the pre-election pollsters, whose own performance is perhaps more deserving of a place in the dock. A record total of nine polls based wholly or mostly on interviewing conducted in the final few days of the campaign were published during its final hours. Their success at anticipating the eventual outcome can only be regarded as ‘mixed’.
In the following graph we compare the average of the estimates produced by the final polls with the actual outcome at each of the last six general elections. It shows that in 2010 the polls clearly avoided repeating their 1992 Waterloo, when they seriously underestimated Conservative support and overestimated Labour’s strength. Indeed, although the polls were yet again inclined to underestimate Conservative support, they actually managed to underestimate Labour’s support for the first time since 1983.
However, while the polls might have avoided past errors, they appear to have fallen foul of a new one. The exit poll caused such surprise because its projection for the Liberal Democrats was at variance with the predictions of the final polls, which had suggested that the much-vaunted surge in favour of Nick Clegg’s party had carried through to polling day. It was on this point that the polls were wrong, significantly overestimating Liberal Democrat support for the first time in recent polling history.
Of the various polls, the best prediction came from ICM, whose final poll had an average error of 1.25%. (Average error is defined as the average of the differences between the estimated percentage for each party and the actual result.) In contrast, two other polls, both of which incorrectly suggested the Liberal Democrats would win more votes than Labour, had an average error of no less than 3.25%. But even ICM’s poll overstated the Liberal Democrats’ eventual tally by 2 points. And the fact that every single poll overestimated Liberal Democrat support implies systematic bias rather than mere sampling error. In order to understand why this apparent bias occurred, ICM re-interviewed after the election a large proportion of those whom the company had interviewed during the last two weeks of the campaign, including as many as 1,200 of those who had participated in its final prediction poll.
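For readers who want the measure made concrete, the average-error calculation described above can be sketched as follows. The figures here are invented for illustration and are not any pollster’s actual numbers:

```python
def average_error(poll, actual):
    """Mean absolute difference, in percentage points, between a poll's
    estimated vote shares and the actual result.

    Both arguments map party names to percentage shares.
    """
    diffs = [abs(poll[party] - actual[party]) for party in actual]
    return sum(diffs) / len(diffs)


# Hypothetical shares for illustration only (not a real 2010 poll).
poll = {"Con": 36, "Lab": 28, "LD": 27}
actual = {"Con": 37, "Lab": 30, "LD": 24}
print(average_error(poll, actual))  # 2.0
```

On these invented figures the poll misses by 1, 2 and 3 points respectively, giving an average error of 2.0 – worse than ICM’s reported 1.25% but better than the 3.25% of the weakest final polls.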
Our preliminary analysis of these recall interviews suggests the following:
This suggests two lessons for the future. The ‘Shy Tory’ adjustment was quite clearly the correct course of action – without it ICM’s final prediction would have been notably worse. Moreover, it is now evident that the practice can identify ‘Shy Labour’ as well as ‘Shy Tory’ voters. What may be necessary is to increase the size of the adjustment; at present ICM assumes that half of those who fail to declare a vote intention will in practice vote for the party they backed last time around.
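A minimal sketch of how an adjustment of this kind might work – a simplified reading of the rule described above, not ICM’s actual weighting scheme, and with invented counts:

```python
def adjust_for_shy_voters(declared, refusers_past_vote, fraction=0.5):
    """Add a fraction of non-declarers to the party they recall backing
    at the previous election.

    `declared` maps party -> count of respondents declaring a vote
    intention; `refusers_past_vote` maps party -> count of respondents
    who refused to declare but recall voting for that party last time.
    """
    adjusted = dict(declared)
    for party, count in refusers_past_vote.items():
        adjusted[party] = adjusted.get(party, 0) + fraction * count
    return adjusted


# Illustrative counts only.
declared = {"Con": 350, "Lab": 300, "LD": 250}
refusers = {"Con": 40, "Lab": 30, "LD": 10}
print(adjust_for_shy_voters(declared, refusers))
# {'Con': 370.0, 'Lab': 315.0, 'LD': 255.0}
```

Raising the size of the adjustment, as the authors suggest, would simply mean increasing `fraction` above 0.5.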
Meanwhile, weighting samples by recall of past voting seems to have played a role in the more accurate estimate of Labour’s strength. However, it might be wise to consider taking somewhat less account of the level of Liberal Democrat recall in future. Lib Dem recall will be high on our list of investigations in the coming months.
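In outline, past-vote weighting of the kind mentioned above gives each recalled-past-vote group a weight equal to an assumed target share divided by that group’s share of the sample. The sketch below uses invented shares and is not ICM’s actual procedure; it simply illustrates why a sample that over-recalls Liberal Democrat voting would see those respondents weighted down:

```python
def past_vote_weights(sample_recall_share, target_share):
    """Weight for each recalled-past-vote group: the target (assumed
    true) share of the past vote divided by the share that group makes
    up of the achieved sample."""
    return {g: target_share[g] / sample_recall_share[g] for g in target_share}


# Invented shares: this sample 'remembers' too many Lib Dem votes,
# so Lib Dem recallers receive a weight below 1.
sample = {"Con": 0.30, "Lab": 0.33, "LD": 0.27, "Other": 0.10}
target = {"Con": 0.33, "Lab": 0.36, "LD": 0.22, "Other": 0.09}
print(past_vote_weights(sample, target))
```

Taking “somewhat less account” of Liberal Democrat recall would, in these terms, mean moderating the target share used for that group rather than weighting fully to it.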
Martin Boon is research director at ICM, and John Curtice is professor of politics at the University of Strathclyde.