OPINION | 7 May 2010

The morning after the night before

So how did the polls do at predicting the general election result? Well, it certainly wasn’t a disaster. On the whole, the final polls got the Conservative and Labour votes right, but overestimated support for the Lib Dems and underestimated the smaller parties.

At the time of writing, with 641 of 650 seats declared, the Conservatives are on 36%, Labour are on 29%, the Lib Dems are on 23% and the rest are on 12% (excluding Northern Ireland, as the opinion poll samples do, the figures are 37%, 30%, 23% and 10%). Results from the final polls are listed below – all of them had the Tory vote between 35% and 37% and most had Labour on 28% or 29%. But almost all of them overstated Lib Dem support, placing the party either just behind Labour or neck-and-neck.

It was always going to be tough for the pollsters to work out how the Lib Dems’ dramatic surge in the polls after the first TV debate would translate into votes.

Angus Reid Public Opinion’s final poll, published on Wednesday, was the furthest out on the second- and third-placed parties, putting the Lib Dems in second place on 29% and Labour on just 24%. It did, however, come closer than some of the others on support for the smaller parties.

Many pollsters had hinted at the ‘softness’ of Lib Dem support, but apparently didn’t give it sufficient weight in their predictions.

Research spoke this morning to Ben Page of Ipsos Mori, who was reluctant to get into too much detail before all the votes had been counted, but said that the research industry had been “vindicated” over earlier criticism of the exit polls. “Iain Dale, the prominent blogger, said he would run naked down Whitehall if the exit poll was right and the exit poll was right,” said Page. “So the market research industry is looking forward to seeing Iain Dale naked in Whitehall some time soon.”

As for the apparent overstatement of Lib Dem support, Page said: “On our final poll for the Evening Standard on Wednesday, we had 40% of Lib Dems saying they might change their mind. We’ll all want to look and see what we can do about soft support for the Lib Dems, we’ll have to find a rational and reasonable way of dealing with it rather than just saying Lib Dems tend to overstate. We will all be looking at certainty of vote, voting history – the surge was partly younger people – and late switching, things like that. The Lib Dems were most likely to say they would vote tactically. So the support was there but it didn’t actually manifest itself in votes on the day – Lib Dem support was slowly deflating after initial Clegstacy and on the day fell further.”
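As a back-of-the-envelope illustration of the kind of adjustment Page describes — and emphatically not any pollster’s actual method — here is a minimal sketch that discounts stated intention by self-reported certainty. The only figure taken from the article is the 40% of Lib Dem supporters who told Ipsos Mori they might change their mind; the headline shares are Ipsos Mori’s final poll from the table below, and the other certainty rates are invented for the example.

```python
# Hypothetical sketch: down-weighting "soft" voting intention by
# self-reported certainty, then re-percentaging to 100.

# Headline voting intention from Ipsos Mori's final poll (%).
intention = {"Con": 36, "Lab": 29, "LD": 27, "Oth": 8}

# Assumed share of each party's supporters who are certain to stick with
# their choice. Only the Lib Dem figure (1 - 0.40) comes from the article;
# the rest are illustrative guesses.
certainty = {"Con": 0.85, "Lab": 0.85, "LD": 0.60, "Oth": 0.80}

# Keep only the "firm" support, then rescale the shares to sum to 100.
firm = {p: intention[p] * certainty[p] for p in intention}
total = sum(firm.values())
adjusted = {p: 100 * v / total for p, v in firm.items()}

for party in intention:
    print(f"{party}: {intention[party]}% stated -> {adjusted[party]:.1f}% adjusted")
```

Even this crude discount pulls the Lib Dems from 27% down to roughly 21%, closer to the 23% they actually polled; a real adjustment would also fold in turnout likelihood, voting history and late switching, as Page describes.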

Pollsters can only do so much, Page said, especially as political polling represents only a tiny part of the business of most of the companies that do it (even if it accounts for most of their publicity). If the industry really wanted to, he said, they could follow the lead of the broadcasters with the exit polls and work together. “We could all combine together our resources and do a final, final poll that would be about as perfect as we could make it,” he said. “Whether we’ll ever do that I don’t know.”

ICM, which also did its surveys by phone, seems to have come closest to the Lib Dem share of the vote with its estimate of 26%, but was still three points out. Research director Martin Boon said it was a disappointing night for the polling industry in general. “There are some sizeable average errors out there and we all do need to take a look at our methods,” he told Research. “Clearly all polling companies have overstated the Lib Dems, so there has to be something consistent going on. It would be a little bit premature to consider the reasons for this but it’s up to the opinion pollsters to see why it might have been the case. We’re always testing our methods and this is the best time to be looking at methodologies, assumptions and techniques in order to improve them in the future.”

But considering the number of wild card factors involved in this election, and the anxiety in the MR industry about a repeat of the polling failures of 1992, it could have been a lot worse.

The final poll results were as follows:

Pollster             Con   Lab   Lib Dem   Others
Angus Reid PO        36%   24%   29%       11%
ComRes               37%   28%   28%       7%
Harris               35%   29%   27%       9%
ICM                  36%   28%   26%       10%
Ipsos Mori           36%   29%   27%       8%
Populus              37%   28%   27%       8%
YouGov               35%   28%   28%       9%
BBC poll of polls    36%   28%   27%       9%

These polls were, of course, all conducted in different ways and at different times, so please visit the sites of Angus Reid Public Opinion, ComRes, Harris Interactive, ICM, Ipsos Mori, Populus, YouGov and the BBC if you want more detail.
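For readers who want to put a number on Boon’s “sizeable average errors”, here is a rough sketch scoring each final poll in the table above against the ex-NI shares reported in this piece (Conservatives 37%, Labour 30%, Lib Dems 23%, others 10%) by mean absolute error. Bear in mind the result was still provisional at the time of writing and everything here is rounded to whole points.

```python
# Mean absolute error of each final poll against the provisional GB (ex-NI)
# shares reported above: Con 37%, Lab 30%, Lib Dem 23%, Others 10%.
actual = {"Con": 37, "Lab": 30, "LD": 23, "Oth": 10}

polls = {
    "Angus Reid PO": {"Con": 36, "Lab": 24, "LD": 29, "Oth": 11},
    "ComRes":        {"Con": 37, "Lab": 28, "LD": 28, "Oth": 7},
    "Harris":        {"Con": 35, "Lab": 29, "LD": 27, "Oth": 9},
    "ICM":           {"Con": 36, "Lab": 28, "LD": 26, "Oth": 10},
    "Ipsos Mori":    {"Con": 36, "Lab": 29, "LD": 27, "Oth": 8},
    "Populus":       {"Con": 37, "Lab": 28, "LD": 27, "Oth": 8},
    "YouGov":        {"Con": 35, "Lab": 28, "LD": 28, "Oth": 9},
}

def mae(poll):
    # Average absolute gap across the four shares, in percentage points.
    return sum(abs(poll[p] - actual[p]) for p in actual) / len(actual)

for name, poll in sorted(polls.items(), key=lambda kv: mae(kv[1])):
    print(f"{name:15s} mean abs error: {mae(poll):.2f} points")
```

On this crude measure ICM comes out best at 1.5 points and Angus Reid worst at 3.5, consistent with the discussion above — though a proper assessment would use the full-precision final result and each pollster’s own reporting conventions.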

UPDATE 16:25, 7/5/10 It has been brought to our attention that, as opinion poll samples exclude Northern Ireland, we should state share of vote figures excluding NI, which are: Conservatives 37%, Labour 30%, Lib Dems 23%, the rest 10%.

5 Comments

14 years ago

This result is every bit as bad a measurement error as 1992. In ’92 the main problem was estimating the Conservative share; this time the error is in the Lib Dem share. Given the consistency of the error it is clearly methodology, not a one-in-a-million sampling error.

14 years ago

I'd be interested to know what the unweighted Lib Dem share was in each of the polls and whether it was weighted up or not. I've worked with opinion pollsters when decisions have had to be taken about weighting under pressure, and it's not an exact science, that's for sure. Sometimes a 'narrative effect' comes into play, i.e. everyone is saying that the Lib Dems are going to do well, so analysts are looking for higher Lib Dem figures and reasons to weight up. But maybe the unweighted numbers were also systematically out, so I'm looking forward to reading the full story when the dust settles. Congratulations to Simon Danczuk by the way - what a remarkable result that was, despite the grumpy old man he had as leader!

14 years ago

Ray, I don't entirely agree. The only number that is madly different from the actual vote is the Libs' - 27/28% in the polls and more like 22% in the booth. What that tells me is not that the polls were wrong, but that on the day a quarter of their erstwhile supporters thought long and hard about where to cast their vote - most of this "swing" was in seats unwinnable for the Libs. The 'Dewey Defeats Truman' effect came about because Gallup thought people would stick with their preference over the fortnight before the election in 1948. But the effect still holds true - people can't be counted on to hold their preference overnight, not when the electoral outcome is uncertain. In New Zealand I was with a company that went back to those polled before the 1993 election (which had a surprise outcome - it was known as the 'Bugger the Pollsters' election), and something like 30% of voters in the survey had switched allegiance in the previous 24 hours. Some went left, some went right - but 30%! So in my view this is probably what happened here. I thought the polls did a uniformly good job. One last thing: if Gordon Brown hopes to stay in power, I don't think he'll have a Clegg to stand on.

14 years ago

Now that time has passed since these pre-election OPINION polls, isn't it time to talk about the very accurate EXIT poll, which I think got it more or less exactly right? Quant surveys are not my field, but I agree with Duncan - wasn't EVERYONE saying in the day or two before May 6 that there were huge numbers of floating voters? So it was not that the polls were wrong, but that people could not honestly answer the question 'how will you vote?' On a technical front, can the next question not be 'how certain are you that you will vote this way?'

14 years ago

The exit poll was impressive, I probably should have said more on that in the post above. Some of the pollsters did ask about certainty of choice and likelihood of changing mind etc, and published these numbers alongside the voting intention results, although I don't know what efforts were made to combine the two to refine their predictions. On your other point, I think there is an issue with the way opinion polls tend to be assessed as 'right' or 'wrong' based on whether they match the final result. They might be a perfectly accurate reflection of people's intentions 24 hours or so before they voted - and still turn out to be 'wrong' in the sense that that's not the same as the election result. But the problem is, the media and the public aren't interested in those sorts of excuses (and arguing that the polls were right but the voters were wrong would, I suspect, be particularly unlikely to win much public sympathy). Particularly in the case of the final pre-election polls, newspaper editors and readers just want to be told who's going to win, and if you choose to do political polling, you're pretty much choosing to play along with that.
