8 May 2015

Post-mortem for the pollsters


UK — ‘A terrible night for us pollsters’, according to YouGov CEO Stephan Shakespeare. With the General Election exit poll, and the eventual outcome, taking many by surprise, attention today has turned to the performance of the polls.


But while Shakespeare went for a mea culpa on Twitter – ‘A terrible night for us pollsters. I apologise for a poor performance. We need to find out why’ – others were more robust in their defence of the polls.

Andrew Hawkins, ComRes chairman, argued that the pollsters had been accurate in measuring vote share but were not in the business of forecasting seats, saying there was “not a systemic problem” with polling.

“Exit polls are a calculation of seats, whereas the pre-election polls are a calculation of vote share. All ComRes election vote shares were within the margin of error, so statistically it’s where it should be. So the problem has not been with measuring vote share – although some voting intention polls were more accurate than others – what has changed is the relationship between vote share and number of seats. UKIP gets a 12% share and still only one seat; the Conservatives get 35–37% and it can be the difference between a majority and forming a coalition,” said Hawkins.

He did concede that a collective effort was needed from academics, the media and pollsters to make sure vote-share figures are accurately interpreted.

But Mike Smithson, polling analyst and independent political blogger, was scathing about the pollsters’ performance. “It was an absolutely terrible night for the pollsters. It was quite shocking that they all went to such great lengths to have final polls with fieldwork going right up to late evening on Wednesday – with the polls only published yesterday – in order to try and detect any late swing, and they didn’t detect it. They found the swing going to Labour, and of course that was not what happened. That is just extraordinary.

“I think there’ll be lots of lessons to be learned here – across sampling, methodology, weightings. I can see a complete re-look and re-examination of how we do political polling.”

Deborah Mattinson, co-founder of BritainThinks, said that while the margin of error argument might stand up if the polls had varied widely, it didn’t when they were so closely clustered.

“Political polling is an art rather than a science,” she said. “Someone said to me: ‘We pollsters would rather be wrong together than right on our own.’ In this instance they were grouped together and wrong. Lord Ashcroft spent so much money on marginals and that was also wrong. The national picture is explainable by a late surge, silent Tories and so on, but I don’t know how you explain getting it so wrong at a constituency level – other than a late surge,” she added.

James Myring, director at BDRC Continental, pointed to other methodological factors: “It does seem as if the polls got it wrong. Labour and the Conservatives were neck-and-neck in the polls at around 34% each, while the actual results show a clear lead for the Conservatives, with 37% of the vote compared to Labour’s 31%.

“But there is more than one way to skin a cat… as researchers we don’t merely have the option of asking people how they would vote, but also of asking them how they think others will vote. This is the ‘wisdom’ method. ICM conducted a wisdom poll, and this put the Tories on 35% and Labour on 32%. Not quite the final outcome, but correct on the key point of showing a clear gap between the two major parties.”
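To illustrate the approach Myring describes, here is a toy Python sketch contrasting the standard voting-intention count with a ‘wisdom’ estimate. The responses are invented for illustration; ICM’s actual wisdom methodology is more involved.

```python
# Toy contrast of the standard intention question with the "wisdom"
# question (what share do you think a party will get nationally?).
# All responses below are invented for the example.
respondents = [
    # (own voting intention, respondent's guess at the national Tory share)
    ("Con", 0.38), ("Lab", 0.35), ("Lab", 0.36), ("Con", 0.37),
    ("LD",  0.34), ("Con", 0.36), ("Lab", 0.33), ("UKIP", 0.38),
]

# Standard method: count stated intentions.
tory_intention = sum(1 for vote, _ in respondents if vote == "Con") / len(respondents)

# Wisdom method: average what respondents expect the Tory share to be.
tory_wisdom = sum(guess for _, guess in respondents) / len(respondents)

print(f"Intention-based Tory share: {tory_intention:.1%}")  # 37.5%
print(f"Wisdom-based Tory share:    {tory_wisdom:.1%}")     # ~35.9%
```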

9 Comments

9 years ago

This was a bad night for polling. One thing we need to watch out for is pollsters saying that 34%:34% is within their margin of +/-3%, so 31%:37% is OK. First, as Deborah Mattinson points out, when we look at the collection of polls the fairy-tale sampling error would be much smaller, about 1%. Secondly, you can't use the same error twice in one measurement. 34%:34% is an estimate of a 0% difference, so the sampling error should be applied to that gap: we might accept a gap of 1%, maybe 3%, but not 6%. A gap of 6% is a different result – it is a majority Conservative government, not no overall control with neither party close to a majority! The scale of this problem, and of similar problems abroad (think of British Columbia, for example), re-ignites the debate about whether polls should be allowed to be published close to an election. If they are accurate we can argue they help the electorate, but if we are unsure about their accuracy they can affect the way people vote, based on bad information, and that might reasonably be prohibited.
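To make this comment's arithmetic concrete, here is a minimal Python sketch of both calculations, assuming a hypothetical single poll of 1,000 respondents and a pooled "poll of polls" of 10,000 (illustrative figures, not taken from the article):

```python
import math

def moe_share(p, n, z=1.96):
    """95% margin of error for one party's estimated vote share."""
    return z * math.sqrt(p * (1 - p) / n)

def moe_lead(p1, p2, n, z=1.96):
    """95% margin of error for the gap p1 - p2 between two parties.

    Shares from the same poll are negatively correlated (multinomial),
    so Var(p1 - p2) = (p1 + p2 - (p1 - p2)**2) / n, which makes the
    gap's margin of error wider than a single share's +/-3 points.
    """
    return z * math.sqrt((p1 + p2 - (p1 - p2) ** 2) / n)

print(f"{moe_share(0.34, 1_000):.3f}")       # ~0.029: the familiar +/-3 points
print(f"{moe_share(0.34, 10_000):.3f}")      # ~0.009: ~1 point once polls are pooled
print(f"{moe_lead(0.34, 0.34, 1_000):.3f}")  # ~0.051: a 6-point gap falls outside
print(f"{moe_lead(0.34, 0.34, 10_000):.3f}") # ~0.016: and far outside when pooled
```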


9 years ago

I am by no means an expert in polling research, but are the polling studies asking the wrong question? I can imagine that asking who you will vote for works for a referendum, but is it applicable to a seat-based system of results? As ComRes said, their vote-share result was about right, but what's the point in asking this question when what actually matters is something else, i.e. votes versus seats won? I do remember reading somewhere that asking people who they think will win is more accurate than counting up the votes people claim they will cast.
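For context on the votes-versus-seats translation the commenter raises: one common (and, as this election showed, fallible) way of projecting seats from national vote share is a uniform national swing. A minimal Python sketch, with invented constituencies and swing figures:

```python
# Uniform national swing: apply the same poll-implied change in each
# party's national share to every constituency, then count winners.
# The baseline shares and swing below are invented for illustration.
baseline = {
    # constituency: {party: share at the previous election}
    "Seat A": {"Con": 0.42, "Lab": 0.40, "LD": 0.18},
    "Seat B": {"Con": 0.35, "Lab": 0.45, "LD": 0.20},
    "Seat C": {"Con": 0.38, "Lab": 0.37, "LD": 0.25},
}

swing = {"Con": +0.02, "Lab": -0.03, "LD": +0.01}

def project_seats(baseline, swing):
    """Shift every seat by the national swing and tally the winners."""
    seats = {}
    for shares in baseline.values():
        projected = {p: s + swing.get(p, 0.0) for p, s in shares.items()}
        winner = max(projected, key=projected.get)
        seats[winner] = seats.get(winner, 0) + 1
    return seats

print(project_seats(baseline, swing))  # {'Con': 2, 'Lab': 1}
```

Under first past the post, small errors in the input shares can flip many marginal seats at once, which is why an accurate-looking vote-share poll can still yield a badly wrong seat projection.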


9 years ago

I heard many "wrong kind of snow" type excuses over the weekend. Someone even blamed the respondents! Vote intention versus number of seats? Rubbish. What we all wanted to know was who was going to form the next government. UKIP getting 12% of the vote means nothing if it translates into one seat. The pollsters got it extremely wrong. It is clearly a methodology problem.


9 years ago

I agree with Deborah here. Polling is definitely an art, not a science, just like many other areas of research. Having spent a lot of my earlier years running NPD volume-estimation projects, I learnt very quickly that to rely strictly on a model and not apply a brain to the numbers coming out of it was a recipe for disaster. Researchers often like to think of themselves as scientists measuring precise facts, but life is more complicated than that. Voting probably has a lot to do with emotional System 1 thinking. As an industry, we've been deluding ourselves for decades that people are always rational and can tell us accurately what they will do. I fear the impact of this result on the industry. It is just like that faced by economists after the 2008 crash. Interestingly, its causes are almost the same – over-reliance on models, too much belief in the logical and rational, and group-think. MR is in a vulnerable place currently: it is under pressure from declining budgets, competition from other information providers and the impact of automation. The last thing it needed now is a scandal questioning its accuracy.


9 years ago

The problem seems to be one of relying too much on a 'black box' prediction without the application of any insight. Let's consider some facts that should have rung some bells and caused the figures to be questioned: Cameron consistently scored higher than Miliband as the Prime Minister of choice; the Tories consistently scored higher than Labour as the party most trusted with the economy; according to a number of independent economic organisations the economy was heading in the right direction; unemployment figures were dropping; the threat of the Scots Nats tail wagging the English dog was playing well on the doorstep; Cameron had a record of leading the country (regardless of what people think of it) while Miliband was an unknown quantity; and in uncertain times people are frightened of change. Was it likely that the public would totally turn their backs on all of the above and leap into the dark? Even in my local pub there were people predicting a small majority for the Tories (including Labour voters!). Maybe the pollsters need to get out more.


9 years ago

Is this an instance in which prediction markets could have done better? Does anyone know if they did?


9 years ago

It is a first-past-the-post system, so vote share was never going to map directly onto the number of seats. One way to get more accurate results was to ask the UK population who they thought would walk back into Downing Street the next day. People then factor in not only their own vote but how others are voting as well. The poll Alligator Research carried out hours before the polling stations closed was in line with the eventual results: twitter.com/alligator_uk/status/596372561329520641


9 years ago

Ray Poynter makes the comment: "If they are accurate we can argue they help the electorate, but if we are unsure about their accuracy they can impact the way people vote, based on bad information, and that might reasonably be prohibited." I cannot see how the two situations can be disentangled: if an "accurate" poll is published showing neck-and-neck voting intentions, our argument is that it helps the electorate, and "the electorate" then decides to change its vote now that it can see how others are intending to vote, the poll will end up seeming to have been wrong. So how do we know whether the poll was accurate or inaccurate? Either polls should be published or they should not, because, accurate or not, they will have some effect on behaviour and considerations. I suspect this is much more so when they predict a close race, because this inflates the perceived value of the individual vote (as seen in the higher-than-average turnout, and the various vote-swap initiatives reported).


9 years ago

I just wanted to follow up the interesting and extensive comments on this crucial issue with a reminder that anyone interested in the polling results and their implications for the industry should attend the Business Intelligence Group (BIG) post-election special: a panel discussion chaired by Dave Skelsey of Strictly Financial and featuring leading pollsters Keiran Pedley of GfK NOP and Adam Drummond of Opinium Research. You can email your questions in advance to organiser Mike Joseph. Details here: http://tinyurl.com/pkasdfv, or come along to Research Now at 160 Queen Victoria Street, London, EC4V 4BF at 6.30pm on 2 June.
