NEWS | 11 May 2015

British Polling Council sets up enquiry into election polls


UK — The British Polling Council, supported by the Market Research Society, has announced it is setting up an independent enquiry to look into possible causes of bias in the pre-election polling.


The council said in a statement: “The final opinion polls before the election were clearly not as accurate as we would like, and the fact that all the pollsters underestimated the Conservative lead over Labour suggests that the methods that were used should be subject to careful, independent investigation.”

The enquiry will be chaired by Professor Patrick Sturgis, professor of research methodology and director of the ESRC National Centre for Research Methods, and will make recommendations for future polling.

Averaged across the final pre-election polls, the Conservative vote was underestimated by 4.2 percentage points, the Labour vote overestimated by 2.4 points and the Lib Dem vote overestimated by 0.9 points. No polling company consistently showed a Conservative lead of the size that was finally recorded.

Over the weekend, pollsters began putting out statements and analysis comparing the pre-election polls with the final result.

Anthony Wells, director at YouGov, said: “Every couple of decades a time comes along when all the companies get something wrong. Yesterday appears to have been one such day.”

YouGov’s final poll put the Conservatives on 34% (the election result was 37.8%), Labour on 34% (31.2%), the Lib Dems on 10% (8.1%), UKIP on 12% (12.9%) and the Greens on 4% (3.8%).

“The all-important margin between the Conservatives and Labour was significantly off. When something like this happens there are two choices. You can pretend the problem isn’t there, doesn’t affect you or might go away. Alternatively, you can accept something went wrong, investigate the causes and put it right,” said Wells.

He said the gap was not down to random sampling error but to some deeper methodological failing, and that it was not a question of mode effects; rather, “one potential cause of error may be the turnout models used”.
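As an aside, the scale of the miss can be read straight off the numbers above. The short Python sketch below is purely illustrative and uses only the poll and result figures quoted in this article; the party labels are informal shorthand.

    # Per-party error and the Conservative-Labour margin, using the YouGov
    # final-poll and election-result figures quoted in this article.
    final_poll = {"Con": 34, "Lab": 34, "LD": 10, "UKIP": 12, "Green": 4}
    result = {"Con": 37.8, "Lab": 31.2, "LD": 8.1, "UKIP": 12.9, "Green": 3.8}

    for party in final_poll:
        error = final_poll[party] - result[party]
        print(f"{party}: poll {final_poll[party]}%, result {result[party]}%, error {error:+.1f} pts")

    # The final poll showed a Con-Lab dead heat; the actual lead was 6.6 points.
    print(f"Con-Lab margin: poll {final_poll['Con'] - final_poll['Lab']:+.1f} pts, "
          f"actual {result['Con'] - result['Lab']:+.1f} pts")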

Andrew Cooper, founder and director at Populus, pointed to the 1992 election, when the polls similarly overestimated the Labour vote, and to the subsequent changes to methodology.

“At the next four general elections the polls have been right. There are likely to be a variety of reasons behind the difference between the polls and the final outcome. Very late swing to the Conservatives, polling weightings, polling methodology and claimed propensity to vote will be just some of the factors that are likely to be discovered once an investigation is completed.”


4 Comments

9 years ago

This is a crucial issue for the industry as a whole, as it calls into question the accuracy of all our methodologies, not only polling. For anyone interested in quizzing the pollsters, the Business Intelligence Group (BIG) is running a post-election special with a panel discussion chaired by Dave Skelsey of Strictly Financial, featuring leading pollsters Keiran Pedley of GfK NOP and Adam Drummond of Opinium Research. Why was this such a tough election to call, what was learned from it, and what are the implications of the election and its result for business? If you are interested, come along to Research Now at 160 Queen Victoria Street, London, EC4V 4BF at 6pm on 2 June...


9 years ago

There is an interesting sharing of views (including mine) in a LinkedIn discussion led by Ray Poynter: https://www.linkedin.com/pulse/should-opinions-polls-banned-run-up-elections-ray-poynter

Lucy, I'd love to come along to the talk on June 2nd, but am out of the country that day. However, I'll look out for Twitter/blog follow-ups.

In brief, I don't think we should be surprised that polls sometimes fail to predict political results; the reality is that there are lots of conflicting and confusing factors. My issue is that those who overstated the accuracy of the various polling results (news media, political pundits and groups) now have no interest in accepting their role in 'boosting' different likely outcomes. No measurement can be perfect; we're setting ourselves up for a fall if we pretend otherwise. The work being done to review what we can do differently to improve accuracy and mitigate extraneous factors is welcome, but we should not be apologising so effusively.


9 years ago

I would urge the review team to look first at the sampling. My impression is that some online polling is based on an opt-in panel, which will not be representative of the wider electorate. Constructing a demographic representation will not correct for all bias. Telephone polls can over-represent those with time on their hands or who do not work, even if they are conducted in the evenings and at weekends.

Sampling error is also an obvious issue and was never mentioned in the published polls that I saw. Even with a random probability sample, the error on a 34% estimate will be about +/- 3 points. So if both main parties appear to be on 34%, one could be at 31% and the other at 37%. Such an extreme result is unlikely to be consistent across many polls and over time, so it is probably not a major factor, but confidence limits should be explained to journalists and to a credulous public.

The published polls that I saw tended not to include any don't know/undecided response. Why not? Everything added to 100%. This was also the case with the Scottish referendum. Which brings me to the question of the questions themselves, which are rarely shown in a published poll. Are we sure that they are fit for purpose? Are people given the chance to say don't know/undecided? Online surveys frequently force a response; I hope these did not.

Next, do people tell the truth when answering polling questions? That's a big can of worms. They may think they are telling the truth; some may not admit even to themselves that they are going to vote Conservative. They may want to feel that they will vote in a socially acceptable, altruistic way. But maybe in the final moments their behaviour diverges from their stated opinion. An opinion poll may be able to measure opinion (subject to the above caveats) but it cannot measure future behaviour. We should not pretend that it can.

Some of the scrutiny that will now occur may highlight uncomfortable truths about our industry. We should all take a keen interest in the panel's findings.
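For readers wanting to check the +/- 3 points figure above, it follows from the standard margin-of-error formula for a sample proportion. Here is a minimal Python sketch; the sample size of 1,000 is an assumption (typical of published voting-intention polls), not a figure given in the comment.

    import math

    p = 0.34   # estimated vote share
    n = 1000   # assumed sample size (not stated in the comment)
    z = 1.96   # multiplier for a 95% confidence interval

    # Margin of error for a proportion from a simple random sample
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"95% margin of error: +/- {100 * moe:.1f} points")  # roughly +/- 2.9 points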


9 years ago

I keep having to pinch myself. The message we're giving clients here is that surveys aren't reliable. Why are we doing that when the final polls' estimates of share of vote were actually very close? Look at Anthony Wells' data in the article above and ask yourself exactly how 'wrong' those results were. YouGov's weren't even the best, but would we describe them as 'wrong' in any other context where survey data can be compared with actual data?

Polls measuring share of vote cannot predict an election outcome if the election itself isn't decided by share of vote (see last Sunday's analysis of ACTUAL share of vote versus election outcome: http://www.theguardian.com/politics/2015/may/09/electoral-reform-society-result-nail-in-coffin-first-past-the-post). We must stop banging on about bias and deep methodological failings before we know whether there were any, and look at the more fundamental question: what exactly is the purpose of publishing pre-election polls?

On the morning of the election, the R4 Today programme asked an industry commentator what the point of an exit poll was, since we'd find out who the winner was within a few hours anyway. Our industry man said something to the effect that it would tell us how accurate the poll was. Which makes it sound as if the survey industry is using general elections as a giant lab rat to test our methods. How dare we use the electoral process to indulge ourselves in this manner?

Some countries ban the publication of voting-intention polls during election campaigns because there is evidence that they can influence voting behaviour. So we need to think hard about broadcasting survey results where we don't even seem to know what we're really measuring. And we definitely need to stop making public statements that bring the entire survey industry into disrepute.
