NEWS | 11 May 2015
UK — The British Polling Council, supported by the Market Research Society, has announced it is setting up an independent enquiry to look into possible causes of bias in the pre-election polling.
The council said in a statement: “the final opinion polls before the election were clearly not as accurate as we would like, and the fact that all the pollsters underestimated the Conservative lead over Labour suggests that the methods that were used should be subject to careful, independent investigation.”
The enquiry will be chaired by Professor Patrick Sturgis, professor of research methodology and director of the ESRC National Centre for Research Methods, and will make recommendations for future polling.
In the average of the final pre-election polls, the Conservative vote was underestimated by 4.2 percentage points, the Labour vote was overestimated by 2.4 points and the Lib Dem vote was overestimated by 0.9 points. No polling company consistently showed a Conservative lead on the scale that was finally recorded.
Over the weekend pollsters began putting out statements and opinions comparing the pre-election polls with the final result.
Anthony Wells, director at YouGov, said: “Every couple of decades a time comes along when all the companies get something wrong. Yesterday appears to have been one such day.”
YouGov’s final poll put Conservatives on 34% (the election result was 37.8%), Labour 34% (31.2%), Lib Dem 10% (8.1%), UKIP 12% (12.9%) and Green 4% (3.8%).
“The all-important margin between the Conservatives and Labour was significantly off. When something like this happens there are two choices. You can pretend the problem isn’t there, doesn’t affect you or might go away. Alternatively, you can accept something went wrong, investigate the causes and put it right,” said Wells.
He said the poll difference was not random sampling error but some deeper methodological failing, and that it was not down to mode effects; rather, “one potential cause of error may be the turnout models used”.
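As a rough illustration of the gap Wells describes, the per-party errors and the Conservative lead over Labour can be computed directly from the figures quoted above. A minimal sketch in Python, using only the numbers reported in this article:

```python
# Final YouGov poll vs. the election result, as quoted in this article (GB vote shares, %).
poll = {"Con": 34.0, "Lab": 34.0, "LD": 10.0, "UKIP": 12.0, "Green": 4.0}
result = {"Con": 37.8, "Lab": 31.2, "LD": 8.1, "UKIP": 12.9, "Green": 3.8}

# Per-party error in percentage points (positive = poll overestimated the party).
errors = {party: round(poll[party] - result[party], 1) for party in poll}
print(errors)  # Con -3.8, Lab +2.8, LD +1.9, UKIP -0.9, Green +0.2

# The Conservative lead over Labour: level in the poll, 6.6 points in the result.
print(poll["Con"] - poll["Lab"], round(result["Con"] - result["Lab"], 1))
```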
Andrew Cooper, founder and director at Populus, pointed to the 1992 election, a similar over-estimation of the Labour vote, and the subsequent change to methodology.
“At the next four general elections the polls were right. There are likely to be a variety of reasons behind the difference between the polls and the final outcome. Very late swing to the Conservatives, polling weightings, polling methodology and claimed propensity to vote will be just some of the factors that are likely to be discovered once an investigation is completed.”
4 Comments
Lucy Davison
8 years ago
This is a crucial issue for the industry as a whole as it calls into question the accuracy of all our methodologies, not only polling. For anyone interested in quizzing the pollsters, the Business Intelligence Group (BIG) is running a post-election special with a panel discussion chaired by Dave Skelsey of Strictly Financial, featuring leading pollsters Keiran Pedley of GfK NOP and Adam Drummond of Opinium Research. Why was this such a tough election to call, what was learned from it, and what are the implications of the election and its result for business? If you are interested, come along to Research Now at 160 Queen Victoria Street, London, EC4V 4BF at 6pm on 2 June...
Colin Wheeler
8 years ago
There is an interesting sharing of views (including mine) in a LinkedIn discussion led by Ray Poynter: https://www.linkedin.com/pulse/should-opinions-polls-banned-run-up-elections-ray-poynter

Lucy, I'd love to come along to the talk on June 2nd, but am out of the country that day. However, I'll look out for twitter/blog follow-ups.

In brief, I don't think we should be surprised that sometimes polls don't predict political results; the reality is that there are lots of conflicting and confusing factors. My issue is with those who over-stated the accuracy of different polling results (news media, political pundits and groups) and now have no interest in accepting their role in 'boosting' different likely outcomes. No measurement can be perfect; we're setting ourselves up for a fall if we pretend otherwise. The work being done to review what we can do differently to improve accuracy and mitigate extraneous factors is welcome, but we should not be apologising so effusively.
Penny Mesure
8 years ago
I would urge the review team to look first at the sampling. My impression is that some online polling is based on opt-in panels, which will not be representative of the wider electorate, and constructing a demographically representative sample will not correct for all bias. Telephone polls can over-represent those with time on their hands or who do not work, even if they are conducted in evenings and at weekends.

Sampling error is also an obvious issue and was never mentioned in the published polls that I saw. Even with a random probability sample the error on a 34% estimate will be about +/- 3%. So if both main parties seem to be at 34%, one could be at 31% and the other at 37%... This extreme result is unlikely to be consistent across many polls and over time, so is probably not a major factor, but confidence limits should be explained to journalists and to a credulous public.

The published polls that I saw tended not to include any don't know/undecided response. Why not? Everything added to 100%. This was also the case with the Scottish referendum. Which brings me to the question of the questions, which are rarely shown in a published poll. Are we sure that they are fit for purpose? Are people given the chance to say don't know/undecided? Online surveys frequently force a response; I hope these did not.

Next, do people tell the truth when answering polling questions? That's a big can of worms. They may think they are telling the truth; some may not admit even to themselves that they are going to vote Conservative. They may want to feel that they will vote in a socially acceptable, altruistic way. But maybe in the final moments their behaviour becomes different from their opinion. An opinion poll may be able to measure opinion (subject to the above caveats), but it cannot measure future behaviour. We should not pretend that it can.

Some of the scrutiny that will now occur may highlight uncomfortable truths about our industry. We should all take a keen interest in the panel's findings.
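The confidence-interval figure in the comment above can be checked with the textbook formula for a simple random sample. A minimal sketch, assuming a sample size of roughly 1,000 respondents (a typical size for a published UK voting-intention poll; the comment does not specify one):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A party on 34% in a poll of ~1,000 respondents (assumed sample size).
moe = margin_of_error(0.34, 1000)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 2.9 points, i.e. about 3%
```

In practice, quota and opt-in panel samples are not simple random samples, so the real uncertainty is usually larger than this figure suggests.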
Gill Wales
8 years ago
I keep having to pinch myself. The message we're giving clients here is that surveys aren't reliable. Why are we doing that when the final polls' estimates of share of vote were actually very close? Look at Anthony Wells' data in the article above and ask yourself exactly how 'wrong' those results were. YouGov's weren't even the best, but would we be describing them as 'wrong' in any other context where survey data can be compared with actual data?

Polls measuring share of vote cannot predict an election outcome if the election itself isn't based on share of vote (see last Sunday's analysis of ACTUAL share of vote versus election outcome: http://www.theguardian.com/politics/2015/may/09/electoral-reform-society-result-nail-in-coffin-first-past-the-post). We must stop banging on about bias and deep methodological failings before we know if there were any, and look at the more fundamental question: what exactly is the purpose of publishing pre-election polls?

On the morning of the election the R4 Today programme asked an industry commentator what the point of an exit poll was, since we'd find out who the winner was within a few hours anyway. Our industry man said something to the effect that it would tell us how accurate the poll was. Which makes it sound as if the survey industry is using general elections as a giant lab rat to test our methods. How dare we use the electoral process to indulge ourselves in this manner?

Some countries ban the publication of voting intention polls during election campaigns, because there is evidence that they can influence voting behaviour. So we need to think hard about broadcasting survey results where we don't even seem to know what we're really measuring. We definitely need to stop making public statements that bring the entire survey industry into disrepute.