OPINION
19 June 2015

Protecting trust in statistics and democracy


As the first session of the British Polling Council’s (BPC) inquiry kicks off, NatCen’s Kirby Swales thinks the whole research industry must improve its communications skills.


The BPC inquiry into the accuracy of the polls during the 2015 General Election campaign starts today. This is a welcome and vital initiative, as inaccurate polls are damaging to democracy and reflect badly on the survey research industry as a whole. The industry must get better both at communicating its methods and at improving them, and funders and the public must be encouraged to understand the strengths and weaknesses of different approaches.

So, we now know the polling industry over-estimated the extent of Labour support, and it wasn't until the exit poll – based on an arguably more sophisticated methodology – was published that we knew the shape of the result.

NatCen associate John Curtice has written and spoken about the recent history of political polling and the possible factors behind the results. This moment is likely to be significant in the history of polling – there are echoes of 1992, when the Labour vote was also over-estimated, and of the infamous 1936 US Presidential Election, when a Literary Digest poll of more than two million people failed to predict FDR's re-election.

Polls are an important part of our democracy because they throw light on what the public is thinking. They make it possible to check claims by politicians about the public mood and they give information that can enhance public debate. But if they are incorrect then they can be damaging.

There is evidence that they can influence the way people vote through bandwagon and underdog effects. And they certainly seem to have influenced the tone and approach of the campaign in the 2015 General Election; Ed Miliband’s decision to rule out a Labour-SNP coalition followed repeated polls showing that this was the most likely election outcome.

Moreover, polling performance affects everyone working in survey research and with statistics.

Opinion polls are among the statistics that the public really engages with, so a poor reputation can be bad for everyone. British Social Attitudes findings from this year suggest that most of the public broadly trust 'official statistics', far more than they trust statistics presented by politicians or the media. It is important that the pollsters' shortfall does not undermine trust in statistics more generally.

So, what might be done?

We should wait for the inquiry’s findings before deciding the way forward, but if the polling industry is not seen to put its ‘house in order’ others will be keen to do so. The reaction has been vociferous, with some political commentators arguing that opinion polls cannot be taken seriously. There is even a Private Member’s Bill that would regulate ALL research looking at voting intentions.

There are two possible avenues of action that spring to my mind.

1. Polling companies and the media becoming better at explaining methodology, despite its complications. NatCen focuses on random probability surveys, but we find the media often lazily lump these together as 'polls'. People engage with polls largely through the media, so we need to make sure journalists understand what they are reporting: help them to distinguish a probability sample from a volunteer web panel, and to grasp the possible impact of different sampling, weighting and modelling methods.

Being a member of the BPC already brings with it a requirement to publish some of the underlying data, but this could go further, to cover sampling procedures and response rates. More than that, this information needs to be presented in a way that the layperson can understand.

2. Finding ways to ensure that more of the published data is based on probability samples. Many of the 2015 General Election campaign polling results came from web access panels. A range of research suggests that such panels are good at capturing differences within a population but less good at producing reliable population estimates.
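The contrast between the two sampling approaches can be illustrated with a small simulation. Everything below is a hypothetical sketch, not real polling data: the population size, the 34% "true" vote share, and the assumption that supporters of one party are 1.5 times as likely to join an opt-in panel are all invented for illustration.

```python
import random

random.seed(42)

# Hypothetical population: 34% intend to vote for party X.
POPULATION_SIZE = 1_000_000
TRUE_SHARE = 0.34

supporters = round(POPULATION_SIZE * TRUE_SHARE)
population = [1] * supporters + [0] * (POPULATION_SIZE - supporters)

# Probability sample: every member has an equal chance of selection,
# so the raw estimate is unbiased.
prob_sample = random.sample(population, 2000)
prob_estimate = sum(prob_sample) / len(prob_sample)

# Opt-in panel: assume (purely for illustration) that party X supporters
# are 1.5x as likely to volunteer, so the panel over-represents them.
def joins_panel(is_supporter):
    return random.random() < (0.0015 if is_supporter else 0.001)

panel = [v for v in population if joins_panel(v)]
panel_estimate = sum(panel) / len(panel)

print(f"True share:           {TRUE_SHARE:.3f}")
print(f"Probability sample:   {prob_estimate:.3f}")  # close to the true share
print(f"Opt-in panel (raw):   {panel_estimate:.3f}")  # biased upward
```

The probability sample's error is just random sampling noise, which shrinks predictably with sample size; the panel's error is a systematic bias that no amount of extra panellists removes, which is why pollsters must weight and model panel data, and why those adjustments carry their own risks.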

We haven’t yet seen the findings of the British Election Study’s random probability face-to-face survey, but perhaps we should be asking whether publishing after the election is too late for research that could have put the polls in context.

This is an important time for those working in survey research. On the one hand, the 2015 General Election showed just how central our work is to modern society and politics. On the other, it shows the dangers of expecting or claiming that surveys have a level of accuracy that is not justified. Now the dust has settled, I hope the inquiry can help us make a considered but bold step forward.

Kirby Swales, Director of Survey Research Centre, NatCen Social Research


9 years ago

Sad to see that a very valid and interesting article once again gets sidetracked by the red herring of "omg we must use probability sampling". As a discussion, it becomes less and less relevant - or actionable - by the day. The public doesn't care about probability sampling vs opt-in panels. It cares about whether the polls accurately predict behaviour or not. We need to grow up, stop this methodological elitism, and work on better modelling the behaviour of (for example) non-respondents - people who, no matter how they're sampled, will not respond to polls.


9 years ago

I don't think the article is suggesting that anyone "must" use probability sampling, nor that a probability-based survey would have predicted the final result - rather that the use and publication of surveys based on a wider range of methods, including probability-based surveys, would be productive and informative. It may not be realistic for commercial pollsters, of course, but that's not the point. Anyway, we haven't had the inquiry yet, so we don't know whether sampling or coverage bias was a factor. We do know, on the other hand, that the polls were consistently biased in one direction and that volunteer web panels form the basis of most polls. So the possibility of sampling and/or coverage error playing a part should definitely not be discounted at this stage.
