Opinion | 1 December 2014

2015 a crunch test for pollsters



Penny Young, chief executive of NatCen Social Research, continues the debate about determining voting intentions and takes on some of the issues raised in a previous blog on the subject.

A blog for research-live.com by ICM’s Martin Boon last week highlighted problems facing the pollsters who overestimated the eventual ‘yes’ vote in Scotland’s independence referendum. Martin also comments that pollsters might be overestimating UKIP’s support; and as Survation’s Patrick Brione has analysed, the polls ahead of the Rochester and Strood by-election overestimated UKIP’s lead over the Conservatives by at least five points, sometimes more.

Boon points the finger of blame squarely at social desirability bias – people not admitting what they know they really feel – and at the challenge of sensibly correcting for this unquantified bias in a poll. To bolster his case for the phenomenon, he argues that even gold standard random probability surveys like British Social Attitudes suffer from these problems.

As the producer of this long-running survey, NatCen Social Research acknowledges that every survey can suffer from this problem. But we’re not sure that Martin’s examples support his case.

Take self-reported racial prejudice. It’s not clear why the fact that 2% of the population (actually 3% in the latest survey – around 1.9 million people) self-identify as ‘very prejudiced’ should preclude the possibility that 40,000 race crimes were reported in 2010/11. And, in fact, a further 27% admit to at least some racial prejudice – a figure that has truly startled some commentators.

In my view, the problems of estimating either the ‘yes’ vote in Scotland or the UKIP vote in by-elections are about much more than the shy-voter effect, and illustrate the headache pollsters have in designing weighting schemes, particularly as the political landscape becomes more complex.

The sources of bias are many. Online panels rely on people who have signed up to complete online surveys – individuals who spend their spare time incentivised to fill out questionnaires online can hardly be truly representative of the population. Some telephone surveys are landline only – an increasing problem in eliciting the views of the young.

Should weighting schemes take past voting behaviour into account and, if so, how should they correct for ‘false memory’? And what about correcting for turnout? This is hard enough at the best of times, but in Scotland 16 and 17 year olds were voting for the first time. And the profile of UKIP supporters is still changing, which makes it hard to judge the results of a weighting scheme.
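To make the weighting problem concrete, here is a deliberately simplified sketch of cell weighting on recalled past vote, in Python. The categories, population shares and sample shares are entirely made up for illustration; no pollster’s actual scheme is this crude.

```python
# Illustrative sketch of simple cell weighting on recalled past vote.
# All figures below are hypothetical, not real election or polling data.

# Assumed population shares by recalled past vote (including "did not vote")
population_share = {"Con": 0.23, "Lab": 0.19, "LD": 0.15, "Other": 0.08, "DNV": 0.35}

# Assumed unweighted shares from a hypothetical poll sample
sample_share = {"Con": 0.28, "Lab": 0.24, "LD": 0.10, "Other": 0.10, "DNV": 0.28}

# Every respondent in a category receives the same weight:
# population share divided by sample share.
weights = {cat: population_share[cat] / sample_share[cat] for cat in population_share}

for cat, w in weights.items():
    print(f"{cat}: weight = {w:.2f}")

# The 'false memory' problem: if respondents misremember or misreport their
# past vote, the sample shares are themselves wrong, so the weights pull the
# poll towards the wrong target.
```

The sketch shows why the choice of targets matters as much as the arithmetic: weighting to past vote only helps if respondents recall it accurately, and weighting to turnout only helps if you can predict who will actually vote.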

All the main pollsters engage with, and comment on, the methodological challenges of their work, and they argue openly about how to solve these dilemmas; they are to be applauded for that. But even so, it’s not always easy for the public or commentators to judge the quality of different polls.

Whereas British Social Attitudes publishes its response rate (54% in the 2014 report), we have no idea how many people a pollster has attempted to recruit before finding someone prepared to participate in a phone survey, or how many people didn’t click through a link to join a panel. This makes it difficult to judge how hard a weighting scheme is having to work to overcome deficiencies in the sample.
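One way survey statisticians quantify “how hard the weights are working” is Kish’s effective sample size. The short sketch below uses invented weight distributions to show how a heavily weighted poll of 1,000 people can behave more like a considerably smaller sample.

```python
# Kish's effective sample size: n_eff = (sum of weights)^2 / sum of squared weights.
# The weight distributions below are invented purely for illustration.

def effective_sample_size(weights):
    """Nominal sample size discounted for the variability of the weights."""
    total = sum(weights)
    return total * total / sum(w * w for w in weights)

# A lightly weighted sample of 1,000: weights close to 1
light = [1.0] * 800 + [1.3] * 200

# A heavily weighted sample of 1,000: a scarce group weighted up sharply
heavy = [0.6] * 700 + [2.5] * 300

print(round(effective_sample_size(light)))  # roughly 987 - close to the full 1,000
print(round(effective_sample_size(heavy)))  # roughly 644 - much less precise
```

Without knowing how unrepresentative the raw sample was, readers cannot tell whether a published margin of error reflects the nominal sample size or something much smaller.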

Getting polls wrong is a problem in itself, but there are other potential issues. For example, academics have written about the bandwagon effect, whereby electors become more likely to vote for parties and candidates who look likely to succeed. This raises the question of whether over-representing the UKIP vote might itself influence voters.

We’ve already overheard Cameron joking that he wanted to sue the pollsters for overestimating the ‘yes’ vote in the Scottish referendum.

My argument is not that polls are a problem. They are at the heart of our democratic debate and, from a commissioner’s perspective, they are speedy and great value for money. My argument is that it matters that polls are not consistently shown to be wrong in important ways, or they will come in for criticism over their impact on our democracy.

The growing impact of UKIP and the SNP on the Westminster outcome, together with the pressure on Labour’s traditional vote and the apparent decline of the Lib Dems – particularly given the first-past-the-post system – means that the pollsters’ task will be more difficult in 2015, and yet their reputation will be on the line. And that’s before we get to an in-out referendum on Europe.