OPINION
27 January 2015

ELECTION BLOG: How ‘accurate’ is ‘accurate’ in political polling?


With 99 days to go until polling day, in the first of ICM's election blogs, Martin Boon looks at poll accuracy.

Whether the polling industry predicts any election accurately is of great interest to many people. But the complexity of 2014's Scottish independence referendum and the wild card-strewn nature of this year's General Election set me thinking about what we should realistically regard as 'being accurate'. The hope and expectation is that all pollsters will prove the predictive power of market research, but how should we judge our own polls once the ballot papers have been counted and the results called and analysed?

Well, for a start, we should focus only on the single poll that counts for us. The final poll, usually published on the morning of the General Election, becomes our 'Prediction Poll' – the one in the political cycle by which victory or defeat is confirmed.

Prediction Polls are traditionally measured by 'average error', defined as the sum of the errors in the vote shares for the main parties plus the net of others, divided by four. This is complicated in 2015 by the rise of UKIP et al, but I don't doubt the British Polling Council will continue to adhere to this basic premise for measuring poll accuracy.
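The measure described above can be sketched in a few lines of code. This is a minimal illustration, assuming the three main parties are Con, Lab and LD and treating 'others' as the net remainder of the vote share; the poll figures below are hypothetical, not real polling numbers.

```python
# Sketch of the 'average error' measure: absolute error on each of the
# three main parties' vote shares, plus the error on the net 'others'
# share, divided by four. All figures are percentage points.

def average_error(predicted, actual):
    """predicted/actual: dicts of vote shares (%) for Con, Lab, LD."""
    parties = ["Con", "Lab", "LD"]
    errors = [abs(predicted[p] - actual[p]) for p in parties]
    # 'Others' taken as whatever the main parties leave over.
    others_error = abs((100 - sum(predicted[p] for p in parties))
                       - (100 - sum(actual[p] for p in parties)))
    return (sum(errors) + others_error) / 4

# Hypothetical final poll vs. hypothetical result:
poll   = {"Con": 36, "Lab": 28, "LD": 26}
result = {"Con": 37, "Lab": 30, "LD": 24}
print(average_error(poll, result))  # errors 1 + 2 + 2 + 1, over 4 -> 1.5
```

On these illustrative numbers the poll would score 1.5% – respectable by the historical standards discussed below, but some way short of bulls-eye territory.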

So let’s think about what level of average error is good, and what isn’t. History tells us a lot. In 2010, ICM earned the prize with an error of 1.25%, but in 2005, the best average error from NOP was a stunning 0.25%, which for me is bulls-eye territory. In 2001, the best average error was 0.6% and in 1997 it was 1.2% (humility prevents me from saying which polling company got it closest on those two occasions). But in 1992, the good old polling Waterloo of yore, the ‘best’ prediction was 2.25%, the worst 3.5%.

So what we can say with some confidence is that scoring under 1% on average error is historically exceptional in the modern polling age: it has been achieved by even the best poll (never mind any others) on only two occasions in the past five elections. For good measure, let's say that anything under 0.5% is the stuff of polling dreams.

But where does mediocre or plain bad set in? Well, if the best prediction of 2.25% error in 1992 is still seen as the nadir moment, then for me it’s safe to think that anything over 2.5% is out of bounds. If that is accepted, then it seems logical to assume that anything between 1.6% and 2.5% ranges from ‘slightly disappointing’ to ‘disturbing’.

But is it easier to predict a General Election than, say, a second-order election such as the Europeans which took place last year? Historically it has been harder for ICM to accurately predict anything other than General Elections, with constituency elections probably hardest of all. So yes, personally, I'd be grateful for a wider buffer if the election is not General.

Accuracy in 2015 will rest on many things. Online polling will be ubiquitous, and by bad luck or judgement online pollsters didn't do well in 2010. But I'd suspect that all polls – irrespective of methodology – will converge come 7th May, which implies that we'll either get it right or be 'slightly disappointed' with our performance. Or worse…

Martin Boon is director of ICM Research