FEATURE – 2 November 2016

The Guessing Game



At a time when shock election results are becoming commonplace, Justin Charlton-Jones, managing director of Blinc, and James Endersby, chief executive of Opinium, go head to head to discuss the respective merits of predictive markets and opinion polls for forecasting outcomes


Justin Charlton-Jones:

In both the Scottish Referendum in 2014 and the General Election of 2015, opinion polls suggested that the final result was too close to call. In fact, both votes resulted in decisive outcomes, and the poll results were so far removed from the actual General Election outcome that a formal enquiry was carried out by the National Centre for Research Methods to try to understand why. 

However, an alternative research methodology – predictive markets – accurately forecast the outcome of both elections (months in advance) and has shown itself to be more accurate than conventional polling at anticipating the outcomes of a raft of other events.   

The earliest prediction market – the Iowa Electronic Markets (IEM), which has primarily been used in the political domain – was developed at the University of Iowa in the late 1980s and is still running. A research paper from Stanford University (Wolfers and Zitzewitz, 2004) asserts that the market has both yielded very accurate forecasts and outperformed large-scale polling organisations.

Indeed, a June 2008 paper in the International Journal of Forecasting compared prediction market accuracy with traditional research techniques and concluded that ‘the market is closer to the eventual outcome 74% of the time’.

James Endersby:

At Opinium we’re method-neutral and big fans of advancing new techniques. In fact, we’re in the process of developing our own predictive market methods. But, of course, we’re also one of the major players in the political polling arena and it’s important to recognise the role of each approach. 

Our recent successes include the most accurate final polls for both the London mayoral election in May and the EU referendum, when we were within 1% of the final result.

But political polls are snapshots of public opinion at the time that they are conducted and not a prediction of future events. Even polls that are conducted the day before polling day itself could be completely accurate in what they set out to do, but be made to look foolish by last-minute ‘on-the-day’ swings, as is believed to have happened in Scotland in 2014.

The judgements by those making the predictions are based on the available information at the time of the prediction and – in the context of any election – the most relevant information will be polling results. Predictive markets are simply an add-on to the information that polling companies provide, and an extrapolation into the future based on history and what polls tell you at a given point. 

In some cases this judgement will produce an accurate prediction while in others it may not, because ultimately those making the predictions are susceptible to unconscious biases, social norms and a vast range of other factors. 

JC-J:

All research is essentially a snapshot, whether it is a conventional survey or a predictive market. Nevertheless, the purpose of research generally – and of polling in particular – is usually to allow the user to make judgements about which idea to take forward; which product to launch; or what plans to make in the event of a particular election outcome. 

What is concerning about some recent polling results is that, despite the questions being asked on the day of the event, when all the factors that might influence people are in the public domain, the polls have not reflected the outcomes they were attempting to measure.

Predictive markets provide a greater degree of certainty than conventional research because of the way the questions are framed: they ask for outcomes, not opinions; there is jeopardy involved (the individual will lose money if they are wrong); and panellists only answer the questions they want to, because they think they know the answer and can win.
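To make the ‘jeopardy’ point concrete, the basic economics of a binary prediction-market contract can be sketched in a few lines of Python. This is an illustrative model only – it is not Blinc’s actual trading mechanism, and the function name is invented for the example.

```python
# Illustrative sketch of a binary prediction-market contract, not any
# vendor's actual implementation. Prices are quoted between 0 and 1 and
# can be read as the market's implied probability of the event.

def settle_position(contracts_held, price_paid, event_occurred):
    """Profit or loss on a holding of binary contracts.

    Each contract pays 1.0 if the event occurs and 0.0 otherwise, so a
    trader who is wrong loses the stake paid: the 'jeopardy' that
    separates a market from a simple opinion question.
    """
    payout_per_contract = 1.0 if event_occurred else 0.0
    return contracts_held * (payout_per_contract - price_paid)

# Buying 100 'Leave' contracts at a price of 0.30 wins 70 if Leave
# happens and loses 30 if it does not.
print(settle_position(100, 0.30, event_occurred=True))   # 70.0
print(settle_position(100, 0.30, event_occurred=False))  # -30.0
```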

The panellists who take part in our predictive markets are indeed subject to the unconscious biases, social norms and other factors that we are all prone to, but the methodology mitigates this, and consistently offers greater accuracy than conventional research methods.

JE: 

Prediction markets do not give a greater degree of certainty; they offer the illusion of certainty, which is more attractive than the reality of uncertainty. At present, polling agencies are quite upfront about margins of error and uncertainty. The same cannot be said for the way prediction markets and betting markets are presented during elections.
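For a sense of what that stated polling uncertainty typically looks like, the conventional margin-of-error calculation for a simple random sample can be sketched as follows. This is the textbook normal-approximation formula, not any particular pollster’s published method, and the sample figures are hypothetical.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from
    a simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1.0 - p) / n)

# A hypothetical poll of 2,000 respondents showing 52% support carries
# roughly a +/-2.2 percentage point margin of error, before design
# effects and non-response adjustments, which widen it in practice.
print(round(margin_of_error(0.52, 2000) * 100, 1))  # 2.2
```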

While predictive markets can be a useful technique for brands, the approach had a particularly poor time in the EU referendum. 

While Opinium, ICM and TNS’s final opinion polls successfully called the right outcome, predictive markets fared less well. The day before the referendum, Hypermind gave Remain a probability of 75% and Brexit only 25%. An hour before polls closed, PredictIt showed a 65% chance of a Remain win. And, rather amusingly, at 8.29 the next morning, after the results were conclusive, it was still showing a 25% chance of a Remain win.

Ultimately what we are talking about here is what is more useful to clients who need to plan for all eventualities, and this is not served by implying certainty where none exists. 

Polls may have their flaws but, like democracy, they’re the worst way of predicting things, apart from all the others. 

JC-J:

I certainly don’t mean to imply that predictive markets are infallible, just that they seem to predict the actual outcome of questions more reliably than other methods. Prediction markets deal in probabilities: a 70% probability doesn’t mean total certainty – it means a strong likelihood, like a 70% chance of rain. In fact, if we look back at predictions that were 70-30, the data shows that the confidence levels are well calibrated: the alternative outcome does happen about 30% of the time.
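That calibration claim can be checked mechanically: group past forecasts by their stated probability and compare with how often the event actually happened. A minimal sketch, using made-up data rather than Blinc’s records:

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Group (stated_probability, event_happened) pairs into bands and
    report the observed frequency of the event in each band. Calibrated
    forecasts show observed frequencies close to the stated probabilities."""
    bands = defaultdict(list)
    for prob, happened in forecasts:
        bands[round(prob, 1)].append(1 if happened else 0)
    return {band: sum(hits) / len(hits) for band, hits in sorted(bands.items())}

# Hypothetical history: events forecast at ~70% should come true about
# 70% of the time if the market is well calibrated.
history = [(0.7, True)] * 7 + [(0.7, False)] * 3 + [(0.3, True)] * 3 + [(0.3, False)] * 7
print(calibration_table(history))  # {0.3: 0.3, 0.7: 0.7}
```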

At Blinc we ran two questions on the referendum. The first – “Will the UK vote to leave the EU or remain in the EU when the referendum is held on 23 June?” – ran for 12 months and consistently showed a probability of the UK leaving of between 25% and 30%. However, in May, we added a new question: “Will the Leave campaign motivate more people to go out and vote for them than the Remain campaign?”

This question never showed less than a 50% probability that the Leave campaign would get out more voters; the average was 57%. Why one question predicted the referendum outcome correctly and the other did not is currently the subject of more detailed analysis.

JE:

Fascinating questions, with interesting outcomes, but neither of your questions predicted the outcome of the EU referendum correctly, because asking which campaign was the more motivational is different to asking which side will attract more votes.

Your second question would have correctly applied to the pro-independence side in the Scottish referendum; it arguably motivated more people than the pro-Union campaign, but still lost because so many people already knew how they planned to vote, regardless of how the campaigns were conducted.

Ultimately though, the real issue is whether or not predictive markets would work at all in this space, if their participants didn’t have any polling data available to establish the lie of the land. 

The Oldham West by-election last year was a useful test case of what happens when you have an election informed by no polling whatsoever. The conventional wisdom was that UKIP would run Labour very close and perhaps win the seat. The extrapolated prediction, based on bookies’ odds, was of Labour narrowly beating UKIP by 41% to 38%.

Labour ended up winning by 62% to 23% and we saw how, without any data to ground things, pundit predictions and the conventional wisdom can go seriously off the rails. 
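As a footnote on how such odds-based extrapolations are usually grounded: bookmakers’ quoted odds can be converted into implied win probabilities by taking reciprocals and rescaling away the bookmaker’s margin (the ‘overround’), though turning those probabilities into vote shares requires a further modelling step not shown here. A minimal sketch, with invented odds rather than the actual Oldham West prices:

```python
def implied_probabilities(decimal_odds):
    """Convert decimal odds into implied win probabilities, removing the
    bookmaker's overround by normalising the reciprocals to sum to 1."""
    raw = {outcome: 1.0 / odds for outcome, odds in decimal_odds.items()}
    total = sum(raw.values())
    return {outcome: round(p / total, 3) for outcome, p in raw.items()}

# Invented odds for illustration only: a short-priced favourite still
# translates into a win probability, not a vote-share forecast.
print(implied_probabilities({"Labour": 1.5, "UKIP": 2.6, "Other": 15.0}))
# {'Labour': 0.596, 'UKIP': 0.344, 'Other': 0.06}
```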
