FEATURE | 1 January 2012

Whatever next?


Visualising what tomorrow will bring is crucial to what researchers do. But it’s also notoriously difficult. Robert Bain takes a look at the business of prediction.


“We are already on the verge of discovering the secret of transmuting metals… before long it will be an easy matter to convert a truck load of iron bars into as many bars of virgin gold.” Those were the words of Thomas Edison in 1911 predicting scientific advances in the century ahead. By 2011, Edison said, “there is no reason why we should not ride in golden taxi cabs” or “why our great liners should not be of solid gold from stem to stern”.

Edison didn’t live to see his golden taxi cab, and 101 years on from his forecast it still hasn’t shown up. To be fair to him, it’s not difficult to find examples of predictions from the past that were woefully off the mark – and people aren’t getting any more prescient. A 1962 advertisement for Michigan Mutual Liability predicted its customers of 2012 would need insurance for their “picnics on Mars”.

More prosaic predictions can be just as wrong. In the 1960s, many scholars predicted food shortages in the decade ahead that never came to pass. In the 1980s it was widely believed that Japan was poised to topple the US as the number one world power. And we all remember the anticlimax that was the Y2K bug. Meanwhile, hugely significant global events such as the financial crisis and the 9/11 attacks continue to catch us off guard.

In fact, half the time we don’t even know what’s happening today, let alone what will happen tomorrow. On New Year’s Eve 2007 the Financial Times predicted that America would not go into recession in the year ahead. It was sort of right – in the sense that the country was already in recession (a fact that would only become clear in economic data released later).

Ask the experts
Clearly, predicting the future is hard – even for experts. In an experiment that ran for 20 years, psychologist Philip Tetlock put 28,000 questions about future events to 284 experts in various fields and compared their predictions with real-world outcomes. Looking back on their performance, Tetlock compared the experts to “dart-throwing chimpanzees”.

Yet despite these failures, we keep making and seeking predictions. We place bets, we listen to pundits, we conduct market research. In his latest book, Future Babble, journalist Dan Gardner reviews predictions from recognised experts in politics, economics, technology and climate science. “No matter how many times the best and brightest fail, more try,” says Gardner. “And they never doubt they are right.”

Thanks to various psychological quirks from which we all suffer, people whose predictions go awry often don’t accept, or even realise, that they were wrong. But even more worryingly, failed predictions rarely dent an expert’s perceived credibility – people keep listening to them.

Cognitive illusions like these can lead to bad predictions of the value of a company’s stock or of an individual’s professional performance, as Nobel Prize-winning psychologist Daniel Kahneman describes in his new book Thinking, Fast and Slow. For most fund managers, choosing stocks is “more like rolling dice than playing poker”, says Kahneman, because despite the knowledge and expertise that traders undoubtedly possess, very few consistently beat the market. At one company Kahneman studied, the correlation between traders’ performance from year to year was close to zero, suggesting that the firm was “rewarding luck as if it were skill”.
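To make the luck-versus-skill point concrete, here is a minimal sketch in Python, using simulated returns rather than the actual trading data Kahneman examined: when every manager’s results are pure chance, the correlation between one year’s performance and the next hovers near zero – the signature Kahneman describes.

```python
# A toy illustration of Kahneman's luck-versus-skill test (hypothetical data,
# not the actual study): if managers' relative performance in one year barely
# correlates with the next, rankings mostly reward luck.
import random
import statistics

random.seed(42)
NUM_MANAGERS = 25

# Two years of returns where outcomes are pure chance: every manager
# draws from the same distribution each year.
year1 = [random.gauss(0.05, 0.10) for _ in range(NUM_MANAGERS)]
year2 = [random.gauss(0.05, 0.10) for _ in range(NUM_MANAGERS)]

def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Close to zero: last year's ranking says almost nothing about next year's.
print(f"year-to-year correlation: {pearson(year1, year2):+.2f}")
```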

Similarly, when Kahneman had to evaluate soldiers in the Israeli army for their potential to succeed as officers, he realised that the ones he and his team picked out as the best didn’t do any better than the rest in the long run. But that didn’t stop him. “We knew as a general fact that our predictions were little better than random guesses, but we continued to feel and act as if each of our specific predictions was valid,” he says.


So predictable
All this might sound a bit disheartening. But market researchers haven’t got much choice other than to keep trying to predict the future. And even when the questions at hand are confined to the minutiae of individuals’ decision-making, it’s still not an easy business. So can market researchers really help clients face the future, or are they just chimps throwing darts?

Critics of market research enjoy citing the percentage of new products that fail – a figure that’s different every time you hear it, but is generally agreed to be a large majority. But flops aren’t just a function of bad research – markets can clearly only support a certain number of new products, so research can only reduce the odds of failure so far. Besides, a bit of healthy trial and error can do you good – just look at Google.

SPA Future Thinking forecasts sales for new consumer products, and claims margins of error of ±9% (or sometimes as low as ±4% for brand extensions) – so a forecast of a million units implies a likely range of roughly 910,000 to 1,090,000. But a product’s actual success depends in large part on factors beyond how consumers respond to it, such as whether distribution goes according to plan, how ad campaigns perform and what competitors do – so forecasts need to be revised after the product is launched.

Another area where the predictive ability of survey results can be tested is election polls – and in the UK’s last general election the prize for most accurate prediction went to ICM. Martin Boon, head of social and government research at ICM, says: “The opinion polls are the only things which are evaluated against a real-world outcome the next day. Nothing else in market research can be treated like that, so it’s critical not only for the polling agency’s reputation but for the reputation of market research in general.”
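As an illustration of what being “evaluated against a real-world outcome the next day” involves, here is one simple scoring approach, sketched in Python with invented vote shares rather than ICM’s actual 2010 figures: take the absolute gap between the final poll and the result for each party, then average.

```python
# Scoring a final poll against the election result (all numbers invented):
# the average absolute error across parties, in percentage points, is a
# common headline measure of a pollster's accuracy.
poll   = {"Party A": 36, "Party B": 28, "Party C": 27, "Other": 9}
result = {"Party A": 37, "Party B": 30, "Party C": 24, "Other": 9}

errors = {party: abs(poll[party] - result[party]) for party in poll}
mean_abs_error = sum(errors.values()) / len(errors)

for party, err in errors.items():
    print(f"{party}: off by {err} point(s)")
print(f"mean absolute error: {mean_abs_error:.2f} points")
```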

Opinion polls generally arrive caked in the mud of controversy, but on the whole they do a pretty good job of predicting election results. But there are always unexpected factors that can send things off course – in 1992 it was the infamous ‘shy Tories’, in New Hampshire in 2008 it was Hillary Clinton crying on TV, and in 2010 it was the Cleggmania phenomenon, which led to significant overestimation of the Lib Dem vote. The deceptive simplicity of a voting intention question hides a wealth of complexity, says Boon, as emotional and social factors play on people’s responses.

New methodologies offer hope of improved predictive power. Surveys designed to measure respondents’ implicit attitudes have been shown to help predict their behaviour, while analysis of social media buzz can be used to track epidemics and forecast elections (see box below).

Claims for the predictive ability of survey research techniques have at times proved controversial. Net Promoter Score, the recommendation metric made famous in Fred Reichheld’s book The Ultimate Question, is billed as a predictor of a company’s future sales growth – but a 2007 study by Tim Keiningham and others concluded that there was no evidence for its supposed predictive power. Still, companies haven’t stopped using it, and Reichheld published The Ultimate Question 2.0 last year.
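For readers unfamiliar with the metric, the arithmetic behind Net Promoter Score is simple enough to show in a few lines; the survey responses below are invented for illustration.

```python
# Net Promoter Score: respondents rate "How likely are you to recommend us?"
# on a 0-10 scale. Promoters score 9-10, detractors 0-6, and NPS is the
# percentage of promoters minus the percentage of detractors.
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

responses = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]   # hypothetical ratings
print(f"NPS: {net_promoter_score(responses):+.0f}")  # prints NPS: +10
```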

Facing the future
Some researchers make a living from visualising the future. Martin Raymond is co-founder of The Future Laboratory, which runs the trendspotting network LS:N Global and a research division, Future Poll. The firm runs twice-yearly trends briefings, and members of the LS:N Global network constantly discuss, update and refine trends that it highlights, in response to events.

Facing the future can be scary, and Raymond has periodically had to turn away clients who weren’t open to considering certain scenarios. These have included auto manufacturers who didn’t want to discuss electric cars, retail clients who weren’t willing to talk about e-commerce and hotel clients who couldn’t get over the distinction between mainstream and luxury. “The process is always about collaboration and if a client thinks their view is correct, then we say, OK, we can’t work with you,” says Raymond. “The whole point of a network is that you’re being buffeted constantly by new stimulus and the point is to analyse those from a point of view of where the customer will be when you get this to market – not what you think your brand’s needs are.”

Part of the firm’s job, Raymond says, is to shock clients into realising that change is happening, so that they are ready to recognise and respond to it. One client, he says, described the trend briefing as “like having your head pushed into a basin of cold water” – which, for Raymond, is a good review.

Crowd dynamics
Meanwhile, some researchers seeking to predict the success of new product ideas are putting their faith in the wisdom of crowds. This is the idea that the predictions of everyone in a group about what the group as a whole will do are better than the predictions of each individual about what they themselves will do. The principle can be put into practice through prediction markets, where participants trade ‘shares’ in a product, proposal or election candidate (see box below). It’s a technique that harnesses the predictive power of the market itself – rather than trying to outsmart it like the traders whom Daniel Kahneman observed.
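The statistical intuition can be sketched in a few lines of Python. The numbers here are invented, and the effect depends on an important assumption – that individual errors are independent and roughly unbiased – but under those conditions noisy guesses partly cancel out, and the group average lands closer to the truth than the typical guesser.

```python
# A toy 'wisdom of crowds' demonstration with made-up numbers.
import random
import statistics

random.seed(7)
TRUE_VALUE = 100.0   # e.g. a product's real first-week sales, in thousands

# 200 individually unreliable guesses: unbiased but noisy.
guesses = [random.gauss(TRUE_VALUE, 25.0) for _ in range(200)]

crowd_estimate = statistics.mean(guesses)
avg_individual_error = statistics.mean(abs(g - TRUE_VALUE) for g in guesses)

print(f"crowd estimate:           {crowd_estimate:6.1f} (truth: {TRUE_VALUE})")
print(f"crowd error:              {abs(crowd_estimate - TRUE_VALUE):6.1f}")
print(f"average individual error: {avg_individual_error:6.1f}")
```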

BrainJuicer CEO John Kearon says prediction markets beat standard survey research hands down in predicting the success of new products and services. He rates market research’s current predictive ability at 7 out of 10, but thinks it could improve to an 8½ or 9 if techniques like this were more widely applied. The difficulty in selling prediction markets as a research methodology, he says, is that “it doesn’t seem plausible”.

By assuming that we are reliable experts on ourselves we fall into the same trap as the experts in Philip Tetlock’s experiment, who were overconfident in their abilities and unable to see their failings. The notion that you can predict what other people will have for lunch tomorrow better than you can for yourself is hard to accept. “It just happens to be true,” says Kearon.

As part of BrainJuicer’s efforts to win clients round, the agency has set aside £30,000 for an investment fund that will use markets to bet on the outcomes of public votes such as The X Factor. But it’s not lack of evidence that holds clients back from embracing prediction markets, Kearon believes – it’s inertia.

“Human beings are odd creatures,” he says. “We do odd things, and the way we buy and use research is no exception. We’re trying to work out ways to get people to change behaviour and buy into these methods, and one thing we already know is that evidence of it being more accurate isn’t the way. You’d think it would be but it isn’t.”

Kearon sees research as “part of a game” – a tool deployed when decisions have to be made, but also when boxes have to be ticked, backs covered and past decisions justified. On top of that comes the risk and hassle of changing from one method to another. Kearon says he is reminded of the observation that science only moves forward when the old professors retire or die. It sounds like a bleak view of the world of research, but Kearon doesn’t let it get to him. “You’ve got to work with human nature rather than throw your arms up about it,” he says. “It’s endlessly fascinating.”

“Human beings are odd creatures. We do odd things, and the way we buy and use research is no exception. Evidence of a method being more accurate isn’t the way to get people to buy into it”

John Kearon, BrainJuicer

Known unknowns
Also fascinating, and just as problematic for research providers, is the value we put on the certainty of pundits when judging their predictions. Agency researchers know that clients pay for clarity and confidence, and interpret ifs and buts as signs of weakness, not wisdom. Similarly, former chancellor Norman Lamont once remarked that he enjoyed reading William Rees-Mogg’s column in The Times because he’s “often wrong but he’s never in doubt”.

But one of Tetlock’s most intriguing observations about predictive ability was the distinction he drew between people who know “many things” and those who know “one big thing” – the foxes and hedgehogs of Isaiah Berlin’s famous essay. Broadly speaking, the hedgehogs, who hold a single overarching belief, tend to be more certain about their predictions – and less accurate. The foxes, who triangulate knowledge from many sources, are capable of self-criticism and are not married to any viewpoint or ideology, tend to be more cautious in their predictions – and more accurate.

For researchers, this is an argument for a pluralist approach to methodology, an openness to new ideas and a healthy scepticism of received wisdom about both research methodology and consumer behaviour. It also means agencies must strike a balance between communicating a compelling story and not overstating or oversimplifying their case.

Will that approach pay off? We’d be lying if we said we knew.


Tomorrow’s world

Some methodologies that offer hope of improved predictive power

  • The Iowa Electronic Markets, run by the University of Iowa’s Henry B Tippie College of Business, are perhaps the best-known example of prediction markets in action. Participants use real money to bet on the outcomes of elections and other events by buying or selling ‘shares’ at between $0 and $1 – with the promise of receiving a full $1 for every share they hold in the correct result when it is announced. As people trade, the price fluctuates and is used to predict the final result. Since the markets were set up in 1988, they have come up with more accurate election predictions than the opinion polls in about three quarters of cases. (A minimal payoff sketch appears after this box.)
  • Specialised online surveys can measure people’s gut reactions to quickfire words and images. These tests analyse not only what choices people make but also other factors such as how quickly they choose. Head-to-head tests of implicit and explicit responses have shown implicit measures to be a better predictor of subsequent behaviour. (A toy scoring sketch also follows the box.)
  • Social media and the rise of ‘big data’ have created an unprecedented view of what people are doing at a given moment. Google’s Flu Trends service, launched in 2008, showed that monitoring the number of people searching for information about flu symptoms could provide a rapid and accurate way of tracking the spread of the disease – closely matching the official data but available much more quickly.
  • Experiments have been carried out to predict election results based on the volume and sentiment of social media chatter about political candidates. A 2010 study at Carnegie Mellon University found that analysis of Twitter data could rival polls as a way of tracking US public opinion over time, while social media service Tweetminster analysed Twitter buzz to predict the last UK general election result with an average error of just 1.75 points – better than some of the traditional polls.
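To make the first item above concrete, here is a minimal sketch of how a winner-take-all contract turns prices into probabilities. The quotes and the trader’s belief are invented, not real Iowa Electronic Markets data.

```python
# Winner-take-all prediction market: a share pays $1 if its candidate wins,
# so the market price can be read as the crowd's probability estimate, and a
# trader who believes the true probability exceeds the price should buy.
prices = {"Candidate A": 0.62, "Candidate B": 0.38}   # hypothetical quotes

my_probability_a = 0.70          # one trader's private belief
price_a = prices["Candidate A"]
expected_profit = my_probability_a * 1.00 - price_a   # per share, before fees

print(f"market-implied probability for A: {price_a:.0%}")
print(f"expected profit buying A at ${price_a:.2f}: ${expected_profit:+.2f} per share")
```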
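And as a sketch of the second item: one simple way a test could combine choices with reaction times. This illustrates the general idea only – it is not any agency’s actual scoring algorithm.

```python
# Implicit-response scoring sketch: a fast 'yes' is treated as stronger
# evidence of a gut-level association than a slow, deliberated one.
def implicit_score(trials, max_ms=1500):
    """trials: list of (agreed: bool, reaction_ms: float). Returns -1..+1."""
    scores = []
    for agreed, ms in trials:
        weight = max(0.0, 1.0 - ms / max_ms)   # faster answer -> more weight
        scores.append(weight if agreed else -weight)
    return sum(scores) / len(scores)

# Hypothetical responses to "Brand X is trustworthy": (answer, milliseconds).
trials = [(True, 420), (True, 610), (False, 1400), (True, 380), (False, 900)]
print(f"implicit association score: {implicit_score(trials):+.2f}")
```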

4 Comments

12 years ago

With all due respect: this article is too long. Maybe chop it into 3 and post separately?


12 years ago

In this piece, you make reference to the capacity or otherwise of opinion polls to predict elections. However, only exit polls should be evaluated in this way, since they are conducted expressly for the purpose of predicting the election, albeit by less than 24 hours. Opinion polls conducted further in advance, even by a week or a few days, should not be regarded as predictors, but rather as estimates of public opinion and party support at the time of the poll. To regard them as predictors is also to assume that party support is fixed at the time of the poll, an assumption which is surely fatuous. If party support were so fixed, then why do political parties expend so much cost and effort right up to election day? If party support were crystallised by, say, a week in advance of the election, then surely such last-minute expenditure would be a waste of money? This argument is even more profound in the case of polls carried out a month or more in advance – such pre-election polls should stand on their own two feet, not as predictors of a dynamic state of public opinion, but as estimates of that public opinion at the time of the poll.


12 years ago

Thank you, Richard. And you are, of course, correct. However, you'll no doubt be aware that pollsters are quite quick to boast of their predictive ability when the outcome of an election matches that of their final (non-exit) poll. We usually only hear the "snapshot in time" line when there is some discrepancy.


12 years ago

The article seems to focus on market research’s ability, or attempts, to predict whether a product will be successful – gauging or guessing at whether people will like it. There’s actually a whole field of research, Design Research, trying to improve the ideas being generated at the ‘fuzzy front end’. Design Research seeks to understand and communicate people’s needs and desires up front to better focus concepting and design, ultimately improving your success rate at figuring out “what’s next”. It’s amazing how, given the right tools, the average person can communicate a realistic but futuristic vision of what they want from their products and experiences in two, five, even 20 years. It isn’t that any one consumer predicts the future right, but collectively understanding what people want reveals patterns of needs and desires that can inform and inspire design. I don’t want to belabor the point, but I’d be happy to talk more with anyone interested.
