FEATURE | 1 December 2009

Lies, damned lies...

The journey that a statistic takes from a research report to a member of the public’s brain is long and fraught with difficulty. Tim Phillips meets some of the people trying to improve how stats are communicated and understood, and asks what role the market research industry should be playing in making sure the numbers add up.


David Spiegelhalter, professor of the public understanding of risk at the University of Cambridge, is enjoying what for most of us would be a slow descent into hell.

“There are a staggering number of opportunities,” he says of an open-ended job that tasks him with improving the lamentable public understanding of the meaning of statistical data. “I am talking to some A-level students this Friday, OAPs last week, business leaders, academics. We’re raising the level of discourse in schools, in the media and in policy-making.”

Although he admits that “I shout and I scream and throw things at the TV” when he sees data being misreported, it seems he is the man for the job of spending the £3.3 million hedge fund grant that created his post, because he is optimistic that he can help to create insight, questioning minds, and a new attitude to statistics, probability and risk.

“I want to get people to find uncertainty sexy,” he says (though not too sexy: one of his lecture topics at schools is about the risks and chances of teenage pregnancy). “Dealing with risk gets put into maths in schools. But it’s a life skill, and as such does not fit easily into the curriculum.”

He thinks he can make a difference by encouraging his listeners not to be afraid of uncertainty. It’s the fear that shuts down debate about probability, he says, and leads to bold assertions of certainty where certainty doesn’t exist.

“The young people I speak to face the risks themselves, and I teach them that being uncertain isn’t the same as saying, ‘I don’t have a clue.’ I want to show people that risk and uncertainty are to be valued. Risk is a good thing, not something that has to be avoided.”

First, Spiegelhalter admits, he has to conquer the problem of the misreporting of statistical data in the press. “Their performance indicator is attention, and you get more attention if you make bold, certain statements. It’s the classic Daily Mail idea that everything causes cancer. The standard media coverage exaggerates risks. A lot has to do with framing: a 5 per cent death rate is perceived very differently from a 95 per cent survival rate.”

This is standard media practice for the majority of the 25,000 news stories that quoted surveys in the UK in the last 12 months. Changing the tone of reporting overnight would be ambitious even for someone with Spiegelhalter’s energy, but he has a simpler solution that would at least raise the standard of debate: avoid relative figures in reporting. Where possible, represent statistics in the media as their effect on a group of 100 individuals.

“The science and health correspondents are very good indeed. It’s when things get into the hands of the general correspondents and on to the front page that we have problems”

David Spiegelhalter

This would remove the confusion about comparative statistics (if HRT makes you twice as likely to contract breast cancer, does this mean it is bad for you?), and would leave the general population less susceptible to a special interest group’s framing technique – or to a reporter working backwards from a conclusion. It follows that simple rules on how statistics are presented could be made part of a newspaper’s house style, just as newspapers all have rules on the correct placement of commas and the correct way to describe royalty.
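To see why the “group of 100” framing defuses this, take a worked example; the 1-in-100 baseline below is purely illustrative and not a figure from the article. If a risk that affects 1 person in 100 is doubled, that is one extra case per 100 people, which sounds rather less alarming than “twice as likely”. A minimal sketch:

```python
# Minimal illustration of relative vs. absolute risk framing.
# The 1-in-100 baseline is a made-up figure for illustration, not from the article.

def per_hundred(baseline_rate: float, relative_risk: float) -> str:
    """Express a relative risk as its effect on a group of 100 people."""
    before = baseline_rate * 100                 # expected cases per 100 without the exposure
    after = baseline_rate * relative_risk * 100  # expected cases per 100 with it
    return (f"about {before:.0f} in 100 affected without, "
            f"about {after:.0f} in 100 with, "
            f"i.e. {after - before:.0f} extra case(s) per 100 people")

# "Twice as likely" on a hypothetical 1-in-100 baseline:
print(per_hundred(baseline_rate=0.01, relative_risk=2.0))
# -> about 1 in 100 affected without, about 2 in 100 with, i.e. 1 extra case(s) per 100 people
```

Presented this way, the same doubling of risk cannot easily be made to sound more dramatic than the absolute numbers support, whichever way a press release chooses to frame it.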

Good stats and bad stats look similar on the page: we are ultimately in the power of the journalists. “The science and health correspondents are very good indeed. It’s when things get into the hands of the general correspondents and on to the front page that we have problems. Then when it turns out the risk isn’t what they said it was, that gets reported on page 8 three days later,” Spiegelhalter says.

He has plenty of support from the scientific establishment. Earlier in 2009 the science writer Simon Singh used his keynote lecture to British Science Week to complain about the scourge of fake ‘formula for’ stories and the curse of scientifically illiterate journalists.

We are becoming unable to distinguish the flood of fake statistics, commissioned and paid for by an interested party or a PR company, from real research, Singh says. “Some of them [the PR companies] seemed genuinely astonished that prejudging issues and then providing proof is not science,” he says. “It would not happen if there were more mathematically literate people working in newsrooms who could put a stop to it.”

Which raises the question: given the chance, would journalists all want a clearer understanding of their statistics? Given the choice between nuanced, uncomfortable uncertainty and a strong headline, how many will choose to tell the public that things are complicated?

Also, even if they wanted to, how much time do journalists have to unravel complex information? Even if a quarter of schools didn’t have under-qualified maths teachers, and even if basic numeracy were a more highly prized skill for humanities-educated writers, deadlines would still outrank analysis.

In ‘The story behind the story’ in the October 2009 Atlantic magazine, veteran journalist Mark Bowden found that newsrooms are now so short of staff – any staff – that analysing stories is often not practical. And so anyone providing a pre-packaged set of ‘facts’ can get them straight on to the screen or the page. He concludes that statistics are increasingly being used not by people who want to inform but by people who want to win an argument. Readers “increasingly choose to listen only to their own side of the argument, to bloggers and commentators who reinforce their convictions and paint the world only in acceptable, comfortable colours”. A link to the feature was passed around the BBC newsroom, with the message “This should be a warning to all of us” appended.

“I get frustrated when the MR industry absolves itself from all responsibility. Ben Goldacre absolutely piled in on us earlier this year, and basically told us we whore for the journalism business, and sometimes he’s right”

Alistair Leathwood, FreshMinds

Market research is sometimes the victim, sometimes the instigator. “I get frustrated when the MR industry absolves itself from all responsibility,” says Alistair Leathwood, managing director of FreshMinds Research. “Ben Goldacre absolutely piled in on us earlier this year, and basically told us we whore for the journalism business, and sometimes he’s right.” (Goldacre, writer of the bestseller ‘Bad Science’ and the Guardian column of the same name, said at a Debating Society event in March that market researchers hope “to get away with it and then run like buggery”.)

Part of the problem, Leathwood says, is the progression by which statistics go from the MR industry to the press. “The chain of communication can be long, which leads to problems. The researcher who understands the nuances of the statistics has to communicate them to a marketing department or a client. Then they have to write a press release, and more gets lost there. Then it goes to a journalist who has to get something out by 5pm. So a lot of the information has leaked away before anyone sees the story.”

The charitable trust Sense About Science is doing its bit to address this problem with a public guide on making sense of statistics, which will be published in the next few months. It’s aimed at journalists, PR people and the other characters who form the chain between the source of a statistic and the public.

Chrissie Wells, head of quant at Leapfrog Research and Planning, says that quant researchers who simply present figures, rather than using them to tell the complex story, are part of the problem. “There’s a huge mystique about stats, but the ability to understand is hard-wired into our brain. Any quant researcher worth their salt should be able to communicate in ways that are clear and accessible.”

Wells encourages Leapfrog’s researchers to come up with metaphors when they communicate, rather than just deliver figures to be interpreted (“we know how to understand football scores, train timetables and temperatures”).

Even stronger views come from Liz Nelson, the chairman of Q Research Ltd and founder of Taylor Nelson Sofres, who (in Wells’ terms) thinks the journalism business is 2-0 down, running half an hour late and lukewarm – and urgently in need of MR’s help.

“My gripe is that journalists are no longer so well-trained in how to report statistical data. There is a lot more slapdash reporting,” she says – adding that training should be provided by the MR industry.

Ultimately it’s in the interests of the reputation of market research to do this, Nelson explains: “I have been talking to some journalists, and I know that in News International for example there would be interest if we put something like this together. The MRS polices its members, but is it doing enough to influence others?”

Robin Nash, the training and development manager at the MRS, points out that this is exactly what the MRS has been doing for many years – in either introductory courses or custom-designed training days. “For our introductory statistics course journalists and marketing people would not be out of their depth, and we are running one-day events in February, March and September next year. But we can also design and deliver something to match their requirements. When you’re talking about £3000 between 15 attendees, it’s pretty cost-effective.”

The BBC, among others, has already used MRS training, but to give other media organisations a push to follow, Nelson favours a stronger role from the MRS in calling out bad reporting, so that more people are inspired to learn about how to use statistics. “I think that the MRS has an obligation to say something about it. We should be speaking out. The fact that we don’t might be a lack of self-confidence from the industry, or it might be arrogance: saying, well we know what we are talking about even if you don’t,” she says.

The often-repeated insult is that people use statistics like a drunk uses a lamppost: more for support than illumination. It is a useful reminder of the danger of vested interests, either from slanted reporting or biased framing. But there are other opportunities. Perhaps some of the deceptive statistics in the public domain could have been explained more clearly by the researchers who put them there. Or maybe the MR industry should point out who is responsible. Perhaps market researchers can support statistical literacy best by joining Professor Spiegelhalter in helping to teach it.
