OPINION
18 February 2011

They've got your number

“It’s time to curb media hyperbole when reporting on statistics,” says the New Scientist.

The magazine dedicated its editorial last week to having a go at dodgy stats. The interesting thing is, it didn’t use this as an opportunity to rail against those who take scientific stats and run with them. Instead it says: “Journalists and policymakers can only work with what they are given. Scientists need to bear in mind how their statistics can be abused – or perhaps unintentionally read the wrong way.”

Rather than pointing the finger of blame the New Scientist highlights an unhealthy set of relationships. “The interests of researchers in search of funding, scientific journals in search of wider media exposure, and journalists in search of compelling stories often coincide to create a military-industrial complex for the production and propagation of dodgy statistics.”

Vigilance against the misrepresentation of stats “must start at the source”, they conclude, “cutting off questionable figures before they receive a bad press”.

The examples of statistics abuse cited by New Scientist are mostly from clinical trials, but the same issues apply to survey research results. In this month’s edition of Research, Outlook’s Adam Curtis berates advertisers for their use of statistics from surveys – including grand claims based on tiny samples and ill-defined statements about “longer-looking lashes” and “shinier-looking hair” presented as if they were objective fact.

The comments we’ve received in response to Curtis’s article suggest that many in the industry feel the same – but there are some who believe research needs to get its own house in order before complaining about how survey data is misused in other fields.

Philip Graves, whose book Consumer.ology sets out to explode the “myth” of market research, commented: “At least the advertising brands know what they’re doing when they conduct and publicise them. This, it strikes me, is somewhat more justifiable than the rest of the quantitative survey industry pretending that its results represent some kind of objective measure of something meaningful; when in fact the vast majority of survey responses are a by-product of a wide variety of influences brought on by the process of asking questions and the timing and location in which they are asked.”

Another commenter took the opportunity to criticise agencies that continue to describe samples taken from access panels as ‘representative’.

When it comes to communicating research results, we mustn’t forget the human factor: once statistics are out in the world, people are going to share them, talk about them, try to understand them and maybe use them to try to achieve something. Researchers are right to criticise those who misuse their data. But if they want to be taken seriously, they also need to think about their own role and relationships with the people whose hands their work might end up in.