
Opinion

25 February 2015

Significance – statistical or practical?


Given the common misinterpretation of statistical ‘significance’, and the seemingly arbitrary levels at which something is deemed significant, Bonamy Finch’s Leigh Morris says researchers and marketers must better understand the difference between a finding that is statistically significant and one that is actually meaningful.

I remember being in the US and watching a television advert for shoe inserts that promised an instant height increase of three inches. The advert showed a man standing next to an attractive woman who he clearly fancied and who equally clearly was not the slightest bit interested in him.

A quick visit to the ‘Magi-Lift insoles’ website, and a three-inch lift later, the same man walks up to the same woman and she is all over him like a bad suit. Watching US TV adverts can quickly inure you to the asinine, but what particularly seized my attention was the man’s ‘before’ height: five feet nine inches, the same height as me.

This gives us a glimpse of the practical (as opposed to statistical) significance of differences we might report in our research. At five feet nine inches I am, by US TV advert standards at least, short; but if I had told you in my opening sentence that I was six feet, you might well have described me as tall. Yet the latter is only 4% greater than the former (72 inches against 69). So we have a small difference in absolute terms, but one of very real practical significance in terms of how I am perceived by others. (There is much research showing that taller people are perceived as more confident, successful and attractive than shorter people, are more likely to be given a job interview, and get more responses from personal adverts.)

The research community has traditionally focused attention rather indiscriminately on things it deems to be ‘statistically significant’. But statistical significance often means nothing more than that the finding in question is reliable (and should show up again if the research were repeated).

It doesn’t follow that the finding is meaningful from a practical ‘here’s what we should therefore do to make a difference to our business’ perspective. The true role of the researcher is not merely to uncover something that is ‘significant’ but to understand whether the insight is meaningful, and what the implications for the client’s business might be.

It is our role to help a client not only to recognise a particular statistical difference but to appreciate that difference’s potential relevance to the client’s strategic plan.

Now consider a research debrief in which one new product concept has 4% higher appeal than another. Or Brand A has 4% higher awareness than Brand B. Or the Southern region has after-sales service satisfaction 4% higher than the Northern region.

Whether or not these differences are statistically significant will depend largely on how many respondents the research budget could stretch to, and on the variation in people’s responses. But are they practically significant?

Will the first concept perform better in the marketplace? Does Brand A convert more people to purchase than Brand B? Should the Northern region follow the practices of the Southern region? Who’s to say?

Well, we are. Professional researchers need to understand the limitations of statistical significance, and how to draw conclusions about the practical significance of differences and patterns in our data. If we can’t do this effectively, then we are not in a position to make well-founded business recommendations to our clients.
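To make the sample-size point concrete, here is a minimal sketch in Python of a two-proportion z-test. The 50% vs 54% appeal scores and the sample sizes are assumed purely for illustration, not taken from any real study; the 4-point gap is identical in every case, and only the number of respondents changes.

from math import sqrt, erf

def two_proportion_p_value(p1, p2, n1, n2):
    # Two-sided p-value for a two-proportion z-test with a pooled standard error
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Illustrative (assumed) figures: concept appeal of 50% vs 54%
for n in (100, 500, 2000):  # respondents per concept
    p = two_proportion_p_value(0.50, 0.54, n, n)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"n = {n:>4} per cell: p = {p:.3f} ({verdict} at the 5% level)")

With 100 respondents per concept the gap is nowhere near ‘significant’ (p ≈ 0.57); with 2,000 it comfortably is (p ≈ 0.01). The finding has not become any more commercially meaningful along the way; we have simply bought more precision.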

My feeling is that it’s not something our industry excels at, but I could of course be wrong. Although that might not be significant.

Leigh Morris, founder and managing director of consumer insights and analytics agency Bonamy Finch

4 Comments

9 years ago

"But statistical significance often doesn’t really mean anything more than the finding in question is reliable (and should show up again if the research were repeated)." With the same structured sample too! I agree with Leigh's view. There are a multitude of tests available in different analysis packages (even just in Excel), but quite often, all that is asked for by researchers is just "sig testing", and not any brief on the type. All against total? Just within a group? Is it applicable to multi-coded groups? Brilliant article and hopefully, one that will wake a few people up too!


9 years ago

Yep. Best (statistical) story I have ever heard was the following: Client: "Is that a statistically significant result?" Researcher: "Well yes, if you would like it to be." Moral: _Anything_ is statistically significant at some level of (im)precision.


9 years ago

As I often have to explain to clients, stakeholders and (sometimes) other researchers: "something can be statistically significant and utterly meaningless; or it can be a statistically insignificant difference that's part of a meaningful trend". In a way, it's not too dissimilar from Margin of Error, in that people keep asking for it as if it suffices to tell them about the data set.
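To put a number on "significant and utterly meaningless": effect-size measures such as Cohen's h for proportions separate the size of a difference from its detectability. A small sketch (all figures assumed for illustration):

from math import asin, sqrt, erf

def cohens_h(p1, p2):
    # Cohen's h effect size for two proportions (arcsine transformation)
    return 2 * asin(sqrt(p2)) - 2 * asin(sqrt(p1))

# Assumed figures: a 1-point gap measured on a very large sample
p1, p2, n = 0.50, 0.51, 50_000
pooled = (p1 + p2) / 2
se = sqrt(pooled * (1 - pooled) * (2 / n))
z = (p2 - p1) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"p = {p_value:.4f}")                   # ~0.002: comfortably 'significant'
print(f"Cohen's h = {cohens_h(p1, p2):.3f}")  # ~0.020: a negligible effect size

The test says "significant"; the effect size says "trivially small". Both statements are true, and only one of them helps a decision-maker.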


9 years ago

I fully agree. Statistical significance isn't equivalent to practical significance. Quoting NickD: "something can be statistically significant and utterly meaningless". I guess everyone with a flair for data analysis has tried to build models that were statistically significant, but utterly meaningless. On the other hand, I have in my part of the world (SEA) also been in meetings where the customer was ready to make significant decisions on a highly statistically insignificant foundation. And I have seen published research reports where statistically insignificant results have been over-interpreted and misled the reader. What I'm trying to say is that it is a balance. Truly understanding statistics and how to draw business-relevant conclusions is what makes one a professional researcher.
