Opinion | 25 February 2015

Significance – statistical or practical?



Statistical ‘significance’ is commonly misinterpreted, and the thresholds at which something is deemed significant are seemingly arbitrary. Bonamy Finch’s Leigh Morris says researchers and marketers must better understand the difference between a finding that is statistically significant and one that is actually meaningful.

I remember being in the US and watching a television advert for shoe inserts that promised an instant height increase of three inches. The advert showed a man standing next to an attractive woman who he clearly fancied and who equally clearly was not the slightest bit interested in him.

A quick visit to the ‘Magi-Lift insoles’ website, and a three-inch lift later, the same man walks up to the same woman and she is all over him like a bad suit. Watching US TV adverts can quickly inure you to the asinine, but what particularly seized my attention was the man’s ‘before’ height: five feet nine inches, the same height as me.

This gives us a glimpse of the practical (as opposed to statistical) significance of differences we might report in our research. At five feet nine inches I am (by US TV advert standards, at least) short, but if I had told you I was six feet tall, you might well have described me as tall. Yet the latter is only 4% greater than the former. So we have a small difference in absolute terms, but one of very real practical significance in terms of how I am perceived by others. (There is much research showing that taller people are perceived as more confident, successful and attractive than shorter people, are more likely to be given a job interview, and get more responses from personal adverts.)

The research community has traditionally focused attention rather indiscriminately on things it deems to be ‘statistically significant’. But statistical significance often doesn’t mean anything more than that the finding in question is reliable (and should show up again if the research were repeated).

It doesn’t follow that it is necessarily meaningful from a practical ‘here’s what we should therefore do to make a difference to our business’ perspective. The true role of the researcher is not merely to uncover something that is ‘significant’ but to understand if the insight is meaningful, and what the implications for the client’s business might be.

It is our role to help a client not only to recognise a particular statistical difference but to appreciate its potential relevance to its strategic plan.

Now consider a research debrief in which one new product concept has 4% higher appeal than another. Or Brand A has 4% higher awareness than Brand B. Or the Southern region has after-sales service satisfaction 4% higher than the Northern region.

Whether or not these differences are statistically significant will depend largely on how many respondents the research budget could stretch to, and on the variation in people’s responses. But are they practically significant?
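To see how sample size alone can decide whether that same 4-point gap is ‘significant’, here is a minimal sketch using a standard two-proportion z-test. The figures (50% vs 54% appeal, and the two sample sizes) are illustrative assumptions, not data from the article:

```python
import math

def two_prop_p_value(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value for a pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p2 - p1) / se
    return math.erfc(z / math.sqrt(2))  # two-sided normal tail probability

# The same 4-point difference (50% vs 54%), at two different budgets:
small = two_prop_p_value(100, 200, 108, 200)      # not significant at 5%
large = two_prop_p_value(1000, 2000, 1080, 2000)  # significant at 5%
```

With 200 respondents per cell the p-value is around 0.42; with 2,000 per cell it drops to roughly 0.01. The difference in the marketplace is identical in both cases — only the budget changed.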

Will the first concept perform better in the marketplace? Does Brand A convert more people to purchase than Brand B? Should the Northern region follow the practices of the Southern region? Who’s to say?

Well, we are. Professional researchers need to understand the limitations of statistical significance, and how to draw conclusions about the practical significance of differences and patterns in our data. If we can’t do this effectively, then we are not in a position to make well-founded business recommendations to our clients.

My feeling is that it’s not something our industry excels at, but I could of course be wrong. Although that might not be significant.

Leigh Morris, founder and managing director of consumer insights and analytics agency Bonamy Finch