OPINION | 21 April 2016
Testing significance
Making a proper business decision can often mean trying a ‘slightly wrong’ option, rather than a ‘boringly right’ one, argues Rory Sutherland.
If you were to ask me what is the most valuable thing I have done in my working life, I think the answer’s quite easy. I once asked a client in the restaurant trade what they did when an item on the menu wasn’t selling very well.
“We drop the price,” they replied.
“Good idea. But do you try raising the price first?”
“No.”
“Go on, try it occasionally.”
They did. The first time they took my advice, demand went up. A lot.
To be honest, I was lucky. The tenets of mainstream economics are mostly correct in that increasing the price of something will depress demand, but it’s not an iron-clad rule. For instance, when people choose from a menu, the usual price-demand relationship is weakened. Some hungry diners may be disposed to buy pricier items. Also, since restaurant visits are often seen as a guilty treat, there may be a disposition not to skimp.
Being boringly ...
2 Comments
David Alterman
8 years ago
Our industry's obsession with significance testing at a 95% confidence level is bizarre, but so ingrained that people assume it is sacrosanct. One free-thinking client said to me a few years ago that if research tells him there is a 67% chance that the blue one is better than the red one, that is enough for him to make and justify a decision - especially when lives aren't at risk. But he is unfortunately the exception. It is one more example of quantitative researchers hiding opinions behind a wall of data - and as we rarely see significant differences with the sorts of sample sizes we tend to work with, it's a perfect excuse for not making any decisions at all. And thereby not upsetting anyone. And having an easy life. And we wonder why market research doesn't have a prominent enough voice in client organisations...
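The "67% chance that blue is better than red" reads like a Bayesian posterior probability. A minimal sketch of how such a figure could arise, using only the Python standard library; the counts (blue preferred by 53 of 100 respondents, red by 50 of 100) are invented for illustration, not taken from the comment:

```python
import random

# Hypothetical survey counts: these numbers are made up to illustrate
# how a roughly two-in-three probability can emerge from real-world samples.
blue_wins, blue_n = 53, 100
red_wins, red_n = 50, 100

random.seed(1)
draws = 100_000

# With uniform Beta(1, 1) priors, each preference rate has a Beta posterior.
# Monte Carlo estimate of P(blue's true preference rate > red's).
better = sum(
    random.betavariate(1 + blue_wins, 1 + blue_n - blue_wins)
    > random.betavariate(1 + red_wins, 1 + red_n - red_wins)
    for _ in range(draws)
)
prob = better / draws
print(f"P(blue beats red) = {prob:.2f}")
```

With these counts the estimate lands around 0.66: far short of the 95% bar, yet still odds most decision-makers would act on when, as the commenter says, lives aren't at risk.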
Annie Pettit, Peanut Labs
8 years ago
What I also find interesting is how we continually ignore effect sizes. We get excited when a small effect size is statistically significant and disappointed when a large effect size isn't quite significant. We need to do a much better job interpreting data using all of our knowledge about statistics. It's not JUST statistical significance or JUST effect sizes. It's those plus context plus actionability plus reliability and validity.
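The asymmetry described above, where a tiny effect passes the 95% test with a big sample while a large effect fails it with a small one, can be sketched with a two-proportion z-test. The sample sizes and conversion counts below are invented for illustration:

```python
import math

def two_prop_p(x1, n1, x2, n2):
    """Two-sided p-value for a two-proportion z-test (pooled estimate)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # Standard normal CDF via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Tiny effect (0.5 percentage points) with a huge sample: "significant"
p_tiny = two_prop_p(50_000, 100_000, 50_500, 100_000)

# Large effect (20 percentage points) with a small sample: misses the cut-off
p_large = two_prop_p(12, 30, 18, 30)

print(f"tiny effect,  n=100,000 per arm: p = {p_tiny:.3f}")   # ~0.025 (< 0.05)
print(f"large effect, n=30 per arm:      p = {p_large:.3f}")  # ~0.121 (> 0.05)
```

Judged on p-values alone, the trivial 0.5-point difference would be celebrated and the substantial 20-point difference dismissed, which is exactly the interpretive trap the comment describes.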