OPINION
23 April 2019

Significance testing is insignificant to modern marketing


Market researchers’ continued use of significance testing hinders innovation and should be consigned to the industry’s history books, writes Jack Miles.

We all use significance testing – the method used to show that differences in results aren’t caused by chance. It’s a hallmark of quantitative research and it gives research users confidence. This is despite significance testing being irrelevant – ironically, insignificant – to modern marketing and market research.

Yes, that’s right. The green circles and red squares used by researchers in communication and concept tests are irrelevant. Significance testing is outdated, with origins far removed from modern-day businesses. Furthermore, the modern marketing landscape promotes innovation, boldness and risk, and embraces failure. Significance testing seeks to avoid all of these while feeding our metric obsession.
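For readers who have never looked under the hood, here’s a minimal sketch of what typically sits behind those circles and squares – a two-proportion z-test, shown here in Python with invented concept-test numbers (the counts, sample sizes and the 0.05 cut-off are illustrative assumptions, not figures from any real study):

```python
# A minimal two-proportion z-test - one common form of the significance
# testing described above. All numbers are hypothetical.
from math import erf, sqrt

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value for H0: both cells share the same true proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                        # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error under H0
    z = (p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided normal tail

# Hypothetical cell results: 120/200 prefer the new ad vs 100/200 for the old.
p = two_proportion_z_test(120, 200, 100, 200)
print(f"p-value = {p:.3f}")  # ~0.044: under 0.05 earns a green circle; above, a red square
```

Note what the test answers: how surprising this gap would be if the two ads truly performed the same – and nothing about whether the gap is worth acting on.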

Outdated and irrelevant origins
Ronald Fisher is credited as the founding father of significance testing. Fisher’s breakthrough work took place in 1925. Should we be using a 94-year-old process to make modern marketing decisions? Significance testing also has deep roots in the medical profession – cited in Matthew Syed’s book ‘Black Box Thinking’ as the profession least embracing of the fail-and-learn culture currently championed by tech companies. Whose thinking would research rather be aligned with – Google Ventures in 2019, or the Rothamsted Experimental Station (where Fisher founded significance testing) in 1925?

Opposes modern business culture
The rise of fast-moving, fail-and-learn tech companies is threatening corporate giants. To defend against this, corporates such as Coca-Cola and GlaxoSmithKline now promote a ‘fail fast’ culture themselves. Within this, there simply isn’t time for product development to be halted because results aren’t significantly higher than competitors’. The culture of red squares and post-debrief procrastination is being replaced with a fail-fast-learn-revise-relaunch approach. Risk-averse research recommendations based on significance testing don’t fit this culture. As Boots marketing director Helen Normoyle recently said: “The art of great marketing and great insights and research is a combination of really deep customer insights and data with instinct and intuition.” This way of thinking means we need to look beyond boxes and circles when informing marketing decisions.

In support of this, Jake Knapp of Google Ventures claims that the ROI on research drops after n=5 – a sample size far too small for meaningful significance testing. Although Ronald Fisher would disapprove, this approach to testing has helped Google Ventures become a $2.4bn business.

Builder of false confidence…
Significance testing is designed to give confidence in research results – reassuring creatives, for example, that their new advert will outperform the existing one. This is dangerous because we all suffer from overconfidence bias. Do we need a dated approach that inflates this false confidence? In a society prone to black swan events (events that come as a surprise and are inappropriately rationalised with hindsight), falsely instilling further confidence in marketing success is dangerous.

…Or killing confidence altogether
When significance testing isn’t giving false confidence, it’s killing it. John Hegarty has claimed that data means advertising no longer engages with people’s imagination. To stand out in a cluttered world, advertising needs to be bold and distinctive to the point of making its commissioning marketers uncomfortable. Significance testing, with its use of red – the colour of danger – dissuades bold, imaginative creative work through fear of failure – a fear built on research methodology, not the dynamics of modern advertising.

Fuelling an (irrelevant) metric obsession
Digitisation has made marketing obsessed with metrics – dwell time, likes, shares, CTRs, ‘engagement’ and traffic, to name a few. This has kept the term ‘significant difference’ embedded in the metric-obsessed marketer’s lexicon. However, there’s only one measure that really matters to marketers – profit.

Does a positive significant difference mean increased profits? No.

Does a negative significant difference mean decreased profits? No.

Therefore, does significant difference as a measure matter? Arguably not.
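To make that concrete, here’s a hypothetical back-of-the-envelope illustration (every figure below is invented): with a large enough sample, even a commercially trivial lift earns its green circle.

```python
# Hypothetical: statistical significance without commercial significance.
from math import erf, sqrt

def two_proportion_z_test(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return 2 * (1 - 0.5 * (1 + erf(abs((p1 - p2) / se) / sqrt(2))))

# Invented numbers: a one-point conversion lift (21% vs 20%), n = 20,000 per cell.
print(f"p = {two_proportion_z_test(4200, 20000, 4000, 20000):.4f}")  # ~0.013: 'significant'

# The same lift in (equally invented) commercial terms:
shoppers, profit_per_conversion = 100_000, 2.50
extra_profit = shoppers * (0.21 - 0.20) * profit_per_conversion
print(f"extra monthly profit: £{extra_profit:,.0f}")  # £2,500 - before any media costs
```

Whether £2,500 a month pays back the campaign behind it is a commercial judgement – one the p-value is silent on.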

None of the above should come as a surprise. Why? Because innovation is the cornerstone of our industry. It’s a criterion for industry awards and the focus of industry reports. And at the heart of innovation sit risk and failure – the opposite of what significance testing promotes.

Researchers’ continued use of such a traditional method also hinders how we build the ‘market research brand’. Becoming a more creative and commercially savvy industry can only be a good thing. Restricting ourselves to risk-averse recommendations and focusing too heavily on supposedly ‘significant’ differences between vanity metrics hinders this. So let’s think more outside the (red) box, and move significance testing from industry hallmark to industry history.

Jack Miles is senior director at Northstar Research