OPINION
16 October 2020

Why commission researchers if you have AI?


Artificial intelligence is seen in some quarters as a replacement for market research. But Ryan Howard argues that, in reality, AI depends on researchers.


The question in the title of this article may or may not be intentionally undermining, though you have heard it asked often enough that you may find yourself doubting your worth. It has travelled widely, growing from speculation to presumption. One is hard-pressed to find recent industry commentary running contrary to this received wisdom: artificial intelligence (AI) is busy replacing the need for market researchers, so we are witnessing the final throes of research as we know it. In its wake, there is still no shortage of hope that software is a wholesale, practical and cost-effective alternative.

The digitisation of data has certainly narrowed the scope of primary research a great deal, but beyond this, the combination of big data and AI hardly miracles up business intelligence. It is the same faulty narrative, this time especially careful to ignore why and when business calls for insight. Few of our problems arrive with pre-categorised data, the type needed to train artificially intelligent answers. By the time enough data has been collected, the outcomes are already clearly observable, and we have long moved on, grappling with the next tech disruptor, social phenomenon, competitor move or business model.

We have championed the synergy of research and new technologies, sought it out as if by force of habit and fixed our spotlight firmly upon it. Until now, attention and venture capital were poured so generously into anything that said AI on the tin that AI now appears on every tin. For this reason, the waters are muddied enough that the doomsters may be forgiven.

AI is said to be driving the automation of data collection and processing. In truth, investment has been directed toward web development and data engineering. Similarly, classical marketing science techniques such as regression, cluster, time series, basket and survival analysis have been rebranded as cutting-edge AI solutions. Others rely on word matching or explicit rule-based logic. They are well-tailored to their tasks and showcase the smart and innovative thinking of their creators who will manually tweak them in search of an attainable perfection. They deliver on their promise without skipping a beat because they are the stuff of spreadsheets, bearing the same relationship to AI that a brick does to a row of terraced houses.

More cavalier examples gush about how AI is used to generate actionable insight, when behind the curtain sits a team of thoughtful analysts combining domain knowledge with machine learning. At the infamous end of this group, we find AI claiming to untangle customers’ thoughts, intentions and emotions. A plainly comical assertion, given that humans find this challenging at the best of times, doubly so when it comes from the same freely available algorithms that take great pains not to crash cars.

This year, the winds of change blow ever stronger. Society is sensitive to AI’s scandals and embarrassing overreaches, growing acutely aware of how fantastically it bungles future-facing ad hoc problems, the kind researchers are tasked to answer. With burnt fingers, our investors think twice, and decision-makers take nothing at face value. This is the price paid for what, on any other day, could be characterised as defensible and enterprising spin.

By lauding AI where there is none, we downplay the role of research craft and discredit our widely trusted and battle-hardened analytical techniques. We overwrite our easily forgotten credentials: before digital data collection, there were big, multinational, crazily complex surveys, the kiln in which applied statistics was fashioned. We were routinely solving commercial problems with ‘data science’ decades before the term was coined.

Moreover, we drown out the examples that refute this self-fulfilling narrative. The reality is that AI has only opened doors for researchers, both qualitative and quantitative, gifting rich and plentiful data sources: real-time sensor, video, image, voice and text data. Notice, too, that without fail these genuine and meaningful applications celebrate researchers as architects, investigators and synthesisers – not button clickers.

We expect AI to gather momentum and reach unimaginable possibilities, but we reject the lazily conceived plotline that accompanies it. Rather than being politely shown the door, research craft has been identified as the critical ingredient – the very skills we hope to teach the next generation: hypothesis testing, design, experimentation, and reasoned and lateral thinking after the judicious collection and exploration of evidence. These are the competencies that sort fact from fiction, and which now, rid of doubt, reply in a flat and equally dismissive tone: ‘Why commission AI if you don’t have researchers?’

Ryan Howard is a freelance data science consultant.