OPINION | 8 December 2016

AI – is Mark Carney right?


Instead of fearing AI, market researchers should be embracing intelligent systems to improve the service on offer, says Frank Hedler.

[Image: robots lined up typing on computers]

There has been a lot of noise about Artificial Intelligence (AI) recently, in particular about its potential to cause huge redundancies in the current workforce – with Bank of England governor Mark Carney’s recent cataclysmic forecast the latest doom-laden prognosis.

But even before the governor jumped on the AI bandwagon, everybody was talking about AI, particularly the mainstream media. Print and broadcast coverage ranges from optimistic utopias to gloomy predictions exploiting ‘Frankenstein anxiety’ about the possible consequences for our lives.

Let’s have a look at Carney’s doomsday warnings about the consequences of a widening application of smart technology on the demand for human labour. It is a fact that AI can lead to massive improvements in productivity. In 2014 a study by Deloitte and Oxford University estimated that 35% of UK jobs are at high risk from automation over the next two decades. Obviously, low-paid, low-qualified workers are more vulnerable to being replaced than highly qualified professionals. But there is also an argument that by replacing high-pay, highly qualified functions within a business, AI could have an even bigger impact on the bottom line.

So from a market research perspective, what is our place in a world that is increasingly shaped by the application of AI?

Firstly, I have a problem with the thoughtless and often incorrect use of the term ‘AI’ in all this. There is a tendency now to call any kind of algorithmic automation ‘intelligent’, as if the automation of human tasks were a completely new development.

But it is the notion of ‘artificial’ intelligence that seems to frighten us, and that leaves us with the prospect of being substituted by machines. In reality, we are still far away from creating a system that could be truly classified as intelligent. Rather, what we currently see is the increasing use of machine learning designed to help humans in very specific tasks – and these are often tasks that humans do not, and cannot, actually perform.

For instance, a recent article in the Stanford Medicine News suggested that a computer program beats human pathologists in predicting the severity of lung cancer.  It might be true that a machine learning algorithm, trained on thousands of historical cases, can more accurately predict the survival rate after a given time span than human doctors can. But predicting survival rates is something that doctors do not do! They decide on the treatment, surgery or no surgery, chemotherapy or radiation. They are not trained to predict how much time a patient has left. And this is just one of many examples where machines and algorithms might outperform humans without making them redundant.

So rather than being scared of AI, we have to identify areas within market research where AI can help us to do something better, more productively, at a larger scale.

An obvious opportunity surely lies in the way we gather data. Let’s be honest: we, as an industry, have not been very creative so far in exploiting the true opportunities that online research offers us. We have simply taken our pen and paper surveys and put them onto websites, using the same rigid process of walking a respondent through dozens of questions.

Meanwhile, the world around us is increasingly populated by virtual chatbots, which numerous organisations now use to answer customer queries. RBS and NatWest, for example, have recently announced that they will use these virtual customer service agents to help customers sort out issues with their credit cards and accounts.

So why not turn it around and use chatbots to ask questions?  Bots could become the new survey tool, in particular in customer satisfaction research. Instead of asking a customer to fill in a boring survey, chatbots offer the ability to gather customer feedback in an engaging, conversational way.

The algorithms can be trained and optimised to specific domains and can control the interview flow, asking for more detail when responses are too generic, and digging into pain points detected via sentiment analysis.  Let’s defy the governor’s fears and use opportunities like this to take online research to a truly new level.
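To make the idea concrete, here is a minimal sketch of the kind of adaptive interview flow described above. Everything in it is illustrative, not from the article: the word lists stand in for a real trained sentiment model, and the prompts are invented. It simply shows the branching logic — probe when an answer is too generic, dig in when sentiment looks negative.

```python
# Hypothetical sketch of a survey chatbot's follow-up logic.
# A production system would use a trained sentiment model and proper NLU;
# the keyword sets below are crude stand-ins for illustration only.

NEGATIVE_WORDS = {"slow", "rude", "broken", "terrible", "bad", "waiting"}
GENERIC_ANSWERS = {"fine", "ok", "okay", "good", "yes", "no"}

def sentiment(response: str) -> str:
    """Crude stand-in for a sentiment model: flag known negative words."""
    words = set(response.lower().split())
    return "negative" if words & NEGATIVE_WORDS else "neutral"

def next_prompt(response: str) -> str:
    """Decide the bot's follow-up from the respondent's last answer."""
    cleaned = response.strip().lower().rstrip(".!")
    if cleaned in GENERIC_ANSWERS:
        # Answer too generic: ask for more detail.
        return "Could you tell me a bit more about that?"
    if sentiment(response) == "negative":
        # Pain point detected: dig into it.
        return "Sorry to hear that. What exactly went wrong?"
    return "Thanks! Anything else you'd like to add?"
```

For example, a one-word answer like "fine" triggers a probe for detail, while "the staff were rude" triggers a pain-point follow-up. The design choice is the point: the bot's next question depends on the content of the last answer, which a fixed pen-and-paper survey ported to a website cannot do.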

Frank Hedler, director advanced analytics, Simpson Carpenter


8 years ago

I agree with Frank. At scale, we've not been very creative in how we utilise these technologies. Mark Carney has a point though. We should all be worried, and thinking about how to embrace these capabilities in our work. Believe me, there are plenty of other agencies moving into the MR space doing just this. I work for one.

To me, the big barriers to using AI and predictive analytics in MR are more about legacy agency organisational structures (we need to feed the fieldwork machine), the cost of change, a lack of experimentation from clients (baselines / don't break it / no budget / risk averse), and not enough people who are fluent across different methods and data sources (trained by method / limited by agency toolset).

The industry needs to embrace and replace; dated methods simply won't keep up on their own and will be replaced unless they integrate and find a new place in the value chain. For example, digital analytics teams hold more and more marketing budget and have been self-serving customer insights for years. This trend is moving to other teams within our clients. I've seen client innovation teams gain no additional insight from endless focus groups (at great expense) and then find 10+ undiscovered innovation routes through the application of machine learning to an existing dataset they already held.

Don't get me wrong, this won't always be the case and I'm a big fan of fusing more established methods with the new. We need our people on the front line helping clients with their problems, learning to use these new tools and datasets, not cranking the handle on a one-method machine.


8 years ago

I'm sure we'll get there in the end, but based on my experience of chatbots:

Chatbot: "I'm sorry, I didn't quite catch that. Please answer A, B or C."
Respondent: hangs up.

So we'd need a lot of sample, we might end up alienating our respondent base, and then get MR legislated into the "do not call" register, which so far we've managed to avoid. But I'm sure it will come, and the general thrust of the article about AI is spot on. Thank you.
