
OPINION | 11 July 2018

Don’t assume artificial intelligence is either artificial or intelligent


Artificial intelligence may lure people into thinking it's impartial, but it can't exist without human intervention. Bethan Turner explains why AI should be treated with the same caution as any other form of data and analysis.

If you’re in the market research industry, or indeed any industry, you’re likely to have heard of artificial intelligence (AI), and come across articles about how it will change your work, your home, your life.

However, thanks to the overuse of these words, and of course films such as The Terminator and 2001: A Space Odyssey, the true definition of artificial intelligence has been lost in an act of mass Chinese whispers.

One of the usual claims in articles about AI is that robots will end up doing the jobs many of us do today. They paint pictures of robots policing the streets and revolting against humans: if we were in the Terminator films, this would be the rise of the machines. Machines with brains, with thoughts of their own, and yes, with intelligence.

This is just not the case.

Although the internet is full of varying, and sometimes contradictory, definitions of AI, this one from Forbes seems to best describe what AI actually is:

“The capability of a machine to imitate intelligent human behaviour.”

To imitate behaviour. In order to imitate something, you must first have something to copy. This, in my opinion, is the part of artificial intelligence people seem to forget: it actually isn't as artificial as it seems.

Take machine learning, for instance, which is a type of artificial intelligence. This is when a computer program can take a data set, 'study' it, and improve its performance on a specific task using what it has 'learnt' from the data. Most assume this is the computer learning without human intervention, and use that premise as the basis of their AI knowledge.

It's important to realise that no step of this process could happen without human intervention. It is humans who collect, and quite often create, the data. Humans who process the data. Humans who decide on the rules for cleaning it, if they don't clean it themselves. Humans decide which task the computer will perform on the data. Humans program the computer to perform that task. Humans define what it means to improve, i.e. the rules used to measure whether the program has succeeded or not. And humans interpret the outputs of the computer program. That is a whole lot of human intervention.
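The steps above can be sketched in code. This is a hypothetical, toy illustration (the data, labels and 'learning' rule are all invented for this sketch, not taken from any real system), with a comment marking the human decision behind each step:

```python
# Humans collect and label the data. Here each example is an invented
# pair: a point [width, height] plus a label a human chose.
training_data = [([1.0, 1.0], "thin"), ([1.2, 0.9], "thin"),
                 ([3.0, 3.1], "wide"), ([2.8, 3.3], "wide")]

# Humans decide the rule for 'learning': here, averaging each label's
# examples into a single representative point (a centroid).
def fit(data):
    grouped = {}
    for point, label in data:
        grouped.setdefault(label, []).append(point)
    return {label: [sum(coords) / len(coords) for coords in zip(*points)]
            for label, points in grouped.items()}

# Humans define what 'closest' means (squared Euclidean distance) and
# therefore what counts as the right answer.
def predict(centroids, point):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], point))

model = fit(training_data)
print(predict(model, [1.1, 1.0]))   # a human interprets this output: "thin"
```

Every line where a choice is made, a human made it; the machine only carries the choices out.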

Let’s take the classic example of image recognition software ‘learning’ how to read handwritten numbers.

A human collects hundreds, thousands, possibly even millions, of data points (in this case handwritten numbers) that are labelled (i.e. humans tell the computer that one image is a seven, whilst another is a one).

Humans program the software to recognise the handwriting, then present more data (again collected and created by humans) with the labels hidden, so the computer can guess which number each picture represents, and then check the answer.

This may seem fairly simple, but I can't be the only person who jots down a phone number in a hurry and then struggles to read it back the next day. A hastily written five, for example, can easily look like a six to an untrained eye. Without that human intervention, mistakes like this could slip through unnoticed.


The 'artificialness' of AI can lure people into thinking it is unbiased and impartial. Yet we know that bias, both conscious and unconscious, exists in human behaviour, attitudes and perceptions. This deep-rooted human bias can easily, yet subtly, find its way into AI algorithms. Because of this, we should take the same precautions with AI as we do with any other type of data and analysis.

Don't automatically trust your data, your analysis, or your outputs. Question them, study them, and explore them, as you would any other study. Don't assume your artificial intelligence is either artificial or intelligent; the likelihood is that it isn't completely either of those things.

Bethan Turner is head of data insights at Honeycomb
