FEATURE
24 December 2018

The ethics of AI



Artificial intelligence is introducing speed and reliability to many number-crunching exercises. But Bethan Blakeley argues that the algorithm shouldn’t rule unconditionally, and that we need to explore the many ethical grey areas.


Not a day goes by without us being told about some of the amazing things artificial intelligence (AI) can do, and is doing for our world and our communities. Then there’s what it’s doing that we don’t know about yet, because the results are not ready for publication. It really does seem as though the uses of AI are only constrained by human imagination. 

Now, I’m not diminishing any AI applications – whether helping farmers detect crop disease; being used to diagnose cancer and create personalised treatments; or driving a car. But I am questioning whether we should be using AI in some of these applications, whether we’re comfortable with some of the outcomes of these algorithms, and how they affect people’s lives. Can we whole-heartedly agree there is nothing wrong with these situations ethically, morally, emotionally, or otherwise? 

Nothing is black and white; there’s always a grey area. And so it is with AI – but this is often overlooked or ignored because of its unbiased, unemotional, unhuman nature. Safiya Noble, author of Algorithms of Oppression, argues there are no ethical procedures or guidelines in place for AI yet because it’s too new and unexplored.

That is exactly why we need to start exploring these grey areas: what they mean; where they are; how we feel about them; and what we’re going to do to protect our communities from them. 

I recently attended a workshop showcasing an agency’s new software, which allowed participants to be moderated by chatbots. My instant question was about transparency: did the participants know they were speaking to a bot? The shock on the presenter’s face was enough to tell us that no, they did not... 

“Nobody has asked us that before. Uhhh… no, they don’t know. Does it matter?” In my opinion, yes, it does. The idea of someone thinking they are speaking to a person when they aren’t doesn’t sit right with my conscience.

Research is one thing, but what about other areas where chatbots are used? Banking. Aren’t we forever trying to teach the most vulnerable members of the community not to disclose personal banking details, and not to talk about financial details with anyone? Healthcare. How do you feel about teenagers at school with mental health issues airing their frustrations, their worries and fears, to… a piece of computer code?

Social media is another area where AI is used to ‘improve the experience’ – making the ads and the ‘follow’ recommendations you see more relevant to your interests, and suggesting phrases you may want to use to reply to a personal message. AI has also been used to manipulate Facebook feeds to make people feel happier or more upset – 689,000 users’ home pages were altered as part of an experiment, which found that forcing certain types of content to appear directly influenced users’ emotional states.

How much do you trust Mark Zuckerberg and his team – would you trust them with people’s mental health? 

One of the most prominent examples of AI in our everyday lives is Google. There are 3.5bn searches on Google each day. Noble explores the algorithms behind Google and similar search engines in her book, showcasing the racism and sexism that is – or used to be – evident when Googling phrases such as ‘black girls’ and ‘why are black women so…’, where the first few pages of results were always pornographic and derogatory.

Given the way that Google is portrayed to the general public – as a fount of all knowledge, a trustworthy replacement for the library – are these the ‘facts’ we want to present?

The manipulation of information on the internet is not a new idea. Clay Johnson, co-founder of the firm that built and managed Barack Obama’s online campaign for the presidency in 2008, questioned the laws in place for issues like these: “Could the CIA incite revolution in Sudan by pressuring Facebook to promote discontent? Should that be legal? Could Mark Zuckerberg swing an election by promoting Upworthy [a website aggregating viral content] posts two weeks beforehand? Should that be legal?”

Let’s step this up a notch. If you’re struggling with the notion of trusting AI to educate the general public, how about letting AI decide who goes to prison and who goes free? Cathy O’Neil, author of Weapons of Math Destruction, explains a little about one of the algorithms (the LSI-R questionnaire) that predicts whether someone will reoffend, and therefore, whether they are likely to be released on bail.

She argues that a large proportion of the data used to inform the algorithm, such as family members’ criminal records, would be dismissed in court as irrelevant. Is this fair? Should we be considering your step-brother’s criminal record when deciding if you can have early release from jail? What about your family’s income when you were growing up? Your address? Your friends and their personal histories?

There are 12 people on a jury; if decisions in court could be made so easily that a computer can make them, why do we need 12 people to deliberate the verdict? Why have some cases lasted as long as two years? Because it’s subjective. It isn’t black and white – just like everything else. It’s important to get to grips with that fuzzy grey area – to understand the stories, the perceptions, the context behind the facts.

And when you’re dealing with a consequence as serious as someone’s life, you want to make sure you’ve really understood that context. Can an AI program do that to the same level as a human? I would say no – or at least, not yet. 

That is why we need to be careful about whole-heartedly diving into the world of AI, and why we need human interaction, intervention and supervision. Machines are just that – machines. And as they become more and more lifelike, we would do well to remember that.

Bethan Blakeley is director at Honeycomb

This article was first published in Issue 23 of Impact.

2 Comments

6 years ago

Hi Beth, Very interesting piece. See my IJMR Editor's blog on this topic, including mentioning your presentation at last November's ASC conference:

https://www.mrs.org.uk/blog/ijmr/applying-artificial-intelligence-tools-in-research 


6 years ago

Hi Peter, thank you! I also enjoyed your article, thanks for the link - am always keen to see what other people think about what seems to be quite an emotive topic!
