FEATURE

20 November 2023

Mhairi Aitken in seven


Mhairi Aitken, ethics fellow in the Public Policy Programme at The Alan Turing Institute, discusses the risks and benefits of artificial intelligence and how to introduce ethical data practices.


1: Are we too focused on the potential risks of artificial intelligence (AI) rather than the reality?

It’s right to be discussing the risks, but it’s important to focus on real risks. Recent media coverage has emphasised hypothetical, far-fetched scenarios, where risks come from super-intelligent AI. That worries me, because it’s distracting from the very real, present risks that AI poses. It suggests AI could somehow be accountable for its actions and shifts attention away from the decisions of big tech companies. When we talk about risks from AI, it’s important we also talk about who is responsible for those risks and who should be held accountable when harms occur.

2: What is the best way to develop ethical data practices?

It’s really important to engage with potentially impacted communities – that includes people who may not use a service or product directly, but who might be affected by decisions made because of data practices. Engaging impacted communities in early conversations helps to ensure that data practices are designed, developed and deployed in ways that reflect actual experiences and address concerns, and that, ultimately, are more sustainable and appropriate. It can also provide new insights and creative ideas to maximise the value of data practices.

3: What is the biggest challenge for governments and businesses behaving ethically when it comes to AI?

Effective regulation will require upskilling public sector bodies and ensuring regulators have access to state-of-the-art expertise on AI, so that they can properly scrutinise claims about compliance, know what questions to ask, and anticipate what might be possible in a year or five years’ time. For businesses, the big challenge is resisting the FOMO [fear of missing out], stepping back from the commercial drive for faster and faster innovation and, instead, adopting slower, more considered and responsible approaches focused on creating value.

4: How can researchers better engage the public with AI and data issues?

We need to have open and honest conversations – more dialogue, rather than PR – acknowledging areas of uncertainty, addressing risks, and welcoming public input and ideas to inform future approaches. This touches everyone’s lives, so it is important to provide opportunities for people to learn more about AI and data and to get involved in shaping practices.

5: What is the biggest gap in understanding of AI?

AI is often described as being highly technical or complex. Describing it in those terms closes down discussions and limits who can contribute. In reality, you don’t need technical expertise to engage. The processes of designing, developing and deploying AI involve a great deal of human decision-making and judgement; these are human, social processes, the outcomes of which impact our lives in many different ways. Recognising that AI is a human endeavour rather than a purely technical field highlights the value and importance of wider public conversations.

6: Is more regulation the answer to ethical concerns about generative AI?

Alongside regulation, we need culture change in the AI industry. With the explosion of interest in generative AI over the past year, we’ve seen a return to the ‘move fast and break things’ culture of big tech. While regulation will define the limits of what is permissible, ethics requires going beyond regulation to address ambiguous questions of what you should and should not do. It’s in the responses to those questions that we learn a lot about the values and priorities of organisations.

7: What is the most surprising ‘little known’ social impact of AI?

While attention is paid to the impacts of how AI is used, much less is given to the impacts of how it is developed – for example, the low-paid, outsourced labour that goes into training models, such as workers in Kenya identifying harmful content for less than $2 an hour. It’s really important to raise awareness of how these systems have been developed, and the social impacts of their business models, to enable informed choices and demand that business practices align with social values.

Mhairi Aitken is an ethics fellow in the Public Policy Programme at The Alan Turing Institute. A sociologist, her research examines social and ethical dimensions of digital innovation, particularly relating to uses of data and artificial intelligence.

This article was first published in the October issue of Impact.
