OPINION | 22 April 2020
Time to think ethically about AI
Camilla Ravazzolo, data and privacy counsel at the MRS, writes about reframing the discussion around artificial intelligence.
Stephen Hawking once said: “Success in creating effective AI could be the biggest event in the history of our civilisation. Or the worst. We just don’t know. So, we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it. I am an optimist and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance.”
The discussion about artificial intelligence (AI) is one of the most fascinating philosophical debates of the past century. It has evolved from the testing of a product – limited to engineers and hackers – to a business model analysis, carried out inside tech companies by engineers, salespeople and executives. And now it has gone one stage further, to a societal issue – questioned by an anxious public who do not understand how they are impacted by it, suspicious governments that do not know how to deal with it, and cutting-edge companies that thrive on it.
The ethical discussion around AI is significant, and is key in ensuring it can be deployed as a force for good and not undermine individuals or societies.
International organisations, nation states and global powers have all produced ethical frameworks – unsurprisingly, each is a reflection of its author's role on the global stage, and, inevitably, each ends up being little more than a list of non-binding principles.
While China and the United States treat AI as part of their heated race for global dominance, the European Union (together with the UK) stands out for trying to develop a human-centric approach to AI that is respectful of European values and principles.
Much of the debate is still focused on the spurious assumption that AI can be viewed as ‘good’ or ‘bad’.
Precious hours have been devoted to agreeing on a definition of AI. As Taddeo and Floridi argue in their Science article ‘How AI can be a force for good’, if you get the definition wrong, the assessment of the ethical challenges becomes science fiction at best and an irresponsible distraction at worst.
But it can also feel like an exercise in digression. In this sense, two ways of looking at it have evolved: the first follows the old mantra that an algorithm is only as good as the person who writes it; the second treats AI as an autonomous, self-learning system.
On the one hand, training the human intelligence behind AI is essential – the algorithm reflects a design, a set of choices that encode ethical values. The designer needs to understand the responsibility of those choices as a form of civic duty.
When, in the early 1940s, Richard Feynman was invited to take part in the Manhattan Project, his moral reasoning centred on the inevitability of the atomic bomb: if it was theoretically possible, then it was practically inevitable, and if it was inevitable, then America building it before Germany was a good trade-off.
But a good decision taken on one day might be a bad decision on another. So, when Germany was defeated, why did he stay? Because, as Feynman put it: “I didn’t think, ok?”
As the academic Dan Munro notes, ethical thinking is not a one-time task. The responsibility to revisit and rethink the ethical implications of a decision adopted in a specific socio-economic scenario is an ongoing one.
Designers need to understand that designing AI is an action bearing consequences – intended and unintended, short and long term, one-off or recurring – for humans today, tomorrow and in 100 years.
In Europe, for instance, the handling and processing of datasets is heavily regulated. Because of the long tradition of, and significant regard for, privacy and data protection, these regulations are really principles that require all the stakeholders involved to ensure that citizens have full control over their own data. This means that, when it comes to AI, system design should aim at the nirvana of ‘ethics in design’ and ‘ethics by design’ – algorithms that are powerful, scalable and transparent.
On the other hand, AI as a new form of agency relies on delegation and responsibility. Delegation means assigning tasks to the machine to lower costs and improve results. Responsibility means the necessary human oversight that intervenes before problems occur. That has been a key – and unresolved – point since the 1960s.
When it comes to delegation, the question remains: when AI fails, who takes the blame? The programmer, the company, the end user? Is it possible to identify one person?
What is known as distributed agency turns into distributed responsibility – I am referring to both moral and legal responsibility. The Council of Europe, building on the extensive work of Floridi et al., notes how AI distributes responsibility among designers, regulators and users – and pinpoints the path towards a reasonable future.
This is all in vain if not accompanied by sensible national (and supranational) legislation, adequate enforcement and collective complaint mechanisms.
It is all too easy to list the many times AI has already gone wrong – but for every Cambridge Analytica, COMPAS and Black Mirror, there is a Rainforest Connection, a cancer-detection breakthrough and an Avatar Kids. For every new nightmare there is a new hope. Set aside for a second the impact on society at large – when was the last time you took a moment to reflect on the impact of AI on yourself?
This article was first published in the January 2020 issue of Impact.