
FEATURE | 19 March 2018

Ethical dimension to AI



The benefits of artificial intelligence (AI) need to be balanced with potentially problematic data use. Dr Michelle Goddard looks at the rules in place to help ensure ethics are part of the AI equation.

Artificial intelligence (AI) – complex software systems that allow machines to analyse and learn far faster than humans – promises much in our digital, data-driven environment.

Its impact is felt in all aspects of our lives, from smartphone digital assistants and fraud-detection tools to product-recommendation services and credit applications. AI sits at the centre not only of the recommendations we receive but – more critically – of the decisions that are made about us.

Exponential growth in, and access to, personal and non-personal data is driving AI use across all sectors. In research, text-data mining and analysis – with developments in natural language processing (NLP) and machine-learning models – aid the automation of data analysis, data collection and report publication. However, as AI permeates our economies and communities, the promised benefits of the technology need to be balanced against the possible harms; it must be recognised that AI can create new injustices or embed old ones, and that the data used to train machine-learning models can contain a range of biases, especially around race and gender.
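That last point lends itself to a concrete illustration. The sketch below is a minimal, hypothetical example of auditing a labelled training set for obvious representation gaps before a model is built; the record structure and field names ("gender", "label") are assumptions for illustration, not drawn from any real study.

    from collections import Counter

    # Hypothetical labelled training records for a text-classification model.
    # Field names ("gender", "label") are illustrative assumptions.
    training_data = [
        {"text": "sample response A", "gender": "female", "label": "approve"},
        {"text": "sample response B", "gender": "male", "label": "approve"},
        {"text": "sample response C", "gender": "male", "label": "decline"},
        # ...in practice, thousands of records
    ]

    def representation_report(records, attribute):
        # Count how often each value of a sensitive attribute appears,
        # overall and per outcome label, to flag obvious imbalances.
        overall = Counter(r[attribute] for r in records)
        per_label = Counter((r[attribute], r["label"]) for r in records)
        return overall, per_label

    overall, per_label = representation_report(training_data, "gender")
    print("Overall representation:", dict(overall))
    print("Representation by outcome:", dict(per_label))

A report like this catches only the crudest imbalances, but it makes the point that auditing training data is a cheap, routine step that can sit alongside the governance measures discussed below.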

DATA-SHARING ATTITUDES

A body of research shows a gap in public trust, and differences in attitudes towards data sharing (often dependent on how that data will be used). For AI to be successful, these concerns must be addressed and the public brought along. If people become more afraid of sharing their personal data, it could have long-term implications for research participation, the development of innovative commercial solutions and society – as well as for the use of AI, where opacity of processing and varied levels of human intervention raise specific concerns for individuals. So, what is the solution?

The current data protection framework – and new rules that come into effect in the UK on 25 May 2018 – offer some legal oversight and control of AI. The EU General Data Protection Regulation (GDPR) strengthens individual rights and focuses on transparency and accountability, as well as automated decision-making. This legal framework will be underpinned by regulatory guidance from the Information Commissioner’s Office and targeted enforcement. Core to GDPR will be: 

  • Data protection impact assessments (DPIAs) – these are a methodology for identifying and mitigating risks to privacy and fundamental rights. Integrating them into project-management processes, with buy-in across an organisation, can help identify privacy issues in AI use  
  • Greater algorithmic transparency and accountability – legal responsibilities under the data protection framework include obligations to ensure meaningful transparency, so that individuals can be informed, and unintended harm can be undone. 

HARD-WIRING ETHICS INTO AI

However, data protection law by itself will not suffice. Although opinions vary about the level of oversight or regulation necessary to address concerns about AI, a wider shift in cultural mind-set will be required. A key element will be demonstrating trustworthiness in data stewardship by embedding ethics.

All organisations – public, private and not-for-profit – using AI need to understand the nuances of consumer privacy preferences as they apply to their market and organisation. A core question for the research sector in maximising the use of AI is how to work with others to ensure the benefits of these innovations are delivered in a robust, ethical way.

Ethical approaches that go beyond the law will help ensure that industry sectors and organisations take their obligations seriously, and implement AI – and use data – in a fair and transparent manner. A mix of tools will be appropriate, including:

  • Self-regulatory schemes and codes, with ethical principles and expectations, that can give industry-specific guidance  
  • Consumer-facing marks with consumer recognition, such as the MRS Fair Data scheme, to build trust across markets  
  • Internal ethics boards and review committees, and processes that consider wider organisational data management when using personal and non-personal data  
  • Anonymisation techniques that protect privacy by ensuring there is no reasonable likelihood of identification of individuals in data sets (see the sketch after this list)  
  • Data trusts, where individuals pool their data, stipulating conditions under which data can be shared. 
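
On the anonymisation point, one widely used yardstick is k-anonymity: every combination of quasi-identifiers (such as age band and postcode area) should be shared by at least k individuals in the data set. The sketch below is a minimal, hypothetical check – the field names and the threshold are assumptions for illustration, not a prescribed method.

    from collections import Counter

    # Hypothetical survey records; field names are illustrative assumptions.
    records = [
        {"age_band": "30-39", "postcode_area": "SW1", "response": "yes"},
        {"age_band": "30-39", "postcode_area": "SW1", "response": "no"},
        {"age_band": "40-49", "postcode_area": "M1", "response": "yes"},
    ]

    QUASI_IDENTIFIERS = ("age_band", "postcode_area")

    def smallest_group_size(rows, quasi_identifiers):
        # Size of the smallest group of records that share the same
        # combination of quasi-identifier values.
        groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
        return min(groups.values())

    k = 5  # threshold chosen by the data controller; an assumption here
    if smallest_group_size(records, QUASI_IDENTIFIERS) < k:
        print("Not k-anonymous at k =", k, "- generalise or suppress further")
    else:
        print("Meets k-anonymity at k =", k)

k-anonymity is only one test – it does not, for example, prevent attribute disclosure when everyone in a group shares the same sensitive value – so in practice it would sit alongside a broader assessment of whether individuals remain reasonably identifiable.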

Data privacy, compliance and building consumer trust are critical. The landscape for developing and using ethical tools and principles is complicated by the range of sectors and players involved. AI relies on data scientists, as well as other scientists and experts from diverse fields. All have different ethical underpinnings, and some are at an embryonic stage in considering ethical frameworks.

There are many ongoing domestic initiatives – for example, the House of Lords Select Committee inquiry, the new government AI strategy, and the Royal Society and Royal Academy reports on data governance – as well as European and international schemes. Suggested solutions range from proactive regulation to general oversight – but, across these initiatives, there is a clear need to focus on embedded, collaborative and ethical approaches that raise awareness about the collection and use of data, and help firms and consumers benefit from AI.

Dr Michelle Goddard is director of policy and standards at MRS
