NEWS 1 December 2020
UK – Government should clarify how the Equality Act applies to the use of algorithms and how organisations can use data to identify algorithmic bias, according to a review by the Centre for Data Ethics and Innovation (CDEI).
In an analysis of the use of algorithms in financial services, local government, policing and recruitment, the CDEI recommended that guidance should be issued to clarify best practice in the collection of data to measure bias, and the lawfulness of bias mitigation techniques such as positive discrimination.
Organisations should also understand the capabilities and limitations of algorithmic tools, and carefully consider how to ensure everyone receives fair treatment when algorithms are in use, the CDEI said.
The review also recommended placing mandatory transparency obligations on all public sector organisations using algorithms, covering how the algorithms are used to make decisions and how potential bias is tackled.
The report follows a number of recent controversies in the use of algorithms, most notably with the heavy criticism of this year’s A-level results, which saw algorithm-generated results abandoned after 40% were marked lower than teacher assessments. The system was put in place after exams were cancelled due to the Covid-19 lockdown.
There have also been significant criticisms from a number of different sources about how algorithms can reinforce biases towards minority and underrepresented groups in society.
The CDEI, a government body that advises on the responsible use of artificial intelligence (AI) and data, has published a roadmap to increase fairness and reduce bias in the use of algorithms, while also ensuring that appropriate regulation is in place. The CDEI is also supporting the Government Digital Service to pilot an approach to algorithmic transparency within the UK public sector.
The report’s findings are based on evidence from a range of sources, including research from the Royal United Services Institute on data analytics in policing, public polling by Deltapoll on attitudes to AI, semi-structured interviews with companies using algorithmic tools, and an open call for evidence.
Adrian Weller, board member at the Centre for Data Ethics and Innovation, said: “It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases.
“Government, regulators and industry need to work together with interdisciplinary experts, stakeholders and the public to ensure that algorithms are used to promote fairness, not undermine it.”
Simon McDougall, deputy commissioner – regulatory innovation and technology for the Information Commissioner’s Office, said: “When developed and used responsibly, algorithms can transform society for the better. But there is also significant risk that algorithms can exacerbate issues of fairness and inequality.
“Data protection law requires fair and transparent uses of data in algorithms, gives people rights in relation to automated decision-making, and demands that the outcome from the use of algorithms does not result in unfair or discriminatory impacts.”