Feature | 7 February 2024

Navigating AI regulation: the EU vs the UK



There is significant disparity and divergence in how the UK and the EU are regulating artificial intelligence. Kaleke Kolawole, policy manager at MRS, reviews the differences between the UK's and the EU's approaches to regulation.


Artificial intelligence (AI) is a rapidly evolving technology, transforming industries at scale, automating repetitive tasks, and improving efficiency and decision-making. As AI has advanced, the regulatory and legislative landscape has moved quickly to respond to the market and to produce frameworks and standards that protect civil rights, ensure ethical practice and foster innovation.

There are two notable approaches to date: the European Union's AI Act and the UK's 'pro-innovation' white paper (the latter will not initially be implemented on a statutory footing, but the government anticipates introducing a statutory duty requiring regulators to have due regard to its principles).

So, what are the key differences between the documents, and what are the implications for market research?

The EU approach

The EU Artificial Intelligence Act is a comprehensive and ambitious framework introduced by the European Union. The act is the first of its kind: the world's first comprehensive legal framework for AI.

The objective of the rules is to ensure that AI systems are overseen by people, rather than left to automation, to prevent harmful outcomes. The cornerstone of the act is its risk-based classification system, which governs AI systems across a wide range of applications, from chatbots to complex machine-learning algorithms.

The EU classification system is as follows:

  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal or no risk.

Unacceptable-risk AI systems are those considered a threat to people and will be banned. They include:

  • Cognitive behavioural manipulation of people or specific vulnerable groups – for example, voice-activated toys that encourage dangerous behaviour in children
  • Social scoring – classifying people based on behaviour, socio-economic status or personal characteristics
  • Real-time and remote biometric identification systems, such as facial recognition.

All high-risk AI systems will be assessed before being put on the market and throughout their life-cycle. Generative AI systems, such as ChatGPT, would have to comply with transparency requirements:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training.

Market research agencies using high-risk AI systems, such as those affecting fundamental rights, will need to undergo conformity assessments, maintain detailed documentation and obtain explicit user consent. This could lead to increased operational costs and delays in project execution.

Limited-risk AI systems should comply with minimal transparency requirements that would allow users to make informed decisions. For example, an individual interacting with a chatbot must be informed that they are engaging with a machine so they can decide whether to proceed or request to speak with a human instead.

Minimal-risk applications are already widely deployed and make up most of the AI systems we interact with today. Examples include spam filters, AI-enabled video games, and inventory-management systems.

The UK approach

The UK white paper was published for consultation on 29 March 2023. It sets out an innovative, principles-based approach to regulating AI and defers to regulators, with their market and industry expertise, to implement the principles and issue guidance and resources. The objective of the UK approach is to drive growth and prosperity, ensure public trust in AI and strengthen the UK's position as a global leader.

The UK approach aims to support business and innovation rather than enforce rigid, onerous legislative requirements that could hold back AI innovation and reduce the UK's ability to respond quickly and proportionately to future technological advances. Instead, it draws on regulators' domain-specific expertise to tailor implementation of the principles to the specific contexts in which AI is used.

The five principles of the UK approach are:

  • Safety, security and robustness: AI systems should operate “in a robust, secure and safe way throughout the AI life-cycle”, especially considering the autonomous nature of AI decision-making. Risks present at each stage of the AI life-cycle should be identified, assessed and managed
  • Appropriate transparency and ‘explainability’: Transparency is defined as the provision of appropriate information on AI systems (eg, the purpose of the system and how and when it will be used) to relevant parties. Explainability relates to a relevant party’s ability to access and understand the decision-making rationale of an AI system
  • Fairness: This pertains to protecting the legal rights of individuals and organisations. AI systems should not weaken these legal rights, nor should they result in discriminatory market outcomes. For example, errors in an AI-generated credit score can negatively affect an individual’s livelihood
  • Accountability and governance: Governance measures should be implemented to oversee the supply and use of AI systems, and lines of accountability should be clearly demarcated throughout the AI life-cycle
  • Contestability and redress: Affected third parties and actors within the AI life-cycle should be able to make a complaint about, or contest, AI that creates harm or a material risk of harm.

Conclusion

There is significant disparity and divergence between the UK's and the EU's approaches to AI regulation. On the one hand, the EU AI Act introduces a degree of liability for both AI developers and users. For example, market research agencies may be required to re-evaluate their contracts, privacy notices and partnerships to ensure they are not unduly exposed to legal risk. Furthermore, organisations will be required to disclose which AI systems they use and provide transparent information about the types and functions of their algorithms, sources of data and decision-making processes.

On the other hand, the UK approach fosters innovation and advancement. However, in the absence of legislation and enforceable rules, there may be a lack of trust from consumers and increased concern around privacy protections.

It is important, though, to have harmonisation between the UK and EU approaches, so that there is congruence for those working across different markets – this is something MRS has encouraged in our consultation responses to the UK government. For now, market research agencies operating in the EU must be mindful of the regulatory requirements of the EU AI Act. Failure to comply may limit their access to the European market.

The industry must adapt quickly: invest in compliance, innovate ethically and navigate the complexities of the global market. The future of AI regulation is likely to involve a measured balance between these two approaches, ensuring that responsible AI deployment and economic growth coexist harmoniously.

MRS has published guidance on the use of AI in research, which can be found at mrs.org.uk.

This article was first published in the January 2024 issue of Impact.
