NEWS | 2 February 2024

UK ‘too focused on safety’ in AI and LLM policy, says Lords


UK – The UK government’s approach to artificial intelligence (AI) and large language models (LLMs) is too narrowly focused on safety and risks missing the opportunities presented by the technologies, according to a report by the House of Lords Communications and Digital Committee.

Palace of Westminster as seen from Whitehall

In its report on AI and LLMs, the committee said that the UK should rebalance towards boosting opportunities while tackling near-term security and societal risks.

The committee warned the government that without action to prioritise open competition and transparency, a small number of technology firms could “rapidly consolidate control of a critical market and stifle new players”, which it said could mirror challenges seen elsewhere in internet services.

Key measures supported in the report include more support for AI start-ups, boosting computing infrastructure, improving skills and exploring options for an ‘in-house’ sovereign UK LLM.

The committee also considered the risks around LLMs and said the “apocalyptic concerns” about threats to human existence are “exaggerated” and should not distract policy makers from responding to more immediate issues.

The report found there were more immediate, if more limited, near-term security risks, including cyber attacks, child sexual exploitation material, terrorist content and disinformation, and called for mandatory safety tests for high-risk models and a greater focus on safety by design.

The committee called on the government to support copyright holders, arguing that the government “cannot sit on its hands” while LLM developers use data without permission or compensation, and said the government should end the copyright dispute through legislation if necessary.

The report said such measures could include a way for rightsholders to check training data for copyright breaches, investment in new datasets to encourage tech firms to pay for licensed content, and a requirement for tech firms to declare what their web crawlers are being used for.

Baroness Stowell, chairman of the House of Lords Communications and Digital Committee, said: “The rapid development of AI LLMs is likely to have a profound effect on society, comparable to the introduction of the internet. That makes it vital for the government to get its approach right and not miss out on opportunities – particularly not if this is out of caution for far-off and improbable risks.

“We need to address risks in order to be able to take advantage of the opportunities – but we need to be proportionate and practical. We must avoid the UK missing out on a potential AI goldrush.”

Baroness Stowell added: “The government must ensure exaggerated predictions of an AI driven apocalypse, coming from some of the tech firms, do not lead it to policies that close down open-source AI development or exclude innovative smaller players from developing AI services.

“We must be careful to avoid regulatory capture by the established technology companies in an area where regulators will be scrabbling to keep up with rapidly developing technology.”