NEWS | 29 November 2023

Regulation a ‘precondition’ for AI adoption, AI minister says


UK – Regulation is a “necessary precondition to innovation” in the development of generative artificial intelligence (AI), a government minister has told a House of Lords Select Committee.

[Image: Palace of Westminster]

Speaking to the Lords Communications Committee as part of its inquiry into large language models, Viscount Camrose, minister for AI and intellectual property at the Department for Science, Innovation and Technology, said the AI industry needed to embrace both innovation and safety.

“I think there is a false dichotomy between regulation and safety and innovation,” Camrose said.

“Our argument is that, first and foremost, large language models and AI in general are an opportunity for innovation that can generate prosperity, health and wealth across society, that can solve a huge range of societal problems and can bring enormous benefits.

“But in order for that to happen, AI must be safe. In order for AI to be adopted, AI must be safe and must be not only trusted, but worthy of people’s trust. I don’t think there is an either/or between safety and innovation. I think safety is a necessary precondition to innovation.”

Camrose added that he felt the language used in the current debate over the promise and threat of generative AI needed correcting, noting that both sides had important points to make.

“I regret the language often veers too much to the side of innovation or too much to the side of safety, and I wish we could all collectively find a form of language that allows us to speak of the importance of both sides,” he said.

Camrose recognised that the government needed to do more, citing the AI white paper it published this summer, which called for more innovation in the field.

“The tone, if not the substance, of the white paper as originally published very much stressed innovation at the expense of safety. I think the content of the white paper did not,” Camrose added.

“A lot of the dialogue coming from us has been about safety. If I wish we could consistently do something better, it would be to talk with equal emphasis about safety and innovation.”

Speaking at the same session, Professor Dame Angela McLean, chief scientific adviser at the Government Office for Science, was sceptical that a medical-style regulation and trials model could work for AI.

She said that “we don’t have the equivalent of those tests” that are seen in medical trials for the AI industry, and added that the AI Safety Institute needs to consider what those tests might look like in the future.

“I think the socio-technical question of how we will benefit from the promise of generative AI, and I think we must figure out how to benefit from it while first knowing and protecting ourselves from the risks, is a huge question,” she added.