NEWS | 24 November 2022

AI regulation and policy remains a priority for UK


UK – Artificial intelligence (AI) strategy and regulation is an “important area of focus” for the government and bespoke UK AI regulations are being designed with approaches elsewhere in the world in mind, according to the head of regulation at the Office for Artificial Intelligence.


Speaking at the Westminster eForum policy conference, Next steps for AI in the UK, earlier this week, Alex Leonidou, head of regulation at the Office for Artificial Intelligence, said that the government was looking to engage with stakeholders and experts to improve on current AI regulatory frameworks.

“This is a really important area of focus for the government,” Leonidou said. “We have made an effort at every stage of this to engage proactively and collaboratively with the ecosystem in general, whether that is business or academia.

“We are really aware of the value of outward looking engagement in this, given the pace of change and its significance.”

The government produced an AI strategy last year and released an AI regulation policy framework this year, in an attempt to future-proof the UK’s AI industry.

Leonidou said that the current non-statutory approach was trying to address gaps and overlaps in regulation while also keeping up with the fast pace of change in the AI sector, and “trying to look at where we perceive risks and harms coming from the application and context of use of AI and regulating with that in mind”.

Interoperability was also vital for future AI regulation in the UK, she added.

“We are, very deliberately, trying to come up with our own regulatory framework,” Leonidou said. “We are not copying and pasting anyone else’s. But that’s not to say we aren’t acutely aware of that interoperability point.

“We are not making something new for the sake of it – we are making something new because we think that is the right thing to do for the UK’s AI ecosystem and our position as a leader in this space.

“Whether with the EU AI Act or any other emerging framework around the world, interoperability is very much top of mind.”

Elsewhere in the conference, Stephen Almond, director of technology and innovation at the Information Commissioner’s Office (ICO), criticised the use of emotion analysis technologies, such as those that track people’s gaze, facial movements, heartbeat and skin moisture to draw inferences about people’s emotions.

He warned that the “science doesn’t stack up” for emotion analysis technology, and that users risk “systemic bias, inaccuracy and even discrimination” with its use.

“Organisations shouldn’t be using meaningless information to make what can be pretty meaningful decisions,” Almond added.

“We are yet to see any emotion analysis technologies that would meet the requirements of data protection law, although our door is always open for people who want to come to us.

“Organisations that are not acting responsibly, who are causing harm to vulnerable people, can expect to be investigated.”

Almond said the ICO would update its definition of ‘fairness’ in AI next year, and would offer innovation advice on the data protection implications of AI, building on its existing regulatory sandbox.

“We are continually scanning the horizon and investing our resources to look at novel risks that are emerging,” he explained.

Also speaking at the conference, Francois Candelon, global director at the BCG Henderson Institute, said more was needed to maintain the UK’s preeminent status in AI development and innovation.

“I believe you have been extremely strong in terms of driving technology development, generating academic research, growing AI talents and fostering an environment for start-ups to emerge,” Candelon said. “When I look at the take-up rate, there is still room for improvement.”

He added that lessons can be learnt from China, where his research suggests 80% of companies have adopted AI compared with 50% in the UK.

Candelon added that the Chinese government has played a “crucial catalyst role” in creating “vertical AI ecosystems” that can help adoption of AI in specific industries, with private companies then helping to lead areas of innovation.

“You already have many of the ingredients to nurture these AI ecosystems and transformers,” he added.

“I am really looking forward to the creation of these AI ecosystems. We might need to compete ecosystem to ecosystem rather than company to company. This is not the prerogative of one company or one player – all the stakeholders will have to work hand in hand.”
