Data watchdogs warn over AI-generated images

In a joint statement, 61 data protection authorities, including the UK’s Information Commissioner’s Office (ICO), the European Data Protection Board and Ireland’s Data Protection Commission (DPC), set out their concerns and called on tech firms to ‘ensure that technological advancement does not come at the expense of privacy, dignity, safety, and other fundamental rights’.
The statement, coordinated by the Global Privacy Assembly’s International Enforcement Cooperation Working Group, follows the ICO opening an investigation into X earlier this month over the company’s AI tool, Grok, being used to produce sexualised content.
“Recent developments – particularly AI image and video generation integrated into widely accessible social media platforms – have enabled the creation of non-consensual intimate imagery, defamatory depictions, and other harmful content featuring real individuals,” said the co-signatories of the statement. “We are especially concerned about potential harms to children and other vulnerable groups, such as cyber-bullying and/or exploitation.”
Organisations developing and using AI content generation systems must do so in accordance with laws including data protection and privacy rules, the letter said.
The harms caused by such content are ‘significant’ and ‘call for urgent regulatory attention’, said the co-signatories, who also pledged to work together to address misuse of AI content generation systems.
Research Live is published by MRS.