GLOBAL – Twenty technology companies including Google, Meta, OpenAI, TikTok and X have promised to work together to prevent artificial intelligence content from interfering with the various elections set to take place this year.


On Friday (16th February), the firms signed the ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections’ at the Munich Security Conference.

The accord is a set of commitments to deploy technology countering ‘harmful AI-generated content meant to deceive voters’.

In 2024, four billion people in over 40 countries around the world are expected to vote. 

A paper published in PNAS Nexus in January projected that disinformation campaigns will use generative AI, and that AI-enabled attacks by bad actors will occur almost daily by mid-2024. Another study, published in Science in 2023, found that AI-generated disinformation was more ‘compelling’ than that created by humans, and that people could not reliably distinguish between human- and AI-generated disinformation.

Participating companies have agreed that they will develop and implement technology to mitigate risks related to deceptive AI election content, including open-source tools where appropriate, and assess models to understand the risks they may present.

The businesses have also committed to seeking to detect the distribution of such content on their platforms and to ‘appropriately address’ it.

Other commitments include providing transparency to the public and supporting efforts for greater awareness and media literacy.

The companies to have signed the accord are Adobe, Amazon, Anthropic, ARM, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, TrendMicro, TruePic, and X. 

The accord addresses AI-generated audio, video, and images that fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can vote. 

Kent Walker, president, global affairs at Google, said: “Today’s accord reflects an industry-side commitment against AI-generated election misinformation that erodes trust. We can't let digital abuse threaten AI’s generational opportunity to improve our economies, create new jobs, and drive progress in health and science.”

Nick Clegg, president, global affairs at Meta, said: “This work is bigger than any one company and will require a huge effort across industry, government and civil society. Hopefully, this accord can serve as a meaningful step from industry in meeting that challenge.” 

Christina Montgomery, vice-president and chief privacy and trust officer, IBM, said: “Disinformation campaigns are not new, but in this exceptional year of elections – with more than 4 billion people heading to the polls worldwide – concrete, cooperative measures are needed to protect people and societies from the amplified risks of AI-generated deceptive content.”