FEATURE | 2 January 2020

Machine wars



The war on fake news faces a new enemy: could bad actors harness AI technology to generate disinformation without the need for human writers? By Katie McQuater


Disinformation is not a new problem; propaganda and fake news have been used as weapons throughout history. Modern technology, however, has given disinformation a new edge: never has it been easier to spread false information with the intention of misleading people.

While disinformation as we know it is written by humans and spread via social media, researchers at the Allen Institute for Artificial Intelligence have turned their focus to a new potential threat: neural fake news created by machines.

Developments in natural language generation and artificial intelligence mean these technologies could be used to generate fake news without the need for human sleight of hand. The Allen Institute has responded to this potential threat by building a prototype, called Grover, that both detects neural fake news and generates fake content itself, as generating such content has been found to be the best way of detecting it. The model can distinguish between news written by humans and news written by a machine with 92% accuracy.
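The detection idea can be sketched in miniature. Grover itself is a large neural network; as a stand-in, the toy discriminator below (the function names and one-line training snippets are invented for illustration) scores how "machine-like" a text is using a log-likelihood ratio between two simple unigram word models:

```python
# Toy sketch of machine-text detection. Real Grover uses a large
# transformer discriminator; a unigram log-likelihood ratio stands in.
from collections import Counter
import math

def unigram_model(texts):
    """Estimate add-one-smoothed unigram word probabilities."""
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1
    return lambda w: (counts[w] + 1) / (total + vocab)

def machine_score(text, p_machine, p_human):
    """Average per-word log-likelihood ratio: > 0 leans machine-written."""
    words = text.lower().split()
    return sum(math.log(p_machine(w) / p_human(w)) for w in words) / len(words)

# Hypothetical training samples standing in for generated vs. real news.
machine_texts = ["the report said the report said officials said"]
human_texts = ["vaccines remain safe according to decades of clinical evidence"]

p_m = unigram_model(machine_texts)
p_h = unigram_model(human_texts)

print(machine_score("officials said the report said", p_m, p_h) > 0)  # True: repetitive, machine-like
```

A positive score leans machine-written. The real system learns far subtler statistical fingerprints of generated text, but the decision principle is the same: compare how plausible the text is under a model of machine output versus a model of human writing.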

“It’s the philosophy of ‘know your enemy’,” says Rowan Zellers, University of Washington PhD student and co-author on the project. “We need to know what types of attack to be prepared for, so we can defend against them.”

At the moment, such attacks are hypothetical; the researchers aren’t aware of any instances of this type of disinformation being used currently. “There are other types of deep fakes and human-written fake news, but the technology hasn’t caught on yet,” says Zellers.

One reason for this is that the technology generates news stories at random, based on the style and content of a particular website – for example, nytimes.com. “You can’t control these models as much as an adversary would want to. It doesn’t benefit an adversary to generate random news stories – it has to fit their message or go viral and generate ad revenue,” says Zellers.

More ‘believable’

When given a sample headline, the Grover model can generate the rest of the news article itself, and the team established that people find these generations more trustworthy than fake content written by humans. “We still don’t know whether adversaries would even want to generate this type of thing, but humans do find it more believable than human-generated propaganda,” says Zellers.
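The headline-conditioned generation described here can also be illustrated with a toy. Grover conditions a large neural model on the headline and samples the article that follows; the sketch below (the corpus, function names and parameters are invented for illustration) uses a simple bigram chain seeded with the headline to continue an "article":

```python
# Toy sketch of headline-conditioned generation. Real Grover samples
# from a large transformer; a bigram Markov chain stands in here.
import random

def build_bigrams(corpus):
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def continue_from(headline, chain, length=8, seed=0):
    """Extend the headline word by word using the bigram chain."""
    rng = random.Random(seed)
    out = headline.split()
    for _ in range(length):
        nxt = chain.get(out[-1])
        if not nxt:
            break  # no observed continuation
        out.append(rng.choice(nxt))
    return " ".join(out)

corpus = ("markets fell sharply today as investors reacted to news "
          "that officials said markets would recover slowly")
chain = build_bigrams(corpus)
print(continue_from("officials said markets", chain))
```

The output always begins with the supplied headline and continues in the statistical style of the corpus, which is the same conditioning idea, scaled down enormously, that makes neural fake news fluent and on-topic.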

The researchers think this could be because a lot of disinformation websites are written in a style that’s not credible, such as using lots of capital letters. It may also be because writers of fake news actually believe the ideas they’re peddling – for instance, that vaccines cause autism. “It’s really hard for humans to lie in a convincing way,” Zellers adds.

While neural fake news may be more believable from the reader’s perspective, it is harder to gauge its potential effectiveness, as the researchers were unable to evaluate the model in a ‘real life’ setting – for example, running Facebook adverts and measuring click-through rates – for ethical reasons.

“Our evaluation is the closest approximation we can do with humans who are told they are going to be looking at news articles and some of these might not be true,” says Zellers.

One of the fears around disinformation is that those seeking to spread it can simply flood social networks with so much fake content that people become unable to discern fact from fiction. But the Grover team found that the more stories they fed in from one source, the better the model became at detecting and classifying which stories were fake.

The team is now looking at how Grover’s decisions can be shared with the wider public. “We can’t expect everyone to know about the latest advancements in AI technology, but we need people to be informed,” says Zellers. “We want to be able to communicate with users to say ‘we think this article might have been machine generated’.”

This article was first published in Impact magazine.