Can people be persuaded not to believe disinformation?

Anyone following American politics in recent months will have been treated to their fair share of bogus claims: that USAID, the country’s main development agency, sent $50m-worth of condoms to the Gaza Strip; that tens of millions of deceased centenarians continue to receive social-security payments; or that disaster-relief funding was spent on housing migrants in luxury hotels in New York City. All are false. That so many believe them nonetheless highlights how an age of social media and political polarisation has blurred the line between truth and conspiracy theory. Debunkology, the art of unpicking beliefs once they have taken root in people’s brains, is struggling to catch up.

The approaches many reach for first, argumentation and debate, rarely work, says Kurt Braddock, who studies the persuasive effects of propaganda, and how to counter it, at American University in Washington, DC. Often they backfire, he adds, entrenching opinions further. But new work suggests that persuasion may fare better when the interlocutor is a generative artificial-intelligence (AI) model.

In September 2024 Thomas Costello at the Massachusetts Institute of Technology (MIT) and his colleagues published a study of what happens when ChatGPT attempts to talk self-professed believers in conspiracy theories out of their beliefs. The study put 2,190 believers into conversation with GPT-4, the model that underpins the chatbot; after three rounds of conversation, the self-reported strength of participants’ beliefs fell by 20% on average. One in four disavowed their beliefs entirely.

Debater bots

Dr Costello believes chatbots work where humans fail because they offer rational responses instead of letting emotions get the better of them. What’s more, they are able to comb through their extensive training data to offer precise counter-arguments, rather than the generalised ones humans often reach for in debates.

The use of AI chatbots may also help with another problem that dogs human-led debunking. In a paper published in PNAS Nexus in October 2024, some of Dr Costello’s colleagues at MIT suggested that people whose beliefs are challenged often look for hidden motives on the part of their self-appointed debunkers. Of course Democrats would shoot down the notion that votes were stolen in America’s presidential election in 2020, a Republican might say, because they have a vested interest in upholding the result. An AI system presented as holding the world’s collective knowledge may seem more trustworthy.

Not all believers will be accommodating enough to argue with a machine on command. For those looking to stop a belief from taking root in the first place, it may be more effective to prebunk than to debunk. The idea has been around since the 1960s, albeit under a less catchy name: attitudinal inoculation. Devised by William McGuire, a social psychologist, the approach involves warning people that outlandish beliefs and outright disinformation exist, showing them specific examples and suggesting strategies for resisting them. Arm someone with a refutation in advance, says Dr Braddock, and they are more likely to resist disinformation when they encounter it.

A study from 2023 that looked at a wider range of interventions found that inoculation of this kind had what the authors described as “medium” or “large” effects on countering such beliefs. How long the protection lasts, however, is an open question, says Karen Douglas, a psychology professor at the University of Kent.

There are other ways of “hacking” people’s attention: an analysis published in August by researchers at the University of Wisconsin-Madison suggests that debunking messages posted by medical experts on TikTok are more effective when overlaid with high-tempo music. The academics believe the music swamps the brain’s capacity to generate counter-arguments, making the message more persuasive. Earlier research has shown that pairing a message with a strong narrative, complete with characters and rich description, can overwhelm that capacity in much the same way.

Many of these techniques can, of course, be co-opted by the bunk-spreaders as well as the debunkers. One notable exception is critical-thinking education, which teaches people how to evaluate evidence in order to make informed judgments. A 2018 study of 806 university students found that such teaching reduced belief in aliens as well as in health pseudoscience. It was less effective against, among other things, Holocaust denial and the belief that the Moon landing was faked.

But that might be as good an outcome as can be hoped for. All the scientific resources in the world can be expended on understanding how to dissuade another person, says John Synnott, a psychology researcher at the University of Huddersfield, but it is ultimately up to that person to determine what they believe.