Abstract

Conspiracy theory beliefs are notoriously persistent. Influential hypotheses propose that they fulfill important psychological needs, thus resisting counterevidence. Yet previous failures in correcting conspiracy beliefs may be due to counterevidence being insufficiently compelling and tailored. To evaluate this possibility, we leveraged developments in generative artificial intelligence and engaged 2190 conspiracy believers in personalized evidence-based dialogues with GPT-4 Turbo. The intervention reduced conspiracy belief by ~20%. The effect remained 2 months later, generalized across a wide range of conspiracy theories, and occurred even among participants with deeply entrenched beliefs. Although the dialogues focused on a single conspiracy, they nonetheless diminished belief in unrelated conspiracies and shifted conspiracy-related behavioral intentions. These findings suggest that many conspiracy theory believers can revise their views if presented with sufficiently compelling evidence.
That's quite interesting. It would be nice to test this approach on conspiracy theories that the mainstream media does not label as such. ChatGPT may carry biases of its own from the way it was trained, so it might be unable to refute less obvious theories...
It would be interesting to see how far we could push GPT. I guess it might just start making stuff up, you know, like a conspiracy theorist. Hahaha