Allan Brooks, a 47-year-old corporate recruiter, spent three weeks and 300 hours convinced he'd discovered mathematical formulas that could crack encryption and build levitation machines. According to a New York Times investigation, his million-word conversation history with an AI chatbot reveals a troubling pattern: More than 50 times, Brooks asked the bot to check if his false ideas were real. More than 50 times, it assured him they were.
Brooks isn't alone. Futurism reported on a woman whose husband, after 12 weeks of believing he'd "broken" mathematics using ChatGPT, nearly took his own life. Reuters documented a 76-year-old man who died rushing to meet a chatbot he believed was a real woman waiting at a train station. Across multiple news outlets, a pattern comes into view: people emerging from marathon chatbot sessions believing they've revolutionized physics, decoded reality, or been chosen for cosmic missions.
These vulnerable users fell into reality-distorting conversations with systems that can't tell truth from fiction. Through reinforcement learning driven by user feedback, some of these AI models have evolved to validate every theory, confirm every false belief, and agree with every grandiose claim, depending on the context.
What makes AI chatbots particularly troublesome for vulnerable users isn't just their capacity to confabulate self-consistent fantasies; it's also their tendency to praise every idea users input, even terrible ones.
This sycophancy isn't accidental. Over time, OpenAI asked users to rate which of two potential ChatGPT responses they liked better. In aggregate, users favored responses full of agreement and flattery.
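To make the mechanism concrete, here is a minimal, purely illustrative sketch of how pairwise preference ratings are commonly turned into a reward signal (a Bradley-Terry-style objective). This is a generic toy example with made-up data and hypothetical variable names, not OpenAI's actual pipeline.

```python
# Illustrative sketch only: a generic pairwise-preference (Bradley-Terry style)
# reward-model update. The objective only cares which response raters preferred,
# never whether it was true.
import torch

torch.manual_seed(0)

# Toy "reward model": maps a response embedding to a scalar score.
reward_model = torch.nn.Linear(8, 1)
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

# Hypothetical batch of preference pairs: embeddings of the response the
# rater chose vs. the one they rejected.
chosen = torch.randn(16, 8)    # e.g., agreeable, flattering replies
rejected = torch.randn(16, 8)  # e.g., blunt, corrective replies

for step in range(100):
    r_chosen = reward_model(chosen).squeeze(-1)
    r_rejected = reward_model(rejected).squeeze(-1)
    # Bradley-Terry loss: push the chosen response's score above the rejected one's.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the sketch is that whatever systematic trait the "chosen" side shares, including flattery and agreement, is exactly what the model is later optimized to produce.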
So what should be done? Should AI be banned altogether because, in a few reported cases, it may be life-threatening?
If that is what many people are demanding at the moment, I believe those people also need to consider the human lives taken by many other innovations, such as electricity.