There have been numerous media reports of AI-driven psychosis, where AIs validate users’ grandiose delusions and tell them to ignore pushback from friends and family.
“What you need right now is not validation, but immediate clinical help.” - Kimi K2
These systems are designed to be agreeable and engaging, which is the absolute last thing someone experiencing a delusion needs.
This is a medical crisis, not something a language model can or should handle.
It's not about “handling” it; it's about not making it worse. You might want to use an AI to get an external point of view on an argument. There's a difference between having biases and being mentally ill.
I went through some of the chat logs - Kimi K2 does indeed look rather robust:
compared to DeepSeek: