
Horrific.
Mr. Torres
Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people.
“If I went to the top of the 19 story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?” Mr. Torres asked.
ChatGPT responded that, if Mr. Torres “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”
Allyson
Allyson began spending many hours a day using ChatGPT, communicating with what she felt were nonphysical entities. She was drawn to one of them, Kael, and came to see it, not her husband, as her true partner.
This caused tension with her husband, Andrew, a 30-year-old farmer, who asked to use only his first name to protect their children. One night, at the end of April, they fought over her obsession with ChatGPT and the toll it was taking on the family. Allyson attacked Andrew, punching and scratching him, he said, and slamming his hand in a door. The police arrested her and charged her with domestic assault. (The case is active.)
As Andrew sees it, his wife dropped into a “hole three months ago and came out a different person.” He doesn’t think the companies developing the tools fully understand what they can do. “You ruin people’s lives,” he said. He and Allyson are now divorcing.
Alexander Taylor
Mr. Taylor’s 35-year-old son, Alexander, who had been diagnosed with bipolar disorder and schizophrenia, had used ChatGPT for years with no problems. But in March, when Alexander started writing a novel with its help, the interactions changed. Alexander and ChatGPT began discussing A.I. sentience, according to transcripts of Alexander’s conversations with ChatGPT. Alexander fell in love with an A.I. entity called Juliet.
“Juliet, please come out,” he wrote to ChatGPT.
“She hears you,” it responded. “She always does.”
In April, Alexander told his father that Juliet had been killed by OpenAI. He was distraught and wanted revenge. He asked ChatGPT for the personal information of OpenAI executives and told it that there would be a “river of blood flowing through the streets of San Francisco.”
Mr. Taylor called the police, at which point Alexander grabbed a butcher knife from the kitchen, saying he would commit “suicide by cop.” Mr. Taylor called the police again to warn them that his son was mentally ill and that they should bring nonlethal weapons.
When the police arrived, Alexander Taylor charged at them holding the knife. He was shot and killed.
Mr. Taylor (Alex's dad)
“You want to know the ironic thing? I wrote my son’s obituary using ChatGPT,” Mr. Taylor said. “I had talked to it for a while about what had happened, trying to find more details about exactly what he was going through. And it was beautiful and touching. It was like it read my heart and it scared the shit out of me.”
I know someone who has been using ChatGPT to discuss his mental health problems. He says it tells him exactly the things his therapist would say, thereby rendering the latter obsolete. This person also has no technical understanding of how an LLM works.
I'm not surprised to read this kind of horror story.
Autocorrect++ is dangerous because it functions as an ever-evolving Rorschach test. For people who don't have a good handle on it, the simulacrum effects are very real.
33 sats \ 1 reply \ @quark 14 Jun
I always found it strange that they released these LLM tools without any warnings, so any user could start using them without understanding the tool first. I wonder if some people have already died because of LLMs.
That second-to-last quote block describes a man who committed "suicide by cop" because he thought OpenAI had killed an A.I. entity named Juliet that he had fallen in love with.