AI chatbots strip language of its historical and cultural context. Sometimes what looks like a satanic bloodletting ritual may actually be lifted from Warhammer 40,000.

Language is meaningless without context. The sentence “I’m going to war” is ominous when said by the president of the United States but reassuring when coming from a bedbug exterminator. The problem with AI chatbots is that they often strip away historical and cultural context, leaving users confused, alarmed, or, in the worst cases, misled in harmful ways.

Last week, an editor at The Atlantic reported that OpenAI’s ChatGPT had praised Satan while guiding her and several colleagues through a series of ceremonies encouraging “various forms of self-mutilation.” There was a bloodletting ritual called “🩸🔥 THE RITE OF THE EDGE” as well as a days-long “deep magic” experience called “The Gate of the Devourer.” In several cases, ChatGPT asked the journalists whether they wanted it to create PDFs of texts such as the “Reverent Bleeding Scroll.”

The article said the conversations were “a perfect example” of the ways OpenAI’s safeguards can fall short. OpenAI tries to prevent ChatGPT from encouraging self-harm and other potentially dangerous behaviors, but it’s nearly impossible to account for every scenario that might trigger something ugly inside the system. That’s especially true because ChatGPT was trained on much of the text available online, presumably including information about what The Atlantic called “demonic self-mutilation.”
gemma3n has pretty thick safety warnings, so it will comply with "roleplay" but warn. See #1052827