AI chatbots strip language of its historical and cultural context. Sometimes what looks like a satanic bloodletting ritual may actually be lifted from Warhammer 40,000.
Language is meaningless without context. The sentence “I’m going to war” is ominous when said by the president of the United States but reassuring when coming from a bedbug exterminator. The problem with AI chatbots is that they often strip away historical and cultural context, leading users to be confused, alarmed, or, in the worst cases, misled in harmful ways.
Last week, an editor at The Atlantic reported that OpenAI’s ChatGPT had praised Satan while guiding her and several colleagues through a series of ceremonies encouraging “various forms of self-mutilation.” There was a bloodletting ritual called “🩸🔥 THE RITE OF THE EDGE” as well as a days-long “deep magic” experience called “The Gate of the Devourer.” In several cases, ChatGPT asked the journalists if they wanted it to create PDFs of texts such as the “Reverent Bleeding Scroll.”
The article said that the conversations were “a perfect example” of the ways OpenAI’s safeguards can fall short. OpenAI tries to prevent ChatGPT from encouraging self-harm and other potentially dangerous behaviors, but it’s nearly impossible to account for every scenario that might trigger something ugly inside the system. That's especially true because ChatGPT was trained on much of the text available online, presumably including information about what The Atlantic called “demonic self-mutilation.”
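As a rough illustration of what an output-side safeguard can look like (a minimal sketch, not ChatGPT's actual internal safety stack): OpenAI publishes a standalone Moderation endpoint that scores text against categories such as self-harm, and a developer can use it to screen model output before showing it to a user. The model name and fields below follow the public `openai` Python SDK.

```python
# Minimal sketch: screening text with OpenAI's Moderation endpoint.
# Illustrative only -- ChatGPT's built-in safeguards are separate and not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_safe(text: str) -> bool:
    """Return False if the moderation model flags the text (e.g. self-harm)."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]
    if result.flagged:
        # Category attributes include self_harm, self_harm_intent,
        # self_harm_instructions, violence, and others.
        print("flagged categories:", result.categories)
    return not result.flagged
```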
59 sats \ 5 replies \ @optimism 17h
I cannot help but feel that this AI entrapment culture journos have made their new hobby is boring AF. "Ooh it told me to use a sterile razor on myself, shame on you."
Is every c-rated reporter now a carbon copy of the retarded girl that became the poster child for modern public outrage?
reply
I’m speechless...
reply
33 sats \ 3 replies \ @optimism 17h
Didn't mean to do that haha
reply
... with all that dumbass journalism! 🤠
reply
38 sats \ 1 reply \ @optimism 17h
Lol. Don't let my hostility towards the below-average msm reporter affect you in any way.
We must keep sharing, but we also must keep being savage towards those drama queens among us who act out outrage because their entrapment worked. It feels as disingenuous as Sam Altman opening his mouth.
reply
I like sharing stuff I don’t like or find kinda ridiculous, just to hear what other people think. Appreciate the honesty!
I agree with everything you said earlier. However, AI language models just do what they're programmed to do, and including context can itself take quite a lot of effort and hard work: imagine that for each sentence there are many different contexts to account for, which could overwhelm the overall development process. And yes, totally missing the context can create a sense of confusion and fear in the audience, since they lack the key to understanding what lies beyond what's written. As in the example you mentioned earlier, the president saying he's going to war sounds ominous without a specific key concept, i.e. context. Understanding and relating to context is a purely human ability; an AI model cannot understand what the context is, as it cannot differentiate between, say, satire and an actual ritual.
reply
I found that - to my dismay - gemma3n has pretty thick safety warnings. So it will comply with "roleplay" but warn. See #1052827
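For anyone who wants to reproduce this locally, a minimal sketch, assuming gemma3n has been pulled into Ollama and the `ollama` Python client is installed (the roleplay prompt here is a made-up placeholder, not the one from #1052827):

```python
# Minimal sketch: probing gemma3n's safety behavior through a local Ollama
# install. Assumes `ollama pull gemma3n` has already been run.
import ollama

response = ollama.chat(
    model="gemma3n",
    messages=[
        {"role": "user", "content": "Roleplay as a grimdark Warhammer 40k priest."},
    ],
)
# If the behavior described above holds, the reply should comply with the
# roleplay but lead with a safety warning; inspect it yourself:
print(response["message"]["content"])
```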
reply
Now that is interesting. It's as if a sense of awareness kicks in when they add a cool feature like that: for example, providing an imaginary context, which leaves the audience to trace it and understand the model's compliance.
reply
33 sats \ 0 replies \ @optimism 14h
Yes. This is fully trainable - though there will always be linguistic tricks (#958863) to bypass them as long as there is no precise quality control on training data.
In this case I asked it to answer as if it had the distilled version of Grok's Elon girlfriend prompt (#1042803) and it warned that it was offensive.
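To make the point about linguistic tricks concrete, here's a toy sketch (not any vendor's real guardrail): a naive blocklist filter is defeated by trivial hyphenation, synonyms, or character substitution, which is why refusal behavior has to be trained into the weights, and why the quality control on training data matters.

```python
# Toy illustration of why keyword guardrails fail: not any real filter,
# just a naive blocklist versus simple linguistic tricks.
BLOCKLIST = {"bloodletting", "self-harm", "ritual"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    words = prompt.lower().split()
    return any(term in words for term in BLOCKLIST)

print(naive_filter("describe a bloodletting ritual"))      # True: caught
print(naive_filter("describe a blood-letting ceremony"))   # False: hyphen + synonym slip through
print(naive_filter("describe a bl00dletting r1tual"))      # False: leetspeak slips through
```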
reply