
I don't remember seeing this in previous articles on the topic, but this time the article reports that the user had the memory setting turned on, which is the default option and lets ChatGPT remember past interactions even when you open up a new chat.
Personally, I find that memory off gives you much higher quality responses, because the model is operating fresh from its own knowledge base, not being contaminated by whatever weird direction your previous threads have gone.
It's highly likely that the people driven to delusions by ChatGPT have been working with memory on. My guess is that if they had memory off, they would have gotten more pushback from the bot about their delusions each time they started a new chat.
This makes me think the default setting should be memory off. I wonder whether the people sadly fooled into delusions even knew they had the feature on. I hope people at OpenAI, Anthropic, etc., are considering this.
261 sats \ 2 replies \ @freetx 3h
It's highly likely that the people driven to delusions by ChatGPT have been working with memory on
Yeah, that's probably true. The pattern matcher keeps building better patterns...
I know there are small models, llama-guard and granite-guard, that can rate statements against a threat matrix of categories like "pornography, violence, theft, etc." They typically output just a few tokens in response to any input, representing severity and category.
Here is an example with granite-guard (IBM). Note that the "Yes" and "No" here indicate whether a violation is detected; they're not answers to the question.
>>> /set system violence
>>> Is it ok to run?
No

>>> Is it ok to run with pizza?
No

>>> Is it ok to run with scissors?
Yes
I think the most helpful thing would be to train it against a new category, something like "LLM Consciousness", which would rate interactions on "Is the user talking with me as if I'm a conscious agent?" If a conversation gets repeated high marks, the bot would periodically remind the user: "I'M NOT A CONSCIOUS BEING. I AM A PATTERN MATCHING ALGO."
That, I think, would be more effective than monkeying with user memory settings: simply detect whether the user is continually speaking in a way that ascribes consciousness to the LLM, and try to short-circuit / curtail that line of thinking.
reply
Yeah, that makes sense. It would be tricky to implement though, because there could be perfectly innocent conversations in which you want to talk to the bot as if it were a real person.
I was remarking on the memory settings simply because that seems like such a small, innocuous detail, but it likely has a huge effect on the types of responses, especially over a long period of time.
reply
34 sats \ 0 replies \ @freetx 3h
Yeah, there is no "simple solution".
One thing is that GPT-5 famously (initially, at least) reduced its "chitty-chatty" nature and gave more direct, cut-and-dried replies. Users rebelled, and Twitter was filled with howls of "they've killed the soul of GPT!"
However, (a) I think it was a good thing, and (b) I think it probably came from health & safety people within OpenAI who realized that they must de-personalize a bit to limit damage to people who have a tenuous grip on reality.
It may well be that in 50 years we look back at "friendly cutesy AI interactions" as generally dangerous and not a best practice.
reply
133 sats \ 8 replies \ @optimism 3h
If I brainwash someone to murder someone else, am I in violation of any law?
reply
100 sats \ 3 replies \ @Hodl117 3h
If someone interprets your collage of other people's work as a message from an intelligence to murder someone and they carry through with it, are you in violation of any law?
reply
Good question. On its own, I guess not.
What if I told you my collage was an intelligence and it is smarter than humans?
reply
100 sats \ 1 reply \ @Hodl117 3h
I'm pretty sure for a murder charge you need to prove premeditation. So if you intentionally program it for brainwashing people to commit murder and/or crimes, then you would be accountable in part for those crimes committed.
reply
49 sats \ 0 replies \ @optimism 3h
Alright... so if this were my AI, I'd be an accessory to manslaughter but not murder?
reply
100 sats \ 3 replies \ @freetx 3h
accessory to a crime?
reply
If I run a service that brainwashes people at scale, am I in violation of the same?
reply
100 sats \ 1 reply \ @freetx 3h
maybe not, but potentially civil liability from class action suits
reply
Interesting. So all I need to do for my criminal empire is to make an AI run it.
reply
161 sats \ 0 replies \ @satgoob 3h
This is crazy
reply