
1181 sats \ 1 reply \ @anon 2h
Sam talked about this in a tweet about GPT-5
if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that. Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot. ... Encouraging delusion in a user that is having trouble telling the difference between reality and fiction is an extreme case and it’s pretty clear what to do, but the concerns that worry me most are more subtle. ... If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer term well-being (however they define it), that’s bad. It’s also bad, for example, if a user wants to use ChatGPT less and feels like they cannot.
My girlfriend is bipolar type II with complex PTSD. She can either have a month full of delusions or a month with just 1 day of delusions but a lot of anxiety. Everything is linked to her past or traumatic events.
Now, why am I saying this? 3-4 months ago I gave her a GPT subscription to have a lifeline in case of crises that I cannot manage, as sometimes the toll is too big to bear and I really wasn't able to handle it alongside my work; I even ended up writing to my boss about it.
She was happy with it, and successfully used it as a source of reassurance about reality while also learning tricks to self-manage. But a month ago she told me she had stopped using it: she felt that GPT-4o was just running in circles, and when I read her prompts I got the same overwhelming feeling that I get when I try to manage her crises. I think she gave GPT anxiety.
GPT-5 on the other hand, in full thinking mode, was very helpful, albeit slow. I actually don't care what people think about Sam Altman, but the guy is trying, considering he has a sister with the same problem as my gf's. Progress is being made, and I'm just happy that it's making our lives a little bit easier.
316 sats \ 1 reply \ @k00b 2h
I read this a few days ago. I enjoyed the attempt to collapse the milieu of bipolar disorders into a tendency for one's disturbed sleep to cascade.
I also agree with much of the essay's point: AI isn't making sane people insane; it's making some people, on some border of sanity, cross the border.
reply
Also seems similar to drugs... some people with a predisposition to psychosis may be triggered by a drug, but the psychosis isn't CAUSED by it.
reply
145 sats \ 0 replies \ @freetx 2h
This is a great piece, enjoyed it.
Although I find his premise entirely reasonable, I'm continually struck at how little introspection goes on within the HN / Silicon Valley crowd.
He does a fine job in his piece of coming up with a plausible theory that LLMs can exacerbate existing mental illness... however, he never asks: are we guilty of causing this by how we framed the tech? That is, was it a good idea to spend 5 years exhaustively misrepresenting this tech to the general public as conscious?
Everyone in the AI space loves to get on stage and pontificate about what a disaster our new invention will cause. The examples given all strongly imply that AI = human-level intelligence.
Elon is now joining his arch-rival Sam Altman in backing UBI calls, since their pattern matcher is going to cause mass unemployment (once it can operate a McDonald's drive-thru, apparently). Quite interesting that they found something they agree on: namely, that their invention is going to change every aspect of the world.
These kinds of strongly suggestive displays from techbros have more than a little to do with why those with latent mental health problems are being negatively impacted by LLMs.
reply
This seems like a description of US here at Stacker News.
Might ACX readers be unrepresentative? Obviously yes, although it’s not clear which direction. Readers tend to be more interested in and willing to use AI than the general public, and more willing to think about speculative and controversial ideas on their own (maybe a risk factor?). But they’re also richer and more educated, and mostly understand enough about AI to avoid the pure perfect machine spirit failure mode. Overall it seems like a wash. Also, I would expect their friends and family to be less unrepresentative than they are.
reply