Sam Altman talked about this in a tweet about GPT-5:

> if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that. Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot. ... Encouraging delusion in a user that is having trouble telling the difference between reality and fiction is an extreme case and it’s pretty clear what to do, but the concerns that worry me most are more subtle. ... If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer term well-being (however they define it), that’s bad. It’s also bad, for example, if a user wants to use ChatGPT less and feels like they cannot.
My girlfriend has bipolar II disorder with complex PTSD. She can have a month full of delusions, or a month with just one day of delusions but a lot of anxiety. Everything is linked to her past and traumatic events.
Here's why I'm bringing this up: 3-4 months ago I got her a GPT subscription as a lifeline for crises I can't always handle. Sometimes the toll is too big to bear, and I really couldn't manage it on top of my work; I even ended up writing to my boss about it.
She was happy with it, and it successfully became a source of reassurance about reality while she also learned tricks to self-manage. But a month ago she told me she had stopped using it: she felt GPT-4o was just running in circles, and when I read her prompts I got the same overwhelming feeling I get when I try to manage her crises. I think she gave GPT anxiety.
GPT-5, on the other hand, in full thinking mode was very helpful, albeit slow. I honestly don't care what people think about Sam Altman; the guy is trying, especially considering he has a sister with the same condition as my girlfriend's. Progress is being made, and I'm just happy it's making our lives a little bit easier.