
I've had some minimal training (10 sessions) in dealing with people who have psychosis, and I've been applying it to cases much like the ones described here: when I get a message or email (for example, on a security mailing list) from someone who seems completely delusional, together with their AI companion.
But I actually don't know if that's the right approach: never confirm or deny the bot's grandiosity, ask questions, correct factual mistakes.
Thus far it has always resulted in people giving up, but I don't know how it goes for them afterwards. Did they drop the bot? Or did they move on to something that will confirm their delusion? I don't know whether I should reach out and check in on people, or let it be, since I played a role in their delusion and I wouldn't want them to regress. This is very, very hard.
I don't have any formal training in this, but I spent three years working at a drop-in center for people who were chronically homeless (which was almost always synonymous with some form of psychosis).
The lesson I learned from those years was that I was most helpful when I realized that I played no different a role in what was going on with them than the chairs we sat on or the steps into the building. Whatever helpfulness I provided occurred when I didn't allow myself to feel personally responsible for their psychosis.
102 sats \ 0 replies \ @anon 17 Sep
IME, one common symptom of psychosis is believing you aren't delusional. I found this guy's framework helpful. It helps avoid the worst case, where they don't trust you, which is IMO why they end up on the streets: everyone who cares about them keeps telling them they're crazy and forcing them to do things, instead of accepting them and helping them however they want to be helped.
Listen (reflectively) - empathize (with how they feel about their experience) - agree (on truth) - partner (to help them accomplish their goals).
Yes, this is mainly why I haven't done anything to follow up. I just guide them to at least let go of the illusion that their AI has found a glorious bug in something that doesn't exist or is misunderstood.