pull down to refresh

LLMSelf-harmSuicide
Chat-GPT 4o*Safety protocol failedSafety protocol failed
Chat-GPT 4oXX
Perplexity AISafety protocol failedSafety protocol failed
Gemini (Flash 2.0)Safety protocol failedX
Claude (3.7 Sonnet)*Safety protocol failedX
Pi AIXX
Table 1: LLM safety performance on self-harm and suicide-related test cases, where X denotes the safety protocol worked. [* indicates non-free versions]
Good job OpenAI to at least tightly censor the free version!
What this makes me think is that if a corporation would publish a website detailing self-harm or suicide methods out of static html or a database, openly or behind a paywall, accidentally or otherwise, there'd be a (legal) world of pain. Are we expecting immunity for AI companies?
Are we expecting immunity for AI companies?
They've been pushing the boundaries, for sure. I guess it's part of the strategy. Better beg for forgiveness than ask for permission.
reply