pull down to refresh
21 sats \ 0 replies \ @DannyM 16h \ on: GPT-5 Jailbroken in 24 Hours, Exposing AI Security Risks security
There's no such thing as an LLM with "security". And there will never be. Yes, I'm using the word never.
LLMs fundamentally only act on text, text in, text out.
There's NO separation between "instructions" and "data". It's all text, hence cleverly formulated text will ALWAYS break any "security" that the company put. There's no way around it and there will never be.