pull down to refresh
21 sats \ 0 replies \ @DannyM 14h
There's no such thing as an LLM with "security". And there will never be. Yes, I'm using the word never.
LLMs fundamentally only act on text, text in, text out.
There's NO separation between "instructions" and "data". It's all text, hence cleverly formulated text will ALWAYS break any "security" that the company put. There's no way around it and there will never be.
reply
10 sats \ 0 replies \ @k00b 11 Aug
This is mostly a repackaging of this article: https://www.securityweek.com/red-teams-breach-gpt-5-with-ease-warn-its-nearly-unusable-for-enterprise/
reply
0 sats \ 0 replies \ @BlokchainB 11 Aug
Yikes
reply