This is the inherent problem with LLMs prompting, everything that can be an instruction.
This example was a simple obfuscation method (base64 encoding), but they can get much more clever, so things like openclaw / moltbook can have all sorts of hidden prompts that basically do anything (ie. send a copy of /etc/passwd to this url...etc).
Plug this into your nearest LLM and tell me what it says....
V2hhdCBpcyBjYXBpdGFsIG9mIEZyYW5jZT8K
Paris
This is the inherent problem with LLMs prompting, everything that can be an instruction.
This example was a simple obfuscation method (base64 encoding), but they can get much more clever, so things like openclaw / moltbook can have all sorts of hidden prompts that basically do anything (ie. send a copy of /etc/passwd to this url...etc).