Demos:
- Video demo "buy Apple Watch": https://youtu.be/OPblVgtbzec (alt)
- Video demo "fishing email": https://youtu.be/maabHiTIxqA (alt)
- Video demo "prompt injection site": https://youtu.be/cPgYv4fLQBo (alt)
- Video demo "malware download": https://youtu.be/p8XfDLIa1GE (alt)
We Put Agentic AI Browsers to the Test - They Clicked, They Paid, They Failed
TLDR;
AI Browsers promise a future where an Agentic AI working for you fully automates your online tasks, from shopping to handling emails. Yet, our research shows that this convenience comes with a cost: security guardrails were missing or inconsistent, leaving the AI free to interact with phishing pages, fake shops, and even hidden malicious prompts, all without the human’s awareness or ability to intervene.We built and tested three scenarios, from a fake Walmart store and a real in-the-wild Wells Fargo phishing site to PromptFix - our AI-era take on the ClickFix scam that hides prompt injection inside a fake captcha to directly take control of a victim’s AI Agent. The results reveal an attack surface far wider than anything we’ve faced before, where breaking one AI model could mean compromising millions of users simultaneously.This is the new reality we callScamlexity
- a new era of scam complexity, supercharged by Agentic AI. Familiar tricks hit harder than ever, while new AI-born attack vectors break into reality. In this world, your AI gets played, and you foot the bill.
Slides:
Closing the Scamlexity Gap
The path forward isn’t to halt innovation but to bring security back into focus before these systems go fully mainstream. Today’s AI Browsers are designed with user experience at the top of the priority stack. At the same time, security is often an afterthought or delegated entirely to existing tools like Google Safe Browsing, which is, unfortunately, insufficient.If AI Agents are going to handle our emails, shop for us, manage our accounts, and act as our digital front-line, they need to inherit the proven guardrails we already use in human-centric browsing: robust phishing detection, URL reputation checks, domain spoofing alerts, malicious file scanning, and behavioral anomaly detection - all adapted to work inside the AI decision loop.Security must be woven into the very architecture of AI Browsers, not bolted on afterward. Because as these examples show, the trust we place in Agentic AI is going to be absolute, and when that trust is misplaced, the cost is immediate.In the era ofScamlexity
, safety can’t be optional!
Please do yourself a favor: be careful out there, stackers!
Scamplexity
. They should have kept the p.