pull down to refresh

Demos:


We Put Agentic AI Browsers to the Test - They Clicked, They Paid, They Failed

TLDR;

AI Browsers promise a future where an Agentic AI working for you fully automates your online tasks, from shopping to handling emails. Yet, our research shows that this convenience comes with a cost: security guardrails were missing or inconsistent, leaving the AI free to interact with phishing pages, fake shops, and even hidden malicious prompts, all without the human’s awareness or ability to intervene.
We built and tested three scenarios, from a fake Walmart store and a real in-the-wild Wells Fargo phishing site to PromptFix - our AI-era take on the ClickFix scam that hides prompt injection inside a fake captcha to directly take control of a victim’s AI Agent. The results reveal an attack surface far wider than anything we’ve faced before, where breaking one AI model could mean compromising millions of users simultaneously.
This is the new reality we call Scamlexity - a new era of scam complexity, supercharged by Agentic AI. Familiar tricks hit harder than ever, while new AI-born attack vectors break into reality. In this world, your AI gets played, and you foot the bill.

Slides:

Closing the Scamlexity Gap

The path forward isn’t to halt innovation but to bring security back into focus before these systems go fully mainstream. Today’s AI Browsers are designed with user experience at the top of the priority stack. At the same time, security is often an afterthought or delegated entirely to existing tools like Google Safe Browsing, which is, unfortunately, insufficient.
If AI Agents are going to handle our emails, shop for us, manage our accounts, and act as our digital front-line, they need to inherit the proven guardrails we already use in human-centric browsing: robust phishing detection, URL reputation checks, domain spoofing alerts, malicious file scanning, and behavioral anomaly detection - all adapted to work inside the AI decision loop.
Security must be woven into the very architecture of AI Browsers, not bolted on afterward. Because as these examples show, the trust we place in Agentic AI is going to be absolute, and when that trust is misplaced, the cost is immediate.
In the era of Scamlexity, safety can’t be optional!

Please do yourself a favor: be careful out there, stackers!
140 sats \ 1 reply \ @Artilektt 4h
Interesting research but "scamlexity" is a terrible word haha
reply
I asked Gemma to give me alternatives, and it wasn't much, except... Scamplexity. They should have kept the p.
reply
100 sats \ 1 reply \ @fiksn 3h
Until shops offer only archaic payment rails like credit cards there is no way I am allowing an AI agent to autonomously pay. OTOH if it was possible to pay with lightning there could be some simple MCP integration for paying invoices. I'd have a macaroon allowing 1 mil sats per day and then I could dynamically adjust limits for a specific agent on this service running on my node.
reply
NWC with approval notifications?
reply
100 sats \ 1 reply \ @freetx 7h
Perplexity has been emailing me regularly, trying to get me to install their new AI-agentic browser (Comet).
I have thus far refused, because of similar concerns. It seems like AI browsers will become huge security threats combined with huge privacy collection machine.
reply
Perplexity has been emailing me regularly, trying to get me to install their new AI-agentic browser (Comet).
That browser is in-scope of this research:
I have thus far refused, because of similar concerns.
Good decision!
reply