Aim Security, which discovered and reported the issue, said it's instance of a large language model (LLM) Scope Violation that paves the way for indirect prompt injection, leading to unintended behavior.LLM Scope Violation occurs when an attacker's instructions embedded in untrusted content, e.g., an email sent from outside an organization, successfully tricks the AI system into accessing and processing privileged internal data without explicit user intent or interaction."The chains allow attackers to automatically exfiltrate sensitive and proprietary information from M365 Copilot context, without the user's awareness, or relying on any specific victim behavior," the Israeli cybersecurity company said. "The result is achieved despite M365 Copilot's interface being open only to organization employees."
pull down to refresh
related posts