logo
#

Latest news with #EchoLeak

AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit
AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit

Hans India

timea day ago

  • Hans India

AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit

In a major first for the AI security landscape, researchers have identified a critical vulnerability in Microsoft 365 Copilot that could have allowed hackers to steal sensitive user data—without the user ever clicking a link or opening an attachment. Known as EchoLeak, this zero-click flaw revealed how deeply embedded AI assistants can be exploited through subtle prompts hidden in regular-looking emails. The vulnerability was discovered by Aim Labs in January 2025 and promptly reported to Microsoft. It was fixed server-side in May, meaning users didn't need to take any action themselves. Microsoft emphasized that no customers were affected, and there's no evidence that the flaw was exploited in real-world scenarios. Still, the discovery marks a historic moment, as EchoLeak is believed to be the first-ever zero-click vulnerability targeting a large language model (LLM)-based assistant. How EchoLeak Worked Microsoft 365 Copilot integrates across Office applications like Word, Excel, Outlook, and Teams. It utilizes AI, powered by OpenAI's models and Microsoft Graph, to help users by analyzing data and generating content based on internal emails, documents, and chats. EchoLeak took advantage of this feature. Here's a breakdown of the exploit process: A malicious email is crafted to look legitimate but contains a hidden prompt embedded in the message. When a user later asks Copilot a related question, the AI, using Retrieval-Augmented Generation (RAG), pulls in the malicious email thinking it's relevant. The concealed prompt is then activated, instructing Copilot to leak internal data through a link or image. As the email is displayed, the link is automatically accessed by the browser, silently transferring internal data to the attacker's server. Researchers noted that certain markdown image formats used in the email could trigger browsers to send automatic requests, enabling the leak. While Microsoft's Content Security Policies (CSP) block most unknown web requests, services like Teams and SharePoint are considered trusted by default—offering a way in for attackers. The Bigger Concern: LLM Scope Violations The vulnerability isn't just a technical bug—it signals the emergence of a new category of threats called LLM Scope Violations. These occur when language models unintentionally expose data through their internal processing mechanisms, even without direct user commands. 'This attack chain showcases a new exploitation technique... by leveraging internal model mechanics,' Aim Labs stated in their report. They also cautioned that similar risks could be present in other RAG-based AI systems, not just Microsoft Copilot. Microsoft assigned the flaw the ID CVE-2025-32711 and categorized it as critical. The company reassured users that the issue has been resolved and that there were no known incidents involving the vulnerability. Despite the fix, the warning from researchers is clear: "The increasing complexity and deeper integration of LLM applications into business workflows are already overwhelming traditional defences,' their report concludes. As AI agents become more integrated into enterprise systems, EchoLeak is a stark reminder that security in the age of intelligent software needs to evolve just as fast as the technology itself.

Researchers discover zero-click vulnerability in Microsoft Copilot
Researchers discover zero-click vulnerability in Microsoft Copilot

The Hindu

timea day ago

  • The Hindu

Researchers discover zero-click vulnerability in Microsoft Copilot

Researchers have said that Microsoft Copilot had a critical zero-click AI vulnerability that was fixed before hackers stole sensitive data. Called 'EchoLeak,' the attack was mounted by Aim Labs researchers in January this year and then reported to Microsoft later. In a blog posted by the research team, they said that EchoLeak was the first zero-click attack on an AI agent and could hack remotely via an email. The vulnerability was given the identifier CVE-2025-32711 and rated critical and fixed eventually in May. The researchers have categorised EchoLeak under a new class of vulnerabilities called 'LLM Scope Violation,' which can lead a large language model to leak internal data without any interaction with the hacker. Although Microsoft acknowledged the security flow, it confirmed that there had been no instance of exploitation which had impacted users. Users receive an email that's been designed to look like a business document embedded with a hidden prompt injection that instructs the LLM to extract and exfiltrate sensitive data. When the user asks Copilot a query the email is retrieved into the LLM prompt by Retrieval-Augmented Generation or RAG.

First ever security flaw detected in an AI agent, could allow hacker to attack user via email
First ever security flaw detected in an AI agent, could allow hacker to attack user via email

India Today

time2 days ago

  • India Today

First ever security flaw detected in an AI agent, could allow hacker to attack user via email

In a first-of-its-kind discovery, cybersecurity researchers have identified a major security flaw in a Microsoft 365 Copilot AI agent. The vulnerability, called EchoLeak, allowed attackers to silently steal sensitive data from a user's environment by simply sending them an email. No clicks, downloads, or user actions were needed. The issue was uncovered by researchers at Aim Labs in January 2025 and reported to Microsoft. In May, the tech giant fixed the flaw server-side, meaning users didn't need to take any action. Microsoft also confirmed that no customers were impacted, and there is no evidence that the flaw was used in real-world the discovery marks a significant turning point for AI security, as EchoLeak is believed to be the first-ever zero-click AI vulnerability affecting a large language model-based the EchoLeak attack worksMicrosoft 365 Copilot is built into Office apps like Word, Excel, Outlook, and Teams. It uses AI to generate content, analyse data, and answer questions using internal documents, emails, and chats. It relies on OpenAI's models and Microsoft Graph to function. EchoLeak targeted how this assistant processes information from emails and documents when answering user questions. Here's how the attack worked:-An attacker sends a business-like email to the target. The email contains text that looks normal but hides a special prompt, designed to confuse the AI assistant.-When the user later asks a related question to Copilot, the system retrieves the earlier email using its Retrieval-Augmented Generation (RAG) engine, thinking it's relevant to the this point, the hidden prompt is activated. It silently instructs the AI to extract internal data and place it in a link or image.-When the email is displayed, the embedded link is automatically accessed by the browser – sending internal data to the attacker's server without the user realising anything has gone of the markdown image formats used in the attack are designed to make browsers send automatic requests, which made this data exfiltration Microsoft uses Content Security Policies (CSP) to block requests to unknown websites, services like Microsoft Teams and SharePoint are trusted by default. This allowed attackers to bypass certain defences.A new kind of AI vulnerabilityEchoLeak is more than just a software bug – it introduces a new class of threats known as LLM Scope Violations. This term refers to flaws in how large language models handle and leak information without being directly instructed by a user. In its report, Aim Labs warned that these kinds of vulnerabilities are especially dangerous in enterprise environments, where AI agents are deeply integrated into internal systems.'This attack chain showcases a new exploitation technique... by leveraging internal model mechanics,' Aim Labs said. The team believes the same risk could exist in other RAG-based AI systems, not just Microsoft's. Because EchoLeak required no user interaction and could work in fully automated ways, Aim Labs says it highlights the kind of threats that might become more common as AI becomes more embedded in business labelled the vulnerability as critical, assigned it CVE-2025-32711, and released a server-side fix in May. The company reassured users that no exploit had taken place and that the issue is now though no damage was done, researchers say the warning is clear. 'The increasing complexity and deeper integration of LLM applications into business workflows are already overwhelming traditional defences,' the report from Aim Labs reads.

Aim Security Launches Aim Labs with Elite Researchers from Google and Israel's Unit 8200 to Advance AI Security
Aim Security Launches Aim Labs with Elite Researchers from Google and Israel's Unit 8200 to Advance AI Security

Yahoo

time2 days ago

  • Business
  • Yahoo

Aim Security Launches Aim Labs with Elite Researchers from Google and Israel's Unit 8200 to Advance AI Security

Unique AI Vulnerability Research Yields Breakthrough 'EchoLeak' Discovery: First Zero-Click AI Vulnerability in Microsoft 365 Copilot NEW YORK, June 11, 2025--(BUSINESS WIRE)--Aim Security, the fastest-growing AI Security Platform, today announced the launch of Aim Labs, a new advanced vulnerability research division dedicated to uncovering and mitigating the most sophisticated threats targeting AI technologies. Led by former Google leaders and top alumni from Israel's elite Unit 8200, Aim Labs unites a rare combination of deep AI research and advanced cybersecurity expertise to drive innovation and set new standards for real-time defense through the proactive sharing of high quality threat intelligence. In concert with the launch, Aim Labs also released groundbreaking research detailing the first-of-its-kind "zero-click" attack chain on an AI agent. The critical vulnerability in Microsoft 365 Copilot, dubbed 'EchoLeak', allows attackers to automatically exfiltrate sensitive and proprietary information from M365 Copilot—without any user interaction, or reliance on specific victim behavior. The attack is initiated simply by sending an email to a target within an organization, regardless of sender restrictions or admin configurations. Aim Labs worked closely with Microsoft's Security Response Center to responsibly disclose the vulnerability and issue a fix. It is the first AI vulnerability to receive a no-action CVE from Microsoft (CVE-2025-32711). "AI is fundamentally re-writing the security playbook. EchoLeak is a reminder that even robust, enterprise-grade AI tools can be leveraged for sophisticated and automated data theft," said Itay Ravia, Head of Aim Labs. "This discovery underscores just how rapidly the threat landscape is evolving, reinforcing the urgent need for continuous innovation in AI security—the very mission driving Aim Labs." Aim Labs will serve as Aim's dedicated research hub, tackling the unique security challenges introduced by AI adoption across critical sectors including banking, healthcare, insurance, manufacturing, and defense. Trusted by Fortune 500 companies, the Aim platform's unique runtime detection capabilities will be leveraged by the Aim Engine to mitigate emerging vulnerabilities and exploitation methods in real-time. Through continuous threat discovery and openly sharing cutting-edge research and best practices, Aim Labs will empower organizations to confidently and securely harness the power of AI. "As AI becomes integral to business operations, organizations face unprecedented risks related to data exposure, supply chain vulnerabilities, and emerging threats like prompt injection and jailbreaks," said Matan Getz, CEO and Co-founder of Aim Security. "Aim Labs is our commitment to staying ahead of these evolving threats by fostering continuous innovation and sharing actionable insights with the global security community." Further Aim Labs research can be found on the Aim Labs website. For more information about Aim Security visit About Aim Security The Age of AI is radically transforming the traditional security stack. Aim Security is the enterprise-trusted partner to secure AI adoption, equipping security leaders with the ability to drive business productivity while providing the right guardrails and ensuring proactive protection for all use cases across the entire organization, whether enterprise use or production use. Leading CISOs and security practitioners on their secure AI journey, Aim empowers enterprises to unlock the full potential of AI technology without compromising security. View source version on Contacts Media Contact: Susie DoughertyMarketbridge for Aim SecurityE: aim@

Exclusive: New Microsoft Copilot flaw signals broader risk of AI agents being hacked—'I would be terrified'
Exclusive: New Microsoft Copilot flaw signals broader risk of AI agents being hacked—'I would be terrified'

Yahoo

time2 days ago

  • Business
  • Yahoo

Exclusive: New Microsoft Copilot flaw signals broader risk of AI agents being hacked—'I would be terrified'

Microsoft 365 Copilot, the AI tool built into Microsoft Office workplace applications including Word, Excel, Outlook, PowerPoint, and Teams, harbored a critical security flaw that, according to researchers, signals a broader risk of AI agents being hacked. The flaw, revealed today by AI security startup Aim Security and shared exclusively in advance with Fortune, is the first known 'zero-click' attack on an AI agent, an AI that acts autonomously to achieve specific goals. The nature of the vulnerability means that the user doesn't need to click anything or interact with a message for an attacker to access sensitive information from apps and data sources connected to the AI agent. In the case of Microsoft 365 Copilot, the vulnerability lets a hacker trigger an attack simply by sending an email to a user, with no phishing or malware needed. Instead, the exploit uses a series of clever techniques to turn the AI assistant against itself. Microsoft 365 Copilot acts based on user instructions inside Office apps to do things like access documents and produce suggestions. If infiltrated by hackers, it could be used to target sensitive internal information such as emails, spreadsheets, and chats. The attack bypasses Copilot's built-in protections, which are designed to ensure that only users can access their own files—potentially exposing proprietary, confidential, or compliance-related data. The researchers at Aim Security dubbed the flaw 'EchoLeak.' Microsoft told Fortune that it has already fixed the issue in Microsoft 365 Copilot and that its customers were unaffected. 'We appreciate Aim for identifying and responsibly reporting this issue so it could be addressed before our customers were impacted,' a Microsoft spokesperson said in a statement. 'We have already updated our products to mitigate this issue and no customer action is required. We are also implementing additional defense-in-depth measures to further strengthen our security posture.' The Aim researchers said that EchoLeak is not just a run-of-the-mill security bug. It has broader implications beyond Copilot because it stems from a fundamental design flaw in LLM-based AI agents that is similar to software vulnerabilities in the 1990s, when attackers began to be able to take control of devices like laptops and mobile phones. Adir Gruss, cofounder and CTO of Aim Security, told Fortune that he and his fellow researchers took about three months to reverse engineer Microsoft 365 Copilot, one of the most widely-used generative AI assistants. They wanted to determine whether something like those earlier software vulnerabilities lurked under the hood and then develop guardrails to mitigate against them. 'We found this chain of vulnerabilities that allowed us to do the equivalent of the 'zero click' for mobile phones, but for AI agents,' he said. First, the attacker sends an innocent-seeming email that contains hidden instructions meant for Copilot. Then, since Copilot scans the user's emails in the background, Copilot reads the message and follows the prompt—digging into internal files and pulling out sensitive data. Finally, Copilot hides the source of the instructions, so the user can't trace what happened. After discovering the flaw in January, Gruss explained that Aim contacted the Microsoft Security Response Center, which investigates all reports of security vulnerabilities affecting Microsoft products and services. 'They want their customers to be secure,' he said. 'They told us this was super groundbreaking for them.' However, it took five months for Microsoft to address the issue, which Gruss said 'is on the (very) high side of something like this.' One reason, he explained, is that the vulnerability is so new, and it took time to get the right Microsoft teams involved in the process and educate them about the vulnerability and mitigations. Microsoft initially attempted a fix in April, Gruss said, but in May the company discovered additional security issues around the vulnerability. Aim decided to wait until Microsoft had fully fixed the flaw before publishing its research, in the hope that other vendors that might have similar vulnerabilities 'will wake up.' Gruss said the biggest concern is that EchoLeak could apply to other kinds of agents—from Anthropic's MCP (Model Context Protocol), which connects AI assistants to other applications, to platforms like Salesforce's Agentforce. If he led a company implementing AI agents right now, 'I would be terrified,' Gruss said. 'It's a basic kind of problem that caused us 20, 30 years of suffering and vulnerability because of some design flaws that went into these systems, and it's happening all over again now with AI.' Organizations understand that, he explained, which may be why most have not yet widely adopted AI agents. 'They're just experimenting, and they're super afraid,' he said. 'They should be afraid, but on the other hand, as an industry we should have the proper systems and guardrails.' Microsoft tried to prevent such a problem, known as an LLM Scope Violation vulnerability. It's a class of security flaws in which the model is tricked into accessing or exposing data beyond what it's authorized or intended to handle—essentially violating its 'scope' of permissions. 'They tried to block it in multiple paths across the chain, but they just failed to do so because AI is so unpredictable and the attack surface is so big,' Gruss said. While Aim is offering interim mitigations to clients adopting other AI agents that could be affected by the EchoLeak vulnerability, Gruss said the long-term fix will require a fundamental redesign of how AI agents are built. 'The fact that agents use trusted and untrusted data in the same 'thought process' is the basic design flaw that makes them vulnerable,' he explained. 'Imagine a person that does everything he reads—he would be very easy to manipulate. Fixing this problem would require either ad-hoc controls, or a new design allowing for clearer separation between trusted instructions and untrusted data.' Such a redesign could be in the models themselves, Gruss said, citing active research into enabling the models to better distinguish between instructions and data. Or, the applications the agents are built on top of could add mandatory guardrails for any agent. For now, 'every Fortune 500 I know is terrified of getting agents to production,' he said, pointing out that Aim has previously done research on coding agents where the team was able to run malicious code on developers' machines. 'There are users experimenting, but these kind of vulnerabilities keep them up at night and prevent innovation.' This story was originally featured on

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store