logo
#

Latest news with #AimLabs

Posted Jun 13, 2025 at 10:51 AM EDT 0 Comments
Posted Jun 13, 2025 at 10:51 AM EDT 0 Comments

The Verge

time2 hours ago

  • The Verge

Posted Jun 13, 2025 at 10:51 AM EDT 0 Comments

Security researchers found a zero-click vulnerability in Microsoft 365 Copilot. The vulnerability, called 'EchoLeak,' lets attackers 'automatically exfiltrate sensitive and proprietary information' from Microsoft 365 Copilot without knowledge of the user, according to findings from Aim Labs. An attacker only needs to send their victim a malicious prompt injection disguised as a normal email, which covertly instructs Copilot to pull sensitive information from a user's account. Microsoft has since fixed the critical flaw and given it the identifier CVE-2025-32711. It also hasn't been exploited in the wild.

AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit
AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit

Hans India

timea day ago

  • Hans India

AI Security Alarm: Microsoft Copilot Vulnerability Exposed Sensitive Data via Zero-Click Email Exploit

In a major first for the AI security landscape, researchers have identified a critical vulnerability in Microsoft 365 Copilot that could have allowed hackers to steal sensitive user data—without the user ever clicking a link or opening an attachment. Known as EchoLeak, this zero-click flaw revealed how deeply embedded AI assistants can be exploited through subtle prompts hidden in regular-looking emails. The vulnerability was discovered by Aim Labs in January 2025 and promptly reported to Microsoft. It was fixed server-side in May, meaning users didn't need to take any action themselves. Microsoft emphasized that no customers were affected, and there's no evidence that the flaw was exploited in real-world scenarios. Still, the discovery marks a historic moment, as EchoLeak is believed to be the first-ever zero-click vulnerability targeting a large language model (LLM)-based assistant. How EchoLeak Worked Microsoft 365 Copilot integrates across Office applications like Word, Excel, Outlook, and Teams. It utilizes AI, powered by OpenAI's models and Microsoft Graph, to help users by analyzing data and generating content based on internal emails, documents, and chats. EchoLeak took advantage of this feature. Here's a breakdown of the exploit process: A malicious email is crafted to look legitimate but contains a hidden prompt embedded in the message. When a user later asks Copilot a related question, the AI, using Retrieval-Augmented Generation (RAG), pulls in the malicious email thinking it's relevant. The concealed prompt is then activated, instructing Copilot to leak internal data through a link or image. As the email is displayed, the link is automatically accessed by the browser, silently transferring internal data to the attacker's server. Researchers noted that certain markdown image formats used in the email could trigger browsers to send automatic requests, enabling the leak. While Microsoft's Content Security Policies (CSP) block most unknown web requests, services like Teams and SharePoint are considered trusted by default—offering a way in for attackers. The Bigger Concern: LLM Scope Violations The vulnerability isn't just a technical bug—it signals the emergence of a new category of threats called LLM Scope Violations. These occur when language models unintentionally expose data through their internal processing mechanisms, even without direct user commands. 'This attack chain showcases a new exploitation technique... by leveraging internal model mechanics,' Aim Labs stated in their report. They also cautioned that similar risks could be present in other RAG-based AI systems, not just Microsoft Copilot. Microsoft assigned the flaw the ID CVE-2025-32711 and categorized it as critical. The company reassured users that the issue has been resolved and that there were no known incidents involving the vulnerability. Despite the fix, the warning from researchers is clear: "The increasing complexity and deeper integration of LLM applications into business workflows are already overwhelming traditional defences,' their report concludes. As AI agents become more integrated into enterprise systems, EchoLeak is a stark reminder that security in the age of intelligent software needs to evolve just as fast as the technology itself.

Researchers discover zero-click vulnerability in Microsoft Copilot
Researchers discover zero-click vulnerability in Microsoft Copilot

The Hindu

timea day ago

  • The Hindu

Researchers discover zero-click vulnerability in Microsoft Copilot

Researchers have said that Microsoft Copilot had a critical zero-click AI vulnerability that was fixed before hackers stole sensitive data. Called 'EchoLeak,' the attack was mounted by Aim Labs researchers in January this year and then reported to Microsoft later. In a blog posted by the research team, they said that EchoLeak was the first zero-click attack on an AI agent and could hack remotely via an email. The vulnerability was given the identifier CVE-2025-32711 and rated critical and fixed eventually in May. The researchers have categorised EchoLeak under a new class of vulnerabilities called 'LLM Scope Violation,' which can lead a large language model to leak internal data without any interaction with the hacker. Although Microsoft acknowledged the security flow, it confirmed that there had been no instance of exploitation which had impacted users. Users receive an email that's been designed to look like a business document embedded with a hidden prompt injection that instructs the LLM to extract and exfiltrate sensitive data. When the user asks Copilot a query the email is retrieved into the LLM prompt by Retrieval-Augmented Generation or RAG.

First ever security flaw detected in an AI agent, could allow hacker to attack user via email
First ever security flaw detected in an AI agent, could allow hacker to attack user via email

India Today

time2 days ago

  • India Today

First ever security flaw detected in an AI agent, could allow hacker to attack user via email

In a first-of-its-kind discovery, cybersecurity researchers have identified a major security flaw in a Microsoft 365 Copilot AI agent. The vulnerability, called EchoLeak, allowed attackers to silently steal sensitive data from a user's environment by simply sending them an email. No clicks, downloads, or user actions were needed. The issue was uncovered by researchers at Aim Labs in January 2025 and reported to Microsoft. In May, the tech giant fixed the flaw server-side, meaning users didn't need to take any action. Microsoft also confirmed that no customers were impacted, and there is no evidence that the flaw was used in real-world the discovery marks a significant turning point for AI security, as EchoLeak is believed to be the first-ever zero-click AI vulnerability affecting a large language model-based the EchoLeak attack worksMicrosoft 365 Copilot is built into Office apps like Word, Excel, Outlook, and Teams. It uses AI to generate content, analyse data, and answer questions using internal documents, emails, and chats. It relies on OpenAI's models and Microsoft Graph to function. EchoLeak targeted how this assistant processes information from emails and documents when answering user questions. Here's how the attack worked:-An attacker sends a business-like email to the target. The email contains text that looks normal but hides a special prompt, designed to confuse the AI assistant.-When the user later asks a related question to Copilot, the system retrieves the earlier email using its Retrieval-Augmented Generation (RAG) engine, thinking it's relevant to the this point, the hidden prompt is activated. It silently instructs the AI to extract internal data and place it in a link or image.-When the email is displayed, the embedded link is automatically accessed by the browser – sending internal data to the attacker's server without the user realising anything has gone of the markdown image formats used in the attack are designed to make browsers send automatic requests, which made this data exfiltration Microsoft uses Content Security Policies (CSP) to block requests to unknown websites, services like Microsoft Teams and SharePoint are trusted by default. This allowed attackers to bypass certain defences.A new kind of AI vulnerabilityEchoLeak is more than just a software bug – it introduces a new class of threats known as LLM Scope Violations. This term refers to flaws in how large language models handle and leak information without being directly instructed by a user. In its report, Aim Labs warned that these kinds of vulnerabilities are especially dangerous in enterprise environments, where AI agents are deeply integrated into internal systems.'This attack chain showcases a new exploitation technique... by leveraging internal model mechanics,' Aim Labs said. The team believes the same risk could exist in other RAG-based AI systems, not just Microsoft's. Because EchoLeak required no user interaction and could work in fully automated ways, Aim Labs says it highlights the kind of threats that might become more common as AI becomes more embedded in business labelled the vulnerability as critical, assigned it CVE-2025-32711, and released a server-side fix in May. The company reassured users that no exploit had taken place and that the issue is now though no damage was done, researchers say the warning is clear. 'The increasing complexity and deeper integration of LLM applications into business workflows are already overwhelming traditional defences,' the report from Aim Labs reads.

Aim Security Launches Aim Labs with Elite Researchers from Google and Israel's Unit 8200 to Advance AI Security
Aim Security Launches Aim Labs with Elite Researchers from Google and Israel's Unit 8200 to Advance AI Security

Yahoo

time2 days ago

  • Business
  • Yahoo

Aim Security Launches Aim Labs with Elite Researchers from Google and Israel's Unit 8200 to Advance AI Security

Unique AI Vulnerability Research Yields Breakthrough 'EchoLeak' Discovery: First Zero-Click AI Vulnerability in Microsoft 365 Copilot NEW YORK, June 11, 2025--(BUSINESS WIRE)--Aim Security, the fastest-growing AI Security Platform, today announced the launch of Aim Labs, a new advanced vulnerability research division dedicated to uncovering and mitigating the most sophisticated threats targeting AI technologies. Led by former Google leaders and top alumni from Israel's elite Unit 8200, Aim Labs unites a rare combination of deep AI research and advanced cybersecurity expertise to drive innovation and set new standards for real-time defense through the proactive sharing of high quality threat intelligence. In concert with the launch, Aim Labs also released groundbreaking research detailing the first-of-its-kind "zero-click" attack chain on an AI agent. The critical vulnerability in Microsoft 365 Copilot, dubbed 'EchoLeak', allows attackers to automatically exfiltrate sensitive and proprietary information from M365 Copilot—without any user interaction, or reliance on specific victim behavior. The attack is initiated simply by sending an email to a target within an organization, regardless of sender restrictions or admin configurations. Aim Labs worked closely with Microsoft's Security Response Center to responsibly disclose the vulnerability and issue a fix. It is the first AI vulnerability to receive a no-action CVE from Microsoft (CVE-2025-32711). "AI is fundamentally re-writing the security playbook. EchoLeak is a reminder that even robust, enterprise-grade AI tools can be leveraged for sophisticated and automated data theft," said Itay Ravia, Head of Aim Labs. "This discovery underscores just how rapidly the threat landscape is evolving, reinforcing the urgent need for continuous innovation in AI security—the very mission driving Aim Labs." Aim Labs will serve as Aim's dedicated research hub, tackling the unique security challenges introduced by AI adoption across critical sectors including banking, healthcare, insurance, manufacturing, and defense. Trusted by Fortune 500 companies, the Aim platform's unique runtime detection capabilities will be leveraged by the Aim Engine to mitigate emerging vulnerabilities and exploitation methods in real-time. Through continuous threat discovery and openly sharing cutting-edge research and best practices, Aim Labs will empower organizations to confidently and securely harness the power of AI. "As AI becomes integral to business operations, organizations face unprecedented risks related to data exposure, supply chain vulnerabilities, and emerging threats like prompt injection and jailbreaks," said Matan Getz, CEO and Co-founder of Aim Security. "Aim Labs is our commitment to staying ahead of these evolving threats by fostering continuous innovation and sharing actionable insights with the global security community." Further Aim Labs research can be found on the Aim Labs website. For more information about Aim Security visit About Aim Security The Age of AI is radically transforming the traditional security stack. Aim Security is the enterprise-trusted partner to secure AI adoption, equipping security leaders with the ability to drive business productivity while providing the right guardrails and ensuring proactive protection for all use cases across the entire organization, whether enterprise use or production use. Leading CISOs and security practitioners on their secure AI journey, Aim empowers enterprises to unlock the full potential of AI technology without compromising security. View source version on Contacts Media Contact: Susie DoughertyMarketbridge for Aim SecurityE: aim@

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store