logo
Researchers find 'dangerous' AI data leak flaw in Microsoft 365 Copilot: What the company has to say

Researchers find 'dangerous' AI data leak flaw in Microsoft 365 Copilot: What the company has to say

Time of India13-06-2025
A critical artificial intelligence (AI) vulnerability has been discovered in
Microsoft
365 Copilot, raising new concerns about data security in AI-integrated enterprise environments. The flaw, dubbed 'EchoLeak', which enabled attackers to exfiltrate sensitive user data with zero-click interaction, has been devised by
Aim Labs researchers
in January 2025.
According to a report by Bleeping Computer, Aim Labs promptly reported their findings to Microsoft, which rated it as critical. Microsoft swiftly addressed the issue, implementing a server-side fix in May 2025. This means that no user action is required to patch the vulnerability.
Microsoft has also stated there is no evidence of any real-world exploitation, essentially confirming that no customers were impacted by this flaw.
What is EchoLeak attack and how it worked
by Taboola
by Taboola
Sponsored Links
Sponsored Links
Promoted Links
Promoted Links
You May Like
Trade Bitcoin & Ethereum – No Wallet Needed!
IC Markets
Start Now
Undo
The EchoLeak attack commenced with a malicious email sent to the target. This email contained text seemingly unrelated to Copilot, designed to resemble a typical business document. It embedded a hidden prompt injection crafted to instruct Copilot's underlying LLM to extract sensitive internal data. Because this hidden prompt was phrased like a normal message, it cleverly bypassed Microsoft's existing XPIA (cross-prompt injection attack) classifier protections.
Microsoft 365 Copilot, an AI assistant integrated into Office applications like Word, Excel, Outlook, and Teams, leverages OpenAI's GPT models and Microsoft Graph to help users generate content, analyse data and answer questions based on their organisation's internal files, emails, and chats.
When the user prompted Copilot with a related business question, Microsoft's Retrieval-Augmented Generation (RAG) engine retrieved the malicious email into the LLM's prompt context due to its apparent relevance and formatting. Once inside the LLM's active context, the malicious injection "tricked" the AI into pulling sensitive internal data and embedding it into a specially crafted link or image.
This led to unintentional leaks of internal data without explicit user intent or interaction.
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Microsoft scales back Chinese access to cyber early warning system
Microsoft scales back Chinese access to cyber early warning system

Mint

time2 hours ago

  • Mint

Microsoft scales back Chinese access to cyber early warning system

WASHINGTON (Reuters) -Microsoft said on Wednesday it has scaled back some Chinese companies' access to its early warning system for cybersecurity vulnerabilities following speculation that Beijing was involved in a hacking campaign against the company's widely used SharePoint servers. The new restrictions come in the wake of last month's sweeping hacking attempts against Microsoft SharePoint servers, at least some of which Microsoft and others have blamed on Beijing. That raised suspicions among several cybersecurity experts that there was a leak in the Microsoft Active Protections Program (MAPP), which Microsoft uses to help security vendors worldwide, including in China, to learn about cyber threats before the general public so they can better defend against hackers. Beijing has denied involvement in any SharePoint hacking. Microsoft notified members of the MAPP program of the SharePoint vulnerabilities on June 24, July 3 and July 7, Reuters has previously reported. Because Microsoft said it first observed exploitation attempts on July 7, the timing led some experts to allege that the likeliest scenario for the sudden explosion in hacking attempts was because a rogue member of the MAPP program misused the information. In a statement, Microsoft said several Chinese firms would no longer receive "proof of concept code," which mimics the operation of genuine malicious software. Proof of concept code can help cybersecurity professionals seeking to harden their systems in a hurry, but it can also be repurposed by hackers to get a jump start on the defenders. Microsoft said it was aware that the information it provided its partners could be exploited, "which is why we take steps – both known and confidential – to prevent misuse. We continuously review participants and suspend or remove them if we find they violated their contract with us which includes a prohibition on participating in offensive attacks." Microsoft declined to disclose the status of its investigation of the hacking or go into specifics about which companies had been restricted. (Reporting by Raphael Satter in Washington;Editing by Matthew Lewis)

Microsoft employee protests lead to arrests as company reviews its work with Israels military
Microsoft employee protests lead to arrests as company reviews its work with Israels military

Mint

time4 hours ago

  • Mint

Microsoft employee protests lead to arrests as company reviews its work with Israels military

REDMOND, Wash. — Worker-led protests erupted at Microsoft headquarters this week as the tech company promises an 'urgent' review of the Israeli military's use of its technology during the ongoing war in Gaza. A second day of protests at the Microsoft campus on Wednesday called for the tech giant to immediately cut its business ties with Israel. The police department began making arrests after Microsoft said the protesters were trespassing. 'We said, 'Please leave or you will be arrested,' and they chose not to leave so they were detained,' said police spokesperson Jill Green. Microsoft late last week said it was tapping a law firm to investigate allegations reported by British newspaper The Guardian that the Israeli Defense Forces used Microsoft's Azure cloud computing platform to store phone call data obtained through the mass surveillance of Palestinians in Gaza and the West Bank. 'Microsoft's standard terms of service prohibit this type of usage," the company said in a statement posted Friday, adding that the report raises 'precise allegations that merit a full and urgent review.' The company said it will share the findings after law firm Covington & Burling completes its review. The promised review was insufficient for the employee-led No Azure for Apartheid group, which for months has protested Microsoft's supplying the Israeli military with technology used for its war against Hamas in Gaza. In February, The Associated Press revealed previously unreported details about the American tech giant's close partnership with the Israeli Ministry of Defense, with military use of commercial AI products skyrocketing by nearly 200 times after the deadly Oct. 7, 2023, Hamas attack. The reported that the Israeli military uses Azure to transcribe, translate and process intelligence gathered through mass surveillance, which can then be cross-checked with Israel's in-house AI-enabled targeting systems. Following The 's report, Microsoft acknowledged the military applications but said a review it commissioned found no evidence that its Azure platform and artificial intelligence technologies were used to target or harm people in Gaza. Microsoft did not share a copy of that review or say who conducted it. Microsoft in May fired an employee who interrupted a speech by CEO Satya Nadella to protest the contracts, and in April, fired two others who interrupted the company's 50th anniversary celebration. This article was generated from an automated news agency feed without modifications to text.

Microsoft reviewing Israeli army's use of its tech amid worker protests
Microsoft reviewing Israeli army's use of its tech amid worker protests

Hindustan Times

time4 hours ago

  • Hindustan Times

Microsoft reviewing Israeli army's use of its tech amid worker protests

Worker-led protests erupted at Microsoft headquarters this week as the tech company promises an 'urgent' review of the Israeli military's use of its technology during the ongoing war in Gaza. A "Stop Starving Gaza" sign during a protest at the Microsoft Campus in Redmond, Washington as employees rallied at the company to increase pressure on the software maker to stop doing business with Israel.(Bloomberg) A second day of protests at the Microsoft campus on Wednesday called for the tech giant to immediately cut its business ties with Israel. A spokesperson for the Redmond Police Department confirmed that several protesters have been arrested. Microsoft late last week said it was tapping a law firm to investigate allegations reported by British newspaper The Guardian that the Israeli Defense Forces used Microsoft's Azure cloud computing platform to store phone call data obtained through the mass surveillance of Palestinians in Gaza and the West Bank. 'Microsoft's standard terms of service prohibit this type of usage," the company said in a statement posted Friday, adding that the report raises 'precise allegations that merit a full and urgent review.' The company said it will share the findings after law firm Covington & Burling completes its review. The promised review was insufficient for the employee-led No Azure for Apartheid group, which for months has protested Microsoft's supplying the Israeli military with technology used for its war against Hamas in Gaza. In February, The Associated Press revealed previously unreported details about the American tech giant's close partnership with the Israeli Ministry of Defense, with military use of commercial AI products skyrocketing by nearly 200 times after the deadly Oct. 7, 2023, Hamas attack. The AP reported that the Israeli military uses Azure to transcribe, translate and process intelligence gathered through mass surveillance, which can then be cross-checked with Israel's in-house AI-enabled targeting systems. Following The AP's report, Microsoft acknowledged the military applications but said a review it commissioned found no evidence that its Azure platform and artificial intelligence technologies were used to target or harm people in Gaza. Microsoft did not share a copy of that review or say who conducted it. Microsoft in May fired an employee who interrupted a speech by CEO Satya Nadella to protest the contracts, and in April, fired two others who interrupted the company's 50th anniversary celebration.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store