
Latest news with #AonanGuan

Microsoft's agentic AI roadmap had a flaw that let hackers take over browsers — here's what to know and how to stay safe

Yahoo • 3 days ago

Microsoft is quickly heading towards agentic AI browsing — that much is obvious from Edge's AI makeover and an open project called NLWeb that can be used to give any website AI power. While this all sounds good on paper, it opens the door to a whole lot of security risks, and the company's agentic aspirations have already been hit by a flaw that is concerningly simple. Fortunately, it has been patched, but it starts a bigger conversation we need to have about staying safe while browsing agentically. Let's get into it.

So what happened?

NLWeb is envisioned as 'HTML for the Agentic Web.' Announced back at Build 2025, it is the framework for AI browsing on your behalf, but researchers Aonan Guan and Lei Wang found what is called a path traversal vulnerability. This is a fairly standard security oversight that hackers can exploit by having an agentic AI visit a specially crafted URL, granting the attacker access to sensitive files such as system configuration files and API keys.

What can be done with this information amounts to stealing your agent's brain. At that point, attackers can reach the core functions of your AI agent and do a wide range of things, such as reading and acting on emails on your behalf, or even getting into your finances.

The flaw was found and reported to Microsoft on May 28, 2025, and the company patched it on July 1, 2025, by updating the open-source repository. It was a simple exposure with hugely problematic potential.

'This issue was responsibly reported and we have updated the open-source repository,' Microsoft spokesperson Ben Hope told The Verge. 'Microsoft does not use the impacted code in any of our products. Customers using the repository are automatically protected.'
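To make the mechanics concrete, here is a minimal sketch of the kind of server-side check that blocks a path traversal attempt. This is not NLWeb's actual code; the route and parameter names are hypothetical. It simply illustrates why resolving a requested path and confirming it stays inside an allowed directory defeats URLs that smuggle in '../' segments.

```python
from pathlib import Path
from flask import Flask, abort, request, send_file

app = Flask(__name__)

# The only directory this endpoint is allowed to serve files from.
BASE_DIR = Path("/srv/app/static").resolve()

@app.route("/files")
def serve_file():
    # Hypothetical parameter: a traversal payload such as
    # /files?name=../../.env tries to escape BASE_DIR.
    requested = request.args.get("name", "")
    resolved = (BASE_DIR / requested).resolve()
    # Refuse anything that resolves outside the allowed directory.
    if not resolved.is_file() or BASE_DIR not in resolved.parents:
        abort(404)
    return send_file(str(resolved))
```

Resolving the path first and then checking containment is the standard defence; simply rejecting strings that contain '..' is easier to bypass with encoding tricks.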
How to stay safe while agentic browsing

We've seen a significant shift towards agentic browsing over the last 12 months — spearheaded by the likes of OpenAI Operator, Opera launching the world's first on-device agentic AI browser, and Rabbit R1's LAM Playground. This serious flaw may already have been patched by Microsoft, but it's clear this won't be the last security issue we come across.

For example, there's the Model Context Protocol (MCP), an open standard launched by Anthropic that lets AI assistants interact with tools and services on your behalf. It sounds good on paper, but researchers have already identified the risks of account takeover and token theft: when a hacker gains access to your personal authentication tokens and essentially gets the keys to your kingdom. So it's clear you need to be extra careful in the agentic era. Here are some key steps you can take.

Be cautious with OAuth permissions

If your AI agent asks to connect to a service like Gmail or Google Drive, read the permissions carefully. Don't approve full access if only read access is needed, and avoid clicking 'allow all' without thinking about it (a minimal sketch of what a read-only request looks like follows at the end of this article). On top of that, if you want an additional layer of security, use a separate account. That way you can see what the agentic AI will be able to do without putting your sensitive information on the line.

Don't 100% trust any agent

Think of any agent as a teenager you just gave the car keys to — effective most of the time, but not immune to mistakes (my battered Vauxhall Corsa can attest to this). By that, I mean check that the agent you use comes from a reputable company to start with. That means don't install browser extensions that merely claim to 'autonomously browse the web.' And whatever you're using, don't let it auto-fill forms, send emails or make purchases unless you explicitly tell it to.

Sanitize your browsing and app permissions

For Chrome users, head over to Google Security Checkup and remove any third-party services that have access to your account. This will limit potential exposure, as will turning off autofill and password auto-saving. For an additional layer of security, use agentic web tools in incognito/private windows to limit cookie or token leakage.
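As a concrete illustration of the least-privilege advice above, here is a minimal sketch of requesting only a read-only Gmail scope with Google's OAuth client library. The client-secrets filename is a placeholder, and this is just one way a tool might ask for access; the point is that the scope list is where 'full access' versus 'read access' gets decided.

```python
# pip install google-auth-oauthlib
from google_auth_oauthlib.flow import InstalledAppFlow

# Ask only for read access to Gmail; a token with this scope can read
# messages but cannot send, delete, or change anything.
SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

# 'client_secret.json' is a placeholder for your own OAuth client file.
flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
creds = flow.run_local_server(port=0)

print("Granted scopes:", creds.scopes)
```

The consent screen you see when an agent connects to your account maps back to a scope list like this one, which is why it's worth reading before you click allow.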


Microsoft's plan to fix the web with AI has already hit an embarrassing security flaw

The Verge • 5 days ago

Researchers have already found a critical vulnerability in the new NLWeb protocol Microsoft made a big deal about just a few months ago at Build. It's a protocol that's supposed to be 'HTML for the Agentic Web,' offering ChatGPT-like search to any website or app. Discovery of the embarrassing security flaw comes in the early stages of Microsoft deploying NLWeb with customers like Shopify, Snowflake, and TripAdvisor.

The flaw allowed any remote user to read sensitive files, including system configuration files and even OpenAI or Gemini API keys. What's worse is that it's a classic path traversal flaw, meaning it's as easy to exploit as visiting a malformed URL. Microsoft has patched the flaw, but it raises questions about how something this basic wasn't caught given Microsoft's big new focus on security.

'This case study serves as a critical reminder that as we build new AI-powered systems, we must re-evaluate the impact of classic vulnerabilities, which now have the potential to compromise not just servers, but the "brains" of AI agents themselves,' says Aonan Guan, one of the security researchers (alongside Lei Wang) who reported the flaw to Microsoft. Guan is a senior cloud security engineer at Wyze (yes, that Wyze), but this research was conducted independently.

Guan and Wang reported the flaw to Microsoft on May 28th, just weeks after NLWeb was unveiled. Microsoft issued a fix on July 1st, but has not issued a CVE for the issue — an industry standard for classifying vulnerabilities. The security researchers have been pushing Microsoft to issue a CVE, but the company has been reluctant to do so. A CVE would alert more people to the fix and allow it to be tracked more closely, even if NLWeb isn't widely used yet.

'This issue was responsibly reported and we have updated the open-source repository,' says Microsoft spokesperson Ben Hope in a statement to The Verge. 'Microsoft does not use the impacted code in any of our products. Customers using the repository are automatically protected.'

Guan says NLWeb users 'must pull and vend a new build version to eliminate the flaw,' otherwise any public-facing NLWeb deployment 'remains vulnerable to unauthenticated reading of .env files containing API keys.'

While leaking an .env file in a web application is serious enough, Guan argues it's 'catastrophic' for an AI agent. 'These files contain API keys for LLMs like GPT-4, which are the agent's cognitive engine,' says Guan. 'An attacker doesn't just steal a credential; they steal the agent's ability to think, reason, and act, potentially leading to massive financial loss from API abuse or the creation of a malicious clone.'

Microsoft is also pushing ahead with native support for the Model Context Protocol (MCP) in Windows, even as security researchers have warned of MCP's risks in recent months. If the NLWeb flaw is anything to go by, Microsoft will need to take extra care balancing the speed of rolling out new AI features against keeping security the number one priority.
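Guan's warning about unpatched public deployments suggests a simple self-check. The sketch below is a hypothetical probe to run only against your own NLWeb instance: the base URL and the traversal-style paths are assumptions, since the exact vulnerable route isn't spelled out here, and a response that looks like a dotenv file would mean you still need the fixed build.

```python
# Probe your *own* deployment for traversal-style .env exposure.
# Hypothetical paths: the real vulnerable route isn't documented here.
import requests

BASE_URL = "https://nlweb.example.internal"  # placeholder for your instance
CANDIDATE_PATHS = [
    "/static/../.env",
    "/static/..%2f.env",
]

def looks_like_dotenv(text: str) -> bool:
    # Crude heuristic: dotenv files are KEY=value lines, often holding API keys.
    lines = [l for l in text.splitlines() if l.strip() and not l.lstrip().startswith("#")]
    return bool(lines) and all("=" in l for l in lines[:5])

for path in CANDIDATE_PATHS:
    resp = requests.get(BASE_URL + path, timeout=10)
    if resp.ok and looks_like_dotenv(resp.text):
        print(f"Possible .env exposure via {path}: update to the patched build")
    else:
        print(f"No dotenv-looking response for {path}")
```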
