Latest news with #NLWeb
Yahoo
2 days ago
- Yahoo
Microsoft's agentic AI roadmap had a flaw that let hackers take over browsers — here's what to know and how to stay safe
Microsoft is quickly heading towards agentic AI browsing — that much is obvious from Edge's AI makeover and an open project called NLWeb that can give any website AI capabilities. But while this all sounds good on paper, it opens the door to a whole lot of security risks, and the company's agentic ambitions have already been hit by a flaw that is concerningly simple. Fortunately, it has been patched, but it starts a bigger conversation we need to have about staying safe while browsing agentically. Let's get into it.

So what happened? NLWeb is envisioned as 'HTML for the agentic web.' Announced back at Build 2025, it is a framework for AI browsing on your behalf, but researchers Aonan Guan and Lei Wang found what is called a path traversal vulnerability. This is a fairly standard security oversight that attackers can exploit by having an agentic AI visit a specially crafted URL, granting them access to sensitive files such as system configuration files and API keys. What can be done with this information amounts to stealing your agent's brain: at that point, attackers can reach the core functions of your AI agent and do a wide range of things, such as reading and interacting with email on your behalf, or even getting into your finances.

The flaw was reported to Microsoft on May 28, 2025, and the company patched it on July 1, 2025 by updating the open-source repository. It was a simple exposure with huge problematic potential. 'This issue was responsibly reported and we have updated the open-source repository,' Microsoft spokesperson Ben Hope told The Verge. 'Microsoft does not use the impacted code in any of our products. Customers using the repository are automatically protected.'
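To make the class of bug concrete: a path traversal flaw happens when a server joins a user-supplied path onto a content directory without checking where the result lands. The sketch below is illustrative only (the directory name and function names are hypothetical, not NLWeb's actual code) and shows both the naive pattern and the standard fix.

```python
import os

BASE_DIR = "/srv/nlweb/static"  # hypothetical content root, not NLWeb's real layout

def serve_file_vulnerable(requested: str) -> str:
    # Naive join: a request like "../../.env" escapes the content root,
    # exposing configuration files and API keys.
    return os.path.join(BASE_DIR, requested)

def serve_file_safe(requested: str) -> str:
    # Resolve the combined path, then verify it still sits inside BASE_DIR.
    # This also blocks absolute paths, since os.path.join discards BASE_DIR
    # when `requested` starts with "/".
    full = os.path.realpath(os.path.join(BASE_DIR, requested))
    root = os.path.realpath(BASE_DIR)
    if not full.startswith(root + os.sep):
        raise PermissionError("path traversal attempt blocked")
    return full
```

With the naive version, `serve_file_vulnerable("../../.env")` happily points outside the content root; the safe version raises before any file is read.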
How to stay safe while agentic browsing

We've seen a significant shift towards agentic browsing over the last 12 months — spearheaded by the likes of OpenAI Operator, Opera launching the world's first on-device agentic AI browser, and Rabbit R1's LAM Playground. This serious flaw may already be patched, but it clearly won't be the last security issue we come across. For example, there's the Model Context Protocol (MCP), an open standard launched by Anthropic to let AI assistants interact with tools and services on your behalf. It sounds good on paper, but researchers have already identified the risks of account takeover and token theft, where a hacker gains access to personal authentication tokens and essentially gets the keys to your kingdom. So it's clear you need to be extra careful in the agentic era. Here are some key steps you can take:

Be cautious with OAuth permissions

If your AI agent asks to connect to a service like Gmail or Google Drive, read the permissions carefully. Don't approve full access if only read access is needed, and avoid clicking 'allow all' without thinking about it. On top of that, if you want an additional layer of security, use a separate account. That way you can see what the agentic AI can do without putting your sensitive information on the line.

Don't 100% trust any agent

Think of any agent as a teenager you just gave the car keys to — effective most of the time, but not immune to mistakes (my battered Vauxhall Corsa can attest to this). By that, I mean check that the agent you use is from a reputable company to start with, and don't install any browser extensions that claim to 'autonomously browse the web.' And whatever you're using, don't let it auto-fill forms, send emails or make purchases unless you explicitly tell it to.
Sanitize your browsing and app permissions

For Chrome users, head over to Google Security Checkup and remove any third-party services that have access to your account. This limits potential exposure, as does turning off autofill and password auto-saving. And for an additional layer of security, use agentic web tools in incognito/private windows to limit cookie or token leakage.


Tom's Guide
2 days ago
- Tom's Guide
Microsoft's agentic AI roadmap had a flaw that let hackers take over browsers — here's what to know and how to stay safe


The Verge
3 days ago
- The Verge
Microsoft's plan to fix the web with AI has already hit an embarrassing security flaw
Researchers have already found a critical vulnerability in the new NLWeb protocol Microsoft made a big deal about just a few months ago at Build. It's a protocol that's supposed to be 'HTML for the Agentic Web,' offering ChatGPT-like search to any website or app. Discovery of the embarrassing security flaw comes in the early stages of Microsoft deploying NLWeb with customers like Shopify, Snowflake, and TripAdvisor.

The flaw allowed remote users to read sensitive files, including system configuration files and even OpenAI or Gemini API keys. What's worse is that it's a classic path traversal flaw, meaning it's as easy to exploit as visiting a malformed URL. Microsoft has patched the flaw, but it raises questions about how something this basic wasn't picked up given Microsoft's big new focus on security.

'This case study serves as a critical reminder that as we build new AI-powered systems, we must re-evaluate the impact of classic vulnerabilities, which now have the potential to compromise not just servers, but the 'brains' of AI agents themselves,' says Aonan Guan, one of the security researchers (alongside Lei Wang) who reported the flaw to Microsoft. Guan is a senior cloud security engineer at Wyze (yes, that Wyze), but this research was conducted independently.

Guan and Wang reported the flaw to Microsoft on May 28th, just weeks after NLWeb was unveiled. Microsoft issued a fix on July 1st, but has not issued a CVE for the issue — an industry standard for classifying vulnerabilities. The security researchers have been pushing Microsoft to issue a CVE, but the company has been reluctant to do so. A CVE would alert more people to the fix and allow them to track it more closely, even if NLWeb isn't widely used yet. 'This issue was responsibly reported and we have updated the open-source repository,' says Microsoft spokesperson Ben Hope, in a statement to The Verge. 'Microsoft does not use the impacted code in any of our products.
Customers using the repository are automatically protected.' Guan says NLWeb users 'must pull and vend a new build version to eliminate the flaw,' otherwise any public-facing NLWeb deployment 'remains vulnerable to unauthenticated reading of .env files containing API keys.' While leaking an .env file in a web application is serious enough, Guan argues it's 'catastrophic' for an AI agent. 'These files contain API keys for LLMs like GPT-4, which are the agent's cognitive engine,' says Guan. 'An attacker doesn't just steal a credential; they steal the agent's ability to think, reason, and act, potentially leading to massive financial loss from API abuse or the creation of a malicious clone.'

Microsoft is also pushing ahead with native support for Model Context Protocol (MCP) in Windows, even as security researchers have warned of the risks of MCP in recent months. If the NLWeb flaw is anything to go by, Microsoft will need to take extra care balancing the speed of rolling out new AI features against keeping security its number one priority.


Mint
07-07-2025
- Business
- Mint
Are firms wasting their money on AI agents?
Leslie D'Monte

Most firms are still trying out AI agents, hailed by Big Tech as game changers. Analysts warn that many of these projects may be scrapped within two years as they carry high costs, uncertain returns, and weak risk controls. Can firms unlock value through human-AI agent teamwork?

Why is Big Tech bullish on AI agents?

Unlike basic chatbots, AI agents are autonomous systems that can plan, reason and complete complex multi-step tasks with minimal input — such as coding, data analysis and generating reports. Developers use them across IT, customer support, and enterprise workflows. Google and Microsoft recently called AI agents the next big shift, unveiling Project Mariner, Gemini tools, Azure AI Foundry and NLWeb. Salesforce, Amazon, IBM, and Meta are also building such platforms to automate workflows and enhance productivity. Nasscom has said 46% of firms are experimenting with AI agents, mainly in IT.

What should firms be wary of?

Many so-called agentic use cases today can be handled with simpler tools, says Gartner, which predicts over 40% of such projects will be scrapped by 2027 due to high costs, vague value or weak risk controls. It adds that of thousands of vendors, only about 130 are seen as credible, and many engage in 'agent washing' — rebranding chatbots, robotic process automation (RPA), or assistants as agents without real autonomy. Most current systems also lack the maturity to deliver complex outcomes or sustained return on investment (ROI). Nasscom echoes this, noting that 62% of enterprises are only doing in-house agent testing.

How big is the market for AI agents?
The global AI agents market, valued at $5.4 billion in 2024, is projected to reach $50.3 billion by 2030, per Grand View Research. North America led with a 40.1% revenue share, while Asia-Pacific is the fastest-growing region. Machine learning, single-agent systems, and ready-to-deploy agents dominated by technology, system type, and deployment model, respectively.

How can companies unlock real value?

AI agent adoption is hindered by privacy concerns, regulation gaps, and limited focus on talent readiness. Integration with legacy enterprise software remains complex and costly. Regardless, Gartner predicts 15% of daily work decisions will be made autonomously by agentic AI by 2028, up from 0% in 2024. It also urges using agentic AI only when the returns are clear. Nasscom sees strong potential in real-time decision-making and agility. Key focus areas include data governance and AI risk protocols.

Can humans make AI agents more effective?

Though AI agents are designed to be autonomous, Nasscom's recent study of over 100 global enterprises reveals how businesses are transitioning from early-stage GenAI applications towards more goal-oriented, human-plus-AI agentic systems. It believes that to scale responsibly, enterprises must prioritize human-AI collaboration, trust and data readiness. Nasscom adds that most firms (77%) adopt AI agents with a 'human-in-the-loop' approach, reflecting the need for oversight and contextual judgment.
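The Grand View Research figures above imply a steep growth rate; as a quick sanity check, the compound annual growth rate can be derived directly from the start and end values:

```python
# Implied compound annual growth rate (CAGR) from the figures cited above:
# $5.4B in 2024 growing to $50.3B in 2030, i.e. six years of compounding.
start, end, years = 5.4, 50.3, 6
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 45% per year
```

That is, the projection assumes the market grows by roughly 45% every year through 2030.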


Business Mayor
26-05-2025
- Business
- Business Mayor
The battle to AI-enable the web: NLweb and what enterprises need to know
In the first generation of the web, back in the late 1990s, search was okay but not great, and it wasn't easy to find things. That led to the rise of syndication protocols in the early 2000s, with Atom and RSS (Really Simple Syndication) providing a simplified way for website owners to make headlines and other content easily available and searchable.

In the modern era of AI, a new group of protocols is emerging to serve the same basic purpose. This time, instead of making sites easier for humans to find, it's all about making websites easier for AI. Anthropic's Model Context Protocol (MCP), Google's Agent2Agent and llms.txt are among the existing efforts. The newest is Microsoft's open-source NLWeb (natural language web) effort, announced during the Build 2025 conference. NLWeb is also directly linked to the first generation of web syndication standards, as it was conceived and created by RV Guha, who helped create RSS, RDF (Resource Description Framework) and Schema.org.

NLWeb enables websites to easily add AI-powered conversational interfaces, effectively turning any website into an AI app where users can query content using natural language. NLWeb isn't necessarily about competing with other protocols; rather, it builds on top of them. The new protocol uses existing structured data formats like RSS, and each NLWeb instance functions as an MCP server. 'The idea behind NLWeb is it is a way for anyone who has a website or an API already to very easily make their website or their API an agentic application,' Microsoft CTO Kevin Scott said during his Build 2025 keynote. 'You really can think about it a little bit like HTML for the agentic web.'

NLWeb transforms websites into AI-powered experiences through a straightforward process that builds on existing web infrastructure while leveraging modern AI technologies.
Building on existing data: The system begins by leveraging structured data that websites already publish, including Schema.org markup, RSS feeds and other semi-structured formats commonly embedded in web pages. This means publishers don't need to completely rebuild their content infrastructure.

Data processing and storage: NLWeb includes tools for adding this structured data to vector databases, which enable efficient semantic search and retrieval. The system supports all major vector database options, allowing developers to choose the solution that best fits their technical requirements and scale.

AI enhancement layer: LLMs then enhance this stored data with external knowledge and context. For instance, when a user asks about restaurants, the system automatically layers on geographic insights, reviews and related information by combining the vectorized content with LLM capabilities to provide comprehensive, intelligent responses rather than simple data retrieval.

Universal interface creation: The result is a natural language interface that serves both human users and AI agents. Visitors can ask questions in plain English and receive conversational responses, while AI systems can programmatically access and query the site's information through the MCP framework.

This approach allows any website to participate in the emerging agentic web without requiring extensive technical overhauls. It makes AI-powered search and interaction as accessible as creating a basic webpage was in the early days of the internet.

There are a lot of different protocols emerging in the AI space; not all do the same thing. Google's Agent2Agent, for example, is all about enabling agents to talk to each other. It's about orchestrating and coordinating agentic AI and is not particularly focused on AI-enabling existing websites or content.
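The pipeline described above — structured items indexed into a vector store, then retrieved for a natural-language query — can be sketched in miniature. This is an illustrative toy, not NLWeb's actual implementation: the bag-of-words "embedding" and cosine similarity stand in for a real embedding model and vector database, and the sample items are invented.

```python
# Toy sketch of an NLWeb-style retrieval pipeline (illustrative only).
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Step 1: structured data the site already publishes (e.g. from RSS/Schema.org).
items = [
    {"name": "Thai basil chicken", "description": "spicy thai stir fry recipe"},
    {"name": "Margherita pizza", "description": "classic italian pizza recipe"},
]

# Step 2: index each item as a vector (the "vector database" here is a list).
index = [(item, embed(item["description"])) for item in items]

# Step 3: retrieve the best match for a natural-language query. A real
# deployment would then hand the retrieved context to an LLM to compose a
# conversational answer, and expose this via MCP.
def search(query: str) -> dict:
    qv = embed(query)
    return max(index, key=lambda pair: cosine(qv, pair[1]))[0]

print(search("something spicy and thai")["name"])  # → Thai basil chicken
```

The design point the sketch captures is that the publisher's existing structured data does most of the work; the AI layer is retrieval plus an LLM on top, not a rebuild of the site.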
Maria Gorskikh, founder and CEO of AIA and a contributor to the Project NANDA team at MIT, explained to VentureBeat that Google's A2A enables structured task passing between agents using defined schemas and lifecycle models. 'While the protocol is open-source and model-agnostic by design, its current implementations and tooling are closely tied to Google's Gemini stack — making it more of a backend orchestration framework than a general-purpose interface for web-based services,' she said.

Another emerging effort is llms.txt. Its goal is to help LLMs better access web content. While on the surface it might sound somewhat like NLWeb, it's not the same thing. 'NLWeb doesn't compete with llms.txt; it is more comparable to web scraping tools that try to deduce intent from a website,' Michael Ni, VP and principal analyst at Constellation Research, told VentureBeat. Krish Arvapally, co-founder and CTO of Dappier, explained to VentureBeat that llms.txt provides a markdown-style format with training permissions that helps LLM crawlers ingest content appropriately, while NLWeb focuses on enabling real-time interactions directly on a publisher's website. Dappier has its own platform that automatically ingests RSS feeds and other structured data, then delivers branded, embeddable conversational interfaces; publishers can syndicate their content to its data marketplace.

MCP is the other big protocol, and it is increasingly becoming a de facto standard and a foundational element of NLWeb. Fundamentally, MCP is an open standard for connecting AI systems with data sources. Ni explained that in Microsoft's view, MCP is the transport layer: together, MCP and NLWeb provide the HTML and TCP/IP of the open agentic web. Forrester senior analyst Will McKeon-White sees a number of advantages for NLWeb over other options.
'The main advantage of NLWeb is better control over how AI systems 'see' the pieces that make up websites, allowing for better navigation and more complete understanding of the tooling,' McKeon-White told VentureBeat. 'This could reduce both errors from systems misunderstanding what they're seeing on websites, as well as reduce interface rework.'

Microsoft didn't just throw NLWeb over the proverbial wall and hope someone would use it. It already has multiple organizations engaged and using NLWeb, including Chicago Public Media, Allrecipes, Eventbrite, Hearst (Delish), O'Reilly Media, Tripadvisor and Shopify. Andrew Odewahn, chief technology officer at O'Reilly Media, is among the early adopters and sees real promise for NLWeb.

'NLWeb leverages the best practices and standards developed over the past decade on the open web and makes them available to LLMs,' Odewahn told VentureBeat. 'Companies have long spent time optimizing this kind of metadata for SEO and other marketing purposes, but now they can take advantage of this wealth of data to make their own internal AI smarter and more capable with NLWeb.'

In his view, NLWeb is valuable for enterprises both as consumers of public information and publishers of private information. He noted that nearly every company has sales and marketing efforts where they might need to ask, 'What does this company do?' or 'What is this product about?' 'NLWeb provides a great way to open this information to your internal LLMs so that you don't have to go hunting and pecking to find it,' Odewahn said. 'As a publisher, you can add your own metadata using standard Schema.org markup and use NLWeb internally as an MCP server to make it available for internal use.'

Using NLWeb isn't necessarily a heavy lift, either. Odewahn noted that many organizations are probably already using many of the standards NLWeb relies on. 'There's no downside in trying it out now since NLWeb can run entirely within your infrastructure,' he said.
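For readers unfamiliar with the metadata being discussed, here is a minimal example of the kind of Schema.org JSON-LD markup many sites already embed and that an NLWeb-style pipeline could ingest. The product, names and values are hypothetical, shown only to illustrate the format:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "description": "A hypothetical product entry a site might already publish.",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD"
  }
}
```

Because this structured layer already exists on many sites for SEO, exposing it to an AI interface is an incremental step rather than a rebuild.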
'It's open source software meeting the best in open source data, so you have nothing to lose and a lot to gain from trying it now.'

Constellation Research's Michael Ni takes a cautiously positive view of NLWeb, but that doesn't mean enterprises need to adopt it immediately. Ni noted that NLWeb is in the very early stages of maturity, and enterprises should expect two to three years before any substantial adoption. He suggests that leading-edge companies with specific needs, such as active marketplaces, can pilot it now and help shape the standard. 'It's a visionary specification with clear potential, but it needs ecosystem validation, implementation tooling, and reference integrations before it can reach mainstream enterprise pilots,' Ni said.

Others take a more aggressive view of adoption. Gorskikh suggests an accelerated approach to ensure your enterprise doesn't fall behind. 'If you're an enterprise with a large content surface, internal knowledge base, or structured data, piloting NLWeb now is a smart and necessary step to stay ahead,' she said. 'This isn't a wait-and-see moment — it's more like the early adoption of APIs or mobile apps.' That said, she noted that regulated industries need to tread carefully: sectors like insurance, banking and healthcare should hold off on production use until there's a neutral, decentralized verification and discovery system in place. There are already early-stage efforts addressing this, such as the NANDA project at MIT that Gorskikh participates in, which is building an open, decentralized registry and reputation system for agentic services.

For enterprise AI leaders, NLWeb is a watershed moment and a technology that should not be ignored. AI is going to interact with your site, and you need to AI-enable it. NLWeb is one way to do that, and it will be particularly attractive to publishers, much like RSS became a must-have for all websites in the early 2000s.
In a few years, users will just expect it to be there; they will expect to be able to search and find things, while agentic AI systems will need to be able to access the content as well. That's the promise of NLWeb.