Google rolls out Project Mariner, its web-browsing AI agent


Yahoo · 21-05-2025

Google announced during Google I/O 2025 that it's rolling out Project Mariner, the company's experimental AI agent that browses and uses websites, to more users and developers. Google also says it's significantly updated how Project Mariner works, allowing the agent to take on nearly a dozen tasks at a time.
U.S. subscribers to Google's new $249.99-per-month AI Ultra plan will get access to Project Mariner, and the company says support for more countries is coming soon. Google also says it's bringing Project Mariner's capabilities to the Gemini API and Vertex AI, allowing developers to build out applications powered by the agent.
First unveiled in late 2024, Project Mariner represents Google's boldest effort yet to revamp how users interact with the internet through AI agents. At launch, Google Search leaders said they viewed Project Mariner as part of a fundamental user experience shift, in which people will delegate more tasks to an AI agent, instead of visiting websites and completing those tasks themselves.
For example, Project Mariner users can purchase tickets to a baseball game or buy groceries online without ever visiting a third-party website — they just chat with Google's AI agent, and it visits websites and takes actions for them.
Project Mariner competes with other web-browsing AI agents, such as OpenAI's Operator, Amazon's Nova Act, and Anthropic's Computer Use. These tools are all experimental, and in TechCrunch's testing the prototypes have been slow and prone to mistakes.
However, Google says it's taken feedback from early testers to improve Project Mariner's capabilities. A Google spokesperson tells TechCrunch the company updated Project Mariner to run on virtual machines in the cloud, much like agents from OpenAI and Amazon. This means users can work on other projects while Project Mariner completes tasks in the background — Google says the new Project Mariner can handle up to 10 tasks simultaneously.
This update makes Project Mariner significantly more useful compared to its predecessor, which ran on a user's browser. As I noted in my initial review, Project Mariner's early design meant users couldn't use other tabs or apps on their desktop while the AI agent was working. This kind of defeated the purpose of an AI agent — it would work for you, but you couldn't do anything else while it was working.
In the coming months, Google says users will be able to access Project Mariner in AI Mode, the company's AI-powered Google Search experience. When it launches, the feature will be limited to Search Labs, Google's opt-in testing ground for search features. Google says it's working with Ticketmaster, StubHub, Resy, and Vagaro to power some of these agentic flows.
Separately today, Google unveiled an early demo of another agentic experience called "Agent Mode." The company says this feature combines web browsing with research features and integrations, as well as with other Google apps. Google says Ultra subscribers will gain access to Agent Mode on desktop soon.
At this year's I/O, Google finally seems willing to ship the agentic experiences it's been talking about for years. Project Mariner, Agent Mode, and AI Mode all seem poised to change how users navigate the web, and how vendors interact with their customers online. Web-browsing agents have big implications for the internet economy, but Google seems ready to put all these agents out in the world.
This article originally appeared on TechCrunch at https://techcrunch.com/2025/05/20/google-rolls-out-project-mariner-its-web-browsing-ai-agent/


Related Articles

Google Photos merges classic search with AI to speed up results

TechCrunch · 27 minutes ago

After Google temporarily paused the rollout of its buggy AI-powered 'Ask Photos' feature in Google Photos, the company announced that it has improved the feature's ability to quickly return search results. The AI feature, first introduced at Google's I/O developer conference last year, allows users to search across their collection of digital photos using natural language queries. Leveraging Google's Gemini, Ask Photos taps into the AI's ability to understand a photo's content and its other metadata when responding to input.

However, users complained the AI feature wasn't reliable and was often slow to respond while the AI was 'thinking.' Addressing these concerns, Google Photos product manager Jamie Aspinall wrote on X earlier in June that 'Ask Photos isn't where it needs to be, in terms of latency, quality and ux,' and noted the rollout would be paused for a couple of weeks while Google worked to bring back the 'speed and recall of the original search.'

In a short blog post published on Thursday, Google says it's bringing the best of Photos' classic search feature into Ask Photos, particularly for simple searches like 'beach' or 'dogs.' This allows the search results to display more quickly, as classic search did before. The AI, meanwhile, will work in the background to find the most relevant photos and to answer more complex queries. For instance, if you search for a photo of a 'white dog,' a series of initial search results immediately appears. After the AI finishes its analysis, its results will appear below, along with some introductory text that may identify your dog by name, if you've added it, and tell you when photos of the animal first appeared. The interface still allows you to switch to classic search if you prefer.

As a result of these changes, Google has now resumed the rollout of Ask Photos to more people across the U.S. To be eligible to use Ask Photos, you must be 18 or older, and your account language must be set to English. You must also enable Face Groups, the feature that labels the people and pets found in the Google Photos library.

Gemini is getting ready to replace Google Assistant on Android

The Verge · 36 minutes ago

Android users will soon be able to let Gemini control device features and apps with fewer privacy concerns. In an email seen by Android Police, Google recently notified Gemini users that it will start rolling out an update on July 7th that allows the AI bot to 'use Phone, Messages, WhatsApp, and Utilities on your phone, whether your Gemini Apps Activity is on or off.'

Disabling the Gemini Apps Activity setting stops conversations with the chatbot from being used to 'provide, improve, develop, and personalize' Google products and AI models. It also currently prevents users from asking Gemini to perform tasks in connected apps, such as setting alarms, calling contacts, sending WhatsApp messages, and controlling media playback settings.

The vague wording of Google's message initially raised some confusion about whether the change would give Gemini unrestricted access to private data or system functions. Google later clarified that Gemini's app connections can still be disabled at any time, and that the update 'is good for users.' 'They can now use Gemini to complete daily tasks on their mobile devices like send messages, initiate phone calls, and set timers while Gemini Apps Activity is turned off,' Google said in a statement to Android Authority. 'With Gemini Apps Activity turned off, their Gemini chats are not being reviewed or used to improve our AI models.'

The incoming change means people can use Gemini like a personal assistant for their device without contributing to Google's AI training datasets, just in time for Gemini to actually replace Google Assistant on Android devices later this year. Turning Apps Activity off will also stop Gemini interactions from appearing in the activity log, though Google notes it will still save conversations for up to 72 hours for security purposes, regardless of the setting.

Meta Scores AI Fair Use Court Victory, but Judge Warns Such Wins Won't Always Be the Case

CNET · an hour ago

AI companies scored another victory in court this week. Meta on Wednesday won a motion for partial summary judgment in its favor in Kadrey v. Meta, a case brought by 13 authors alleging the company infringed on their copyright protections by illegally using their books to train its Llama AI models. The ruling comes two days after a similar victory for Claude maker Anthropic. But Judge Vince Chhabria stressed in his order that this ruling should be limited and doesn't absolve Meta of future claims from other authors. "This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful," he wrote. "It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one."

The issue at the heart of the cases is whether the AI companies' use of protected content for AI training qualifies as fair use. The fair use doctrine is a fundamental part of US copyright law that allows people to use copyrighted work without the rights holders' explicit permission, as in education and journalism. There are four key considerations when evaluating whether something is fair use. Anthropic's ruling focused on transformativeness, while Meta's focused on the effect the use of AI has on the existing publishing market.

These rulings are big wins for AI companies. OpenAI, Google, and others have been fighting for fair use so they don't have to enter costly and lengthy licensing agreements with content creators, much to the chagrin of those creators. The authors bringing these cases may still see some victories in subsequent piracy trials (for Anthropic) or new lawsuits. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

In his analysis, Chhabria focused on the effect AI-generated books have on the existing publishing market, which he saw as the most important of the four factors needed to prove fair use. He wrote extensively about the risk that generative AI and large language models could violate copyright law, and said fair use needs to be evaluated on a case-by-case basis. Some works, like autobiographies and classic literature such as The Catcher in the Rye, likely couldn't be created with AI, he wrote. However, he noted that "the market for the typical human-created romance or spy novel could be diminished substantially by the proliferation of similar AI-created works." In other words, AI slop could make human-written books seem less valuable and undercut authors' willingness and ability to create.

Still, Chhabria said the plaintiffs did not show sufficient evidence to prove harm from how "Meta's models would dilute the market for their own works." The plaintiffs focused their arguments on how Meta's AI models can reproduce exact snippets from their works and how the company's Llama models hurt their ability to license their books to AI companies. These arguments weren't as compelling in Chhabria's eyes -- he called them "clear losers" -- so he sided with Meta.

That's different from the Anthropic ruling, where Judge William Alsup focused on the "exceedingly transformative" nature of the use of the plaintiffs' books in the results AI chatbots spit out. Chhabria wrote that while "there is no disputing" the use of copyrighted material was transformative, the more urgent question was the effect AI systems had on the ecosystem as a whole. Alsup also outlined concerns about Anthropic's methods of obtaining the books, first through illegal online libraries and then by deliberately purchasing print copies to digitize for a "research library."

Two court rulings do not make every AI company's use of content legal under fair use. What makes these cases notable is that they are the first to issue substantive legal analyses on the issue; AI companies and publishers have been duking it out in court for years now. But just as Chhabria referenced and responded to the Anthropic ruling, all judges use past cases with similar situations as reference points. They don't have to come to the same conclusion, but the role of precedent is important. It's likely that we'll see these two rulings referenced in other AI and copyright/piracy cases. But we'll have to wait and see how big an effect these rulings will have on future cases -- and whether it's the warnings or the green lights that hold the most weight in future decisions. For more, check out our guide to copyright and AI.
