Everything you need to know from Google I/O 2025
From the opening AI-influenced intro video set to "You Get What You Give" by New Radicals to CEO Sundar Pichai's sign-off, Google I/O 2025 was packed with news and updates for the tech giant and its products. And when we say packed, we mean it, as this year's Google I/O clocked in at nearly two hours long.
During that time, Google shared some big wins for its AI products, such as Gemini topping various categories on the LMArena leaderboard. Google also seemed particularly proud that Gemini completed Pokémon Blue a few weeks ago.
But we know what you're really here for: product updates and new product announcements.
Aside from a few braggadocious moments, Google spent most of those 117 minutes talking about what's coming out next. Google I/O mixes consumer-facing product announcements with more developer-oriented ones, from the latest Gmail updates to Google's powerful new chip, Ironwood, coming to Google Cloud customers later this year.
We're going to break down the product updates and announcements you need to know from the full two-hour event, so you can walk away with all the takeaways without spending a feature film's runtime to get them.
Before we dive in, though, here's the most shocking news out of Google I/O: the subscription pricing for Google's new AI Ultra plan. While Google offers a base subscription at $19.99 per month, the Ultra plan comes in at a whopping $249.99 per month for the company's entire suite of products with the highest rate limits available.
Google tucked what will easily be its most visible feature far too deep into the event, but we'll surface it right here at the top.
At Google I/O, Google announced that its new AI Mode feature for Google Search is launching today for everyone in the United States. In essence, it lets users run longer, more complex queries through Google Search. Using a "query fan-out technique," AI Mode breaks a search into multiple parts, processes each part separately, then pulls the information together to present to the user. Google says AI Mode "checks its work" too, but it's unclear at this time exactly what that means.
Google announces AI Mode in Google Search. Credit: Google
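Google hasn't published the details of this technique, but the shape of a query fan-out is straightforward: split the query, search each piece in parallel, and merge the results. Here's a minimal sketch in Python, where decompose, search, and synthesize are hypothetical stand-ins for Google's internal systems, not real APIs.

```python
from concurrent.futures import ThreadPoolExecutor

# All three helpers below are hypothetical stubs, not Google APIs.
def decompose(query: str) -> list[str]:
    """Split a complex query into simpler sub-queries (stubbed)."""
    return [part.strip() for part in query.split(" and ")]

def search(sub_query: str) -> str:
    """Run one sub-query against a search backend (stubbed)."""
    return f"results for: {sub_query}"

def synthesize(query: str, results: list[str]) -> str:
    """Merge the sub-query results into one answer (stubbed)."""
    return f"Answer to {query!r} built from {len(results)} result sets."

def fan_out(query: str) -> str:
    sub_queries = decompose(query)
    # Issue the sub-queries in parallel, then combine what comes back.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(search, sub_queries))
    return synthesize(query, results)

print(fan_out("lightweight tents under $200 and campsites near Yosemite"))
```

Presumably the "checks its work" step would slot in between the search and synthesis stages, verifying each result set before the final answer is assembled.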
AI Mode is available now. Later in the summer, Google will launch Personal Context in AI Mode, which will make suggestions based on a user's past searches and other contextual information about the user from other Google products like Gmail.
In addition, other new features will soon come to AI Mode, such as Deep Search, which can dive deeper into queries by searching through multiple websites, and data visualization features, which can take the search results and present them in a visual graph when applicable.
According to Google, AI Overviews in Search are viewed by 1.5 billion users every month, so AI Mode clearly has the largest potential user base of all of Google's announcements today.
Out of all the announcements at the event, these AI shopping features seemed to spark the biggest reaction from Google I/O live attendees.
Connected to AI Mode, Google showed off its Shopping Graph, which includes more than 50 billion products globally. Users can simply describe the type of product they're looking for, say, a specific type of couch, and Google will present options that match that description.
Google AI Shopping. Credit: Google
In one notable demo, a presenter uploaded a photo of herself so the AI could generate a visual of how she'd look in a dress. This virtual try-on feature will be available in Google Labs, and it's the IRL version of Cher's Clueless closet.
The presenter was then able to use an AI shopping agent to keep tabs on the item's availability and track its price. When the price dropped, she received a notification of the change.
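Google didn't explain how the agent works under the hood, but a price watch boils down to a polling loop. Below is a minimal sketch, assuming a hypothetical get_price lookup and a notify callback; a real agent would poll on the order of hours, not seconds.

```python
import random
import time

def get_price(product_id: str) -> float:
    """Fetch the product's current price (hypothetical stub)."""
    return round(random.uniform(80.0, 120.0), 2)

def notify(message: str) -> None:
    """Deliver a notification to the user (hypothetical stub)."""
    print(message)

def watch_price(product_id: str, target: float, interval_s: float = 1.0) -> None:
    """Poll the price and alert the user once it drops to the target."""
    while True:
        price = get_price(product_id)
        if price <= target:
            notify(f"Price drop: {product_id} is now ${price:.2f}")
            return
        time.sleep(interval_s)

watch_price("mid-century couch", target=90.0)
```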
Google said users will be able to try on different looks via AI in Google Labs starting today.
Google's long-awaited post-Google Glass AR/VR plans were finally presented at Google I/O, where the company unveiled a number of wearable products built on its AR/VR operating system, Android XR.
One important part of the Android XR announcement is that Google seems to understand the different use cases for an immersive headset versus an on-the-go pair of smartglasses, and it has built Android XR to accommodate both.
While Samsung has previously teased its Project Moohan XR headset, Google I/O marked the first time Google itself showed off the product, which it is building in partnership with the mobile giant and chipmaker Qualcomm. Google shared that the Project Moohan headset should be available later this year.
Project Moohan. Credit: Google
In addition to the XR headset, Google announced Glasses with Android XR: smartglasses that incorporate a camera, speakers, and an in-lens display, and that connect to a user's smartphone. Unlike Google Glass, these smartglasses will come in more fashionable designs thanks to partnerships with Gentle Monster and Warby Parker.
Google shared that developers will be able to start building for Glasses next year, so a consumer release date will likely follow after that.
Easily the star of Google I/O 2025 was the company's AI model, Gemini. Google announced an updated Gemini 2.5 Pro, which it says is its most powerful model yet, and demoed it turning sketches into full applications. Alongside it, Google introduced Gemini 2.5 Flash, a more affordable version of the powerful Pro model. Flash will be released in early June, with Pro coming out soon after. Google also revealed Gemini 2.5 Pro Deep Think, a mode for complex math and coding that will only be available to "trusted testers" at first.
Speaking of coding, Google showed off Jules, its asynchronous coding agent, which is currently in public beta. Developers can use Jules to tackle codebase tasks and modify files.
Jules coding agent. Credit: Google
Developers will also have access to a new Native Audio Output text-to-speech model that can replicate the same voice across different languages.
The Gemini app will soon see a new Agent Mode, giving users an AI agent that can research and complete tasks based on their prompts.
Gemini will also be deeply integrated into Google products like Workspace with Personalized Smart Replies. Gemini will draw on personal context from documents, emails, and more across a user's Google apps to match their tone, voice, and style when generating automatic replies. Workspace users will find the feature available in Gmail this summer.
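Google didn't share the mechanics, but a common way to build this kind of personalization is to show the model a few examples of the user's own writing and ask it to match the style. The sketch below is a hypothetical illustration; fetch_recent_messages and the prompt format are assumptions, not Google's actual pipeline.

```python
def fetch_recent_messages(user_id: str) -> list[str]:
    """Pull recent sent messages for style context (hypothetical stub)."""
    return ["Thanks so much for the update!", "Happy to help, just say the word."]

def build_reply_prompt(incoming: str, style_examples: list[str]) -> str:
    """Assemble a prompt asking a model to reply in the user's voice."""
    examples = "\n".join(f"- {ex}" for ex in style_examples)
    return (
        "Write a short reply that matches the tone of these past messages:\n"
        f"{examples}\n\n"
        f"Incoming message: {incoming}\n"
        "Reply:"
    )

prompt = build_reply_prompt(
    "Can we move our call to Friday?", fetch_recent_messages("me")
)
# A real system would send this prompt to a model such as Gemini.
print(prompt)
```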
Other features announced for Gemini include Deep Research, which lets users upload their own files to guide the AI agent when asking questions, and Gemini in Chrome, an AI assistant that answers queries using the content of the web page a user is viewing. The latter feature is rolling out this week for Gemini subscribers in the U.S.
Google intends to bring Gemini to all of its devices, including smartwatches, smart cars, and smart TVs.
Gemini's AI assistant capabilities and language model updates were only a small piece of Google's broader AI puzzle. The company had a slew of generative AI announcements to make too.
Google announced Imagen 4, its latest image generation model. According to Google, Imagen 4 provides richer details and better visuals. In addition, Imagen 4 is apparently much better at generating text and typography in its graphics, an area where AI models are notoriously bad, so it appears to be a big step forward.
Flow AI video tool. Credit: Google
A new video generation model, Veo 3, was also unveiled with a video generation tool called Flow. Google claims Veo 3 has a stronger understanding of physics when generating scenes and can also create accompanying sound effects, background noise, and dialogue.
Both Veo 3 and Flow are available today alongside a new generative music model called Lyria 2.
Google I/O also saw the debut of Gemini Canvas, which Google describes as a co-creation platform.
Another big announcement out of Google I/O: Project Starline is no more.
Google's immersive communication project will now be known as Google Beam, an AI-first communication platform.
As part of Google Beam, Google announced Google Meet translations, which provide real-time speech translation during meetings on the platform. AI can match a speaker's voice and tone, so the translation sounds like it's coming directly from them. Google Meet translations are available in English and Spanish starting today, with more languages on the way in the coming weeks.
Google Meet translations. Credit: Google
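Google hasn't said how the voice matching works internally, but the overall flow reads like a classic three-stage pipeline: speech-to-text, translation, then speech synthesis conditioned on the original speaker's voice. Here's a rough sketch under that assumption; all three stages are hypothetical stubs.

```python
def transcribe(audio_chunk: bytes, lang: str) -> str:
    """Convert a chunk of meeting audio to text (hypothetical stub)."""
    return "hola a todos, empecemos"

def translate(text: str, src: str, dst: str) -> str:
    """Translate the transcript (hypothetical stub)."""
    return "hello everyone, let's get started"

def synthesize(text: str, voice_profile: bytes) -> bytes:
    """Speak the translation in a voice matching the profile (stub)."""
    return text.encode("utf-8")

def translate_chunk(audio_chunk: bytes, voice_profile: bytes) -> bytes:
    """One pass of the loop: hear Spanish audio, emit English audio."""
    text = transcribe(audio_chunk, lang="es")
    english = translate(text, src="es", dst="en")
    return synthesize(english, voice_profile)

out = translate_chunk(b"...", voice_profile=b"speaker-embedding")
```

The hard part in practice is running all three stages fast enough to feel real-time, which is presumably where most of Google's engineering effort went.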
Google also had another work-in-progress project to tease under Google Beam: a 3D conferencing platform that uses multiple cameras to capture a user from different angles and render them on a 3D light-field display.
While Project Starline may have undergone a name change, Project Astra is still alive and kicking at Google, at least for now.
Project Astra is Google's effort to build a real-world universal AI assistant, and the company had plenty to announce on that front.
Gemini Live is a new AI assistant feature that can interact with a user's surroundings via their mobile device's camera and audio input. Users can ask Gemini Live questions about what they're capturing on camera, and the assistant will answer based on those visuals. According to Google, Gemini Live is rolling out today to Gemini users.
Gemini Live. Credit: Google
It appears Google plans to bring Project Astra's live AI capabilities to Google Search's AI Mode as a Google Lens visual search enhancement.
Google also highlighted some of its hopes for Gemini Live, such as serving as an accessibility tool for people with disabilities.
Another of Google's AI projects, Project Mariner, is an AI agent that can interact with the web to complete tasks for the user.
Project Mariner was first announced late last year, but Google shared some updates, including a multitasking feature that allows the agent to work on up to 10 different tasks simultaneously. Another new feature is Teach and Repeat, which lets the agent learn from a previously completed task so it can handle similar ones without the same detailed direction in the future.
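For a sense of what "up to 10 tasks simultaneously" might look like in code, here's a minimal sketch using Python's asyncio with a semaphore capping concurrency; run_web_task is a hypothetical stand-in for the agent's actual browsing work.

```python
import asyncio

MAX_CONCURRENT_TASKS = 10  # Mariner's stated ceiling.

async def run_web_task(description: str) -> str:
    """Carry out one web-browsing task (hypothetical stub)."""
    await asyncio.sleep(0.1)  # Stand-in for navigation and clicks.
    return f"done: {description}"

async def run_agent(tasks: list[str]) -> list[str]:
    limit = asyncio.Semaphore(MAX_CONCURRENT_TASKS)

    async def bounded(task: str) -> str:
        # The semaphore keeps at most 10 tasks in flight at once.
        async with limit:
            return await run_web_task(task)

    return await asyncio.gather(*(bounded(t) for t in tasks))

results = asyncio.run(run_agent(["book a flight", "compare laptop prices"]))
print(results)
```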
Google announced plans to bring these agentic AI capabilities to Chrome, Google Search via AI Mode, and the Gemini app.