Google is adding its AI-powered Gemini voice assistant to Android Auto

Yahoo | 20-05-2025

This story was originally published on Automotive Dive. To receive daily news and insights, subscribe to our free daily Automotive Dive newsletter.
Google is adding its Gemini AI-powered voice assistant to Android Auto, Patrick Brady, Google's VP of Android for Cars, announced in a May 13 blog post.
The generative AI technology may boost safety by reducing distractions: drivers can speak naturally on a range of topics and get detailed responses without remembering specific phrases or interacting with the infotainment screen.
Gemini will be available for vehicles that support Android Auto in the coming months, and for other models that feature Google built-in later this year, including the new Lincoln Nautilus, Honda Passport and select Chevrolet, Cadillac, GMC, Acura and Volvo vehicles.
The Android Auto app was announced by the Open Automotive Alliance over a decade ago during the annual Google I/O developer conference. The alliance is a coalition of OEMs and technology suppliers collaborating to integrate the Android smartphone ecosystem and apps into vehicle infotainment systems. Open Automotive Alliance automaker partners include General Motors, Ford, Toyota, Volkswagen and over 30 other vehicle brands.
Android Auto is designed to mirror features from an Android smartphone in a vehicle's infotainment screen. There are more than 250 million vehicles on the road that support Android Auto, according to the blog post. It's similar to Apple CarPlay for iPhone users.
Using generative AI, drivers can speak naturally on a range of topics. The technology also supports more advanced features for messaging, including setting the preferred language for specific contacts, which the system will remember in the future. Gemini can translate messages into over 40 languages, as well as help drivers craft a new message, per the release.
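The per-contact language behavior described above can be sketched as a simple preference lookup that is consulted before each message. This is purely an illustrative sketch under assumed names; nothing here reflects Google's actual implementation or API.

```python
# Hypothetical sketch of per-contact language preferences: the assistant
# records a contact's preferred language once, then recalls it for all
# future messages. All names below are illustrative assumptions.
prefs: dict[str, str] = {}

def set_preferred_language(contact: str, language: str) -> None:
    # Remember this contact's language for future messages.
    prefs[contact] = language

def outgoing_language(contact: str, default: str = "en") -> str:
    # Fall back to a default when no preference has been set.
    return prefs.get(contact, default)

set_preferred_language("Ana", "es")
print(outgoing_language("Ana"))  # preference recalled later
print(outgoing_language("Ben"))  # no preference set, default used
```

The real system would persist these preferences and route each draft through a translation model, but the remembered-preference pattern is the core idea.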
If a driver is using Google Maps for navigation, for example, they can ask Gemini to find restaurants along the route by cuisine, then follow up with general questions about the business. Drivers can also use Gemini to retrieve an address or other details from a Gmail inbox.
Gemini will also be added to the Android Automotive OS (AAOS) infotainment platform, which can serve as a vehicle's standard in-vehicle software, running on compatible hardware with software preinstalled by OEMs. The company said it worked with over a dozen car brands to launch next-generation cars with Google built-in.
The integration of Gemini into AAOS presents a significant market opportunity for OEMs to improve the in-vehicle experience for customers.
AAOS offers drivers an interface optimized for a vehicle's screen dimensions, and compatible apps can be downloaded directly to the vehicle without a smartphone. Over 50 models now feature Google built-in, according to the release. Google will soon release additional apps, including games and video streaming apps for use while a vehicle is stationary.
In addition to adding Gemini to Android Auto and the AAOS, Google says it's also working on digital car keys for Audi, Volvo and Polestar vehicles, allowing drivers to remotely lock, unlock or start their vehicle with a smartphone app. The digital key technology, which does not require a vehicle's key fob, will roll out to more models soon, per the release.
Google this week will showcase Android Auto with Gemini Live integration at the annual Google I/O developer conference in Mountain View, California. The demos will include both vehicles compatible with Android Auto and vehicles with Google built-in.
Other automakers are planning to add generative AI to their models. At the CES technology conference in January 2024, Volkswagen demonstrated a new AI-powered in-vehicle voice assistant for the new ID.7 electric sedan developed in partnership with software company Cerence. The technology enables drivers and passengers to control vehicle functions using their voice, such as climate controls.
In February, Stellantis announced plans to launch an AI-powered in-car assistant. The assistant will enable the driver to ask questions about vehicle features and receive immediate guidance using conversational interactions, such as what vehicle warning indicators mean or how to troubleshoot a problem with a vehicle.
Last month, Kia announced it's rolling out a new generative AI-powered voice recognition system called 'AI Assistant' for the EV3 in Europe.


Related Articles

Everything you need to know from Google I/O 2025

Yahoo | an hour ago

From the opening AI-influenced intro video set to "You Get What You Give" by New Radicals to CEO Sundar Pichai's sign-off, Google I/O 2025 was packed with news and updates for the tech giant and its products. And when we say packed, we mean it, as this year's Google I/O clocked in at nearly two hours long.

During that time, Google shared some big wins for its AI products, such as Gemini topping various categories on the LMArena leaderboard. Another example Google seemed really proud of was the fact that Gemini completed Pokémon Blue a few weeks ago.

But we know what you're really here for: product updates and new product announcements. Aside from a few braggadocious moments, Google spent most of those 117 minutes talking about what's coming out next. Google I/O mixes consumer-facing product announcements with more developer-oriented ones, from the latest Gmail updates to Google's powerful new chip, Ironwood, coming to Google Cloud customers later this year. We're going to break down the product updates and announcements you need to know from the full two-hour event, so you can walk away with all the takeaways without spending the time it takes to watch a major motion picture.

Before we dive in, though, here's the most shocking news out of Google I/O: the subscription pricing for Google's AI plans. While Google provides a base subscription at $19.99 per month, the Google AI Ultra plan comes in at a whopping $249.99 per month for its entire suite of products with the highest rate limits available.

Google tucked away what will easily be its most visible feature way too far back into the event, but we'll surface it to the top. At Google I/O, Google announced that the new AI Mode feature for Google Search is launching today to everyone in the United States. Essentially, it allows users to run longer, more complex queries through Google's search feature.
Using a "query fan-out technique," AI Mode will be able to break a search into multiple parts, process each part of the query, then pull all the information together to present to the user. Google says AI Mode "checks its work" too, but it's unclear at this time exactly what that means.

[Image: Google announces AI Mode in Google Search. Credit: Google]

AI Mode is available now. Later in the summer, Google will launch Personal Context in AI Mode, which will make suggestions based on a user's past searches and other contextual information about the user from other Google products like Gmail. In addition, other new features will soon come to AI Mode, such as Deep Search, which can dive deeper into queries by searching through multiple websites, and data visualization features, which can present search results in a visual graph when applicable. According to Google, its AI Overviews in Search are viewed by 1.5 billion users every month, so AI Mode clearly has the largest potential user base of all of Google's announcements today.

Out of all the announcements at the event, the AI shopping features seemed to spark the biggest reaction from Google I/O live attendees. Connected to AI Mode, Google showed off its Shopping Graph, which includes more than 50 billion products globally. Users can simply describe the type of product they are looking for, say a specific type of couch, and Google will present options that match that description.

[Image: Google AI Shopping. Credit: Google]

Google also had a significant presentation in which a presenter uploaded a photo of herself so that AI could create a visual of what she'd look like in a dress. This virtual try-on feature will be available in Google Labs, and it's the IRL version of Cher's Clueless closet. The presenter was then able to use an AI shopping agent to keep tabs on the item's availability and track its price. When the price dropped, the user received a notification of the change.
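The "query fan-out" technique described above can be sketched as a decompose/resolve/aggregate loop: split a complex query into sub-queries, answer each independently (potentially in parallel), then merge the partial results. This is a hypothetical illustration; the function names and the naive splitting heuristic are assumptions, not Google's actual pipeline.

```python
# Illustrative sketch of a query fan-out pattern (assumed design, not
# Google's implementation): decompose, resolve in parallel, aggregate.
from concurrent.futures import ThreadPoolExecutor

def decompose(query: str) -> list[str]:
    # Naive stand-in for an LLM planner: split on " and ".
    return [part.strip() for part in query.split(" and ")]

def resolve(sub_query: str) -> str:
    # Stand-in for an individual search/retrieval call.
    return f"result for: {sub_query}"

def fan_out(query: str) -> str:
    sub_queries = decompose(query)
    # Resolve each sub-query concurrently, preserving order.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(resolve, sub_queries))
    # Aggregate the partial answers into one response.
    return "\n".join(results)

print(fan_out("restaurants near my route and their opening hours"))
```

A production system would replace the string-splitting planner with a model-driven one and add a synthesis step ("checks its work" presumably happens somewhere in that aggregation), but the fan-out shape is the same.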
Google said users will be able to try on different looks via AI in Google Labs starting today.

Google's long-awaited post-Google Glass AR/VR plans were finally presented at Google I/O. The company also unveiled a number of wearable products utilizing its AR/VR operating system, Android XR. One important part of the Android XR announcement is that Google seems to understand the different use cases for an immersive headset versus an on-the-go pair of smart glasses, and has built Android XR to accommodate both.

While Samsung has previously teased its Project Moohan XR headset, Google I/O marked the first time that Google revealed the product, which is being built in partnership with the mobile giant and chipmaker Qualcomm. Google shared that the Project Moohan headset should be available later this year.

[Image: Project Moohan. Credit: Google]

In addition to the XR headset, Google announced Glasses with Android XR: smart glasses that incorporate a camera, speakers, and an in-lens display, and that connect with a user's smartphone. Unlike Google Glass, these smart glasses will come in more fashionable designs thanks to partnerships with Gentle Monster and Warby Parker. Google shared that developers will be able to start developing for Glasses next year, so a release date for the smart glasses will likely follow after that.

Easily the star of Google I/O 2025 was the company's AI model, Gemini. Google announced an updated Gemini 2.5 Pro, which it says is its most powerful model yet, and showed it turning sketches into full applications in a demo. Along with that, Google introduced Gemini 2.5 Flash, a more affordable version of the powerful Pro model. Flash will be released in early June, with Pro coming out soon after. Google also revealed Gemini 2.5 Pro Deep Think, for complex math and coding, which will only be available to "trusted testers" at first.
Speaking of coding, Google shared its asynchronous coding agent Jules, which is currently in public beta. Developers will be able to use Jules to tackle codebase tasks and modify files.

[Image: Jules coding agent. Credit: Google]

Developers will also have access to a new Native Audio Output text-to-speech model, which can replicate the same voice in different languages.

The Gemini app will soon see a new Agent Mode, bringing users an AI agent that can research and complete tasks based on a user's prompts. Gemini will also be deeply integrated into Google products like Workspace with Personalized Smart Replies: Gemini will use personal context from documents, emails, and more across a user's Google apps to match their tone, voice, and style when generating automatic replies. Workspace users will find the feature available in Gmail this summer.

Other features announced for Gemini include Deep Research, which lets users upload their own files to guide the AI agent when asking questions, and Gemini in Chrome, an AI assistant that answers queries using the context of the web page a user is on. The latter feature is rolling out this week for Gemini subscribers in the U.S. Google intends to bring Gemini to all of its devices, including smartwatches, smart cars, and smart TVs.

Gemini's AI assistant capabilities and language model updates were only a small piece of Google's broader AI puzzle; the company had a slew of generative AI announcements to make too. Google announced Imagen 4, its latest image generation model, which it says provides richer details and better visuals. In addition, Imagen 4 is apparently much better at generating text and typography in its graphics, an area in which AI models are notoriously bad, so Imagen 4 appears to be a big step forward.

[Image: Flow AI video tool. Credit: Google]

A new video generation model, Veo 3, was also unveiled alongside a video generation tool called Flow.
Google claims Veo 3 has a stronger understanding of physics when generating scenes and can also create accompanying sound effects, background noise, and dialogue. Both Veo 3 and Flow are available today, alongside a new generative music model called Lyria 2. Google I/O also saw the debut of Gemini Canvas, which Google describes as a co-creation platform.

Another big announcement out of Google I/O: Project Starline is no more. Google's immersive communication project will now be known as Google Beam, an AI-first communication platform. As part of Google Beam, Google announced Google Meet translations, which provide real-time speech translation during meetings on the platform. AI will be able to match a speaker's voice and tone, so it sounds like the translation is coming directly from them. Google Meet translations are available in English and Spanish starting today, with more languages on the way in the coming weeks.

[Image: Google Meet translations. Credit: Google]

Google also teased another work-in-progress project under Google Beam: a 3D conferencing platform that uses multiple cameras to capture a user from different angles and render the individual on a 3D light-field display.

While Project Starline may have undergone a name change, it appears Project Astra is still kicking at Google, at least for now. Project Astra is Google's real-world universal AI assistant, and Google had plenty to announce as part of it. Gemini Live is a new AI assistant feature that can interact with a user's surroundings via their mobile device's camera and audio input. Users can ask Gemini Live questions about what they're capturing on camera, and the assistant will answer based on those visuals. According to Google, Gemini Live is rolling out today to Gemini users.

[Image: Gemini Live. Credit: Google]

It appears Google plans to implement Project Astra's live AI capabilities into Google Search's AI Mode as a Google Lens visual search enhancement.
Google also highlighted some of its hopes for Gemini Live, such as serving as an accessibility tool for people with disabilities.

Another of Google's AI projects, Project Mariner, is an AI agent that can interact with the web to complete tasks for the user. While Project Mariner was previously announced late last year, Google had some updates, such as a multitasking feature that allows an agent to work on up to 10 different tasks simultaneously. Another new feature is Teach and Repeat, which lets the agent learn from previously completed tasks in order to complete similar ones without needing the same detailed direction in the future. Google announced plans to bring these agentic AI capabilities to Chrome, Google Search via AI Mode, and the Gemini app.

Google's Veo 3 AI video generator is unlike anything you've ever seen. The world isn't ready.

Yahoo | an hour ago

At the Google I/O 2025 event on May 20, Google announced the release of Veo 3, a new AI video generation model that makes 8-second videos. Within hours of its release, AI artists and filmmakers were showing off shockingly realistic videos. You may have even seen some of these videos in your social media feeds and not realized they were artificially generated.

To be blunt: we've never seen anything like Veo 3 before. It's impressive. It's scary. And it's only going to get better.

Misinformation experts have been warning for years that we will eventually reach a point where it's impossible for the average person to tell the difference between an AI video and the real thing. With Veo 3, we have officially stepped out of the uncanny valley and into a new era, one where AI videos are a fact of life. While several other AI video makers exist, most notably Sora from OpenAI, the clips made by Veo 3 instantly stand out in your timeline.

Veo 3 brought with it several innovations that separate it from other video generation tools. Crucially, in addition to video, Veo 3 also produces audio and dialogue. It doesn't just offer photorealism, but fully realized soundscapes and conversations to go along with videos. It can also maintain consistent characters in different video clips, and users can fine-tune camera angles, framing, and movements in entirely new ways. On social media, many users are dumbfounded by the results.

Veo 3 is available to use now with Google's paid AI plans. Users can access the tool in Gemini, Google's AI chatbot, and in Flow, an "AI filmmaking tool built for creatives, by creatives," per Google. Already, AI filmmakers are using Veo 3 to create short films, and it's only a matter of time until we see a full-length film powered by Veo 3. On X, YouTube, Instagram, and Reddit, users are sharing some of the most impressive Veo 3 videos.
If you're not on your guard and simply casually scrolling your feed, you might not think twice about whether the videos are real or not.

The short film "Influenders" is one of the most widely shared short films made with Veo 3. "Influenders" was created by Yonatan Dor, the founder of the AI visual studio The Dor Brothers. In the movie, a series of influencers react as an unexplained cataclysm occurs in the background. The video has hundreds of thousands of views across various platforms.

"Yes, we used Google Veo 3 exclusively for this video, but to make a piece like this really come to life we needed to do further sound design, clever editing and some upscaling at the end," Dor said in an email to Mashable. "The full piece took around 2 days to complete."

Dor added, "Veo 3 is a massive step forward, it's easily the most advanced tool available publicly right now. We're especially impressed by its dialogue and prompt adherence capabilities."

Similar man-on-the-street videos have also gone viral, with artists like Alex Patrascu and Impekable showing off Veo 3's capabilities. And earlier this week, a Wall Street Journal reporter made an entire short film starring a virtual version of herself using Veo 3. All this in just 10 days.

In "Influenders" and these other videos, some of the clips and characters are more realistic than others. Many still have the glossy aesthetic and jerky camera movements that are a signature of AI videos, a clear giveaway similar to the ChatGPT em dash. Just a couple of years ago, AI creations with too many fingers and other obvious anatomical abnormalities were commonplace. If the technology keeps progressing at this pace, there will soon be no obvious difference between real video and AI video.

In promoting Veo 3, Google is eager to stress its partnerships with artists and filmmakers like Darren Aronofsky. And it's clear that Veo 3 could drastically reduce the cost of creating animation and special effects.
But for content farms and bad actors producing fake news and manipulative outrage bait, Veo 3 is equally powerful. We asked Google about the potential for Veo 3 to be used for misinformation, and the company said that safeguards such as digital watermarks are built into Veo 3 video clips.

"It's important that people can access provenance tools for videos and other content they see online," a representative with Google DeepMind told Mashable via email. "The SynthID watermark is embedded in all content generated by Google's AI tools, and our SynthID detector rolled out to early testers last week. We plan to expand access more broadly soon, and as an additional step to help people, we're adding a visible watermark to Veo videos."

Google also has AI safety guidelines that it uses, and the company says it wants to "help people and organizations responsibly create and identify AI-generated content."

[Image: A screenshot from an AI-generated video made by Google with Veo 3. Credit: Google]

But does the average person stop to ask whether the images and videos on their timelines and FYP are real? As the viral emotional support kangaroo proves, they do not. There's zero doubt that AI videos are about to become even more commonplace on social media and video apps. That will include plenty of AI slop, but also videos with more nefarious purposes.

Despite safeguards built into AI video generation tools, skilled AI artists can create deepfake videos featuring celebrities and public figures. TV news anchors speaking into the camera have also been a recurring theme in Veo 3 videos so far, which has worrying implications for the information ecosystem online.

If you're not already asking "Is this real?" when you come across a video clip online, now is the time to start. Or, as a chorus of voices are saying on X: "We're so cooked."

Follow Timothy Beck Werth and Mashable on X for the latest news and analysis.
Disclosure: Ziff Davis, Mashable's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

Hey chatbot, is this true? AI 'factchecks' sow misinformation

Yahoo | 2 hours ago

As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification -- only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.

With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots -- including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini -- in search of reliable information.

"Hey @Grok, is this true?" has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media. But the responses are often themselves riddled with misinformation.

Grok -- now under renewed scrutiny for inserting "white genocide," a far-right conspiracy theory, into unrelated queries -- wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India. Unrelated footage of a building on fire in Nepal was misidentified as "likely" showing Pakistan's military response to Indian strikes.

"The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers," McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP.

"Our research has repeatedly found that AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news," she warned.

- 'Fabricated' -

NewsGuard's research found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation narratives and false or misleading claims related to the recent Australian election.

In a recent study of eight AI search tools, the Tow Center for Digital Journalism at Columbia University found that chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead."
When AFP fact-checkers in Uruguay asked Gemini about an AI-generated image of a woman, it not only confirmed its authenticity but fabricated details about her identity and where the image was likely taken.

Grok recently labeled a purported video of a giant anaconda swimming in the Amazon River as "genuine," even citing credible-sounding scientific expeditions to support its false claim. In reality, the video was AI-generated, AFP fact-checkers in Latin America reported, noting that many users cited Grok's assessment as evidence the clip was real.

Such findings have raised concerns as surveys show that online users are increasingly shifting from traditional search engines to AI chatbots for information gathering and verification.

The shift also comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X. Researchers have repeatedly questioned the effectiveness of "Community Notes" in combating falsehoods.

- 'Biased answers' -

Human fact-checking has long been a flashpoint in a hyperpolarized political climate, particularly in the United States, where conservative advocates maintain it suppresses free speech and censors right-wing content -- something professional fact-checkers vehemently reject. AFP currently works in 26 languages with Facebook's fact-checking program, including in Asia, Latin America, and the European Union.

The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed, prompting concerns that their output may be subject to political influence or control.

Musk's xAI recently blamed an "unauthorized modification" for causing Grok to generate unsolicited posts referencing "white genocide" in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Musk as the "most likely" culprit.
Musk, the South African-born billionaire backer of President Donald Trump, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.

"We have seen the way AI assistants can either fabricate results or give biased answers after human coders specifically change their instructions," Angie Holan, director of the International Fact-Checking Network, told AFP. "I am especially concerned about the way Grok has mishandled requests concerning very sensitive matters after receiving instructions to provide pre-authorized answers."
