
Latest news with #AIassistant

Here's what Google Gemini will look like on your Android Auto dashboard

Yahoo

a day ago

  • Automotive
  • Yahoo


When you buy through links on our articles, Future and its syndication partners may earn a commission.

  • Google Gemini is heading to Android Auto soon
  • A new video shows how the integration will work
  • The new AI will replace Google Assistant in cars

Gemini is taking over from Google Assistant across all of Google's apps and devices, and Android Auto will be making the switch soon – as shown by a new demo video that gives us an idea of how the AI assistant is going to work on your car's dashboard.

The video was captured at Google I/O 2025 and posted by 9to5Google, and you can see how Gemini slots in on the right-hand side of the screen (or perhaps the left-hand side, depending on the rules of the road in your country). If you've used Gemini on your phone, the interface will be familiar, with the glowing blue-ish ripples showing that Gemini is active.

You're then free to ask whatever questions you have on the road, using natural language. You might want to see nearby gas stations, for example, or have Gemini pull up the location of an event you're heading to from your Google Calendar. Anything you can do on your phone, you can do through Gemini on Android Auto. One demo Google showed off used Gemini to compile a list of ingredients for a particular meal in Google Keep, then asked for directions to the local grocery store to pick up the necessary supplies – all very impressive. Generally, it's much more flexible and intelligent than Google Assistant.

All of your in-car chats will be synced back to Gemini on the web and on your phone, so you can pick up where you left off on other devices and carry on the conversation. With Android Automotive (the version built right into cars), the interface is a little more subtle, with a small pop-up bar showing Gemini. However, the exact look can vary depending on your vehicle and the dashboard screen configuration.
It's not clear exactly when Gemini will show up on Android Auto, but Google has said it's coming soon, and we've seen numerous signs that it's on the way. Gemini is already the default AI assistant on new Android phones.

OpenAI wants ChatGPT to be a ‘super assistant' for every part of your life

The Verge

3 days ago

  • Business
  • The Verge


Thanks to the legal discovery process, Google's antitrust trial with the Department of Justice has provided a fascinating glimpse into the future of ChatGPT. An internal OpenAI strategy document titled 'ChatGPT: H1 2025 Strategy' describes the company's aspiration to build an 'AI super assistant that deeply understands you and is your interface to the internet.' Although the document is heavily redacted in parts, it reveals that OpenAI aims for ChatGPT to soon develop into much more than a chatbot.

'In the first half of next year, we'll start evolving ChatGPT into a super-assistant: one that knows you, understands what you care about, and helps with any task that a smart, trustworthy, emotionally intelligent person with a computer could do,' reads the document from late 2024. 'The timing is right. Models like o2 and o3 are finally smart enough to reliably perform agentic tasks, tools like computer use can boost ChatGPT's ability to take action, and interaction paradigms like multimodality and generative UI allow both ChatGPT and users to express themselves in the best way for the task.'

The document goes on to describe a 'super assistant' as 'an intelligent entity with T-shaped skills' for both widely applicable and niche tasks. 'The broad part is all about making life easier: answering a question, finding a home, contacting a lawyer, joining a gym, planning vacations, buying gifts, managing calendars, keeping track of todos, sending emails.' It mentions coding as an early example of a more niche task.

Zuckerberg says Meta AI now used by a billion people monthly across its platforms

Malay Mail

5 days ago

  • Business
  • Malay Mail


SAN FRANCISCO, May 29 — Meta chief Mark Zuckerberg touted the tech firm's generative artificial intelligence (Gen AI) assistant yesterday, telling shareholders it is used by a billion people each month across its platforms. Zuckerberg noted the milestone anew at Meta's annual gathering of shareholders, as the social media behemoth vies with Google, Microsoft, OpenAI and others to be a leader in Gen AI. It was not clear how much Meta AI use involved people actively seeking out the chatbot versus passively encountering it, as it is built into features across Meta's family of apps.

Since Google debuted AI Overviews in search results a year ago, the feature has grown to more than 1.5 billion users, according to Google chief executive Sundar Pichai. 'That means Google Search is bringing Gen AI to more people than any other product in the world,' Pichai said. Google's AI Overviews are automatically generated summaries of search results that appear in place of the previous practice of simply showing pages of blue links to relevant websites. Pichai said last week that Google's dedicated Gemini AI app has more than 400 million monthly users.

Tech rivals are rapidly releasing new AI products despite ongoing challenges with preventing misinformation and establishing clear business models, and with little sense of how the technology will affect society.

Meta unveiled its first standalone AI assistant app on April 29, giving users a direct path to its Gen AI models. 'A billion people are using Meta AI across our apps now, so we made a new standalone Meta AI app for you to check out,' Meta CEO and founder Mark Zuckerberg said in a video posted on Instagram at the time. Zuckerberg said the app 'is designed to be your personal AI' and would be primarily accessed through voice conversations, with the interactions personalized to the individual user. Use of Meta AI is growing fastest on WhatsApp, according to chief financial officer Susan Li.
'Our focus for this year is deepening the experience and making Meta AI the leading personal AI,' Zuckerberg said when Meta announced quarterly earnings at the end of April. — AFP

Google shows off its next-gen AI assistant that can control your Android phone

Android Authority

20-05-2025

  • Android Authority


TL;DR

  • Google's Project Astra aims to be a 'universal AI assistant' that understands context, devises plans, and acts on your behalf within Android.
  • A demo showcased Astra assisting with a bike repair by navigating a manual, finding a YouTube tutorial, and potentially contacting a shop.
  • This was powered by an AI agent that controls Android apps by simulating screen inputs, indicating an advanced but still developing capability.

Google's Gemini chatbot has come a long way since its initial debut at the end of 2023. At first, the chatbot struggled with even basic tasks its predecessor was well equipped to handle, like setting a reminder. Today, it can not only chain actions across multiple services but also answer questions about what's showing on your phone's screen or through your phone's camera. In the future, Gemini may even be able to control your Android phone for you, allowing it to search through your phone's documents and open your apps to find the information you're looking for.

During Google I/O 2025 earlier today, Google showed off its vision for a 'universal AI assistant': one that is not only intelligent but can understand the context you're in, come up with a plan to solve your problems, and then take action on your behalf to save you time. For example, if you're having a problem with your bike's brakes, you can ask the assistant to find the bike's user manual online, have it open the manual, and make it scroll to the page that covers the brakes. Then, you can follow up and tell the assistant to open the YouTube app and play a video that shows how to fix the brakes. Once you've learned what parts you need to replace, you can ask the assistant to go through your email conversations with the bike shop to find relevant information on part sizes, or even have it call the nearest bike shop on your behalf to see if the right parts are in stock.
There's currently no AI assistant that can do everything I just mentioned without some manual user intervention, but Google does offer various independent AI features that, if chained together, would make this feat possible. Google's latest prototype of Project Astra, the codename for its future universal AI assistant, demonstrates exactly that. Today's demonstration shows a man asking Astra on his phone how to fix a problem with his bike's brakes, with the assistant doing every step I just described in the previous paragraph.

What's particularly interesting about this demo is that it shows off an AI agent that Google developed to automate actions within Android apps. We've known that Google has an AI agent called Project Mariner that can control a web browser, but this is the first we've heard of an AI agent from Google that can control an Android phone. In the demo, we see Google's Android AI agent open a PDF, scroll the screen until it finds the page requested by the user, open the YouTube app, and scroll through the search results until it finds a relevant video.

When Astra is controlling the phone, we can see a small, circular overlay on the left. When it scrolls the screen, we can see the tap and swipe inputs it sends, showing that Astra is simulating screen inputs. Judging by the screen recording chip in the top left corner and the glowing overlay around the edges of the screen, it seems that Astra is reading the contents of the screen and then deciding where to tap or swipe.

We don't know if Astra performs these actions on device; it would certainly be possible through the use of the multimodal Gemini Nano model, but we can't determine if that's what's being used in this demo. What we do know is that Google has some work to do before it can roll out this Android AI agent: the portions of the video showing the agent were apparently sped up by a factor of two, suggesting that it can be quite slow at taking action.
Still, we're excited to see Google get closer to achieving its vision of a universal AI assistant. Every update we get on Project Astra makes Gemini a more appealing product, as we can expect new Astra features to eventually trickle down to the chatbot. The new capabilities Google demoed today might not be available for some time, but they'll eventually go live in some form, and when they do, they might be even more impressive than what we're seeing today.

Got a tip? Talk to us! Email our staff at news@ . You can stay anonymous or get credit for the info, it's your choice.

Google's Gemini AI is coming to Chrome

The Verge

20-05-2025

  • The Verge


Google is adding its Gemini AI assistant to Chrome, the company announced at Google I/O on Tuesday. Initially, Gemini will be able to 'clarify complex information on any webpage you're reading or summarize information,' according to a blog post from Google Labs and Gemini VP Josh Woodward. Google envisions that Gemini in Chrome will later 'work across multiple tabs and navigate websites on your behalf.'

I saw a demo during a briefing ahead of Tuesday's announcement. In Chrome, you'll see a little sparkle icon in the top right corner. Click that and a Gemini chatbot window will open — it's a floating UI that you can move and resize. From there, you can ask questions about the website.

In the demo, Charmaine D'Silva, a director of product management on the Chrome team, opened a page for a sleeping bag at REI and clicked on a suggested Gemini prompt to list the bag's key features. Gemini read the entire page and listed a quick summary of the bag. D'Silva then asked if the sleeping bag was a good option for camping in Maine, and Gemini in Chrome responded by pulling information from the REI page and the web. After that, D'Silva went to a shopping page on another retailer's website for a different sleeping bag and asked Gemini to compare the two sleeping bags. Gemini did that and included a comparison table. The tool initially only works across two tabs, but 'later in the year,' Gemini in Chrome will be able to work across multiple tabs.

D'Silva also showed a demo of a feature that will be available in the future: using Gemini to navigate websites. In the demo, D'Silva pulled up Gemini Live in Chrome to help navigate a recipe site. D'Silva asked Gemini to scroll to the ingredients, and the AI zipped to that part of the page. It also responded when D'Silva asked for help converting the required amount of sugar from cups to grams.
In Google's selected demos, Gemini in Chrome seems like it could occasionally be useful, especially with comparison tables or in-the-moment ingredient conversions. That said, I'd rather just read the website or do my own research instead of reading Gemini's AI summaries, especially since AI can hallucinate incorrect information.

Gemini in Chrome is launching on Wednesday. It will initially launch on Windows and macOS in early access, for users 18 or older who use English as their language. It will be available to people who subscribe to Google's AI Pro and Ultra subscriptions or to users of Chrome's beta, canary, and dev channels, Parisa Tabriz, Google's VP and GM of Chrome, said in the briefing. As for bringing Gemini to mobile Chrome, 'it's an area that we'll think about,' Tabriz says, but right now the company is 'very focused on desktop.'
