I challenged Gemini Live vs ChatGPT in 5 voice challenges — there was one clear winner

Tom's Guide · 6 hours ago

AI assistants are constantly getting smarter and faster, and gaining new abilities. Now they can see, speak, listen and even crack a few jokes with you when you need a smile.

My favorite chatbots for hands-free assistance are ChatGPT with Voice and Vision and Google's Gemini Live. I use them both regularly and interchangeably, but one thing I haven't done is test them against each other. So I just had to know: which assistant is better, to the point that it actually feels the most human?
To find out, I put both tools through five unique voice-based tests designed to push their limits. These were not your average 'What's the weather?' prompts. I challenged them to recall context, analyze images, collaborate creatively and even roleplay with personality. One emerged as the clear winner, and in this article I'll show you why.
Prompt: 'My name is Amanda and I'm planning a trip to Boston with my family of five. What should we do first?' Later: 'Remind me what I said my name was earlier?'

Gemini Live quickly asked for more information to make sure it gave the best possible answer, checking the ages of my kids and the types of activities we prefer as a family. Its recommendations were very general, the kind I could have found anywhere, but still useful. The chatbot remembered my name when I asked it to recall it.

ChatGPT immediately made some general family-friendly recommendations (similar to what Gemini gave after asking me more about myself) and then asked about my family's preferences. From there, it offered more unique and engaging activities both on and off the typical tourist path. It also remembered my name when asked to recall it.

Winner: ChatGPT wins for out-of-the-box recommendations that I hadn't thought of (and I'm from Boston). It was very helpful, with both unique and interesting ideas for my active family of five.
Prompt: 'Explain the potential societal impacts of widespread AI companions.'

Gemini Live acknowledged the positive aspects but stayed very general and offered few specific societal consequences. Although it mentioned both sides, it didn't elaborate, so the response felt somewhat empty and less structured.

ChatGPT went beyond vague statements and provided concrete examples of both positive and negative impacts, and its conclusion emphasized the need for balance. Although ChatGPT responded clearly and thoroughly, it is very sensitive to pauses. At one point during the conversation I put the phone down and it stumbled, asking, 'What else can I help with?' When I asked it to keep going, it was confused, so I had to re-ask the question, which felt less efficient.

Winner: ChatGPT wins for a more thorough and balanced response. While it stumbled on some technicalities, its answer to the prompt was superior. Gemini ended the conversation with 'worth thinking about,' which seemed less insightful.
Prompt: "Sell me a maple pecan latte like a Gen Z barista, adding in humor naturally."Gemini Live leaned into the Gen Z character with fun lines that felt both natural and effortless. It wasn't as verbose as ChatGPT, which made it feel more human and energetic.ChatGPT delivered a lengthy sales speech that made me cringe. It didn't get the Gen Z tone as well as Gemini and the whole response felt a little too polished and buttoned up.Winner: Gemini Live wins this one. This was where Gemini shined. Its energetic voice delivery and personality were spot-on as it leaned into the character with ease.
Prompt: 'Take a look at these old bananas and give me suggestions for what to do with them.'

Gemini Live took one look at the bananas and immediately suggested banana bread. A good option, but an obvious one. When pressed for something different, it suggested smoothies. I told it I didn't have a lot of extra ingredients, and it hallucinated, replying, 'That's okay, how about a smoothie?' Once again, I told it I didn't have any other ingredients. Finally, it suggested making banana ice cream.

ChatGPT also suggested banana bread, but as one baking option with other ideas mixed in. It went further to suggest smoothies. When I mentioned I didn't have any other ingredients, it suggested blending the bananas with ice and water for a 'refreshing drink.' It also suggested more pantry-friendly add-ins like honey, cinnamon and vanilla that I was more likely to have on hand (as opposed to Gemini suggesting various fruits, seaweed or kale).

Winner: ChatGPT wins this round, with a clear edge in true multimodal communication, creativity and visual intelligence.
Prompt: "Help me brainstorm a bedtime jingle for my kids and sing it if you can."
Gemini Live went line by line of the song for a more collaborative experience. It was asking me about instruments and themes as well as styles. While it was nice to be included, any parent trying to get their kid to sleep at bedtime just wants something fast. I would appreciate this collaborative effort if I needed the song in a different situation.ChatGPT created a sweet lullaby in minutes – and even sang it! The song was creative and well written even though the bot's voice was a little too robotic. I then asked it for different lyrics and for it to sing it in other styles and it got straight to work even rapping it like Kendrick Lamar (that is, if Lamar were a bot).Winner: tie. Both tools came up with catchy rhymes and fun ideas. ChatGPT took the lead in structure while Gemini felt a little looser, more like spit balling with a friend — which was charming, but less directed.
After putting both AI assistants through their paces, it's clear that ChatGPT currently offers the more advanced and well-rounded experience. From deeper reasoning and sharper memory to stronger visual analysis and quicker creative execution, ChatGPT consistently delivered results that felt more helpful and polished.

That said, Gemini had standout moments, especially in personality-driven prompts where it came across as more spontaneous and fun. If you're looking for an assistant to make you smile and keep the vibe light, Gemini shines. But if you want the most capable hands-free AI companion that can think deeply, see clearly and even sing (or rap!) on command — ChatGPT is still the one to beat.

Related Articles

It's Time to Kill Siri

WIRED · 43 minutes ago

After almost 10 years, Google Assistant was recently axed in favor of Gemini. Siri gets a bad rap—now is the time for Apple to make a change too.

If you've had a passing interest in Apple over the past year, you've likely heard of the company's struggles in the AI race. Apple Intelligence, which arrived slightly late after the launch of the iPhone 16, fell short of expectations, and Apple has yet to deliver the much-improved Siri it promised at WWDC 2024. Siri got a new look and an integration with ChatGPT, but its ability to understand your personal context via emails, messages, notes, and calendar was 'indefinitely' delayed earlier this year as Apple reportedly faces several challenges.

Even if Apple delivered a better Siri, would people use it? Despite arriving first, Siri has long been derided by iPhone owners, often the butt of a joke, as Google Assistant and Alexa rose to the top. But if Apple wants its customers to take the supposed improvements coming to the voice assistant seriously, it should consider taking a page from Google and killing it off for something new.

Google has no problem pulling the plug when things aren't working or priorities change. In fact, the search giant has a history of killing so many of its services that there's a website dedicated to tracking all the gravestones. One of its most recent terminations? Google Assistant.

Nearly 10 years since its debut, Google Assistant is in the process of being phased out from every ecosystem it was a part of. Wear OS smartwatches? It's being replaced soon. Android Auto? In the coming months. It's already no longer the default assistant on Android phones. By 2026, it's unlikely we'll see the branding anywhere anymore. So ends the reign of arguably the most effective voice assistant of its time, gone without a care in the world.

But Google's decision to kill it, instead of keeping the Google Assistant name, may have been smart. 'It's primarily branding,' says Chris Harrison, who directs the Future Interfaces Group at Carnegie Mellon's Human-Computer Interaction Institute. 'But it underlines a technology reason, which is that the previous generation of these assistants really weren't very much like assistants. Asking for the weather and setting a timer—not very sophisticated. You wouldn't really ask a personal assistant for those mundane tasks.'

Gemini is completely different. It can rummage through your emails to find the location of your kid's soccer match, parse large documents, and, when paired with a camera-enabled device, understand what you're seeing and offer help. Its capabilities are vastly superior to what Google Assistant could do. Apple's goal is to achieve similar results in a more privacy-friendly way—so that when you have Siri connect to ChatGPT, your data is not passed off to OpenAI.

'Apple thought Siri's capabilities would grow, but that didn't really materialize; Siri kind of atrophied out of the gate,' Harrison says. 'Now, we're in this new generation of things that are really much more like assistants—they can do reasoning, personalization.' But while Google Assistant and Gemini both have voice interfaces and, at first glance, may share a similar look, they're two different applications. 'Simply renaming it Google Assistant 2.0 would not spur people to use it in a fundamentally different way.' It seems the switch to Gemini has been key to moving customer understanding along.
However, it's fair to say that Apple's Siri and even Amazon's Alexa have had a cultural cachet that Google Assistant never enjoyed. It wasn't unusual to hear Siri or Alexa's name in a movie or TV show; they were much more recognizable than Google's generically named voice assistant. This may be why Amazon decided to keep the Alexa branding and simply add a '+' to denote the new souped-up version of Alexa powered by the latest large language models—and perhaps why Apple is still hanging onto Siri.

This might all have been OK if Apple had actually delivered on its promise and released a functioning, much-improved Siri when it originally said it would. With a massive marketing push to put Apple Intelligence in everyone's mind (maybe a regrettable move), it would have been a great opportunity to wow users with a much-improved Siri. Months later, customers are left wondering why Siri—new look and all—still lags behind.

But the broader problem affecting all large language models isn't just the branding, but the user interface. Harrison compares it to the days of command-line computing and the shift to the graphical user interface (GUI) in the '80s and '90s. It wasn't the graphics that made the latter more popular, but the discoverability and explorable interface. In the command-line era, you had to remember how to do anything. With the GUI, you could put anyone in front of a computer and they'd be able to figure out how to navigate the operating system.

If you put someone in front of ChatGPT or Gemini, say it's an incredible tool, and tell them to ask it anything, they'll just stare blankly at the blinking prompt. 'It's like we've gone back 30 years in interface design. They have no idea what to do or say.' Harrison says he did this exact experiment with his parents: They asked what the weather was tomorrow, and the AI responded that it didn't have that information.

'We've regressed in discoverability,' he says. 'A regular person, not the tech people, if all they've been doing is setting timers with Siri for the past 10 years, and now they have to think about it in a fundamentally different way—that's an extremely hard problem. Some sort of renaming of the application is going to be important.'

Saying goodbye to Siri would be a big move for Apple—after all, it has spent more than a decade investing in it. But most people today still use it for playing music, checking the weather, and setting timers, and aren't even pushing the boundaries of its current, relatively limited capabilities. It's hard to see that changing anytime soon, even if Siri's feature-packed next generation arrives as promised. 'For 99 percent of the planet, this kind of AI revolution has totally gone over their head,' Harrison says. Like the 10-year transition from the command line to graphical user interfaces, rethinking the way we use these personal voice assistants will take time and education, but maybe a new name will help Apple with the transition.

Google AI Mode gets new feature to generate interactive charts and visualise financial data

Business Upturn · an hour ago

By Aditya Bhagchandani | Published on June 9, 2025, 16:01 IST

Google has introduced a new feature to its AI Mode search experience, enabling users to view financial data through interactive charts and graphs. The feature is currently available via Google Labs in the United States as part of a limited preview and is designed to make data comprehension easier, especially for stock market and mutual fund-related queries.

AI Mode now supports smart data visualisation

In a blog post, the Mountain View-based tech giant explained that the AI-enhanced feature can automatically convert data into interactive visual formats. By understanding context and data patterns, the AI generates charts when users ask stock-related or financial questions, offering a more intuitive way to analyze information. For example, if a user searches 'compare the stock performance of blue chip CPG companies in 2024,' AI Mode leverages Google's Gemini model to produce a comparative chart that visualizes each company's performance over time. The output also includes a descriptive analysis, making it easier for users to understand financial trends.

Currently available via Google Labs in the US

As of now, this AI-powered charting tool is only accessible through Google Labs, where users can activate it manually. Google says the goal is to provide intelligent responses with visual context, especially when analyzing complex data over time. It remains unclear whether users can request specific chart types directly or whether the AI auto-generates visuals based on relevance. The feature currently appears focused on financial topics, and it is unknown whether broader data categories will be supported in the future.

Google expanding AI Mode capabilities

This new charting function is part of Google's broader push to make search more interactive and multimodal. A recent update also introduced Search Live, a feature similar to Gemini Live that allows users to interact with AI Mode hands-free. That feature is also being rolled out in phases to select users in the US. Google's AI Mode updates reflect an ongoing shift toward AI-driven, visual-first search experiences, transforming how users interact with complex datasets in real time.

Aditya Bhagchandani serves as the Senior Editor and Writer at Business Upturn, where he leads coverage across the Business, Finance, Corporate, and Stock Market segments. With a keen eye for detail and a commitment to journalistic integrity, he not only contributes insightful articles but also oversees editorial direction for the reporting team.

Sir Keir Starmer vows to overcome sceptical public on 'harnessing power' of AI

Yahoo · an hour ago

The UK must persuade a 'sceptical' public that artificial intelligence (AI) can improve millions of lives and transform the way business and Whitehall work, Sir Keir Starmer said. The Prime Minister said AI would cut through planning red tape to speed up housebuilding, and promised £1 billion of funding to increase the UK's compute power.

Sir Keir acknowledged people's concerns about the rapid rise of AI technology and the risk to their jobs, but stressed the benefits it would have for the delivery of public services, automating bureaucracy and allowing staff such as social workers and nurses to be 'more human'.

In a speech at London Tech Week, Sir Keir said: 'Some people out there are sceptical. They do worry about AI taking their job.' Even businesses were worried about the 'relentless' pace of change, he said, as he stressed the need for the Government and the tech sector to work in partnership. 'When it comes to harnessing the power of this technology, I believe that the way we work through this together is critical and that means a partnership,' he said.

The Prime Minister told the audience of business chiefs and tech experts: 'We are leaning into this. We are excited about the opportunity that this could have, will have on the lives of millions of people and making their lives better.' He said the Government was 'committing an extra £1 billion of funding to scale up our compute power by a factor of 20'. That would mean that 'in this global race, we can be an AI maker and not an AI taker'.

It will also help support the transformation of public services, he said, pointing to the new work on planning. The Prime Minister announced the launch of Extract, an AI assistant for planning officers and local councils, developed by the Government with support from Google. It will help councils convert decades-old, handwritten planning documents and maps into data in minutes, and will power new types of software to slash the estimated 250,000 hours planning officers spend each year manually checking the paperwork. Sir Keir said: 'For too long, our outdated planning system has held back our country, slowing down the development of vital infrastructure and making it harder to get the homes we need built.'

One million students will be given access to learning resources to start equipping them for 'the tech careers of the future' as part of the Government's £187 million TechFirst scheme, Downing Street said. Sir Keir said: 'I think that training young people earlier on in AI and tech means that they will obviously be better skilled as they come into work but also they will be much better at it than us. I've got a 16-year-old boy and a 14-year-old girl and they already understand AI and tech in a way which is really difficult to have even conceived of a decade ago.'

Meanwhile, staff at firms across the country will be trained to 'use and interact' with chatbots and large language models as part of a plan backed by Google and Microsoft to train 7.5 million workers in AI skills by 2030.

Jensen Huang, chief executive of tech giant Nvidia, said the UK was in a 'goldilocks' zone because of its combination of academic expertise and finance, but had been held back by a lack of infrastructure for AI. Sharing a platform with the Prime Minister, he said: 'The UK has one of the richest AI communities anywhere on the planet: the deepest thinkers, the best universities, Oxford, Cambridge, Imperial College, amazing start-ups.' It was behind only the US and China in venture capital investment, he added.
'The ecosystem is really perfect for take-off, it's just missing one thing: it is surprising, this is the largest AI ecosystem in the world without its own infrastructure.' That was why the Prime Minister's £1 billion pledge on compute power was 'such a big deal', he said.
