Google Adds Button to Generate Error-Laden AI Podcast About Your Search Results Instead of Just Reading Them Like a Functioning Member of Society
Google has released a baffling new AI feature that turns your web search into a podcast.
Why anybody would want to enable the feature is unclear. Why be plagued by misleading and hallucinated AI Overviews in your search results when you can have a robotic voice read them out loud instead? Have we really lost the ability as a species to parse written information, never mind original sources?
The opt-in feature — which currently lives inside Google's experimental "Labs" section and has to be manually turned on — harnesses the power of the company's Gemini AI model to turn a search query into "quick, conversational audio overviews."
According to the tech giant, an "audio overview can help you get a lay of the land, offering a convenient, hands-free way to absorb information whether you're multitasking or simply prefer an audio experience."
But is this anything anybody really asked for? Having two fake podcast hosts rant about a subject you're researching — likely with a smattering of hallucinations — sounds like an incredibly counterintuitive and needlessly obtuse way to get quick access to information.
The feature first surfaced last year as part of Google's NotebookLM, a note-taking tool that uses AI to help users organize their thoughts and summarize notes. An "Audio Overviews" feature can then take your notes and turn them into AI-generated podcasts, with often unintentionally hilarious results.
While AI researchers have gushed over the feature, using it to turn Wikipedia pages into hours-long podcast episodes they allegedly listen to, we still can't shake the feeling that Google may be barking up the wrong tree.
Particularly when it comes to search results, where speed has conventionally trumped everything else, turning AI summaries into rambling audio snippets sounds pretty exhausting.
Besides, if Google's AI Overviews are anything to go by, the tech's propensity to make up facts is still enormous. The feature has been plaguing users with outright wrong and misleading information for quite some time now, with users desperately reaching out to Reddit to find ways to disable it.
It's a sign of the times, with tech companies desperately looking for ways to shoehorn AI into every aspect of our digital lives to justify their enormous investments in the space.
Soon we won't just be inundated with AI slop in text and image format; a fake podcast host could one day be talking your head off while you're simply trying to figure out the winner of the Pedro Pascal lookalike contest in Brooklyn.
More on Google AI: Google's AI Is Actively Destroying the News Media

Related Articles
Yahoo · 36 minutes ago
Everything, everywhere, all Firefly AI: New Adobe app launches for iPhone and Android
If you purchase an independently reviewed product or service through a link on our website, BGR may receive an affiliate commission.

Generative AI products that can create amazing images and videos indistinguishable from real ones have practically democratized photoshopping. You no longer need years of training or expensive software to create and edit any kind of image or video. Just issue commands in natural language, and the advanced AI model of your choice will deliver stunning results in seconds.

You'd think these developments would directly threaten Adobe, the creator of Photoshop. But Adobe isn't backing down. Instead, the company has adapted its tools to take advantage of generative AI innovations. Products like Photoshop and Firefly already let you use AI to brainstorm and create images and videos tailored to your needs.

Adobe isn't even trying to one-up the likes of OpenAI, Google, and other AI firms that might seem like competitors. Instead, the company is embracing those alternatives, integrating them into apps like Firefly. Just like that, Firefly can become your one-stop shop for all things photo and video creation that benefit from advanced AI tools. Adobe has just expanded the list of AI partners in the Firefly app and released iPhone and Android versions.

Adobe hosted its Max event in London a few weeks ago, where it announced several big updates to Firefly, including support for high-end third-party AI models and a new Firefly Boards feature designed to help teams collaborate on AI-generated content. Adobe also confirmed at Max that iPhone and Android Firefly apps were coming soon, though it didn't share release dates. Fast-forward to June 17th, and Adobe has released the Firefly app for iPhone and Android.
Along with it, Adobe announced new partnerships with third-party genAI services for generating and editing photos and videos, plus new Firefly Boards features. You can use Adobe's own models, also called Firefly, in the Firefly apps to generate photos and videos. But if you prefer something from the competition, Firefly gives you that option too. Here's the current list of partners, including the new AI models Adobe announced on Tuesday:

- Image models: Black Forest Labs' Flux 1.1 Pro and Flux.1 Kontext; Ideogram's Ideogram 3.0; Google's Imagen 3 and Imagen 4; OpenAI's image generation model; Runway's Gen-4 Image
- Video models: Google's Veo 2 and Veo 3; Luma AI's Ray2; Pika's text-to-video generator

Of those, Ideogram, Luma, Pika, and Runway are new Adobe partners for Firefly.

The Firefly app for iPhone and Android is available to download now, so you can create AI content 'wherever inspiration strikes.' The mobile app gives you quick access to tools you might already use in the desktop version of Firefly, including Generative Fill, Generative Expand, Text to Image, Text to Video, and Image to Video. Creators can choose between Adobe's Firefly models or rely on third-party frontier AI from Google and OpenAI. The Firefly mobile app lets you save your creations with your Creative Cloud account, making it easy to switch between mobile and desktop without interrupting your work.

One big advantage of using the Firefly app instead of going directly to OpenAI, Google, or other genAI tools is that it brings everything together in one place. That's especially useful if you're using multiple content generation platforms for a single project. That's exactly what Adobe is aiming for. 'We built the Firefly app to be the ultimate one-stop shop for creative experimentation, where you can explore different AI models, aesthetics, and media types all in one place,' said Adobe's vice president of generative AI, Alexandru Costin.
'Every new partner model we add gives creators even more flexibility to experiment, iterate, and push their ideas further.'

Adobe also addressed content safety, saying a durable 'nutrition label' will be attached to everything created in the Firefly apps. This will identify whether Firefly AI or a partner model was used. It's unclear if this label will be visibly marked, though.

You'll need an Adobe account and a plan to unlock all Firefly features. Access to third-party models depends on your subscription. In-app purchases include a Firefly Mobile Monthly plan ($4.99) and a Firefly Mobile Yearly plan ($49.99).

Adobe also introduced new features for Firefly Boards, which debuted a few weeks ago. Firefly Boards lets you generate video using either the Firefly Video model or an AI model from an Adobe partner. You can also make iterative edits to images using the AI model of your choice. The feature helps organize your Boards with a single click so everything's ready for a presentation. Adobe Docs can also be linked to Boards.
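For readers weighing the two in-app plans, a quick back-of-the-envelope sketch shows what the yearly option saves over paying monthly. The prices are the ones Adobe lists above; the comparison itself is just arithmetic, not an official Adobe calculation.

```python
# Compare Adobe's two Firefly mobile in-app purchase plans
# (prices as listed: $4.99/month or $49.99/year).

monthly_price = 4.99   # Firefly Mobile Monthly, USD
yearly_price = 49.99   # Firefly Mobile Yearly, USD

cost_of_12_months = monthly_price * 12                  # 59.88
yearly_savings = cost_of_12_months - yearly_price       # 9.89
savings_pct = yearly_savings / cost_of_12_months * 100  # ~16.5%

print(f"12 x monthly: ${cost_of_12_months:.2f}")
print(f"yearly plan:  ${yearly_price:.2f}")
print(f"savings:      ${yearly_savings:.2f} (~{savings_pct:.0f}%)")
```

In other words, the yearly plan runs about 17 percent cheaper than twelve months of the monthly plan.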
Yahoo · 36 minutes ago
Remarkable new AI can tell your age by looking at your eyes
One of the most impressive areas of generative AI software like ChatGPT right now involves enhanced computer vision: AI can understand and interpret data from images. That's why we now have such advanced image and video generation models in ChatGPT, Gemini, Firefly, and other AI software. Models like ChatGPT o3 can accurately guess the location of an image by analyzing its details. Google offers advanced photo editing tools in its Photos app, and also directly in Gemini. These tools let you alter real photos in ways that weren't possible before.

These image-related AI capabilities aren't just used to generate memes or melt OpenAI's servers. Researchers are developing AI models that can interpret images for various purposes, including medicine.

The latest study showing such advancements comes from China. Researchers from several universities have been able to determine a person's age with high accuracy by having AI inspect an image of their retina. The readings also showed differences between the person's actual age and the eye's age. The researchers found that the retinal age gap the AI provided can be especially helpful for women: a simple retinal scan might help doctors offer better support to couples trying to conceive and to women at risk of early menopause.

Retinal fundus imaging, or a photo of the back of the eye, lets doctors see microvascular features that reflect systemic aging. An AI trained on thousands of images can then predict the eye's age and compare it to the person's actual age to 'predict retinal age from fundus images with high precision.'
The scientists used an AI called Frozen and Learning Ensemble Crossover (FLEX) to predict retinal age from fundus images. They fed FLEX over 20,000 eye photos from more than 10,000 adults of all ages to teach it how the back of the eye looks as people age. FLEX also analyzed over 2,500 images from nearly 1,300 pre-menopausal women. The AI was then able to estimate a person's age by examining a retinal fundus photo. If the eye appears older than the woman's actual age, the retinal age gap is positive. That could also mean other organs in the body are aging faster.

The implications for reproductive health are clear. Fertility and menopause issues could benefit directly from such an AI screening tool. The researchers linked a larger retinal age gap to lower blood levels of anti-Müllerian hormone (AMH), a marker for ovarian reserve. The lower the AMH value, the harder it is for older women to conceive. The scientists studied women ages 40 to 50 and found that each additional retinal year raised the risk of a low AMH result: the risk increased by 12% in the 40-44 age group and by 20% in the 45-50 group for every extra retinal year. The study also found that having more childbirths at younger ages was associated with lower AMH levels than average. Each additional retinal year increased the risk of developing menopause before age 45 by 36%, according to the paper.

We're still in the early days of using AI for medical imaging, but the study shows promise for using a simple, non-invasive technique to improve reproductive health protocols. Imagine getting a retinal scan in your late 20s or early 30s to help decide whether to get pregnant or freeze your eggs. Similarly, women over 40 concerned about pre-menopause or menopause could use an eye scan to check their retinal age and assess the risk of early symptoms. This might help them prepare for the years ahead with hormonal therapies to delay or ease symptoms.

For any of this to happen, the conclusions from Hanpei Miao and colleagues would need to be confirmed by further research. Separately, the FLEX model used in this study could be explored for other health conditions where eye scans might serve as early indicators of age-related health risks. The full study is available in Nature.
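To make the "retinal age gap" idea concrete, here is a minimal illustrative sketch. The FLEX model itself is not public, so the predicted age below is made up; only the per-retinal-year risk increases (12% for ages 40-44, 20% for 45-50) come from the reported results, and the assumption that they compound multiplicatively across years is mine, not the paper's.

```python
# Illustrative sketch of the retinal age gap described above.
# Predicted ages are hypothetical; risk figures are from the reported study.

def retinal_age_gap(predicted_retinal_age: float, actual_age: float) -> float:
    """Positive gap = the eye 'looks' older than the person actually is."""
    return predicted_retinal_age - actual_age

def relative_risk_low_amh(gap_years: float, actual_age: float) -> float:
    """Relative risk of a low AMH result vs. a zero gap, assuming the
    reported per-retinal-year increase compounds multiplicatively."""
    per_year = 1.12 if actual_age < 45 else 1.20  # 12% vs 20% per extra year
    return per_year ** max(gap_years, 0.0)

# Hypothetical 42-year-old whose retina 'looks' 47: gap of +5 years.
gap = retinal_age_gap(predicted_retinal_age=47.0, actual_age=42.0)
print(f"gap: {gap:+.1f} years")
print(f"relative risk of low AMH: {relative_risk_low_amh(gap, 42):.2f}x")
```

Under that compounding assumption, a five-year gap at age 42 would correspond to roughly 1.76 times the baseline risk of a low AMH result.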
Yahoo · 2 hours ago
New code in Spotify's app references the long-awaited ‘lossless' tier
It's been over four years since Spotify first announced plans for a lossless audio tier that has yet to arrive. After numerous leaks and teases from company execs, we know better than to start hoping that the long-delayed service could finally be nearing arrival. But newly added references in the music app's code suggest the lossless service is at least still under development and could even be getting closer to launch.

According to technologist and reverse engineer Chris Messina, a number of references to 'lossless' have popped up in Wednesday's build of the Spotify desktop app. These code snippets mainly refer to help cards that would appear to inform the end user about what the lossless tier offers and how to use or troubleshoot the service. For instance, one card says, 'Lossless music, now in premium,' while another describes it as the 'best sound quality on Spotify for music in up to 24-bit/44.1 kHz.' Some of the cards warn users when particular songs aren't available in lossless or when their device has bad connectivity, which could affect playback. Another notes that lossless music is 'best enjoyed on devices compatible with Spotify Connect and/or wired devices.'

In addition, Reddit user u/bendotlc noted that other changes were seen in the latest version of the mobile app, where the code now mentions lossless: 'Say hello to the best sound quality on Spotify. Listen to high-fidelity music in up to 24-bit/44.1 kHz.' The Redditor claims the feature is actually present in the app, but is currently disabled.

Of course, code references don't mean the lossless feature is coming anytime soon. As we've seen in previous years, leaks and code references have appeared before, and yet the high-quality audio tier never arrived. When asked directly about lossless and other plans for premium tiers on recent earnings calls with investors, Spotify execs, including CEO Daniel Ek, would only hint that the company was investing in premium products for 'super fans.'
They wouldn't give a time frame for when these or any other higher-quality streaming options would finally launch. Most recently, Spotify Chief Business Officer Alex Norstrom told investors on the Q1 2025 call that Spotify was still investing in more premium tiers. 'Now, with regards to higher tiers, we see great potential in them as we've mentioned before,' he said. 'So creating higher tiers around new offerings is something we are working towards as it really opens up new opportunities to delight users.'

When Spotify first announced its plans for Spotify HiFi, as the tier was then called, in 2021, it said the service would offer users music in 'CD-quality, lossless audio format.' It had run tests of a lossless option in the years before that, however. Eventually, Spotify pointed to licensing issues as the reason for the delay in launching.

In what could be promising news on that front, the streamer signed new deals with labels including Warner Music and Universal Music Group in early 2025, which could pave the way for the lossless tier — or so music fans hope. Bloomberg also reported earlier this year that a premium Music Pro tier would launch later in 2025, which would include high-quality streaming, remixing, and other features. Spotify declined to comment on this report.
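For a sense of scale, the 'up to 24-bit/44.1 kHz' figure quoted in the leaked help cards can be turned into a raw data rate with simple arithmetic. The sketch below computes uncompressed stereo PCM bitrates; note that a lossless codec like FLAC typically compresses well below these ceilings, and the exact rates of whatever tier Spotify eventually ships are unknown.

```python
# Raw (uncompressed) PCM data rates implied by common audio formats.
# bits per sample x samples per second x channels = bits per second.

def pcm_bitrate_kbps(bit_depth: int, sample_rate_hz: int, channels: int = 2) -> float:
    return bit_depth * sample_rate_hz * channels / 1000

cd_quality = pcm_bitrate_kbps(16, 44_100)   # classic CD audio: 1411.2 kbps
quoted_max = pcm_bitrate_kbps(24, 44_100)   # the quoted ceiling: 2116.8 kbps

print(f"CD (16-bit/44.1 kHz):      {cd_quality:.1f} kbps")
print(f"Quoted (24-bit/44.1 kHz):  {quoted_max:.1f} kbps")
```

Either figure dwarfs the bitrates of lossy streaming, which is why lossless tiers are usually pitched at listeners with good headphones, wired connections, or Spotify Connect-class hardware, much as the leaked help cards suggest.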