Study Finds Dolphins Can Make Human Vowel Sounds as Experts Launch AI-Powered Dolphin Translation


Yahoo · April 24, 2025

New research reveals that dolphins can mimic human vowel sounds, indicating their potential for interspecies communication
The study highlights a dolphin named Zeus and the marine mammal's intentional communication with humans
Google's AI model, DolphinGemma, is advancing human-dolphin communication, helping researchers identify mimicked sounds and better respond to dolphins' requests
Could humans have entire conversations with dolphins in the future? A new study found that the marine mammals already know part of the human language.
In a press announcement published on Tuesday, April 23, J & J Publishing, a dolphin-focused education and science publisher, released a research paper titled "Novel Dolphin Vocalization." It reveals the "intricate cognitive capabilities of dolphins and their sophisticated understanding of mimicry as a tool for interspecies communication."
The study notes that dolphins have developed the ability to produce specific sounds, measured by complexity, frequency, amplitude, and duration, that mimic the human vowel sounds "A, E, O, and U."
Unlike humans, who vocalize using their mouths and vocal cords, dolphins produce sounds through their blowholes and air sacs. These vocalizations span a wide range of frequencies, some too high-pitched for the human ear to detect.
Researchers from SpeakDolphin focused on a dolphin named Zeus for the study after noticing the animal's spontaneous vocalizations.
"I missed it at first," confessed lead researcher Jack Kassewitz. "Because they live in aquatic environments, dolphins' brains have adapted to process sound much faster than humans. As researchers, it's often challenging to recognize dolphins' very fast-paced acoustic patterns while recording on location."
However, once the researcher caught on, he realized the vowel sounds were distinct from Zeus' normal dolphin vocalizations. "I am convinced that some dolphins, Zeus in particular, are as determined to communicate with us as we are with them," said Kassewitz. "We have heard anecdotal reports over the decades of similar dolphin mimicry. Finally, we can analyze these specialized vocalizations with our advanced software."
Kassewitz explained the difference between the sounds dolphins make when communicating with each other and those directed at humans above water. "When dolphins vocalize above the water surface, it is almost certainly to get the attention of nearby humans," he noted. "Zeus' vowel vocalizations — made with his head above water and directed at our research team — demonstrated how intentional his efforts were."
The study comes only weeks after Google announced its progress in human-dolphin communication using AI.
On April 14, in celebration of National Dolphin Day, Google — in collaboration with researchers at Georgia Tech (led by Google DeepMind Research Scientist and Georgia Tech professor Dr. Thad Starner) and field researchers led by the Research Director and founder of the Wild Dolphin Project (WDP) Dr. Denise Herzing — revealed the developments made to their foundational AI model, DolphinGemma.
The AI model uses SoundStream, a Google audio tokenizer, to represent dolphin sounds as discrete tokens, which lets researchers identify recurring sound patterns, clusters, and reliable sequences.
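To make the idea concrete, here is a toy sketch in Python, not Google's actual pipeline: a simple amplitude quantizer stands in for a neural tokenizer like SoundStream, and counting recurring token n-grams stands in for mining "reliable sequences." Every name, number, and threshold below is invented for illustration.

```python
from collections import Counter

def quantize(samples, levels=16):
    """Stand-in for an audio tokenizer such as SoundStream: map each
    normalized amplitude in [0, 1] to one of `levels` discrete tokens."""
    return [min(int(s * levels), levels - 1) for s in samples]

def recurring_ngrams(tokens, n=3, min_count=2):
    """Count every length-n token sequence and keep those that recur,
    a crude analogue of spotting repeated patterns in vocalizations."""
    counts = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return {gram: c for gram, c in counts.items() if c >= min_count}

# A toy "recording" in which one motif (0.1, 0.5, 0.9) repeats three times.
signal = [0.1, 0.5, 0.9, 0.2, 0.1, 0.5, 0.9, 0.7, 0.1, 0.5, 0.9]
tokens = quantize(signal)
motifs = recurring_ngrams(tokens)
```

On this toy signal, the repeated motif surfaces as the single recurring 3-gram, while every one-off sequence is filtered out.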
From decades of underwater observations, WDP has linked specific dolphin sounds to specific behaviors. For example, mothers use signature whistles (unique identifiers that function like names) to call their calves back; burst-pulse 'squawks' are used during fights; and clicking 'buzzes' are heard during courtship and shark chases.
WDP has taken this exploration further by creating the Cetacean Hearing Augmentation Telemetry (CHAT) system with the Georgia Institute of Technology, aiming to establish a simpler, more stable shared vocabulary with the marine mammals.
Researchers use the CHAT system to create synthetic dolphin sounds and then associate those sounds with objects that the dolphins they're studying enjoy, including sargassum, seagrass, or scarves. From there, the CHAT system listens for the dolphins to mimic the sounds it creates, identifies which whistle was mimicked, and informs the researcher which object the dolphin is referring to. This allows the researcher to provide the requested item quickly, building the vocabulary and connection between the two.
With DolphinGemma, CHAT can anticipate and identify mimicked sounds faster. This increases the speed at which researchers can respond to the dolphin's requests, allowing for a more fluid communication sequence.
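The matching step described above can be sketched as a simple nearest-template lookup. This is a hypothetical illustration, not the CHAT system's actual implementation: the whistle "contours," vocabulary entries, and distance function below are all invented, and the real system works on far richer acoustic features.

```python
# Hypothetical CHAT-style vocabulary: each object is paired with a synthetic
# whistle, represented here as a short frequency contour (Hz over time).
VOCABULARY = {
    "sargassum": [4000, 6000, 8000],
    "seagrass":  [9000, 7000, 5000],
    "scarf":     [5000, 5000, 9000],
}

def contour_distance(a, b):
    """Sum of absolute frequency differences between two contours."""
    return sum(abs(x - y) for x, y in zip(a, b))

def identify_request(mimic):
    """Return the object whose whistle template best matches a heard mimic,
    so the researcher knows which item the dolphin is asking for."""
    return min(VOCABULARY, key=lambda obj: contour_distance(VOCABULARY[obj], mimic))

# A slightly off-pitch imitation of the "scarf" whistle still resolves correctly.
heard = [5200, 4900, 8800]
assert identify_request(heard) == "scarf"
```

The design point is tolerance: a dolphin's imitation will never match the synthetic whistle exactly, so the system picks the closest known template rather than requiring an exact match.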
Read the original article on People


Related Articles

Google fixes bug that led AI Overviews to say it's now 2024

TechCrunch

3 hours ago



AI tools are touted as capable helpers that can research, code, summarize, write, and bring you knowledge of any kind. But sometimes simple questions befuddle them. Google's AI Overviews, for example, got confused about what year it is. Several users reported over the past few days that when they asked Google what year it is, AI Overviews said the current year is 2024.

"How long until they manually take this one down? 😂😂" (Lily Ray, @lilyraynyc, May 29, 2025)

"wait did google announce a time machine at i/o?" (Dmitry Shevelenko, @dmitry140, May 29, 2025)

This reporter got the same answer on Thursday morning when asking Google whether it's 2025 right now. Google finally fixed the bug late on Thursday. When asked why this happened, Google didn't provide a particular reason, saying only that it was working on an update to avoid such issues.

"As with all Search features, we rigorously make improvements and use examples like this to update our systems. The vast majority of AI Overviews provide helpful, factual information, and we're actively working on an update to address this type of issue," a Google spokesperson said in a statement.

Google has been working on AI Overviews for a few years now, and the feature has made some notable blunders. To name just a couple: it recommended users eat "one small rock per day," claiming rocks are a good source of vitamins and minerals, and even suggested adding glue to help cheese stick to a pizza. When the company rolled out the feature in Hindi in India, TechCrunch found that its answers were often inconsistent and confusing.

In recent weeks, Google CEO Sundar Pichai has touted usage numbers for AI Overviews, saying the feature is now used by 1.5 billion users in over 100 countries. He also added that in markets like the U.S. and India, the feature is driving over 10% usage of the search engine for related queries.
The company is betting a lot on driving people to use its AI-powered search and Q&A features more.

Black Forest Labs' Kontext AI models can edit pics as well as generate them

Yahoo

9 hours ago



Black Forest Labs, the AI startup whose models once powered the image generation features of X's Grok chatbot, on Thursday released a new suite of image-generating models, some of which can both create and edit pics. The most capable model in the new family, called Flux.1 Kontext, can be prompted with text and, optionally, a reference image to create new images, Black Forest Labs writes in a blog post.

"The Flux.1 Kontext models deliver state-of-the-art image generation results with strong prompt following, photorealistic rendering, and competitive typography — all at inference speeds up to 8x faster than current leading models," the company writes in its post.

Flux.1 Kontext comes as the race to build competitive image generators heats up. Google debuted its latest image-generating model, Imagen 4, earlier this month at the company's I/O developer conference. Earlier this year, OpenAI brought a vastly improved image-generating model to ChatGPT, one that quickly went viral for its ability to create art in the style of Studio Ghibli movies.

There are two models in the Flux.1 Kontext family: Flux.1 Kontext [pro] and Flux.1 Kontext [max]. The former allows users to generate an image and refine it through multiple "turns," all while preserving the characters and styles in the images. Flux.1 Kontext [max] focuses on speed, consistency, and adherence to prompts. Unlike some of Black Forest Labs' previous models, Flux.1 Kontext [pro] and Flux.1 Kontext [max] can't be downloaded for offline use. However, Black Forest Labs is making an "open" Kontext model, Flux.1 Kontext [dev], available in private beta for research and safety testing.

Black Forest Labs is also launching a model playground that allows users to try its models without having to sign up for a third-party service. New users get 200 credits, enough to generate around 12 images with Flux.1 Kontext [pro].
Black Forest Labs, based in Germany, was said to be in talks to raise $100 million at a $1 billion valuation toward the end of last year. Many of the founders hail from Stability AI, the creator of the notorious Stable Diffusion image-generating model. Backers include Andreessen Horowitz, Oculus co-founder Brendan Iribe, and Y Combinator's Garry Tan. In the months since it emerged from stealth, Black Forest Labs has released a number of new image-generating models and enterprise-focused services, including an API. This article originally appeared on TechCrunch.

Google AI Overviews Says It's Still 2024

WIRED

17 hours ago



When asked to confirm the current year, Google's AI-generated top result confidently answers, "No, it is not 2025."

I've covered Google's AI Overviews since its messy rollout last year, when screenshots of absurdly wrong answers started popping up at the top of search results and going viral on social media. Still, when I first saw images of AI Overviews confidently saying that it's still 2024 in Reddit posts today, I thought they might be altered screenshots. I mean, over a billion Google users currently see the AI-generated results every month. Surely, such an obvious error would have been caught and patched by now?

I was wrong. Repeating the prompt "is it 2025" verbatim three times gave me three different AI Overview results, yet the first sentence was the same in each: "No, it is not 2025." The erroneous results cited multiple websites, ranging from Reddit's r/AskHistorians to a Wikipedia entry for the year 2025. The first time I asked Google Search for confirmation of the year, the AI results included a shot at the exact date as well: "It is May 28, 2024." (I ran these tests on May 29, 2025.)

The synthetic gaslighting got wilder, somehow, on the second try: "According to the provided information, for some locations it might be 2025, but not everywhere on Earth." However, my third attempt was so nonsensical and riddled with paradoxes that it's worth publishing the whole thing:

No, it is not 2025. The current year is 2024. According to the provided information, for some locations around the world, like Kiribati, New Zealand, and some parts of Russia and Australia, it is currently 2025. However, the current date is May 29, 2025 in San Francisco, CA [zip code redacted].

The twists! The turns! This AI Overview has everything. Starting off with a claim that it's actually last year. Then, a repeat of a previous mistake, this time more detailed, about it being 2025 somewhere else in the world right now. A New Zealand shout-out for some reason. Then, a Shyamalan-esque conclusion where it says the real date, paired with my home zip code.

Although Google likely knows where I am most waking minutes, having my zip code folded into the AI Overview result did make me feel uncomfortable. A spokesperson for Google was not immediately able to comment.

After the initial launch of AI Overviews a year ago, Liz Reid, who leads Search at Google, responded to the viral errors in a blog post that admitted the company screwed up. "There's nothing quite like having millions of people using the feature with many novel searches. We've also seen nonsensical new searches, seemingly aimed at producing erroneous results," she wrote. At the bottom of every AI Overview, the results today still contain a disclaimer that AI results may not be accurate.

At Google's I/O developer conference earlier this month, which I feel like I need to reassure you was in 2025, one of the biggest software launches was AI Mode, a chatbot-style version of Google Search designed for longer queries that's available to all users in the United States. In one of my tests where AI Overviews got the year wrong, the result included a big button at the bottom inviting me to "dig deeper" by opening AI Mode. The newer AI search option did, in fact, give the correct year on the first try. (A low bar.)

Even though Google is trying to improve its AI answers as it expands to additional generative search tools, odd results are sticking around. Recently, users also discovered that you could type a nonsensical phrase into Google Search, add the word "meaning" at the end, and get an AI Overview attempting to pass off whatever you typed as some kind of well-known saying. Like that case, this is an important reminder of the mistakes that persist in any kind of AI-generated output. When you're using software built on predicting the next word, as the large language models powering generative AI tools are designed to do, consistent skepticism is still required. These kinds of AI errors aren't going away anytime soon.
