
Humans will soon understand dolphins, claim experts
The launch of a new artificial intelligence model has brought humans closer to understanding dolphins, experts claim.
Google DeepMind's DolphinGemma has been trained on the world's largest collection of vocalisations from Atlantic spotted dolphins, recorded over decades by the Wild Dolphin Project.
It is hoped the recently launched large language model will be able to pick out hidden patterns, potential meanings and even language from the animals' clicks and whistles.
Dr Denise Herzing, the founder and research director of the Wild Dolphin Project, said: 'We do not know if animals have words. Dolphins can recognise themselves in the mirror, they use tools, so they're smart – but language is still the last barrier.
'So feeding dolphin sounds into an AI model will give us a really good look at if there are patterns, subtleties that humans can't pick out.
'You're going to understand what priorities they have, what they are talking about.
'The goal would someday be to 'speak dolphin', and we're really trying to crack the code. I've been waiting for this for 40 years.'
Dolphins have complex communication, and from birth will squawk, click, and squeak to each other, and even use unique whistles to address individuals by name.
Mothers often use specific noises to call their calves back, while fighting dolphins emit burst-pulses, and those courting, or chasing sharks, make buzzing sounds.
For decades, researchers have been trying to decode the chatter, but monitoring pods across vast distances has made it too difficult for humans to detect patterns.
The new AI is designed to search through thousands of sounds that have been linked to behaviour, looking for sequences that could indicate words or language.
Dr Thad Starner, a Google DeepMind research scientist, said: 'By identifying recurring sound patterns, clusters and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins' natural communication – a task previously requiring immense human effort.
'We're not just listening any more. We're beginning to understand the patterns within the sounds, paving the way for a future where the gap between human and dolphin communication might just get a little smaller.
'We can keep on fine tuning the model as we go and hopefully get better and better understanding of what dolphins are producing.'
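For readers curious what "identifying recurring sound patterns, clusters and reliable sequences" can look like in practice, the sketch below is a minimal illustration built from standard open-source audio tools, not DolphinGemma's actual pipeline: it summarises a folder of short whistle clips as spectral features, groups similar clips with k-means, and counts which cluster-to-cluster transitions recur. The folder path, number of clusters and feature settings are placeholder assumptions.

```python
# Minimal illustration (not DolphinGemma): cluster whistle clips by spectral
# features, then count recurring cluster-to-cluster transitions.
from collections import Counter
from pathlib import Path

import librosa          # audio loading and feature extraction
import numpy as np
from sklearn.cluster import KMeans

CLIP_DIR = Path("whistle_clips")   # hypothetical folder of short .wav clips
N_CLUSTERS = 12                    # placeholder number of candidate "call types"

def clip_features(path: Path) -> np.ndarray:
    """Summarise one clip as the mean of its MFCC frames."""
    audio, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

paths = sorted(CLIP_DIR.glob("*.wav"))                  # clips in recording order
features = np.stack([clip_features(p) for p in paths])

# Group acoustically similar clips into candidate call types.
labels = KMeans(n_clusters=N_CLUSTERS, n_init=10, random_state=0).fit_predict(features)

# Count which call-type pairs follow each other most often in the recordings.
transitions = Counter(zip(labels[:-1], labels[1:]))
for (a, b), count in transitions.most_common(10):
    print(f"cluster {a} -> cluster {b}: {count} times")
```

A real analysis would work on far richer features and far more data, but the same basic idea applies: turn sounds into comparable representations, then look for structure that repeats.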
Talking 'in dolphin'
The team is hoping that eventually it will be able to synthesise the sounds in order to talk back 'in dolphin' or develop a new shared vocabulary. It invented a device that can play whistles in the water so that dolphins can learn to associate the noise with certain objects.
Describing the technique, Dr Starner added: 'Two researchers get into the water with a group of dolphins and Researcher A might have a scarf – a toy that the dolphins want to play with – and Researcher B is going to ask for that scarf. So Researcher B can play a whistle and Researcher A will hand Researcher B that scarf.
'They might pass the scarf back and forth a couple of times, playing that whistle over and over, and the hope is the dolphins who are watching all of this will figure out the social content and can repeat that sound to ask for the scarf. If that happens, then dolphins have mimicked one word in our tiny made-up dolphin language.'
Researchers will be able to input their own data into DolphinGemma, which was released as open source on Monday, to try to accelerate advancements in the field.
Separately, the University of La Laguna in Spain announced this week that it had developed a new AI system for classifying orca vocalisations in real time.
The research project, funded by the Loro Parque Foundation, used more than 75,000 orca sounds, recorded and classified over nearly two decades at Loro Parque, to develop a neural network.
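The La Laguna team's architecture is not described here, but as a rough illustration of what such a vocalisation classifier involves, the sketch below defines a small convolutional network in PyTorch that maps the mel-spectrogram of a short audio window to one of several call-type labels. The window length, number of call types and layer sizes are assumptions for illustration, not details of the published system.

```python
# Illustrative sketch of a spectrogram-based call classifier (not the
# University of La Laguna's system): a small CNN that labels short audio
# windows with a call type.
import torch
import torch.nn as nn
import torchaudio

N_CALL_TYPES = 10        # placeholder number of orca call categories
SAMPLE_RATE = 44_100     # assumed recording sample rate
WINDOW_SECONDS = 2.0     # assumed length of each classified audio window

mel = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=64)

class CallClassifier(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),             # collapse to one value per channel
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram shape: (batch, 1, n_mels, time_frames)
        x = self.conv(spectrogram).flatten(1)
        return self.head(x)                      # raw class scores (logits)

model = CallClassifier(N_CALL_TYPES).eval()

# One incoming window of audio (random noise stands in for a real recording).
audio = torch.randn(1, int(SAMPLE_RATE * WINDOW_SECONDS))
spec = mel(audio).unsqueeze(1)                   # add channel dimension
with torch.no_grad():
    predicted_call = model(spec).argmax(dim=1).item()
print("predicted call type:", predicted_call)
```

Trained on a labelled archive such as the one described above, a model of this shape can score each incoming window fast enough to keep up with a live audio feed, which is what "real time" classification requires.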

Related Articles


The Guardian, 18 hours ago
We're close to translating animal languages – what happens then?
Charles Darwin suggested that humans learned to speak by mimicking birdsong: our ancestors' first words may have been a kind of interspecies exchange. Perhaps it won't be long before we join the conversation once again.

The race to translate what animals are saying is heating up, with riches as well as a place in history at stake. The Jeremy Coller Foundation has promised $10m to whichever researchers can crack the code. This is a race fuelled by generative AI; large language models can sort through millions of recorded animal vocalisations to find their hidden grammars.

Most projects focus on cetaceans because, like us, they learn through vocal imitation and, also like us, they communicate via complex arrangements of sound that appear to have structure and hierarchy. Sperm whales communicate in codas – rapid sequences of clicks, each as brief as a thousandth of a second. Project Ceti (the Cetacean Translation Initiative) is using AI to analyse codas in order to reveal the mysteries of sperm whale speech. There is evidence the animals take turns, use specific clicks to refer to one another, and even have distinct dialects. Ceti has already isolated a click that may be a form of punctuation, and they hope to speak whaleish as soon as 2026.

The linguistic barrier between species is already looking porous. Last month, Google released DolphinGemma, an AI program to translate dolphins, trained on 40 years of data. In 2013, scientists using an AI algorithm to sort dolphin communication identified a new click in the animals' interactions with one another, which they recognised as a sound they had previously trained the pod to associate with sargassum seaweed – the first recorded instance of a word passing from one species into another's native vocabulary.

The prospect of speaking dolphin or whale is irresistible. And it seems that they are just as enthusiastic. In November last year, scientists in Alaska recorded an acoustic 'conversation' with a humpback whale called Twain, in which they exchanged a call-and-response form known as 'whup/throp' with the animal over a 20-minute period. In Florida, a dolphin named Zeus was found to have learned to mimic the vowel sounds A, E, O and U.

But in the excitement we should not ignore the fact that other species are already bearing eloquent witness to our impact on the natural world. A living planet is a loud one. Healthy coral reefs pop and crackle with life. But soundscapes can decay just as ecosystems can. Degraded reefs are hushed deserts. Since the 1960s, shipping and mining have raised background noise in the oceans by about three decibels a decade. Humpback whale song occupies the same low-frequency bandwidth as deep-sea dredging and drilling for the rare earths that are vital for electronic devices. Ironically, mining the minerals we need to communicate cancels out whales' voices.

Humpback whale songs are incredible vocal performances, sometimes lasting up to 24 hours. 'Song' is apt: they seem to include rhymed phrases, and their compositions travel the oceans with them, evolving as they go in a process called 'song revolutions', where a new cycle replaces the old. (Imagine if Nina Simone or the Beatles had erased their back catalogue with every new release.) The songs are crucial to migration and breeding seasons. But in today's louder soundscape, whale song is crowded out of its habitual bandwidth and even driven to silence – from up to 1.2km away from commercial ships, humpback whales will cease singing rather than compete with the noise.

In interspecies translation, sound only takes us so far. Animals communicate via an array of visual, chemical, thermal and mechanical cues, inhabiting worlds of perception very different to ours. Can we really understand what sound means to echolocating animals, for whom sound waves can be translated visually? The German ecologist Jakob von Uexküll called these impenetrable worlds umwelten. To truly translate animal language, we would need to step into that animal's umwelt – and then, what of us would be imprinted on her, or her on us? 'If a lion could talk,' writes Stephen Budiansky, revising Wittgenstein's famous aphorism in Philosophical Investigations, 'we probably could understand him. He just would not be a lion any more.'

We should ask, then, how speaking with other beings might change us. Talking to another species might be very like talking to alien life. It's no coincidence that Ceti echoes Seti – the Search for Extraterrestrial Intelligence. In fact, a Seti Institute team recorded the whup/throp exchange, on the basis that learning to speak with whales may help us if we ever meet intelligent extraterrestrials.

In Denis Villeneuve's film Arrival, whale-like aliens communicate via a script in which the distinction between past, present and future collapses. For Louise, the linguist who translates the script, learning Heptapod lifts her mind out of linear time and into a reality in which her own past and future are equally available. The film mentions Edward Sapir and Benjamin Whorf's theory of linguistic determinism – the idea that our experience of reality is encoded in language – to explain this. The Sapir-Whorf hypothesis was dismissed in the mid-20th century, but linguists have since argued that there may be some truth to it. Pormpuraaw speakers in northern Australia refer to time moving from east to west, rather than forwards or backwards as in English, making time indivisible from the relationship between their body and the land.

Whale songs are born from an experience of time that is radically different to ours. Humpbacks can project their voices over miles of open water; their songs span the widest oceans. Imagine the swell of oceanic feeling on which such sounds are borne. Speaking whale would expand our sense of space and time into a planetary song. I imagine we'd think very differently about polluting the ocean soundscape so carelessly.

Where it counts, we are perfectly able to understand what nature has to say; the problem is, we choose not to. As incredible as it would be to have a conversation with another species, we ought to listen better to what they are already telling us.

David Farrier is the author of Nature's Genius: Evolution's Lessons for a Changing Planet (Canongate).

Further reading:
Why Animals Talk by Arik Kershenbaum (Viking, £10.99)
Philosophical Investigations by Ludwig Wittgenstein (Wiley-Blackwell, £24.95)
An Immense World by Ed Yong (Vintage, £12.99)


Reuters, 26 April 2025
DeepMind UK staff plan to unionise and challenge deals with Israel links, FT reports
April 26 (Reuters) - Google DeepMind staff in Britain plan to unionise to challenge the company's decision to sell its artificial intelligence (AI) technologies to defence groups with ties to the Israeli government, the Financial Times reported on Saturday.

About 300 London-based staff of Google DeepMind have been seeking to join the Communication Workers Union (CWU) in recent weeks, the report said, citing people familiar with the matter. Google, Google DeepMind and the CWU did not immediately respond to a Reuters request for comment.

Media reports suggesting that Google (GOOGL.O) is selling its cloud services and AI technology to the Israeli Ministry of Defence have caused disquiet among employees, according to the report. Google has run into trouble previously over its connections to Israel, dismissing 28 employees last year who protested against the tech giant's cloud contract with the Israeli government.

