14-04-2025
Humans will soon understand dolphins, claim experts
The launch of a new artificial intelligence model has brought humans closer to understanding dolphins, experts claim.
Google DeepMind's DolphinGemma was trained on the world's largest collection of vocalisations from Atlantic spotted dolphins, recorded over several years by the Wild Dolphin Project.
It is hoped the recently launched large language model will be able to pick out hidden patterns, potential meanings and even language from the animals' clicks and whistles.
Dr Denise Herzing, the founder and research director of the Wild Dolphin Project, said: 'We do not know if animals have words. Dolphins can recognise themselves in the mirror, they use tools, so they're smart – but language is still the last barrier.
'So feeding dolphin sounds into an AI model will give us a really good look at if there are patterns, subtleties that humans can't pick out.
'You're going to understand what priorities they have, what they are talking about.
'The goal would someday be to "speak dolphin", and we're really trying to crack the code. I've been waiting for this for 40 years.'
Dolphins have complex communication: from birth they squawk, click, and squeak to each other, and they even use unique signature whistles to address individuals by name.
Mothers often use specific noises to call their calves back, while fighting dolphins emit burst-pulses, and those courting, or chasing sharks, make buzzing sounds.
For decades, researchers have been trying to decode the chatter, but pods roam over such vast distances that monitoring them, and spotting patterns in the recordings, has proved too difficult for humans alone.
The new AI is designed to search through thousands of sounds that have been linked to behaviour, to try to find sequences that could indicate words or language.
Dr Thad Starner, a Google DeepMind research scientist, said: 'By identifying recurring sound patterns, clusters and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins' natural communication – a task previously requiring immense human effort.
'We're not just listening any more. We're beginning to understand the patterns within the sounds, paving the way for a future where the gap between human and dolphin communication might just get a little smaller.
'We can keep on fine-tuning the model as we go and hopefully get a better and better understanding of what dolphins are producing.'
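The article does not describe DolphinGemma's internals, but the kind of pattern mining Dr Starner describes can be sketched in miniature: given recordings labelled with sound types and the behaviour observed at the time, count which short sequences of sounds recur and which behaviours they coincide with. The labels, data, and code below are illustrative only, not the project's actual method.

```python
from collections import Counter, defaultdict

# Hypothetical recordings: each is a sequence of sound-type labels plus the
# behaviour observed at the time. All labels and data here are made up.
recordings = [
    (["click", "whistle_A", "burst_pulse", "whistle_A"], "fighting"),
    (["whistle_A", "click", "whistle_A", "squawk"], "play"),
    (["click", "whistle_A", "burst_pulse", "click"], "fighting"),
]

def ngrams(seq, n):
    """Yield every run of n consecutive sounds in a recording."""
    for i in range(len(seq) - n + 1):
        yield tuple(seq[i:i + n])

# Count how often each two-sound sequence appears, and under which behaviour.
pair_counts = Counter()
behaviours = defaultdict(Counter)
for sounds, behaviour in recordings:
    for pair in ngrams(sounds, 2):
        pair_counts[pair] += 1
        behaviours[pair][behaviour] += 1

# Report sequences that recur: a crude analogue of a "reliable sequence".
for pair, count in pair_counts.most_common():
    if count > 1:
        print(pair, count, dict(behaviours[pair]))
```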
Talking 'in dolphin'
The team hopes that eventually it will be able to synthesise the sounds in order to talk back 'in dolphin', or to develop a new shared vocabulary. It has invented a device that can play whistles in the water so that dolphins can learn to associate the noises with certain objects.
Describing the technique, Dr Starner added: 'Two researchers get into the water with a group of dolphins and Researcher A might have a scarf – a toy that the dolphins want to play with – and Researcher B is going to ask for that scarf. So Researcher B can play a whistle and Researcher A will hand Researcher B that scarf.
'They might pass the scarf back and forth a couple of times, playing that whistle over and over, and the hope is the dolphins who are watching all of this will figure out the social content and can repeat that sound to ask for the scarf. If that happens, then dolphins have mimicked one word in our tiny made-up dolphin language.'
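As a rough illustration of the playback idea, the sketch below synthesises a distinct frequency-sweep whistle for each toy, so that one sound stands for one object. The sample rate, the frequencies and the object names are assumptions made for the example, not specifications of the Wild Dolphin Project's device.

```python
import numpy as np

SAMPLE_RATE = 48_000  # Hz; an assumed rate, not the project's hardware spec

def make_whistle(f_start, f_end, duration=0.5):
    """Synthesize a simple frequency sweep, a stand-in for one 'word'."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    # Linear chirp: instantaneous frequency moves from f_start to f_end.
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) * t**2 / (2 * duration))
    return np.sin(phase).astype(np.float32)

# A tiny made-up vocabulary: each toy gets its own distinctive sweep.
vocabulary = {
    "scarf": make_whistle(8_000, 12_000),
    "rope": make_whistle(12_000, 8_000),
}

# Researcher B "asks" for the scarf by playing its whistle underwater.
waveform = vocabulary["scarf"]
```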
Researchers will be able to input their own data into DolphinGemma, which was released as open source on Monday, to try to accelerate advancements in the field.
Separately, the University of La Laguna in Spain announced this week that it had developed a new AI system for classifying the vocalisations of orcas in real time.
The research project, funded by the Loro Parque Foundation, used more than 75,000 orca sounds, recorded and classified over nearly two decades at Loro Parque, to develop a neural network.
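The announcement does not detail the La Laguna system's architecture, but a common approach to classifying calls in real time is to score short spectrogram windows with a small convolutional network as the audio streams in. The sketch below assumes that approach; the class count, the input shape and the network design are placeholders, not the published system.

```python
import torch
import torch.nn as nn

# A minimal convolutional classifier over spectrogram patches, the kind of
# network commonly used for call-type classification. Sizes are illustrative.
N_CALL_TYPES = 10  # assumed number of orca call categories

class CallClassifier(nn.Module):
    def __init__(self, n_classes=N_CALL_TYPES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool each channel down to one value
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, spectrogram):  # (batch, 1, freq_bins, time_frames)
        x = self.features(spectrogram).flatten(1)
        return self.head(x)

# Score one short spectrogram window, as a real-time pipeline would as
# audio streams in; the input here is random stand-in data.
model = CallClassifier()
window = torch.randn(1, 1, 128, 64)  # fake mel-spectrogram patch
call_type = model(window).argmax(dim=1)
```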