
Latest news with #DeniseHerzing

Clicks & growls: Why AI's hearing the call of the wild

Mint

29-04-2025



Google has used artificial intelligence (AI) to decode and mimic dolphin sounds, advancing our understanding of marine life. But can AI truly outperform human insight in interpreting animal communication?

Dolphins are famously socially skilled, intelligent, agile, joyful and playful, and so share many emotional similarities with (some) humans. Just as British ethologist Jane Goodall studied the social and family interactions of wild chimpanzees, Denise Herzing has studied dolphin communication in the wild since 1985, making the Wild Dolphin Project (WDP) the longest-running underwater dolphin research project in the world. Google, in partnership with Georgia Tech and WDP, used an AI model to analyse vocal patterns much like a language model would, identifying structure and predicting sequences.

Dolphins use distinct sounds for different social situations: whistles act like names for individual identification, especially between mothers and calves; squawks often occur during conflicts; and click buzzes are used in courtship. DolphinGemma, Google's 400-million-parameter model that runs on Pixel 6 smartphones, decodes these sounds by combining audio technology with data WDP acquired by studying wild dolphins in the Bahamas. On National Dolphin Day (14 April), Google showcased advances to its AI model, which can now analyse dolphin vocal patterns and generate realistic, dolphin-like sounds.

AI is also being used to study how parrots, crows, wolves, whales, chimpanzees and octopuses communicate. NatureLM-audio is the first audio-language model built for animal sounds and can generalize to unseen species. Other projects use AI and robotics to decode sperm whale clicks, or listen to elephants for possible warning calls and mating signals. Such work aids conservation by monitoring endangered species; decoding communication reveals ecosystem health, alerting us to pollution and climate change; and it enhances human-animal interactions and fosters empathy. AI, combined with satellite imagery, camera traps and bioacoustics, is being used in Guacamaya, a collaboration between Universidad de los Andes, Instituto SINCHI, Instituto Humboldt, Planet Labs and Microsoft AI for Good Lab, to monitor deforestation and protect the Amazon.

AI can detect animal sound patterns, but without context (is the animal mating, feeding or in danger?) its interpretations are limited. The risk of humans assuming animals 'talk' like humans do, messy field data and species-specific behaviours can all complicate analysis. AI might identify correlations but not true meaning or intent; human intuition still helps. These systems often require custom models and extensive resources, making large-scale, accurate decoding of animal communication a complex effort.
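
DolphinGemma's internals are not spelled out here, but the "vocal patterns treated like a language model" idea can be illustrated in a few lines. The Python sketch below is purely illustrative and not Google's method: the audio clip, the cluster count and the bigram transition model are all assumptions. It vector-quantises spectrogram frames into discrete "sound tokens" and counts which token tends to follow which, a toy stand-in for the sequence prediction a real model learns.

```python
# Illustrative sketch only: discretize audio into "sound tokens" and model
# which tokens tend to follow which. A real model such as DolphinGemma is a
# far larger learned sequence model; this shows the core idea at bigram scale.
import numpy as np
import librosa
from sklearn.cluster import KMeans
from collections import Counter, defaultdict

sr = 48_000
# Stand-in for a recorded dolphin clip (hypothetical: replace with a real file).
y = np.random.randn(sr * 5).astype(np.float32)

# Mel spectrogram frames serve as short-time acoustic features.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
frames = librosa.power_to_db(mel).T          # shape: (time, n_mels)

# Vector-quantize frames into a small discrete "vocabulary" of sound tokens.
km = KMeans(n_clusters=32, n_init=10, random_state=0).fit(frames)
tokens = km.predict(frames)

# Count token-to-token transitions: a minimal model of sequence structure.
transitions = defaultdict(Counter)
for a, b in zip(tokens[:-1], tokens[1:]):
    transitions[a][b] += 1

def predict_next(token_id):
    """Most likely next token given the current one (bigram prediction)."""
    counts = transitions[token_id]
    return counts.most_common(1)[0][0] if counts else None

print("after token 0, most likely next token:", predict_next(0))
```

In practice the discrete tokens would feed a far more capable sequence model, but the pipeline shape (featurise, quantise, predict what comes next) is the same idea the article describes.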

Humans will soon understand dolphins, claim experts

Telegraph

14-04-2025



The launch of a new artificial intelligence model has brought humans closer to understanding dolphins, experts claim. Google DeepMind's DolphinGemma is programmed with the world's largest collection of vocalisations from Atlantic spotted dolphins, recorded over several years by the Wild Dolphin Project. It is hoped the recently launched large language model will be able to pick out hidden patterns, potential meanings and even language from the animals' clicks and whistles.

Dr Denise Herzing, the founder and research director of the Wild Dolphin Project, said: 'We do not know if animals have words. Dolphins can recognise themselves in the mirror, they use tools, so they're smart – but language is still the last barrier.

'So feeding dolphin sounds into an AI model will give us a really good look at whether there are patterns and subtleties that humans can't pick out. You're going to understand what priorities they have, what they are talking about.

'The goal would someday be to "speak dolphin", and we're really trying to crack the code. I've been waiting for this for 40 years.'

Dolphins have complex communication, and from birth will squawk, click and squeak to each other, and even use unique whistles to address individuals by name. Mothers often use specific noises to call their calves back, while fighting dolphins emit burst-pulses, and those courting, or chasing sharks, make buzzing sounds. For decades, researchers have been trying to decode the chatter, but monitoring pods over vast distances has made it too difficult for humans to detect patterns.

The new AI is programmed to search through thousands of sounds that have been linked to behaviour, trying to find sequences that could indicate words or language.

Dr Thad Starner, a Google DeepMind research scientist, said: 'By identifying recurring sound patterns, clusters and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins' natural communication – a task previously requiring immense human effort.

'We're not just listening any more. We're beginning to understand the patterns within the sounds, paving the way for a future where the gap between human and dolphin communication might just get a little smaller.

'We can keep on fine-tuning the model as we go and hopefully get a better and better understanding of what dolphins are producing.'

Talking 'in dolphin'

The team hopes that eventually it will be able to synthesise the sounds in order to talk back 'in dolphin' or develop a new shared vocabulary. It invented a device that can play whistles in the water so that dolphins can learn to associate the noise with certain objects.

Describing the technique, Dr Starner added: 'Two researchers get into the water with a group of dolphins and Researcher A might have a scarf – a toy that the dolphins want to play with – and Researcher B is going to ask for that scarf. So Researcher B can play a whistle and Researcher A will hand Researcher B that scarf.

'They might pass the scarf back and forth a couple of times, playing that whistle over and over, and the hope is the dolphins who are watching all of this will figure out the social content and can repeat that sound to ask for the scarf. If that happens, then dolphins have mimicked one word in our tiny made-up dolphin language.'

Researchers will be able to input their own data into DolphinGemma, released as open source on Monday, to try to accelerate advancements in the field.
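
One concrete sub-problem in the scarf game is deciding whether a sound picked up underwater is a mimic of the synthetic whistle the researchers played. The sketch below is a hedged illustration, not the team's actual matching method: the toy whistles, feature choice and threshold idea are all assumptions. It compares MFCC sequences with dynamic time warping, a common template-matching approach in bioacoustics.

```python
# Hypothetical sketch: is a detected sound a mimic of a known synthetic whistle?
import numpy as np
import librosa

sr = 48_000

def whistle(freqs_hz, dur=0.5):
    """Synthesize a toy frequency-swept whistle (stand-in for a real recording)."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    f = np.linspace(freqs_hz[0], freqs_hz[1], t.size)
    return np.sin(2 * np.pi * np.cumsum(f) / sr).astype(np.float32)

template = whistle((8_000, 12_000))     # the "scarf" whistle researchers play
candidate = whistle((8_200, 11_800))    # a sound picked up by the hydrophone
unrelated = whistle((3_000, 2_000))     # an unrelated sweep for comparison

def dtw_distance(a, b):
    """DTW distance between MFCC sequences; lower means more similar."""
    ma = librosa.feature.mfcc(y=a, sr=sr, n_mfcc=13)
    mb = librosa.feature.mfcc(y=b, sr=sr, n_mfcc=13)
    D, _ = librosa.sequence.dtw(X=ma, Y=mb)
    return D[-1, -1] / (ma.shape[1] + mb.shape[1])

print("candidate vs template:", dtw_distance(candidate, template))
print("unrelated vs template:", dtw_distance(unrelated, template))
# A threshold on this distance (tuned on labelled examples) would flag a match.
```

A distance threshold like this is only the matching step; deciding that a dolphin intended the mimic as a request still depends on the behavioural context the researchers observe.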
Separately, the University of La Laguna in Spain announced this week that it had developed a new AI system for classifying orca vocalisations in real time. The research project, funded by the Loro Parque Foundation, used more than 75,000 orca sounds, recorded and classified over nearly two decades at Loro Parque, to develop a neural network.
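
The La Laguna team's architecture is not described here; as a rough sketch of the kind of neural network typically used to classify call types from spectrograms, the Python example below uses PyTorch. The layer sizes, the number of call categories and the random tensors standing in for the 75,000 labelled sounds are all assumptions.

```python
# Minimal sketch (not the La Laguna team's actual model) of a spectrogram
# classifier for call types. Random tensors stand in for labelled orca sounds.
import torch
import torch.nn as nn

N_CALL_TYPES = 10  # hypothetical number of orca call categories

class CallClassifier(nn.Module):
    def __init__(self, n_classes=N_CALL_TYPES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, spec):          # spec: (batch, 1, mel_bins, frames)
        return self.head(self.features(spec).flatten(1))

model = CallClassifier()
batch = torch.randn(8, 1, 64, 128)    # 8 fake mel spectrograms
labels = torch.randint(0, N_CALL_TYPES, (8,))
loss = nn.CrossEntropyLoss()(model(batch), labels)
loss.backward()                        # one illustrative training step
print("dummy loss:", float(loss))
```

For real-time use, a trained model of this sort would simply be run on short, sliding spectrogram windows as audio streams in from the hydrophones.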
