
AI Is Deciphering Animal Speech. Should We Try to Talk Back?
Increasingly, animal researchers are deploying artificial intelligence to accelerate our investigations of animal communication—both within species and between branches on the tree of life. As scientists chip away at the complex communication systems of animals, they move closer to understanding what creatures are saying—and maybe even how to talk back. But as we try to bridge the linguistic gap between humans and animals, some experts are raising valid concerns about whether such capabilities are appropriate—or whether we should even attempt to communicate with animals at all.
Using AI to untangle animal language
Towards the front of the pack—or should I say pod?—is Project CETI, which has used machine learning to analyze more than 8,000 sperm whale 'codas'—structured click patterns recorded by the Dominica Sperm Whale Project. Researchers uncovered contextual and combinatorial structures in the whales' clicks, naming features like 'rubato' and 'ornamentation' to describe how whales subtly adjust their vocalizations during conversation. These patterns helped the team create a kind of phonetic alphabet for the animals—an expressive, structured system that may not be language as we know it but reveals a level of complexity that researchers weren't previously aware of. Project CETI is also working on ethical guidelines for the technology, a critical goal given the risks of using AI to 'talk' to the animals.
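To make features like 'tempo' and 'rubato' concrete, here is a minimal sketch—not Project CETI's actual pipeline—of how rhythm and tempo descriptors could be computed from a coda's click times. The timestamps below are invented for illustration.

```python
import numpy as np

# Hypothetical click timestamps (seconds) for two sperm whale codas.
# Real codas are short, structured sequences of broadband clicks.
coda_a = np.array([0.00, 0.18, 0.36, 0.55, 0.75])
coda_b = np.array([0.00, 0.16, 0.33, 0.52, 0.74, 0.97])  # one extra click, loosely "ornamentation"-like

def coda_features(click_times: np.ndarray) -> dict:
    """Summarize a coda by its tempo (overall duration) and rhythm (relative click spacing)."""
    intervals = np.diff(click_times)             # inter-click intervals
    duration = click_times[-1] - click_times[0]  # overall coda length
    rhythm = intervals / duration                # normalized spacing pattern, independent of tempo
    return {
        "n_clicks": len(click_times),
        "tempo_s": duration,
        "rhythm": np.round(rhythm, 3),
    }

for name, coda in [("coda_a", coda_a), ("coda_b", coda_b)]:
    print(name, coda_features(coda))
```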
Meanwhile, Google and the Wild Dolphin Project recently introduced DolphinGemma, a large language model (LLM) trained on 40 years of dolphin vocalizations. Just as ChatGPT is an LLM for human inputs—taking text and images and producing responses to relevant queries—DolphinGemma takes in dolphin sound data and predicts what vocalization comes next. DolphinGemma can even generate dolphin-like audio, and the researchers' prototype two-way system, Cetacean Hearing Augmentation Telemetry (fittingly, CHAT), uses a smartphone-based interface that dolphins employ to request items like scarves or seagrass—potentially laying the groundwork for future interspecies dialogue.
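DolphinGemma itself isn't something I can reproduce here, but the underlying idea—treat discretized sound units as tokens and train a model to predict the next one—can be sketched. The token inventory and the tiny recurrent model below are simplified stand-ins, not Google's architecture.

```python
import torch
import torch.nn as nn

# Assumed setup: dolphin recordings have been discretized into a small
# inventory of acoustic "units" by some audio tokenizer. DolphinGemma's
# actual tokenizer and model differ; this only illustrates next-unit prediction.
VOCAB_SIZE = 64   # number of distinct acoustic units (assumed)
EMBED_DIM = 128

class NextUnitModel(nn.Module):
    """Predict the next acoustic unit from the sequence so far."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.rnn = nn.GRU(EMBED_DIM, EMBED_DIM, batch_first=True)
        self.head = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, tokens):          # tokens: (batch, time)
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)        # logits for the next unit at each step

model = NextUnitModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy training step on a random stand-in "vocalization" sequence.
seq = torch.randint(0, VOCAB_SIZE, (1, 32))
logits = model(seq[:, :-1])
loss = loss_fn(logits.reshape(-1, VOCAB_SIZE), seq[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
print(f"toy next-unit loss: {loss.item():.3f}")
```

In a trained model of this kind, sampling from the output distribution is what would let a system generate plausible dolphin-like continuations.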
'DolphinGemma is being used in the field this season to improve our real-time sound recognition in the CHAT system,' said Denise Herzing, founder and director of the Wild Dolphin Project, which spearheaded the development of DolphinGemma in collaboration with researchers at Google DeepMind, in an email to Gizmodo. 'This fall we will spend time ingesting known dolphin vocalizations and let Gemma show us any repeatable patterns they find,' such as vocalizations used in courtship and mother-calf discipline.
In this way, Herzing added, the AI applications are two-fold: Researchers can use it both to explore dolphins' natural sounds and to better understand the animals' responses to human mimicking of dolphin sounds, which are synthetically produced by the AI CHAT system.
Expanding the animal AI toolkit
Outside the ocean, researchers are finding that human speech models can be repurposed to decode terrestrial animal signals, too. A University of Michigan-led team used Wav2Vec2—a speech recognition model trained on human voices—to identify dogs' emotions, genders, breeds, and even individual identities based on their barks. The pre-trained human model outperformed a version trained solely on dog data, suggesting that human language model architectures could be surprisingly effective in decoding animal communication.
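The Michigan team's exact training setup isn't reproduced here, but the general recipe—take a model pretrained on human speech and fine-tune a classification head on animal audio—looks roughly like the following with the Hugging Face transformers library. The bark labels and the random clip are placeholders, and the checkpoint is a stand-in for whatever the study used.

```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification

# Start from a model trained on human speech; the label set is a placeholder
# (the study also probed breed, sex, and individual identity).
LABELS = ["playful", "aggressive"]
extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base-960h", num_labels=len(LABELS)
)

# One fake 2-second clip at 16 kHz stands in for a labeled bark recording.
waveform = torch.randn(32000).numpy()
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
labels = torch.tensor([0])  # "playful"

# Single fine-tuning step; a real run would loop over a labeled bark dataset.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
print(f"loss: {outputs.loss.item():.3f}")
```

The point of the recipe is that the pretrained layers already encode useful acoustic structure, so only the small classification head has to be learned mostly from scratch.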
Of course, we need to consider the different levels of sophistication these AI models are targeting. Whether a dog's bark is aggressive or playful, or whether the dog is male or female, is understandably easier for a model to determine than, say, the nuanced meaning encoded in sperm whale phonetics. Nevertheless, each study inches scientists closer to understanding how AI tools, as they currently exist, can best be applied to such an expansive field—and gives the models more data to learn from, making them a more useful part of the researcher's toolkit.
And even cats—often seen as aloof—appear to be more communicative than they let on. In a 2022 study out of Paris Nanterre University, cats showed clear signs of recognizing their owner's voice, but beyond that, the felines responded more intensely when spoken to directly in 'cat talk.' That suggests cats not only pay attention to what we say, but also how we say it—especially when it comes from someone they know.
Earlier this month, a pair of cuttlefish researchers found evidence that the animals have a set of four 'waves,' or physical gestures, that they make to one another, as well as in response to playback of cuttlefish waves. The group plans to apply an algorithm that categorizes the wave types and automatically tracks the creatures' movements, helping them more quickly work out the contexts in which the animals make these signals.
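The pair haven't published that pipeline, so the sketch below is only a guess at its general shape: summarize each tracked clip as a handful of motion features and cluster them into the four reported wave types. The feature vectors here are random placeholders, and scikit-learn's k-means stands in for whatever algorithm they ultimately use.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical per-clip motion features extracted from video tracking,
# e.g. mean arm displacement, wave frequency, duration, body tilt.
n_clips, n_features = 200, 4
features = rng.normal(size=(n_clips, n_features))

# The study reports four wave types, so ask for four clusters.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
print("clips per candidate wave type:", np.bincount(kmeans.labels_))
```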
Private companies (such as Google) are also getting in on the act. Last week, China's largest search engine, Baidu, filed a patent with the country's IP administration proposing to translate animal (specifically cat) vocalizations into human language. The quick and dirty on the tech is that it would take in a trove of data from your kitty, then use an AI model to analyze that data, determine the animal's emotional state, and output the human-language message your pet was apparently trying to convey.
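The patent describes that pipeline only at a high level, so the snippet below is a guess at its shape rather than Baidu's design: classify an emotional state from some set of audio features, then map that state to a human-readable message. Both the classifier and the message table are placeholders.

```python
from dataclasses import dataclass

# Placeholder classifier; the patent describes an AI model analyzing the cat's
# data to infer an emotional state, but gives no implementation details.
def classify_emotion(meow_features: list[float]) -> str:
    return "hungry" if meow_features[0] > 0.5 else "content"

MESSAGES = {
    "hungry": "Feed me, please.",
    "content": "All good here.",
}

@dataclass
class Translation:
    emotion: str
    message: str

def translate_meow(meow_features: list[float]) -> Translation:
    emotion = classify_emotion(meow_features)
    return Translation(emotion=emotion, message=MESSAGES[emotion])

print(translate_meow([0.8, 0.1]))
```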
A universal translator for animals?
Together, these studies represent a major shift in how scientists are approaching animal communication. Rather than starting from scratch, research teams are building on tools and models designed for humans—and making advances that would have taken much longer otherwise. The end goal could (read: could) be a kind of Rosetta Stone for the animal kingdom, powered by AI.
'We've gotten really good at analyzing human language just in the last five years, and we're beginning to perfect this practice of transferring models trained on one dataset and applying them to new data,' said Sara Keen, a behavioral ecologist and electrical engineer at the Earth Species Project, in a video call with Gizmodo.
The Earth Species Project plans to launch its flagship audio-language model for animal sounds, NatureLM, this year, and a demo for NatureLM-audio is already live. With input data from across the tree of life—as well as human speech, environmental sounds, and even music—the model aims to become a converter of human speech into animal analogues. The model 'shows promising domain transfer from human speech to animal communication,' the project states, 'supporting our hypothesis that shared representations in AI can help decode animal languages.'
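NatureLM's architecture and training data aren't reproduced here, but the 'shared representations' hypothesis can be probed in miniature: embed a human utterance and an animal sound with the same pretrained speech encoder and compare the embeddings. The random tensors below stand in for real clips, and wav2vec 2.0 stands in for the project's own models.

```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

# A shared encoder trained on human speech; NatureLM itself is not used here.
extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")

def embed(waveform):
    """Encode a 16 kHz clip and mean-pool over time into a single vector."""
    inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, time, dim)
    return hidden.mean(dim=1).squeeze(0)

# Random tensors stand in for a human utterance and an animal vocalization.
speech_clip = torch.randn(16000).numpy()
animal_clip = torch.randn(16000).numpy()
similarity = torch.cosine_similarity(embed(speech_clip), embed(animal_clip), dim=0)
print(f"cosine similarity between embeddings: {similarity.item():.3f}")
```

A high similarity on its own wouldn't prove anything, but this kind of representational overlap is what the domain-transfer claim is betting on.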
'A big part of our work really is trying to change the way people think about our place in the world,' Keen added. 'We're making cool discoveries about animal communication, but ultimately we're finding that other species are just as complicated and nuanced as we are. And that revelation is pretty exciting.'
The ethical dilemma
Indeed, researchers generally agree on the promise of AI-based tools for improving the collection and interpretation of animal communication data. But some feel that there's a breakdown in communication between that scholarly familiarity and the public's perception of how these tools can be applied.
'I think there's currently a lot of misunderstanding in the coverage of this topic—that somehow machine learning can create this contextual knowledge out of nothing. That so long as you have thousands of hours of audio recordings, somehow some magic machine learning black box can squeeze meaning out of that,' said Christian Rutz, an expert in animal behavior and cognition and founding president of the International Bio-Logging Society, in a video call with Gizmodo. 'That's not going to happen.'
'Meaning comes through the contextual annotation and this is where I think it's really important for this field as a whole, in this period of excitement and enthusiasm, to not forget that this annotation comes from basic behavioral ecology and natural history expertise,' Rutz added. In other words, let's not put the cart before the horse—especially since the horse, in this case, is what's powering the cart.
But with great power…you know the cliché. Essentially, how can humans develop and apply these technologies in a way that is scientifically illuminating while minimizing harm or disruption to the animals being studied? Experts have put forward ethical standards and guardrails for using the technologies that prioritize the welfare of the creatures involved as we get closer to—well, wherever the technology is going.
As AI advances, conversations about animal rights will have to evolve. In the future, animals could become more active participants in those conversations—a notion that legal experts are exploring as a thought exercise, but one that could someday become reality.
'What we desperately need—apart from advancing the machine learning side—is to forge these meaningful collaborations between the machine learning experts and the animal behavior researchers,' Rutz said, 'because it's only when you put the two of us together that you stand a chance.'
There's no shortage of communication data to feed into data-hungry AI models, from pitch-perfect prairie dog squeaks to snails' slimy trails (yes, really). But exactly how we make use of the information we glean from these new approaches requires thorough consideration of the ethics involved in 'speaking' with animals.
A recent paper on the ethical concerns of using AI to communicate with whales outlined six major problem areas. These include privacy rights, cultural and emotional harm to whales, anthropomorphism, technological solutionism (an overreliance on technology to fix problems), gender bias, and limited effectiveness for actual whale conservation. That last issue is especially urgent, given how many whale populations are already under serious threat.
It increasingly appears that we're on the brink of learning much more about the ways animals interact with one another—indeed, pulling back the curtain on their communication could also yield insights into how they learn, socialize, and act within their environments. But there are still significant challenges to overcome, such as deciding how we should use the powerful technologies currently in development.