Dolphin whistle decoders win $100,000 interspecies communication prize
A $100,000 prize for communicating with animals has been scooped by researchers who have shed light on the meaning of dolphins' whistles.
The Coller-Dolittle Prize for Two-way Inter-species Communication was launched last year by the Jeremy Coller Foundation and Tel Aviv University.
The winning team, the Sarasota Dolphin Research Program led by Laela Sayigh and Peter Tyack from the Woods Hole Oceanographic Institution, has been studying bottle-nosed dolphins in waters near Sarasota, Florida, for more than four decades.
The researchers used non-invasive technologies such as hydrophones and digital acoustic tags attached by suction cups to record the animals' sounds. These include name-like 'signature' whistles, as well as 'non-signature' whistles – sounds that make up about 50% of the animals' calls but are poorly understood.
In their latest work, which has not yet been peer-reviewed, the team identified at least 20 different types of non-signature whistle that are produced by multiple dolphins, finding two types were each shared by at least 25 individuals.
When the researchers played these two sounds back to dolphins they found one triggered avoidance in the animals, suggesting it could be an alarm signal, while the other triggered a range of responses, suggesting it could be a sound made by dolphins when they encounter something unexpected.
Sayigh said the win was a surprise, adding: 'I really didn't expect it, so I am beyond thrilled. It is such an honour.'
The judging panel was led by Yossi Yovel, professor of zoology at Tel Aviv University, whose own team has previously used machine-learning algorithms to unpick the meaning of squeaks made by bats as they argue.
'We were mostly impressed by the long-term, huge dataset that was created, and we're sure that it will lead to many more new and interesting results,' said Yovel, adding the judges were also impressed by the team's use of non-invasive technology to record the animals' calls, and the use of drones and speakers to demonstrate the dolphins' responses in the field.
Yovel added the judges hoped the prize would aid the application of AI to the data to reveal even more impressive results.
Jonathan Birch, a professor of philosophy at the London School of Economics and one of the judges, said the main thing stopping humans from cracking the code of animal communication was a lack of data.
'Think of the trillion words needed to train a large language model like ChatGPT. We don't have anything like this for other animals,' he said.
'That's why we need programs like the Sarasota Dolphin Research Program, which has built up an extraordinary library of dolphin whistles over 40 years. The cumulative result of all that work is that Laela Sayigh and her team can now use deep learning to analyse the whistles and perhaps, one day, crack the code.'
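To make the idea concrete, the sketch below shows one common way deep learning is applied to bioacoustic recordings: training a small convolutional network on spectrograms of whistle clips. This is an illustrative assumption only, not the Sarasota team's actual pipeline; the model shape, the number of whistle types and the data are invented placeholders.

```python
# Hypothetical sketch: classifying dolphin whistle types from spectrograms.
# NOT the Sarasota Dolphin Research Program's actual method; data and labels
# below are random stand-ins for illustration only.
import torch
import torch.nn as nn

class WhistleClassifier(nn.Module):
    """Small CNN mapping a (1, 128, 256) mel-spectrogram to whistle-type logits."""
    def __init__(self, n_types: int = 22):  # assumed: ~20+ non-signature whistle types
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_types)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# Toy training loop on randomly generated stand-in data.
model = WhistleClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

spectrograms = torch.randn(64, 1, 128, 256)   # stand-in for real whistle recordings
labels = torch.randint(0, 22, (64,))          # stand-in whistle-type labels
for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(spectrograms), labels)
    loss.backward()
    optimizer.step()
```

In practice the value of such a model depends entirely on a large, well-labelled archive of recordings, which is exactly the kind of resource Birch describes above.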
Yovel said about 20 teams entered this year's competition, resulting in four finalists. Besides Sayigh and Tyack's team, these included teams working on understanding communication in nightingales, cuttlefish, and marmosets. He added the 2025-26 prize was now open for applications.
As well as an annual award of $100,000, there is also a grand prize up for grabs totalling either $10m in investment or $500,000 in cash. To win that, researchers must develop an algorithm to allow an animal to 'communicate independently without recognising that it is communicating with humans' – something Jeremy Coller suggested might be achieved within the next five years.
The challenge is inspired by the Turing test for AI, whereby humans must be unable to tell whether they are conversing with a computer or a real person for the system to be deemed successful.
Robert Seyfarth, emeritus professor of psychology at the University of Pennsylvania, who was not involved with the prize, welcomed the win. 'These are outstanding scientists, doing work that has revolutionised our understanding of dolphin communication and cognition. This is well-deserved recognition,' he said.
Clara Mancini, professor of animal-computer interaction at the Open University, said the dolphin work showed technology's potential to advance our understanding of animal communication, possibly one day even enabling people to communicate with them on their own terms.
'I think one of the main benefits of these advances is that they could finally demonstrate that animals' communication systems can be just as sophisticated and effective for use in the environments in which their users have evolved, as human language is for our species,' she said.
'However, on the journey towards interspecies communication, I would suggest, we need to remain mindful that deciphering a language is not the same as understanding the experience of language users and that, as well as curiosity, the challenge requires humility and respect for the unique knowledge and worldview that each species possesses.'
Related Articles


Scientific American
At Secret Math Meeting, Researchers Struggle to Outsmart AI
On a weekend in mid-May, a clandestine mathematical conclave convened. Thirty of the world's most renowned mathematicians traveled to Berkeley, Calif., with some coming from as far away as the U.K. The group's members faced off in a showdown with a 'reasoning' chatbot that was tasked with solving problems they had devised to test its mathematical mettle. After throwing professor-level questions at the bot for two days, the researchers were stunned to discover it was capable of answering some of the world's hardest solvable problems. 'I have colleagues who literally said these models are approaching mathematical genius,' says Ken Ono, a mathematician at the University of Virginia, who attended the meeting.

The chatbot in question is powered by o4-mini, a so-called reasoning large language model (LLM). It was trained by OpenAI to be capable of making highly intricate deductions. Google's equivalent, Gemini 2.5 Flash, has similar abilities. Like the LLMs that powered earlier versions of ChatGPT, o4-mini learns to predict the next word in a sequence. Compared with those earlier LLMs, however, o4-mini and its equivalents are lighter-weight, more nimble models that train on specialized datasets with stronger reinforcement from humans. The approach leads to a chatbot capable of diving much deeper into complex problems in math than traditional LLMs.

To track the progress of o4-mini, OpenAI previously tasked Epoch AI, a nonprofit that benchmarks LLMs, with coming up with 300 math questions whose solutions had not yet been published. Even traditional LLMs can correctly answer many complicated math questions. Yet when Epoch AI asked several such models these questions, which they hadn't previously been trained on, the most successful were able to solve less than 2 percent, showing these LLMs lacked the ability to reason. But o4-mini would prove to be very different.

Epoch AI hired Elliot Glazer, who had recently finished his math Ph.D., to join the new collaboration for the benchmark, dubbed FrontierMath, in September 2024. The project collected novel questions over varying tiers of difficulty, with the first three tiers covering undergraduate-, graduate- and research-level challenges. By February 2025, Glazer found that o4-mini could solve around 20 percent of the questions. He then moved on to a fourth tier: 100 questions that would be challenging even for an academic mathematician. Only a small group of people in the world would be capable of developing such questions, let alone answering them. The mathematicians who participated had to sign a nondisclosure agreement and agree to communicate solely via the messaging app Signal. Other forms of contact, such as traditional e-mail, could potentially be scanned by an LLM and inadvertently train it, thereby contaminating the dataset.

The group made slow, steady progress in finding questions. But Glazer wanted to speed things up, so Epoch AI hosted the in-person meeting on Saturday, May 17, and Sunday, May 18. There, the participants would find the final 10 challenge questions. The meeting was headed by Ono, who split the 30 attendees into groups of six. For two days, the academics competed against themselves to devise problems that they could solve but would trip up the AI reasoning bot.
Any problem that o4-mini couldn't solve would garner the mathematician who came up with it a $7,500 reward. By the end of that Saturday night, Ono was frustrated with the team's lack of progress. 'I came up with a problem which everyone in my field knows to be an open question in number theory—a good Ph.D.-level problem,' he says. He asked o4-mini to solve the question.

Over the next 10 minutes, Ono watched in stunned silence as the bot unfurled a solution in real time, showing its reasoning process along the way. The bot spent the first two minutes finding and mastering the related literature in the field. Then it wrote on the screen that it wanted to try solving a simpler 'toy' version of the question first in order to learn. A few minutes later, it wrote that it was finally prepared to solve the more difficult problem. Five minutes after that, o4-mini presented a correct but sassy solution. 'It was starting to get really cheeky,' says Ono, who is also a freelance mathematical consultant for Epoch AI. 'And at the end, it says, "No citation necessary because the mystery number was computed by me!"'

Defeated, Ono jumped onto Signal that night and alerted the rest of the participants. 'I was not prepared to be contending with an LLM like this,' he says. 'I've never seen that kind of reasoning before in models. That's what a scientist does. That's frightening.'

Although the group did eventually succeed in finding 10 questions that stymied the bot, the researchers were astonished by how far AI had progressed in the span of one year. Ono likened it to working with a 'strong collaborator'. Yang-Hui He, a mathematician at the London Institute for Mathematical Sciences and an early pioneer of using AI in math, says, 'This is what a very, very good graduate student would be doing—in fact, more.' The bot was also much faster than a professional mathematician, taking mere minutes to do what would take such a human expert weeks or months to complete.

While sparring with o4-mini was thrilling, its progress was also alarming. Ono and He express concern that o4-mini's results might be trusted too much. 'There's proof by induction, proof by contradiction, and then proof by intimidation,' He says. 'If you say something with enough authority, people just get scared. I think o4-mini has mastered proof by intimidation; it says everything with so much confidence.'

By the end of the meeting, the group started to consider what the future might look like for mathematicians. Discussions turned to the inevitable 'tier five': questions that even the best mathematicians couldn't solve. If AI reaches that level, the role of mathematicians would undergo a sharp change. For instance, mathematicians may shift to simply posing questions and interacting with reasoning bots to help them discover new mathematical truths, much as a professor does with graduate students. As such, Ono predicts that nurturing creativity in higher education will be key to keeping mathematics going for future generations. 'I've been telling my colleagues that it's a grave mistake to say that generalized artificial intelligence will never come, [that] it's just a computer,' Ono says. 'I don't want to add to the hysteria, but these large language models are already outperforming most of our best graduate students in the world.'
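For readers curious what evaluating an LLM on held-out math questions looks like in practice, here is a minimal, hypothetical loop in the spirit of the FrontierMath setup described above. The sample question, the exact-match grading and the model name are illustrative assumptions; the real benchmark's questions and grading process are not public.

```python
# Hypothetical sketch of a FrontierMath-style evaluation loop.
# Questions and grading here are invented placeholders, not the real benchmark.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

questions = [
    {"prompt": "How many primes are there below 100?", "answer": "25"},  # toy example
]

solved = 0
for q in questions:
    response = client.chat.completions.create(
        model="o4-mini",  # the reasoning model discussed in the article
        messages=[{"role": "user",
                   "content": q["prompt"] + " Reply with the final answer only."}],
    )
    reply = response.choices[0].message.content.strip()
    if reply == q["answer"]:  # naive exact-match grading for illustration
        solved += 1

print(f"Solved {solved} of {len(questions)} questions.")
```

A real benchmark would need far more careful answer checking than exact string matching, which is part of why held-out, unpublished questions and expert graders matter.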


Miami Herald
5,000-year-old homes — a first-of-their-kind find — unearthed in China. See them
In Xianyang, China, on the banks of the Weihe River, the remains of ancient homes have been unearthed for the first time in thousands of years. During recent excavations at the Xiejiahe village site, a joint team from the Xianyang Institute of Cultural Relics and Archaeology and the School of Cultural Heritage at Northwest University uncovered nearly an acre of land, according to a June 5 news release from the organizations.

Beneath the surface were cultural remains from multiple time periods, but the most interesting finds were a collection of house foundations from the middle to late Yangshao period, according to the release. A total of 19 foundations were unearthed, composed of circular homes in single-room, double-room or multi-room constructions, researchers said. The Yangshao period spanned from 5000 to 3000 B.C., making the houses at least 5,000 years old.

Seven single-room houses have circular shapes and are partially built into the ground, according to the release. Post holes were built along the walls, and some houses had post holes at the base of the walls, likely to hold a raised platform. These houses fell into three types: homes with steps along the wall or that form a passageway, homes built on two levels with higher areas of scorched soil used for cooking and lower spaces used for living, and flat-bottomed homes used as living spaces.

The double-room houses were similar in style, but the ten houses fell into five different categories of construction, researchers said. They were likely made of one living space and one room for storage. The first version includes two irregularly shaped circular spaces that were joined together. The second style had a pouch-shaped room connected to a cylindrical pit, and included an extra passage. Another style had a shallow cylindrical chamber covered in scorched blocks used for cooking, with a second side chamber that could have been used for storage or as the living space. The last variation was a shallow pit built around the pouch-shaped room's opening, where the shallow space would have been used for cooking and living and the pouch area used for storage, researchers said. Only two homes with three rooms were discovered, and both had a combination of a deep space and a high activity area with a passageway, according to the release.

The styles of home from this period are the first of their kind ever discovered, researchers said, and help shed light on the daily lives of people from this era. All of the homes show some division of function, such as raised cooking areas, deep storage areas or additional pouch-shaped rooms, reflecting a practical intelligence among the Yangshao people. The homes were a significant find, researchers said, and the site was named one of the top six archaeological discoveries in Shaanxi Province in 2024, according to the release.

The site in the Xiejiahe Village of Xianyang City is in the central region of Shaanxi Province in east-central China. ChatGPT, an AI chatbot, and Google Translate were used to translate the news release from the Xianyang Institute of Cultural Relics and Archaeology and the School of Cultural Heritage at Northwest University.

Business Insider
AI isn't replacing radiologists. Instead, they're using it to tackle time-sucking administrative tasks.
Generative AI powered by large language models, such as ChatGPT, is proliferating in industries like customer service and creative content production. But healthcare has moved more cautiously. Radiology, a specialty centered on analyzing digital images and recognizing patterns, is emerging as a frontrunner for adopting new AI techniques.

That's not to say AI is new to radiology. Radiology was subject to one of the most infamous AI predictions when Nobel Prize winner Geoffrey Hinton said, in 2016, that "people should stop training radiologists now." But nearly a decade later, the field's AI transformation is taking a markedly different path. Radiologists aren't being replaced, but are integrating generative AI into their workflows to tackle labor-intensive tasks that don't require clinical expertise. "Rather than being worried about AI, radiologists are hoping AI can help with workforce challenges," explained Dr. Curt Langlotz, the senior associate vice provost for research and professor of radiology at Stanford.

Regulatory challenges to generative AI in radiology

Hinton's notion wasn't entirely off-base. Many radiologists now have access to predictive AI models that classify images or highlight potential abnormalities. Langlotz said the rise of these tools "created an industry" of more than 100 companies that focus on AI for medical imaging. The FDA lists over 1,000 AI/ML-enabled medical devices, which can include algorithms and software, a majority of which were designed for radiology. However, the approved devices are based on more traditional machine learning techniques, not on generative AI.

Ankur Sharma, the head of medical affairs for medical devices and radiology at Bayer, explained that AI tools used for radiology are categorized within computer-aided detection software, which helps analyze and interpret medical images. Examples include triage, detection, and characterization. Each tool must meet regulatory standards, which include studies to determine detection accuracy and false positive rate, among other metrics. This is especially challenging for generative AI technologies, which are newer and less well understood. Characterization tools, which analyze specific abnormalities and suggest what they might be, face the highest regulatory standards, as both false positives and false negatives carry risks. The idea of a generative AI radiologist capable of automated diagnosis, as Hinton envisioned, would be categorized as "characterization" and would have to meet a high standard of evidence.

Regulation isn't the only hurdle generative AI must clear to see broader use in radiology. Today's best general-purpose large language models, like OpenAI's GPT-4.1, are trained on trillions of tokens of data. Scaling models in this way has led to superb results, as new LLMs consistently beat older models. Training a generative AI model for radiology at this scale is difficult, however, because the volume of training data available is much smaller. Medical organizations also lack access to compute resources sufficient to build models at the scale of the largest LLMs, which cost hundreds of millions of dollars to train. "The size of the training data used to train the largest text or language model inside medicine, versus outside medicine, shows a one-hundred-times difference," said Langlotz. The largest LLMs train on databases that scrape nearly the entire internet; medical models are limited to whatever images and data an institution has access to.
Generative AI's current reality in radiology

These regulatory obstacles would seem to cast doubt on generative AI's usefulness in radiology, particularly in making diagnostic decisions. However, radiologists are finding the technology helpful in their workflows, as it can take on some of their daily labor-intensive administrative tasks. For instance, Sharma said, some tools can take notes as radiologists dictate their observations of medical images, which helps with writing reports. Some large language models, he added, are "taking those reports and translating them into more patient-friendly language."

Langlotz said a product that drafts reports can give radiologists a "substantial productivity advantage." He compared it to having resident trainees who draft reports for review, a resource that's often available in academic settings but less so in radiology practices, such as a hospital's radiology department.

Sharma said that generative AI could help radiologists by automating and streamlining reporting, follow-up management, and patient communication, giving radiologists time to focus more on their "reading expertise," which includes image interpretation and diagnosis of complex cases. In June 2024, for example, Bayer and Rad AI announced a collaboration to integrate generative AI reporting solutions into Bayer's Calantic Digital Solution Platform, a cloud-hosted platform for deploying AI tools in clinical settings. The collaboration aims to use Rad AI's technology to help radiologists create reports more efficiently; Rad AI can use generative AI transcription to generate written reports based on a radiologist's dictated findings. Applications like this face fewer regulatory hurdles because they don't directly influence diagnosis.

Looking ahead, Langlotz said he foresees even greater AI adoption in the near future. "I think there will be a change in radiologists' day-to-day work in five years," he predicted.
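As a rough illustration of the report-drafting and patient-communication use case described above, the hypothetical sketch below passes dictated findings to a general-purpose LLM and asks for a plain-language rewrite. It is not Rad AI's or Bayer's actual product or workflow; the prompt, the model name and the example findings are assumptions for illustration only.

```python
# Hypothetical sketch: rewriting dictated radiology findings in patient-friendly
# language with a general-purpose LLM. Not any vendor's actual product.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

dictated_findings = (
    "Chest X-ray: lungs are clear, no focal consolidation, "
    "cardiomediastinal silhouette within normal limits."
)

response = client.chat.completions.create(
    model="gpt-4.1",  # placeholder; any capable general-purpose model
    messages=[
        {"role": "system",
         "content": "Rewrite radiology findings in plain, patient-friendly language."},
        {"role": "user", "content": dictated_findings},
    ],
)
print(response.choices[0].message.content)
```

Because a tool like this only rephrases a report that a radiologist has already produced and reviewed, it sits in the lower-risk category the article describes, rather than the heavily regulated "characterization" category.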