UC Davis develops brain-computer interface, helps man with ALS speak in real time
The new technology translates the brain activity of a person who is attempting to speak into an audible voice, according to researchers in a new study published in the scientific journal Nature.
The study participant, who has amyotrophic lateral sclerosis (ALS), was able to speak to his family, change his intonation and even 'sing' simple melodies. UC Davis Health said the system's digital vocal tract has no detectable delay.
'Translating neural activity into text, which is how our previous speech brain-computer interface works, is akin to text messaging. It's a big improvement compared to standard assistive technologies, but it still leads to delayed conversation. By comparison, this new real-time voice synthesis is more like a voice call,' said Sergey Stavisky, senior author of the paper and an assistant professor in the UC Davis Department of Neurological Surgery.
Stavisky went on to say that with the use of instantaneous voice synthesis, neuroprosthesis users will be able to be more interactive and included in conversations.
The clinical trial at UC Davis, BrainGate2, used an investigational brain-computer interface in which four microelectrode arrays are surgically implanted into the area of the brain that produces speech.
The electrodes measured the firing patterns of hundreds of neurons, and researchers aligned those patterns with the speech sounds the participant was attempting to produce.
The activity of the neurons is recorded and then sent to a computer that interprets the signals to reconstruct the participant's voice, researchers said.
'The main barrier to synthesizing voice in real-time was not knowing exactly when and how the person with speech loss is trying to speak,' said Maitreyee Wairagkar, first author of the study and project scientist in the Neuroprosthetics Lab at UC Davis. 'Our algorithms map neural activity to intended sounds at each moment of time. This makes it possible to synthesize nuances in speech and give the participant control over the cadence of his BCI-voice.'
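Conceptually, the pipeline Wairagkar describes (record neural firing, map it to an intended sound at each moment, then synthesize audio) can be pictured as a frame-by-frame streaming loop. The sketch below is a minimal illustration of that structure, not the study's code: the frame size, channel count, linear "decoder," and sine-wave "vocoder" are all stand-in assumptions.

```python
# Minimal sketch of a streaming brain-to-voice loop (illustrative only).
# Stand-ins: random "neural features" replace real microelectrode data, and a
# fixed linear map replaces the trained model described in the study.
import numpy as np

FRAME_MS = 10          # assumed frame size for this sketch (the study reports ~25 ms total latency)
N_CHANNELS = 256       # assumed channel count (the article says four microelectrode arrays)
N_ACOUSTIC = 20        # assumed dimensionality of the acoustic features driving the vocoder

rng = np.random.default_rng(0)
decoder_weights = rng.normal(size=(N_CHANNELS, N_ACOUSTIC))  # stand-in for a trained decoder

def read_neural_frame() -> np.ndarray:
    """Stand-in for one frame of binned neural activity from the implant."""
    return rng.poisson(lam=2.0, size=N_CHANNELS).astype(float)

def decode_frame(frame: np.ndarray) -> np.ndarray:
    """Map one frame of neural activity to intended acoustic features."""
    return frame @ decoder_weights

def vocode(acoustic: np.ndarray, n_samples: int = 160) -> np.ndarray:
    """Stand-in vocoder: turn acoustic features into one short audio chunk."""
    t = np.linspace(0, FRAME_MS / 1000, n_samples, endpoint=False)
    pitch_hz = 100 + 10 * np.tanh(acoustic[0])      # crude pitch (intonation) control
    return np.float32(np.sin(2 * np.pi * pitch_hz * t))

audio = []
for _ in range(100):                 # ~1 second of simulated streaming output
    frame = read_neural_frame()      # 1. record neural activity
    acoustic = decode_frame(frame)   # 2. decode the intended sound for this instant
    audio.append(vocode(acoustic))   # 3. synthesize audio immediately
audio = np.concatenate(audio)
print(audio.shape)                   # (16000,) at an implied 16 kHz sample rate
```

In the real system, the decoder is a trained model and the synthesizer reproduces the participant's own pre-ALS voice; the sketch only shows why processing one short frame at a time keeps the output feeling instantaneous.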
The participant's neural signals were translated into audible speech in one-fortieth of a second (about 25 milliseconds), according to the study.
'This short delay is similar to the delay a person experiences when they speak and hear the sound of their own voice,' said officials.
The participant was also able to say words unknown to the system, along with making interjections. He was able to modulate the intonation of his generated computer voice to ask a question or emphasize specific words in a sentence.
Listeners could understand about 60% of the BCI-synthesized words, the study said, compared with only 4% of the participant's words when he was not using the BCI.
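Intelligibility figures like these are typically obtained by having listeners transcribe what they hear and scoring those transcriptions against the intended words. The helper below is a hypothetical illustration of that word-level scoring, not the study's evaluation code; the alignment method and example sentence are assumptions.

```python
# Hypothetical word-level intelligibility scoring (not the study's evaluation code).
# Counts how many intended words a listener's transcription reproduces, using
# difflib's matching-block alignment so insertions and deletions are tolerated.
from difflib import SequenceMatcher

def intelligibility(intended: str, transcribed: str) -> float:
    """Fraction of intended words recovered in the listener's transcription."""
    ref = intended.lower().split()
    hyp = transcribed.lower().split()
    if not ref:
        return 0.0
    matched = sum(block.size for block in
                  SequenceMatcher(None, ref, hyp).get_matching_blocks())
    return matched / len(ref)

print(intelligibility("I do not know how long you have been here",
                      "I do not know how you been here"))  # -> 0.8
```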
'Our voice is part of what makes us who we are. Losing the ability to speak is devastating for people living with neurological conditions,' said David Brandman, co-director of the UC Davis Neuroprosthetics Lab and the neurosurgeon who performed the participant's implant. 'The results of this research provide hope for people who want to talk but can't. We showed how a paralyzed man was empowered to speak with a synthesized version of his voice. This kind of technology could be transformative for people living with paralysis.'
Researchers said that brain-to-voice neuroprostheses are still at an early stage, despite the promising findings. A key limitation, they said, is that the research was conducted with a single participant with ALS; the goal is to replicate the results with more participants whose speech loss stems from other causes.
More information on the BrainGate2 trial can be found on braingate.org.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
Related Articles


Gizmodo - 7 hours ago
Scientists Identify a New Glitch in Human Thinking
Good news, everyone! Scientists at the University of California, Berkeley, have coined a new term to describe our brains being dumb. In a recent study, they provide evidence for a distinct but common kind of cognitive bias: one that makes us reluctant to take the easier path in life if it means retracing our steps. The researchers have named the bias 'doubling-back aversion.' In several experiments, they found that people often refuse to choose a more efficient solution or route if it requires them to double back on progress already made. The findings suggest that people's subjective fear of adding more to their workload and their hesitance to wipe the slate clean contribute to this bias, the researchers say. 'Participants' aversion to feeling their past efforts were a waste encouraged them to pursue less efficient means,' they wrote in their paper, published this May in Psychological Science.

Psychologists have detailed all sorts of biases related to digging our feet in when faced with important new information. People tend to stick to the status quo in choosing dinner at a favorite restaurant, for example, even when someone recommends a potentially tastier option. There's also the sunk cost fallacy, or the reluctance to veer off a disastrous path and choose another simply because we've already spent so much time or so many resources pursuing it.

The researchers argue that their newly named bias is certainly a close cousin to the sunk cost fallacy and similar biases, but that it ultimately describes a unique type of cognitive pitfall. In their paper, they provide the example of someone whose flight from San Francisco to New York becomes massively delayed early on, leaving them stuck in Los Angeles. In one scenario, the traveler can get home three hours earlier than their current itinerary allows if they accept the airline's offer of a new flight that first stops in Denver; in the second, the person is instead offered a flight that will also shave three hours off, but they'll first have to travel back to San Francisco. Despite both flights saving the same amount of time, people are more likely to refuse the one that requires going back to their earlier destination, the paper explains (some people might even refuse the Denver flight, but that would be an example of the status quo bias and/or the sunk cost fallacy at work, they note).

To test their hypothesis, the researchers ran four different types of experiments, collectively involving more than 2,500 adults, some of whom were UC Berkeley students and others volunteers recruited through Amazon's Mechanical Turk. In one test, people were asked to walk along different paths in virtual reality; another asked people to recite as many words starting with the same letter as possible. Across the various tests, the researchers found that people routinely exhibited this aversion.

In one experiment where people had to recite words starting with 'G,' for instance, everyone was asked midway through if they wanted to stay with the same letter or switch to reciting words starting with 'T' (a likely easier letter). In the control condition, this decision was framed as staying on the same task, simply with a new letter; in the other, people were asked if they wanted to throw out the work they had done so far and start over on a new task.
Importantly, the volunteers were also given progress bars for the task, allowing them to see they would perform the same amount of work no matter the choice (though again, 'T' would be easier). About 75% of participants chose to switch in the control condition, but only 25% did so when the switch was presented as needing to double back.

'When I was analyzing these results, I was like, "Oh, is there a mistake? How can there be such a big difference?"' said lead author Kristine Cho, a behavioral marketing PhD student at UC Berkeley's Haas School of Business, in a statement to the Association for Psychological Science, which publishes the journal.

Other researchers will have to confirm the team's findings, of course. And there are still plenty of questions to answer about this aversion, including how often we fall for it and whether it's more likely to happen in some scenarios than others. But for now, it's oddly comforting to know that there's another thing I can possibly blame for my occasional stubborn refusal to take the faster subway train home.
Yahoo - 14 hours ago
How cancer could soon be detected using just a voice note
AI could soon be able to tell whether patients have cancer of the voice box using just a voice note, according to new research. Scientists recorded the voices of men with and without abnormalities in their vocal folds, which can be an early sign of laryngeal cancer, and found differences in vocal qualities including pitch, volume, and clarity. They now say AI could be used to detect these 'vocal biomarkers', leading to earlier, less invasive diagnosis.

Researchers at Oregon Health and Science University believe voice notes could now be used to train an AI tool that recognises vocal fold lesions. Using 12,523 voice recordings from 306 participants across North America, they found distinctive vocal differences between men with laryngeal cancer, men with vocal fold lesions, and men with healthy vocal folds. However, researchers said similar hallmark differences were not detected in women. They are now hoping to collect more recordings of people with and without the distinctive vocal fold lesions to create a bigger dataset for tools to work from.

In the UK, there are more than 2,000 new cases of laryngeal cancer each year. Symptoms can include a change in your voice, such as sounding hoarse, a high-pitched wheezing noise when you breathe, and a long-lasting cough.

'Here we show that with this dataset we could use vocal biomarkers to distinguish voices from patients with vocal fold lesions from those without such lesions,' said Dr Phillip Jenkins, the study's corresponding author. 'To move from this study to an AI tool that recognises vocal fold lesions, we would train models using an even larger dataset of voice recordings, labeled by professionals. We then need to test the system to make sure it works equally well for women and men. Voice-based health tools are already being piloted. Building on our findings, I estimate that with larger datasets and clinical validation, similar tools to detect vocal fold lesions might enter pilot testing in the next couple of years,' he predicted.

It comes after research from US-based Klick Labs, which created an AI model capable of distinguishing whether a person has Type 2 diabetes from six to 10 seconds of voice audio. That study involved analysing 18,000 recordings in order to identify acoustic features that differentiated non-diabetics from diabetics, and reported an 89 per cent accuracy rate for women and 86 per cent for men. Jaycee Kaufman, a research scientist at Klick Labs, praised the future potential for AI-powered voice tools in healthcare, saying: 'Current methods of detection can require a lot of time, travel and cost. Voice technology has the potential to remove these barriers entirely.'
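The 'vocal biomarkers' mentioned here (pitch, volume, and clarity) can be summarized from a short recording with standard audio tooling and fed to an ordinary classifier. The sketch below is a hypothetical illustration of that idea using the librosa and scikit-learn libraries; it is not the Oregon Health and Science University pipeline, and the feature choices, synthetic "recordings," and labels are assumptions.

```python
# Hypothetical sketch: summarize a voice recording as simple "vocal biomarker"
# features (pitch, loudness, clarity proxies) and fit a classifier.
# Not the OHSU pipeline; the features, labels, and signals here are illustrative.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

SR = 16000  # assumed sample rate

def voice_features(y: np.ndarray, sr: int = SR) -> np.ndarray:
    """Pitch, loudness, and clarity-style statistics for one recording."""
    f0, voiced_flag, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)   # pitch track
    rms = librosa.feature.rms(y=y)[0]                                # loudness
    zcr = librosa.feature.zero_crossing_rate(y)[0]                   # noisiness proxy
    return np.array([
        np.nanmean(f0), np.nanstd(f0),   # average pitch and pitch variability
        rms.mean(), rms.std(),           # average volume and volume variability
        zcr.mean(),                      # crude stand-in for "clarity"
        float(np.mean(voiced_flag)),     # fraction of voiced frames
    ])

def fake_recording(pitch_hz: float, noise: float, seconds: float = 2.0) -> np.ndarray:
    """Synthetic stand-in for a real voice note (use librosa.load on actual files)."""
    t = np.linspace(0, seconds, int(SR * seconds), endpoint=False)
    rng = np.random.default_rng(int(pitch_hz))
    return np.sin(2 * np.pi * pitch_hz * t) + noise * rng.normal(size=t.size)

# labels are placeholders: 1 = clinician-confirmed vocal fold lesion, 0 = healthy
recordings = [fake_recording(110, 0.05), fake_recording(95, 0.3),
              fake_recording(120, 0.05), fake_recording(90, 0.4)]
labels = [0, 1, 0, 1]

X = np.vstack([voice_features(y) for y in recordings])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))   # in-sample predictions on the toy data
```

In practice, as the researchers note, such a model would need a much larger, professionally labeled dataset and validation across sexes before it could be piloted clinically.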


Boston Globe - a day ago
For some patients, the 'inner voice' may soon be audible
Christian Herff, a neuroscientist at Maastricht University in the Netherlands who was not involved in the research, said the result went beyond the merely technological and shed light on the mystery of language. 'It's a fantastic advance,' Herff said.

The new study is the latest result in a long-running clinical trial, called BrainGate2, that has already seen some remarkable successes. One participant, Casey Harrell, now uses his brain-machine interface to hold conversations with his family and friends.

In 2023, after ALS had made his voice unintelligible, Harrell agreed to have electrodes implanted in his brain. Surgeons placed four arrays of tiny needles on the left side, in a patch of tissue called the motor cortex. The region becomes active when the brain creates commands for muscles to produce speech.

A computer recorded the electrical activity from the implants as Harrell attempted to say different words. Over time, with the help of artificial intelligence, the computer accurately predicted almost 6,000 words, with an accuracy of 97.5 percent. It could then synthesize those words using Harrell's voice, based on recordings made before he developed ALS.

But successes like this one raised a troubling question: Could a computer accidentally record more than patients wanted to say? Could it eavesdrop on their inner voice?

'We wanted to investigate if there was a risk of the system decoding words that weren't meant to be said aloud,' said Erin Kunz, a neuroscientist at Stanford University and an author of the new study.

She and her colleagues also wondered if patients might actually prefer using inner speech. They noticed that Harrell and other participants became fatigued when they tried to speak; could simply imagining a sentence be easier for them and allow the system to work faster?

'If we could decode that, then that could bypass the physical effort,' Kunz said. 'It would be less tiring, so they could use the system for longer.'

But it wasn't clear if the researchers could decode inner speech. Scientists don't even agree on what 'inner speech' is. Some researchers have indeed argued that language is essential for thought. But others, pointing to recent studies, maintain that much of our thinking does not involve language at all and that people who hear an inner voice are just perceiving a kind of sporadic commentary in their heads.

'Many people have no idea what you're talking about when you say you have an inner voice,' said Evelina Fedorenko, a cognitive neuroscientist at the Massachusetts Institute of Technology. 'They're like, "You know, maybe you should go see a doctor if you're hearing words in your head."' Fedorenko said she has an inner voice, while her husband does not.

Kunz and her colleagues decided to investigate the mystery for themselves. The scientists gave participants seven different words, including 'kite' and 'day,' then compared the brain signals when participants attempted to say the words and when they only imagined saying them. As it turned out, imagining a word produced a pattern of activity similar to that of trying to say it, but the signal was weaker. The computer did a pretty good job of predicting which of the seven words the participants were thinking. For Harrell, it didn't do much better than a random guess would have, but for another participant, it picked the right word more than 70 percent of the time.

The researchers put the computer through more training, this time specifically on inner speech. Its performance improved significantly, including for Harrell. Now, when the participants imagined saying entire sentences, such as 'I don't know how long you've been here,' the computer could accurately decode most or all of the words.

Herff, who has done studies on inner speech, was surprised that the experiment succeeded. Before, he would have said that inner speech is fundamentally different from the motor cortex signals that produce actual speech. 'But in this study, they show that, for some people, it really isn't that different,' he said.

Kunz emphasized that the computer's current performance on inner speech would not be good enough to let people hold conversations. 'The results are an initial proof of concept more than anything,' she said. But she is optimistic that decoding inner speech could become the new standard for brain-computer interfaces. In more recent trials, the results of which have yet to be published, she and her colleagues have improved the computer's accuracy and speed. 'We haven't hit the ceiling yet,' she said.

As for mental privacy, Kunz and her colleagues found some reason for concern: On occasion, the researchers were able to detect words that the participants weren't imagining out loud.

Kunz and her colleagues explored ways to prevent the computer from eavesdropping on private thoughts. They came up with two possible solutions. One would be to decode only attempted speech, while blocking inner speech. The new study suggests this strategy could work. Even though the two kinds of thought are similar, they are different enough that a computer can learn to tell them apart. In one trial, the participants mixed sentences of both attempted and imagined speech in their minds. The computer was able to ignore the imagined speech.

For people who would prefer to communicate with inner speech, Kunz and her colleagues came up with a second strategy: an inner password to turn the decoding on and off. The password would have to be a long, unusual phrase, they decided, so they chose 'Chitty Chitty Bang Bang,' the name of a 1964 novel by Ian Fleming as well as a 1968 movie starring Dick Van Dyke. One of the participants, a 68-year-old woman with ALS, imagined saying 'Chitty Chitty Bang Bang' along with an assortment of other words. The computer eventually learned to recognize the password with 98.75 percent accuracy, and it decoded her inner speech only after detecting the password.

'This study represents a step in the right direction, ethically speaking,' said Cohen Marcus Lionel Brown, a bioethicist at the University of Wollongong in Australia. 'If implemented faithfully, it would give patients even greater power to decide what information they share and when.'
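The 'inner password' safeguard described above amounts to a gate in front of the decoder: nothing is passed through until the system detects an agreed-on imagined phrase, and detecting it again switches decoding back off. The snippet below is a hypothetical sketch of that control flow; the Decoded structure, confidence threshold, and toggle behaviour are assumptions, not the Stanford team's implementation.

```python
# Hypothetical sketch of password-gated inner-speech decoding (not the study's code).
# Decoder output is only passed through after the imagined password is detected.
from dataclasses import dataclass

PASSWORD = "chitty chitty bang bang"   # long, unusual phrase used as the unlock cue
CONFIDENCE_THRESHOLD = 0.95            # assumed threshold; the paper reports 98.75% detection accuracy

@dataclass
class Decoded:
    text: str          # best-guess phrase decoded from imagined speech
    confidence: float  # decoder's confidence in that guess

def gate_inner_speech(stream, unlocked: bool = False):
    """Yield decoded phrases only while decoding is unlocked by the password."""
    for item in stream:                      # stream of Decoded guesses, e.g. from a model
        phrase = item.text.strip().lower()
        if phrase == PASSWORD and item.confidence >= CONFIDENCE_THRESHOLD:
            unlocked = not unlocked          # password toggles decoding on and off
            continue                         # never output the password itself
        if unlocked:
            yield item.text

# toy stream of decoder outputs
stream = [
    Decoded("good morning", 0.90),               # ignored: decoding is locked
    Decoded("chitty chitty bang bang", 0.99),    # unlocks decoding
    Decoded("I would like some water", 0.92),    # now passed through
    Decoded("chitty chitty bang bang", 0.99),    # locks decoding again
    Decoded("private thought", 0.88),            # ignored
]
print(list(gate_inner_speech(stream)))   # ['I would like some water']
```

The design choice mirrored here is that private thoughts are never synthesized by default; the user opts in, and out, with a phrase unlikely to occur in ordinary inner speech.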