Latest news with #MaitreyeeWairagkar


Scientific American
2 days ago
- Health
- Scientific American
Brain Implant Lets Man with ALS Speak and Sing with His ‘Real Voice'
A man with a severe speech disability is able to speak expressively and sing using a brain implant that translates his neural activity into words almost instantly. The device conveys changes of tone when he asks questions, emphasizes the words of his choice and allows him to hum a string of notes in three pitches.

The system — known as a brain–computer interface (BCI) — used artificial intelligence (AI) to decode the participant's electrical brain activity as he attempted to speak. The device is the first to reproduce not only a person's intended words but also features of natural speech such as tone, pitch and emphasis, which help to express meaning and emotion.

In a study, a synthetic voice that mimicked the participant's own spoke his words within 10 milliseconds of the neural activity that signalled his intention to speak. The system, described today in Nature, marks a significant improvement over earlier BCI models, which streamed speech within three seconds or produced it only after users finished miming an entire sentence.

'This is the holy grail in speech BCIs,' says Christian Herff, a computational neuroscientist at Maastricht University, the Netherlands, who was not involved in the study. 'This is now real, spontaneous, continuous speech.'

Real-time decoder

The study participant, a 45-year-old man, lost his ability to speak clearly after developing amyotrophic lateral sclerosis, a form of motor neuron disease, which damages the nerves that control muscle movements, including those needed for speech. Although he could still make sounds and mouth words, his speech was slow and unclear. Five years after his symptoms began, the participant underwent surgery to insert 256 silicon electrodes, each 1.5 mm long, in a brain region that controls movement.

Study co-author Maitreyee Wairagkar, a neuroscientist at the University of California, Davis, and her colleagues trained deep-learning algorithms to capture the signals in his brain every 10 milliseconds. Their system decodes, in real time, the sounds the man attempts to produce rather than his intended words or the constituent phonemes — the subunits of speech that form spoken words.

'We don't always use words to communicate what we want. We have interjections. We have other expressive vocalizations that are not in the vocabulary,' explains Wairagkar. 'In order to do that, we have adopted this approach, which is completely unrestricted.'

The team also personalized the synthetic voice to sound like the man's own, by training AI algorithms on recordings of interviews he had done before the onset of his disease. The team asked the participant to attempt to make interjections such as 'aah', 'ooh' and 'hmm' and say made-up words. The BCI successfully produced these sounds, showing that it could generate speech without needing a fixed vocabulary.

Freedom of speech

Using the device, the participant spelt out words, responded to open-ended questions and said whatever he wanted, using some words that were not part of the decoder's training data. He told the researchers that listening to the synthetic voice produce his speech made him 'feel happy' and that it felt like his 'real voice'.

In other experiments, the BCI identified whether the participant was attempting to say a sentence as a question or as a statement. The system could also determine when he stressed different words in the same sentence and adjust the tone of his synthetic voice accordingly.

'We are bringing in all these different elements of human speech which are really important,' says Wairagkar. Previous BCIs could produce only flat, monotone speech.

'This is a bit of a paradigm shift in the sense that it can really lead to a real-life tool,' says Silvia Marchesotti, a neuroengineer at the University of Geneva in Switzerland. The system's features 'would be crucial for adoption for daily use for the patients in the future.'
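For readers who want a concrete picture of the pipeline the article describes (256 recording electrodes, decoding every 10 milliseconds, sound-level rather than word- or phoneme-level targets, and a personalized synthetic voice), the sketch below shows what such a streaming brain-to-voice loop could look like. It is a minimal illustration only: the class and function names (CausalDecoder, Vocoder, stream_speech) and every size other than the 256 channels and the 10 ms frame are hypothetical assumptions, not the study's actual implementation.

```python
# Illustrative sketch of a streaming brain-to-voice loop.
# All names and sizes below (other than 256 channels and 10 ms frames,
# which come from the article) are hypothetical placeholders.

import numpy as np

FRAME_MS = 10      # the article describes decoding every 10 milliseconds
N_CHANNELS = 256   # 256 implanted electrodes, per the article
SAMPLE_RATE = 16_000  # assumed audio sample rate for this sketch

class CausalDecoder:
    """Stand-in for a causal deep-learning model that maps one 10 ms frame
    of neural features to acoustic parameters (sounds, not words/phonemes)."""
    def step(self, frame: np.ndarray) -> np.ndarray:
        # A real system would run a trained streaming network here.
        return np.zeros(80)  # e.g. one mel-spectrogram frame (assumed size)

class Vocoder:
    """Stand-in for a synthesizer that turns acoustic parameters into audio
    samples in the participant's personalized voice."""
    def synthesize(self, acoustic_frame: np.ndarray) -> np.ndarray:
        return np.zeros(int(SAMPLE_RATE * FRAME_MS / 1000))  # 10 ms of audio

def stream_speech(neural_frames, decoder: CausalDecoder, vocoder: Vocoder, play):
    """Decode and play audio frame by frame, so speech is produced while the
    person is still attempting to speak rather than after a full sentence."""
    for frame in neural_frames:          # one (N_CHANNELS,) vector per 10 ms
        acoustic = decoder.step(frame)   # neural activity -> intended sound
        audio = vocoder.synthesize(acoustic)
        play(audio)                      # send to the speaker immediately
```

The point of the frame-by-frame structure is that nothing waits for a sentence boundary: each 10 ms of neural data becomes 10 ms of audio, which is what allows tone, emphasis and interjections to come through as they are attempted.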


India Today
2 days ago
- Health
- India Today
First-of-its-kind brain computer helps man with ALS speak in real-time
In what could be one of the biggest breakthroughs in medical science and technology, a newly developed investigational brain-computer interface could restore the voices of people who have lost the ability to speak. A team from the University of California, Davis successfully demonstrated this new technology, which can instantaneously translate brain activity into voice as a person tries to speak. The technology promises to create an artificial vocal tract.

The details, published in the journal Nature, highlight how the study participant, who has amyotrophic lateral sclerosis (ALS), spoke through a computer with his family in real time. The technology changed his intonation and 'sang' simple melodies.

'Translating neural activity into text, which is how our previous speech brain-computer interface works, is akin to text messaging. It's a big improvement compared to standard assistive technologies, but it still leads to delayed conversation. By comparison, this new real-time voice synthesis is more like a voice call,' said Sergey Stavisky, senior author of the study.

The investigational brain-computer interface (BCI) was used during the BrainGate2 clinical trial at UC Davis Health. It consists of four microelectrode arrays surgically implanted into the region of the brain responsible for producing speech. The researchers collected data while the participant was asked to try to speak sentences shown to him on a computer screen.

'The main barrier to synthesizing voice in real-time was not knowing exactly when and how the person with speech loss is trying to speak. Our algorithms map neural activity to intended sounds at each moment of time. This makes it possible to synthesize nuances in speech and give the participant control over the cadence of his BCI-voice,' said Maitreyee Wairagkar, first author of the study.

The system translated the participant's neural signals into audible speech played through a speaker very quickly, within one-fortieth of a second. The researchers attributed this short delay to the same delay a person experiences when they speak and hear the sound of their own voice.

The technology also allowed the participant to say new words (words not already known to the system) and to make interjections. He was able to modulate the intonation of his generated computer voice to ask a question or emphasize specific words in a sentence. The process of instantaneously translating brain activity into synthesized speech is helped by advanced artificial intelligence algorithms.

The researchers note that "although the findings are promising, brain-to-voice neuroprostheses remain in an early phase. A key limitation is that the research was performed with a single participant with ALS. It will be crucial to replicate these results with more participants."
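Stavisky's "text messaging" versus "voice call" comparison can be made concrete with a small sketch. The two functions below are purely illustrative: the decoder objects and the 10 ms frame size stand in for whatever the real system uses, and this is not code from the UC Davis or BrainGate2 project. It only contrasts sentence-at-a-time text decoding with frame-by-frame voice streaming.

```python
# Illustrative contrast only; both decoders are hypothetical placeholders.

def decode_to_text(sentence_frames, text_decoder, speak_text):
    """'Text messaging' style: wait for the whole attempted sentence,
    decode it to text, then hand the text to a text-to-speech engine.
    The listener hears nothing until the sentence is finished."""
    text = text_decoder(sentence_frames)  # runs once, after the sentence ends
    speak_text(text)

def decode_to_voice(frame_stream, voice_decoder, play_audio):
    """'Voice call' style: synthesize a snippet of audio for every incoming
    frame of neural activity, so sound reaches the listener a fraction of a
    second (about 1/40 s, per the article) after he tries to speak."""
    for frame in frame_stream:            # e.g. one frame every 10 ms
        play_audio(voice_decoder(frame))  # output is continuous, not delayed
```

The design difference is where the latency sits: the first function defers all output to the end of the sentence, while the second spreads the work across the utterance so the conversational delay stays near the natural delay of hearing one's own voice.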


CBS News
2 days ago
- Health
- CBS News
UC Davis breakthrough lets ALS patient speak using only his thoughts
Allowing people with disabilities to talk just by thinking about a word: that's what UC Davis researchers hope to accomplish with new cutting-edge technology. It could be a breakthrough for people with ALS and other nonverbal conditions.

One UC Davis Health patient has been diagnosed with ALS, a neurological disease that has made it impossible for him to speak out loud. Scientists have now directly wired his brain into a computer, allowing him to speak through it using only his thoughts.

"It has been very exciting to see the system work," said Maitreyee Wairagkar, a UC Davis neuroprosthetics lab project scientist.

The technology involves surgically implanting small electrodes. Artificial intelligence can then translate the neural activity into words. UC Davis researchers say it took the patient, who is not being publicly named, very little time to learn the technology. "Within 30 minutes, he was able to use this system to speak with a restricted vocabulary," Wairagkar said.

It takes just milliseconds for brain waves to be interpreted by the computer, making it possible to hold a real-time conversation. "[The patient] has said that the voice that is synthesized with the system sounds like his own voice and that makes him happy," Wairagkar said.

And it's not just words. The technology can even be used to sing. "These are just very simple melodies that we designed to see whether the system can capture his intention to change the pitch," Wairagkar said.

Previously, ALS patients would use muscle or eye movements to type on a computer and generate a synthesized voice. That's how physicist Stephen Hawking, who also had ALS, was able to slowly speak. This new technology is faster but has only been used on one patient so far. Now, there's hope that these microchip implants could one day help other people with spinal cord and brain stem injuries. "There are millions of people around the world who live with speech disabilities," Wairagkar said.

The UC Davis scientific study was just published in the journal Nature, and researchers are looking for other volunteers to participate in the program.