
Latest news with #BCI-voice

UC Davis develops brain computer, helps man with ALS speak in real time

Yahoo

4 hours ago

  • Health
  • Yahoo

UC Davis develops brain computer, helps man with ALS speak in real time

Possible new hope has emerged for people who have lost the ability to speak: researchers at the University of California, Davis have developed an investigational brain-computer interface that helps restore the ability to hold real-time conversations.

The new technology translates the brain activity of a person attempting to speak into a voice, according to a new study published in the scientific journal Nature. The study participant, who has amyotrophic lateral sclerosis (ALS), was able to speak to his family, change his intonation and even 'sing' simple melodies. UC Davis Health said that the system's digital vocal tract has no detectable delays.

'Translating neural activity into text, which is how our previous speech brain-computer interface works, is akin to text messaging. It's a big improvement compared to standard assistive technologies, but it still leads to delayed conversation. By comparison, this new real-time voice synthesis is more like a voice call,' said Sergey Stavisky, senior author of the paper and an assistant professor in the UC Davis Department of Neurological Surgery.

Stavisky added that with instantaneous voice synthesis, neuroprosthesis users will be able to be more interactive and included in conversations.

The clinical trial at UC Davis, BrainGate2, used an investigational brain-computer interface consisting of four microelectrode arrays surgically implanted into the area of the brain that produces speech. The electrodes measured the firing patterns of hundreds of neurons, and those patterns were aligned with the speech sounds the participant was attempting to produce. The activity of neurons in the brain is recorded and then sent to a computer that interprets the signals to reconstruct the voice, researchers said.

'The main barrier to synthesizing voice in real-time was not knowing exactly when and how the person with speech loss is trying to speak,' said Maitreyee Wairagkar, first author of the study and project scientist in the Neuroprosthetics Lab at UC Davis. 'Our algorithms map neural activity to intended sounds at each moment of time. This makes it possible to synthesize nuances in speech and give the participant control over the cadence of his BCI-voice.'

The participant's neural signals were translated into audible speech in one-fortieth of a second, according to the study. 'This short delay is similar to the delay a person experiences when they speak and hear the sound of their own voice,' officials said.

The participant was also able to say words unknown to the system and to make interjections. He could modulate the intonation of his generated computer voice to ask a question or emphasize specific words in a sentence. Listeners could understand 60% of the BCI-synthesized words, compared with only 4% when the participant was not using the BCI, the study said.

'Our voice is part of what makes us who we are. Losing the ability to speak is devastating for people living with neurological conditions,' said David Brandman, co-director of the UC Davis Neuroprosthetics Lab and the neurosurgeon who performed the participant's implant. 'The results of this research provide hope for people who want to talk but can't. We showed how a paralyzed man was empowered to speak with a synthesized version of his voice. This kind of technology could be transformative for people living with paralysis.'

Researchers said that brain-to-voice neuroprostheses are still in an early phase, despite the promising findings. A limitation, they said, is that the research was done with only one participant, who has ALS; the goal is to replicate the results with more participants who have speech loss from other causes. More information on the BrainGate2 trial can be found online.
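For readers who want a concrete picture of the pipeline the researchers describe, here is a minimal, self-contained Python sketch: raw voltages are read in short frames, each frame is reduced to per-channel features, a decoder maps those features to acoustic parameters, and a synthesizer renders sound, one frame at a time, matching the reported one-fortieth-of-a-second (25 ms) cadence. Every name, size, and model below (read_electrodes, the channel count, the linear decoder, the toy sine-wave synthesizer) is an assumption for illustration only; the study's actual trained decoder and voice synthesizer are not reproduced here.

```python
import numpy as np

SAMPLE_RATE = 30_000   # typical microelectrode sampling rate (assumed)
AUDIO_RATE = 16_000    # output audio rate (assumed)
FRAME_S = 1 / 40       # the reported brain-to-voice delay: 25 ms per frame
N_CHANNELS = 256       # four arrays, e.g. 64 electrodes each (assumed)

rng = np.random.default_rng(0)
# Stand-in for a trained decoder; the real system uses a neural network
# trained while the participant attempted to read sentences aloud.
decoder_weights = rng.normal(size=(2, N_CHANNELS)) * 0.01

def read_electrodes() -> np.ndarray:
    """Placeholder acquisition: one 25 ms frame of raw voltages per channel."""
    return rng.normal(size=(N_CHANNELS, int(SAMPLE_RATE * FRAME_S)))

def extract_features(voltages: np.ndarray) -> np.ndarray:
    """Crude per-channel signal power for this frame (feature placeholder)."""
    return np.sqrt((voltages ** 2).mean(axis=1))

def decode_frame(features: np.ndarray) -> np.ndarray:
    """Map neural features to acoustic parameters (here just pitch and gain);
    a linear stand-in for the study's trained decoder."""
    pitch_raw, gain_raw = decoder_weights @ features
    return np.array([100 + 50 * np.tanh(pitch_raw), abs(np.tanh(gain_raw))])

def synthesize(params: np.ndarray) -> np.ndarray:
    """Toy synthesizer: render one 25 ms audio frame from decoded parameters."""
    t = np.arange(int(AUDIO_RATE * FRAME_S)) / AUDIO_RATE
    return params[1] * np.sin(2 * np.pi * params[0] * t)

# Streaming loop: each pass produces sound about one frame after the
# attempt to speak, which is what makes conversation feel like a voice call.
audio_out = [synthesize(decode_frame(extract_features(read_electrodes())))
             for _ in range(40)]                 # ~1 second of output
print(np.concatenate(audio_out).shape)           # (16000,) audio samples
```

In the real system the audio is streamed to a speaker as it is produced; the point of the sketch is only the frame-by-frame timing, which keeps the delay close to what a speaker experiences when hearing their own voice.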

First-of-its-kind brain computer helps man with ALS speak in real-time

India Today

2 days ago

  • Health
  • India Today

First-of-its-kind brain computer helps man with ALS speak in real-time

In what could be one of the biggest breakthroughs in medical science and technology, a newly developed investigational brain-computer interface could restore the voice of people who have lost the ability to speak. A team from the University of California, Davis successfully demonstrated this new technology, which can instantaneously translate brain activity into voice as a person tries to speak. The technology promises to create an artificial vocal tract.

The details, published in the journal Nature, highlight how the study participant, who has amyotrophic lateral sclerosis (ALS), spoke through a computer with his family in real time. The technology changed his intonation and 'sang' simple melodies.

'Translating neural activity into text, which is how our previous speech brain-computer interface works, is akin to text messaging. It's a big improvement compared to standard assistive technologies, but it still leads to delayed conversation. By comparison, this new real-time voice synthesis is more like a voice call,' said Sergey Stavisky, senior author of the study.

The investigational brain-computer interface (BCI) was used during the BrainGate2 clinical trial at UC Davis Health. It consists of four microelectrode arrays surgically implanted into the region of the brain responsible for producing speech.

The researchers collected data while the participant was asked to try to speak sentences shown to him on a computer screen. (Photo: UCD)

'The main barrier to synthesizing voice in real-time was not knowing exactly when and how the person with speech loss is trying to speak. Our algorithms map neural activity to intended sounds at each moment of time. This makes it possible to synthesize nuances in speech and give the participant control over the cadence of his BCI-voice,' said Maitreyee Wairagkar, first author of the study.

The system translated the participant's neural signals into audible speech played through a speaker very quickly, in one-fortieth of a second. Researchers said this short delay is similar to the delay a person experiences when they speak and hear the sound of their own voice.

The technology also allowed the participant to say new words (words not already known to the system) and to make interjections. He was able to modulate the intonation of his generated computer voice to ask a question or emphasize specific words in a sentence. The process of instantaneously translating brain activity into synthesized speech is helped by advanced artificial intelligence algorithms.

The researchers note that "although the findings are promising, brain-to-voice neuroprostheses remain in an early phase. A key limitation is that the research was performed with a single participant with ALS. It will be crucial to replicate these results with more participants."
