
Latest news with #EdwardChang

‘Great progress' in the race to turn brainwaves into fluent speech

Irish Times · 01-05-2025 · Health

Neuroscientists are striving to give a voice to people unable to speak, in a fast-advancing quest to harness brainwaves to restore or enhance physical abilities. Researchers at universities across California, and companies such as New York-based Precision Neuroscience, are among those making headway towards generating naturalistic speech through a combination of brain implants and artificial intelligence.

Investment and attention have long been focused on implants that enable severely disabled people to operate computer keyboards, control robotic arms or regain some use of their own paralysed limbs. But some labs are making strides by concentrating on technology that converts thought patterns into speech.

'We are making great progress – and making brain-to-synthetic voice as fluent as chat between two speaking people is a major goal,' says Edward Chang, a neurosurgeon at the University of California, San Francisco. 'The AI algorithms we are using are getting faster, and we are learning with every new participant in our studies.'

Chang and colleagues, including researchers from the University of California, Berkeley, last month published a paper in Nature Neuroscience detailing their work with a quadriplegic woman, paralysed in her limbs and torso, who had not been able to speak for 18 years after suffering a stroke. She trained a deep-learning neural network by silently attempting to say sentences composed using 1,024 different words. The audio of her voice was created by streaming her neural data to a joint speech-synthesis and text-decoding model.

The technique reduced the lag between the patient's brain signals and the resultant audio from the eight seconds the group had achieved previously to one second, much closer to the 100-200 millisecond gap of normal speech. The system's median decoding speed was 47.5 words per minute, about a third the rate of normal conversation.

Many thousands of people a year could benefit from such a voice prosthesis: people whose cognitive functions remain more or less intact but who have suffered speech loss due to stroke, the neurodegenerative disorder ALS or other brain conditions. If successful, researchers hope the technique can be extended to help people who have difficulty vocalising because of conditions such as cerebral palsy or autism.

The potential of voice neuroprostheses is beginning to attract interest from businesses. Precision Neuroscience claims to be capturing higher-resolution brain signals than academic researchers, since the electrodes of its implants are more densely packed. The company has worked with 38 patients and plans soon to collect data from more, providing a potential pathway to commercialisation.

Precision received regulatory clearance on April 17th to leave its sensors implanted for up to 30 days at a time. That would enable its scientists to train their system on what could within a year be the 'largest repository of high-resolution neural data that exists on planet Earth', says chief executive Michael Mager. The next step would be to 'miniaturise the components and put them in hermetically sealed packages that are biocompatible so they can be implanted in the body forever', says Mager.
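The one-second latency described above comes from streaming: short windows of neural activity are decoded into audio as they arrive, rather than after a whole sentence has been attempted. The sketch below illustrates that loop in Python; the window size, channel count and the `decode_chunk` placeholder are illustrative assumptions, not the published system's actual model or parameters.

```python
# Minimal sketch of a streaming brain-to-voice loop (hypothetical API).
# Each short window of neural features is decoded into an audio chunk
# immediately, which is how per-utterance latency can drop from seconds
# toward conversational speed.
import numpy as np

WINDOW_MS = 80          # assumed neural feature window size
N_CHANNELS = 253        # assumed electrode count (illustrative only)
SAMPLE_RATE = 16_000    # audio samples per second

def decode_chunk(features: np.ndarray) -> np.ndarray:
    """Stand-in for a joint speech-synthesis/text-decoding model.

    A real system would run a trained neural network here; this placeholder
    returns silence of the right duration so the loop is runnable.
    """
    n_samples = int(SAMPLE_RATE * WINDOW_MS / 1000)
    return np.zeros(n_samples, dtype=np.float32)

def stream_speech(neural_stream):
    """Consume windows of neural features and yield audio as it is decoded."""
    for features in neural_stream:      # one window of motor-cortex features
        yield decode_chunk(features)    # audio is emitted window by window

# Simulate 2 seconds of incoming neural data (25 windows of 80 ms).
fake_stream = (np.random.randn(N_CHANNELS) for _ in range(25))
audio = np.concatenate(list(stream_speech(fake_stream)))
print(f"decoded {audio.size / SAMPLE_RATE:.2f} s of audio")
```

Because audio is emitted per window, the listener starts hearing speech almost as soon as the speaker begins attempting it, instead of waiting for the full sentence to be decoded.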
Elon Musk's Neuralink, the best-known brain-computer interface (BCI) company, has focused on enabling people with paralysis to control computers rather than on giving them a synthetic voice.

An important obstacle to the development of brain-to-voice technology is the time patients take to learn how to use the system. A key unanswered question is how much the response patterns in the motor cortex, the part of the brain that controls voluntary actions including speech, vary between people. If they remained very similar, machine-learning models trained on previous individuals could be used for new patients, says Nick Ramsey, a BCI researcher at University Medical Centre Utrecht. That would accelerate a process that today takes 'tens or hundreds of hours, generating enough data by showing a participant text and asking them to try to speak it'.

Ramsey says all brain-to-voice research focuses on the motor cortex, where neurons activate the muscles involved in speaking; there is no evidence that speech could be generated from other brain areas or by decoding inner thoughts. 'Even if you could, you wouldn't want people to hear your inner speech,' he adds. 'There are a lot of things I don't say out loud because they wouldn't be to my benefit or they might hurt people.'

The development of a synthetic voice as good as healthy speech could still be 'quite a ways away', says Sergey Stavisky, co-director of the neuroprosthetics lab at the University of California, Davis. His lab has demonstrated it can decode what someone is trying to say with about 98 per cent accuracy, he says. But the voice output is not instantaneous, and it does not capture important speech qualities such as tone. It is also unclear whether the recording hardware, the electrodes, can enable the synthesis to match a healthy human voice, he adds.

Scientists need to develop a deeper understanding of how the brain encodes speech production, and better algorithms to translate neural activity into vocal output, says Stavisky. 'Ultimately a voice neuroprosthesis should provide the full expressive range of the human voice, so that for example they can precisely control their pitch and timing and do things like sing.'

Copyright The Financial Times Limited 2025
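One plausible way to reuse a model trained on previous individuals, as Ramsey describes, is to freeze the shared decoder and retrain only a small input adapter on the new participant's data. The sketch below assumes hypothetical layer sizes and a phoneme-classification objective; it illustrates the transfer idea, not any lab's actual pipeline.

```python
# Sketch of cross-participant transfer: a decoder trained on earlier
# participants is frozen, and only a small per-participant input adapter
# is trained on the new participant's data. All sizes are illustrative.
import torch
import torch.nn as nn

N_CHANNELS, HIDDEN, N_PHONES = 253, 256, 41   # assumed dimensions

pretrained = nn.Sequential(                    # stands in for a model trained
    nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),      # on previous individuals
    nn.Linear(HIDDEN, N_PHONES),
)
for p in pretrained.parameters():
    p.requires_grad = False                    # frozen: shared speech knowledge

adapter = nn.Linear(N_CHANNELS, HIDDEN)        # per-participant mapping
model = nn.Sequential(adapter, pretrained)

opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A few steps on synthetic data in place of hours of attempted-speech
# recordings; a real system would use features from prompted sentences.
for _ in range(10):
    x = torch.randn(32, N_CHANNELS)            # one batch of neural features
    y = torch.randint(0, N_PHONES, (32,))      # phoneme labels from prompts
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```

Because only the adapter's weights are updated, far less participant-specific data would be needed than for training a full decoder from scratch, which is the appeal of the approach if motor-cortex patterns prove similar across people.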

AI system restores speech for paralyzed patients using own voice

Fox News · 16-04-2025 · Health

Researchers in California have achieved a significant breakthrough: an AI-powered system that restores natural speech to paralyzed individuals in real time, using their own voices, demonstrated in a clinical trial participant who is severely paralyzed and cannot speak. The technology, developed by teams at UC Berkeley and UC San Francisco, combines brain-computer interfaces (BCIs) with advanced artificial intelligence to decode neural activity into audible speech. Compared with other recent attempts to create speech from brain signals, the new system is a major advancement.

The system uses devices such as high-density electrode arrays that record neural activity directly from the brain's surface. It also works with microelectrodes that penetrate the brain's surface and with non-invasive surface electromyography sensors placed on the face to measure muscle activity. These devices capture neural signals, which the AI then learns to transform into the sounds of the patient's voice.

The neuroprosthesis samples neural data from the brain's motor cortex, the area controlling speech production, and AI decodes that data into speech. According to study co-lead author Cheol Jun Cho, the neuroprosthesis intercepts signals at the point where thought has been translated into articulation, in the middle of that motor control.

One of the key challenges was mapping neural data to speech output when the patient had no residual vocalization. The researchers overcame this by using a pre-trained text-to-speech model and the patient's pre-injury voice to fill in the missing details.

The technology has the potential to significantly improve the quality of life of people with paralysis and conditions like ALS. It allows them to communicate their needs, express complex thoughts and connect with loved ones more naturally. "It is exciting that the latest AI advances are greatly accelerating BCIs for practical real-world use in the near future," UCSF neurosurgeon Edward Chang said.

Next steps include speeding up the AI's processing, making the output voice more expressive and exploring ways to incorporate variations in tone, pitch and loudness into the synthesized speech. The researchers also aim to decode paralinguistic features from brain activity so the synthetic voice can reflect those changes.

What is truly striking is that the system does not translate brain signals into just any kind of speech: it aims for natural speech in the patient's own voice, in effect giving patients their voice back. That offers new hope of effective communication and renewed connection for many people.
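The workaround described above, pairing recorded neural activity with target audio synthesized from the prompted text in the patient's pre-injury voice, can be sketched as follows. `tts_synthesize`, the voice embedding and all array shapes are hypothetical stand-ins; this shows how training pairs could be assembled, not the researchers' actual code.

```python
# Sketch of building decoder training targets when the patient cannot
# vocalize: synthesize reference audio from the prompted text with a
# pre-trained TTS model conditioned on the patient's pre-injury voice,
# then pair it with the neural activity recorded during attempted speech.
import numpy as np

SAMPLE_RATE = 16_000

def tts_synthesize(text: str, voice_embedding: np.ndarray) -> np.ndarray:
    """Hypothetical pre-trained TTS: text + voice embedding -> waveform.

    A real model would produce speech in the patient's old voice; this
    placeholder returns silence of a plausible duration so it runs.
    """
    return np.zeros(SAMPLE_RATE * max(1, len(text) // 12), dtype=np.float32)

def training_pair(prompt: str, neural_features: np.ndarray,
                  voice_embedding: np.ndarray):
    """Pair recorded neural activity with synthesized target audio."""
    target_audio = tts_synthesize(prompt, voice_embedding)
    return neural_features, target_audio   # (input, target) for the decoder

voice = np.random.randn(256)               # embedding of pre-injury recordings
x, y = training_pair("I would like some water.",
                     np.random.randn(100, 253), voice)
print(x.shape, y.shape)
```

The brain-to-audio decoder can then be trained to map the neural features to the synthesized waveform, so its output sounds like the patient rather than a generic voice.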
