Latest news with #CheolJunCho
Yahoo
21-04-2025
- Health
- Yahoo
AI system restores speech for paralyzed patients using own voice
Researchers in California have achieved a significant breakthrough: an AI-powered system that restores natural speech to paralyzed individuals in real time, using their own voices. The system was demonstrated in a clinical trial participant who is severely paralyzed and cannot speak. This innovative technology, developed by teams at UC Berkeley and UC San Francisco, combines brain-computer interfaces (BCIs) with advanced artificial intelligence to decode neural activity into audible speech. Compared with other recent attempts to create speech from brain signals, this new system is a major advancement.

The system works with devices such as high-density electrode arrays that record neural activity directly from the brain's surface, as well as microelectrodes that penetrate the brain's surface and non-invasive surface electromyography sensors placed on the face to measure muscle activity. These devices capture neural activity, which the AI then learns to transform into the sounds of the patient's voice. The neuroprosthesis samples neural data from the brain's motor cortex, the area controlling speech production, and the AI decodes that data into speech. According to study co-lead author Cheol Jun Cho, the neuroprosthesis intercepts signals at the point where a thought is translated into articulation and, in the middle of that, motor control.

- Real-time speech synthesis: The AI-based model streams intelligible speech from the brain in near-real time, addressing the long-standing challenge of latency in speech neuroprostheses. According to Gopala Anumanchipalli, co-principal investigator of the study, this streaming approach "brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses." The model decodes neural data in 80-millisecond increments, allowing the decoder to run without interruption and further increasing speed (sketched below).
- Naturalistic speech: The technology aims to restore naturalistic speech, allowing for more fluent and expressive communication.
- Personalized voice: The AI is trained on the patient's own voice from before their injury, so the generated audio sounds like them.
- Speed and accuracy: The system begins decoding brain signals and outputting speech within a second of the patient attempting to speak, a significant improvement over the eight-second delay reported in a previous study from 2023.

One of the key challenges was mapping neural data to speech output when the patient had no residual vocalization. The researchers overcame this by using a pre-trained text-to-speech model and the patient's pre-injury voice to fill in the missing details.

This technology has the potential to significantly improve the quality of life for people with paralysis and conditions like ALS, allowing them to communicate their needs, express complex thoughts and connect with loved ones more naturally. "It is exciting that the latest AI advances are greatly accelerating BCIs for practical real-world use in the near future," UCSF neurosurgeon Edward Chang said.
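To make that 80-millisecond streaming idea concrete, here is a minimal, hypothetical Python sketch of decoding a neural recording in fixed windows. The sampling rate, window handling and the `decode_chunk` stub are illustrative assumptions, not details of the researchers' actual decoder.

```python
import numpy as np

NEURAL_SAMPLE_RATE_HZ = 200          # assumed sampling rate, for illustration only
CHUNK_MS = 80                        # the 80 ms decoding increment described above
SAMPLES_PER_CHUNK = NEURAL_SAMPLE_RATE_HZ * CHUNK_MS // 1000

def decode_chunk(chunk: np.ndarray) -> str:
    """Placeholder for the trained neural-to-speech decoder.

    The real system maps motor-cortex activity to audio; this stub just
    returns a dummy token so the loop is runnable.
    """
    return f"<{chunk.shape[0]} samples decoded>"

def stream_decode(neural_recording: np.ndarray):
    """Decode a recording in fixed 80 ms windows, yielding output as it goes."""
    for start in range(0, len(neural_recording), SAMPLES_PER_CHUNK):
        chunk = neural_recording[start:start + SAMPLES_PER_CHUNK]
        if len(chunk) < SAMPLES_PER_CHUNK:
            break  # drop the trailing partial window in this toy example
        yield decode_chunk(chunk)

if __name__ == "__main__":
    # Fake two seconds of single-channel neural data, purely for demonstration.
    fake_signal = np.random.randn(2 * NEURAL_SAMPLE_RATE_HZ)
    for piece in stream_decode(fake_signal):
        print(piece)
```

Because each window is decoded as soon as it arrives, output can begin long before the utterance ends, which is what distinguishes this streaming style from decoders that wait for a full sentence.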
The next steps include speeding up the AI's processing and making the output voice more expressive, including decoding paralinguistic features from brain activity so that the synthesized speech can reflect changes in tone, pitch and loudness.

What's truly amazing about this AI is that it doesn't just translate brain signals into any kind of speech; it aims for natural speech in the patient's own voice. It's like giving them their voice back, which is a game changer, and it offers new hope for effective communication and renewed connection for many individuals.

What role do you think government and regulatory bodies should play in overseeing the development and use of brain-computer interfaces?
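As a point of reference for what "paralinguistic features" such as loudness and pitch look like in audio, here is a rough, self-contained Python sketch that estimates them frame by frame from a waveform. It only illustrates the quantities involved; the study's goal is to recover them from brain activity, and none of these parameter choices come from the paper.

```python
import numpy as np

SR = 16000                     # assumed audio sample rate for this illustration
FRAME = 400                    # 25 ms analysis frames
HOP = 160                      # 10 ms hop between frames

def framewise_features(waveform: np.ndarray):
    """Very rough per-frame loudness (RMS) and pitch (autocorrelation) estimates."""
    feats = []
    for start in range(0, len(waveform) - FRAME, HOP):
        frame = waveform[start:start + FRAME]
        rms = float(np.sqrt(np.mean(frame ** 2)))
        # Autocorrelation-based pitch guess within a typical speech range (60-400 Hz).
        ac = np.correlate(frame, frame, mode="full")[FRAME - 1:]
        lo, hi = SR // 400, SR // 60
        lag = lo + int(np.argmax(ac[lo:hi]))
        feats.append((rms, SR / lag))
    return feats

if __name__ == "__main__":
    t = np.arange(SR) / SR
    demo = 0.5 * np.sin(2 * np.pi * 120 * t)   # a synthetic 120 Hz tone as fake speech
    rms, pitch = framewise_features(demo)[0]
    print(f"loudness ~{rms:.2f}, pitch ~{pitch:.0f} Hz")
```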


Fox News
16-04-2025
- Health
- Fox News
AI system restores speech for paralyzed patients using own voice
Researchers in California have achieved a significant breakthrough: an AI-powered system that restores natural speech to paralyzed individuals in real time, using their own voices. The system was demonstrated in a clinical trial participant who is severely paralyzed and cannot speak. This innovative technology, developed by teams at UC Berkeley and UC San Francisco, combines brain-computer interfaces (BCIs) with advanced artificial intelligence to decode neural activity into audible speech. Compared with other recent attempts to create speech from brain signals, this new system is a major advancement.

The system works with devices such as high-density electrode arrays that record neural activity directly from the brain's surface, as well as microelectrodes that penetrate the brain's surface and non-invasive surface electromyography sensors placed on the face to measure muscle activity. These devices capture neural activity, which the AI then learns to transform into the sounds of the patient's voice. The neuroprosthesis samples neural data from the brain's motor cortex, the area controlling speech production, and the AI decodes that data into speech. According to study co-lead author Cheol Jun Cho, the neuroprosthesis intercepts signals at the point where a thought is translated into articulation and, in the middle of that, motor control.

One of the key challenges was mapping neural data to speech output when the patient had no residual vocalization. The researchers overcame this by using a pre-trained text-to-speech model and the patient's pre-injury voice to fill in the missing details.

This technology has the potential to significantly improve the quality of life for people with paralysis and conditions like ALS, allowing them to communicate their needs, express complex thoughts and connect with loved ones more naturally. "It is exciting that the latest AI advances are greatly accelerating BCIs for practical real-world use in the near future," UCSF neurosurgeon Edward Chang said.

The next steps include speeding up the AI's processing and making the output voice more expressive, including decoding paralinguistic features from brain activity so that the synthesized speech can reflect changes in tone, pitch and loudness.

What's truly amazing about this AI is that it doesn't just translate brain signals into any kind of speech; it aims for natural speech in the patient's own voice. It's like giving them their voice back, which is a game changer, and it offers new hope for effective communication and renewed connection for many individuals.

What role do you think government and regulatory bodies should play in overseeing the development and use of brain-computer interfaces?
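To illustrate, in rough terms, how a pre-trained text-to-speech model could be conditioned on a patient's pre-injury recordings, here is a hedged Python sketch. The `SpeakerEncoder` and `PretrainedTTS` classes are hypothetical stand-ins defined inline so the example runs; they are not the models used in the study.

```python
import numpy as np

class SpeakerEncoder:
    """Hypothetical encoder that summarizes a reference voice as an embedding."""
    def embed(self, reference_audio: np.ndarray) -> np.ndarray:
        # A real encoder would be a trained network; this stub derives a
        # fixed-length vector from simple signal statistics instead.
        return np.array([reference_audio.mean(), reference_audio.std()])

class PretrainedTTS:
    """Hypothetical text-to-speech model conditioned on a speaker embedding."""
    def synthesize(self, text: str, speaker_embedding: np.ndarray) -> np.ndarray:
        # Stand-in: a real model would condition on the embedding; here we just
        # return silence of a length proportional to the text, so the pipeline runs.
        _ = speaker_embedding  # unused in this stub
        duration_samples = 16000 * max(len(text) // 10, 1)
        return np.zeros(duration_samples)

def personalize_and_speak(decoded_text: str, pre_injury_recording: np.ndarray) -> np.ndarray:
    """Fill in voice identity from a pre-injury recording, as described above."""
    embedding = SpeakerEncoder().embed(pre_injury_recording)
    return PretrainedTTS().synthesize(decoded_text, embedding)

if __name__ == "__main__":
    reference = np.random.randn(16000 * 5)     # pretend five seconds of pre-injury speech
    waveform = personalize_and_speak("Hey, how are you?", reference)
    print(f"Synthesized {len(waveform)} audio samples")
```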


Express Tribune
02-04-2025
- Health
- Express Tribune
AI-powered system enables real-time speech generation for paralyzed individuals
California-based researchers have developed a groundbreaking AI-powered system that enables real-time speech generation for individuals with paralysis, using their own voices. This cutting-edge technology, created by scientists at the University of California, Berkeley, and the University of California, San Francisco, represents a significant advance in brain-computer interface (BCI) research.

The system uses neural interfaces to measure brain activity and AI algorithms that reconstruct speech patterns. It marks a major leap forward from previous efforts, allowing for near-instantaneous voice synthesis, a capability previously thought to be years away.

"Our streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses," said Gopala Anumanchipalli, assistant professor of electrical engineering and computer sciences at UC Berkeley and co-principal investigator of the study, which was published this week in Nature Neuroscience. "Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming. The result is more naturalistic, fluent speech synthesis."

The technology can work with various brain-sensing interfaces, including high-density electrode arrays placed directly on the brain's surface, microelectrodes that penetrate brain tissue, and non-invasive surface electromyography (sEMG) sensors that measure muscle activity on the face. The neuroprosthetic device samples neural data from the motor cortex, the brain region responsible for speech production, and AI then decodes this data into audible speech. Study co-lead author Cheol Jun Cho explained, "What we're decoding is after a thought has happened, after we've decided what to say, after we've chosen the words and planned our vocal tract movements."

To train the AI, researchers collected data from patients silently attempting to speak words displayed on a screen, which enabled the system to map neural activity to specific speech patterns. In addition, a text-to-speech model was built using recordings of the patient's voice from before their paralysis, giving the synthesized output a more natural sound.

The system can begin decoding brain signals and producing speech within a second of a patient attempting to speak, an improvement on the eight-second delay recorded in a 2023 study. While the generated speech is not yet perfectly fluid, it is significantly more natural and intelligible than previous BCI-based speech synthesis technologies.

This innovation could dramatically enhance the quality of life for individuals with conditions such as ALS or severe paralysis by enabling more expressive and natural communication with caregivers, loved ones and the broader world. Researchers plan to further refine the AI model to speed up processing times and enhance the expressiveness of the synthesized speech. As the work continues, this breakthrough could pave the way for broader accessibility and improved communication tools for people with severe speech impairments.
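For readers curious what "mapping neural activity to specific speech patterns" can look like in code, below is a minimal PyTorch sketch that trains a small recurrent network to map neural feature frames to speech (mel-spectrogram) frames on random stand-in data. The architecture, dimensions and loss are illustrative assumptions, not the study's actual model.

```python
import torch
import torch.nn as nn

# Toy dimensions; the real electrode counts and feature sizes are not given here.
N_CHANNELS, N_MEL, SEQ_LEN, BATCH = 128, 80, 50, 8

class NeuralToSpeech(nn.Module):
    """Minimal recurrent decoder: neural feature frames -> mel-spectrogram frames."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, 256, batch_first=True)
        self.head = nn.Linear(256, N_MEL)

    def forward(self, neural_frames):
        hidden, _ = self.rnn(neural_frames)
        return self.head(hidden)

model = NeuralToSpeech()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random stand-ins for neural data recorded during silent speech attempts and
# the target speech features derived from the on-screen prompt.
neural = torch.randn(BATCH, SEQ_LEN, N_CHANNELS)
target_mel = torch.randn(BATCH, SEQ_LEN, N_MEL)

for step in range(3):                     # a few steps, just to show the loop
    optimizer.zero_grad()
    loss = loss_fn(model(neural), target_mel)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```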


Telegraph
31-03-2025
- Health
- Telegraph
Mind-reading device allows paralysed people to speak fluently
A mind-reading device that allows paralysed patients to speak fluently just by thinking has been developed. The technology can quickly decode brain signals produced by the motor cortex when a person wants to say a word, before translating them into sound waves that can be 'spoken' by a synthesised voice.

Although similar devices have been trialled in the past, there has always been a lengthy delay between a person thinking a word and it being said out loud, making it tricky to form coherent sentences. However, a team at the University of California have used the same rapid speech-decoding capacity of AI devices such as Alexa and Siri to help speed up the process and produce more natural speech.

'Intercepting signals'

Cheol Jun Cho, a doctoral student at UC Berkeley, said: 'We are essentially intercepting signals where the thought is translated into articulation.

'So what we're decoding is after a thought has happened, after we've decided what to say, after we've decided what words to use and how to move our vocal-tract muscles.

'This proof-of-concept framework is quite a breakthrough. We will continue to push the algorithm to see how we can generate speech better and faster.'

Many people are unable to speak because of paralysis, disease or injury, and current speech generators, which often involve gazing at individual words or letters on a screen, are time-consuming and laborious.

The new device was trialled on a paralysed patient named Ann, who had an electrode array implanted over her motor cortex to pick up her brain signals. She was then asked to look at phrases on a screen, such as 'Hey, how are you?', and silently attempt to speak the sentences. The programme was able to pick out the chunks of neural activity behind certain sounds so they could be reproduced as a synthesised voice.

Prof Gopala Anumanchipalli, an assistant professor of electrical engineering and computer sciences at UC Berkeley, said: 'Within one second, we are getting the first sound out. And the device can continuously decode speech, so Ann can keep speaking without interruption.'

He added: 'Our streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses.

'Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming. The result is more naturalistic, fluent speech synthesis.'

The team used samples of Ann's pre-injury voice for the synthesised audio so that it would sound more like her, and found that even when she was thinking quickly, the algorithm could keep up.

The researchers believe the technology may also work without the need for invasive electrodes, by using sensors on the face to measure muscle activity. They also want to improve the algorithm so that it can pick up and convey the changes in tone, pitch and loudness that occur during speech, such as when someone is excited.

'Long-standing problem'

Kaylo Littlejohn, a doctoral student in UC Berkeley's department of electrical engineering and computer sciences, said: 'Previously, it was not known if intelligible speech could be streamed from the brain in real time.

'That's ongoing work, to try to see how well we can actually decode these paralinguistic features from brain activity.

'This is a long-standing problem even in classical audio synthesis fields and would bridge the gap to full and complete naturalism.'
Edward Chang, the senior co-principal investigator of the study who leads the clinical trial at UC San Francisco, added: 'This new technology has tremendous potential for improving quality of life for people living with severe paralysis affecting speech. 'It is exciting that the latest AI advances are greatly accelerating brain computer interfaces for practical real-world use in the near future.'
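The 'within one second' figure quoted above refers to the delay between the start of an attempted utterance and the first synthesized audio. Below is a small, hypothetical Python sketch of how such a time-to-first-sound latency might be measured around a streaming decoder; the decoder stub and timings are invented for illustration, not taken from the study.

```python
import time
import numpy as np

def decode_stream(neural_chunks):
    """Stand-in streaming decoder: yields one audio chunk per neural chunk."""
    for _chunk in neural_chunks:
        time.sleep(0.05)                   # pretend 50 ms of model compute per chunk
        yield np.zeros(1280)               # 80 ms of audio at an assumed 16 kHz

def time_to_first_sound(neural_chunks) -> float:
    """Seconds from the start of decoding until the first audio chunk is ready."""
    start = time.perf_counter()
    next(decode_stream(neural_chunks))     # wait for the first synthesized chunk
    return time.perf_counter() - start

if __name__ == "__main__":
    fake_chunks = [np.random.randn(16) for _ in range(25)]   # ~2 s of fake neural data
    print(f"time to first sound: {time_to_first_sound(fake_chunks):.3f} s")
```

In a streaming design the time to first sound stays roughly constant however long the sentence is, whereas a decoder that waits for the whole utterance incurs a delay that grows with sentence length.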