
Latest news with #ChristianHerff

For some patients, the 'inner voice' may soon be audible

Time of India

3 days ago



For decades, neuro-engineers have dreamed of helping people who have been cut off from the world of language. A disease like amyotrophic lateral sclerosis, or ALS, weakens the muscles in the airway. A stroke can kill neurons that normally relay commands for speaking. Perhaps, by implanting electrodes, scientists could record the brain's electric activity and translate that into spoken words.

Now a team of researchers has made an important advance toward that goal. Previously they succeeded in decoding the signals produced when people tried to speak. In the new study, published Thursday in the journal Cell, their computer often made correct guesses when the subjects simply imagined saying words. Christian Herff, a neuroscientist at Maastricht University in the Netherlands who was not involved in the research, said the result went beyond the merely technological and shed light on the mystery of language. "It's a fantastic advance," Herff said.

The new study is the latest result in a long-running clinical trial, called BrainGate2, that has already seen some remarkable successes. One participant, Casey Harrell, now uses his brain-machine interface to hold conversations. In 2023, after ALS had made his voice unintelligible, Harrell agreed to have electrodes implanted in his brain. A computer recorded the electrical activity from the implants as Harrell attempted to say different words. Over time, with the help of AI, the computer predicted 6,000 words with 97.5% accuracy.

But successes like this raised a troubling question: Could a computer accidentally record more than patients actually wanted to say? Could it eavesdrop on their inner voice? "We wanted to investigate if there was a risk of the system decoding words that weren't meant to be said aloud," said Erin Kunz, a neuroscientist at Stanford University and an author of the study. She and her colleagues also wondered if patients might actually prefer using inner speech.

To find out, the scientists gave participants seven different words, including "kite" and "day," then compared the brain signals when participants attempted to say the words and when they only imagined saying them. As it turned out, imagining a word produced a pattern of activity similar to that of trying to say it, but the signal was weaker. The computer did a good job of predicting which of the seven words the participants were thinking. For Harrell, it didn't do much better than a random guess would have, but for another participant it picked the right word more than 70% of the time.

The researchers then put the computer through more training, this time specifically on inner speech. Its performance improved significantly, including for Harrell. Now when the participants imagined saying entire sentences, such as "I don't know how long you've been here," the computer could accurately decode most of the words.

Herff, who has done his own studies, was surprised that the experiment succeeded. Before, he would have said that inner speech is fundamentally different from the motor cortex signals that produce actual speech. "But in this study, they show that, for some people, it isn't that different," he said.

Kunz emphasized that the computer's current performance on inner speech would not be good enough to let people hold conversations. "The results are an initial proof of concept more than anything," she said. But she is optimistic that decoding inner speech could become the new standard for brain-computer interfaces. In recent trials, she and her colleagues have improved the computer's accuracy. "We haven't hit the ceiling yet," she said. (NYT)
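The classification step in this experiment can be pictured with a short sketch. The Python below is a minimal illustration, not the study's decoder: it assumes each trial has already been reduced to a fixed-length neural feature vector, uses synthetic data, and invents five of the seven words (only "kite" and "day" are named in the article). The point is the shape of the task: a handful of classes, noisy features, and accuracy judged against chance.

```python
# A minimal, purely illustrative sketch (not the BrainGate2 decoder) of the
# small-vocabulary classification idea described above: each trial yields a
# fixed-length neural feature vector, and a classifier guesses which of seven
# words the participant imagined. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# "kite" and "day" are named in the article; the other five words are invented.
words = ["kite", "day", "up", "you", "not", "how", "here"]
n_trials_per_word, n_features = 40, 128  # hypothetical trial count and feature size

# Give each word its own mean activity pattern plus noise, mirroring the finding
# that imagined speech produces a similar but weaker (noisier) signal.
means = rng.normal(0.0, 1.0, size=(len(words), n_features))
X = np.vstack([m + rng.normal(0.0, 2.5, size=(n_trials_per_word, n_features)) for m in means])
y = np.repeat(np.arange(len(words)), n_trials_per_word)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"7-way accuracy: {clf.score(X_test, y_test):.1%} (chance is {1 / len(words):.1%})")
```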

For Some Patients, the 'Inner Voice' May Soon Be Audible

New York Times

5 days ago



For decades, neuroengineers have dreamed of helping people who have been cut off from the world of language. A disease like amyotrophic lateral sclerosis, or A.L.S., weakens the muscles in the airway. A stroke can kill neurons that normally relay commands for speaking. Perhaps, by implanting electrodes, scientists could instead record the brain's electric activity and translate that into spoken words.

Now a team of researchers has made an important advance toward that goal. Previously they succeeded in decoding the signals produced when people tried to speak. In the new study, published on Thursday in the journal Cell, their computer often made correct guesses when the subjects simply imagined saying words. Christian Herff, a neuroscientist at Maastricht University in the Netherlands who was not involved in the research, said the result went beyond the merely technological and shed light on the mystery of language. "It's a fantastic advance," Dr. Herff said.

The new study is the latest result in a long-running clinical trial, called BrainGate2, that has already seen some remarkable successes. One participant, Casey Harrell, now uses his brain-machine interface to hold conversations with his family and friends. In 2023, after A.L.S. had made his voice unintelligible, Mr. Harrell agreed to have electrodes implanted in his brain. Surgeons placed four arrays of tiny needles on the left side, in a patch of tissue called the motor cortex. The region becomes active when the brain creates commands for muscles to produce speech.

Brain Implant Lets Man with ALS Speak and Sing with His 'Real Voice'

Yahoo

13-06-2025



A man with a severe speech disability is able to speak expressively and sing using a brain implant that translates his neural activity into words almost instantly. The device conveys changes of tone when he asks questions, emphasizes the words of his choice and allows him to hum a string of notes in three pitches.

The system, known as a brain–computer interface (BCI), used artificial intelligence (AI) to decode the participant's electrical brain activity as he attempted to speak. The device is the first to reproduce not only a person's intended words but also features of natural speech such as tone, pitch and emphasis, which help to express meaning and emotion. In a study, a synthetic voice that mimicked the participant's own spoke his words within 10 milliseconds of the neural activity that signalled his intention to speak. The system, described today in Nature, marks a significant improvement over earlier BCI models, which streamed speech within three seconds or produced it only after users finished miming an entire sentence.

'This is the holy grail in speech BCIs,' says Christian Herff, a computational neuroscientist at Maastricht University, the Netherlands, who was not involved in the study. 'This is now real, spontaneous, continuous speech.'

The study participant, a 45-year-old man, lost his ability to speak clearly after developing amyotrophic lateral sclerosis, a form of motor neuron disease, which damages the nerves that control muscle movements, including those needed for speech. Although he could still make sounds and mouth words, his speech was slow and unclear. Five years after his symptoms began, the participant underwent surgery to insert 256 silicon electrodes, each 1.5 mm long, in a brain region that controls movement. Study co-author Maitreyee Wairagkar, a neuroscientist at the University of California, Davis, and her colleagues trained deep-learning algorithms to capture the signals in his brain every 10 milliseconds. Their system decodes, in real time, the sounds the man attempts to produce rather than his intended words or the constituent phonemes, the subunits of speech that form spoken words.

'We don't always use words to communicate what we want. We have interjections. We have other expressive vocalizations that are not in the vocabulary,' explains Wairagkar. 'In order to do that, we have adopted this approach, which is completely unrestricted.'

The team also personalized the synthetic voice to sound like the man's own, by training AI algorithms on recordings of interviews he had done before the onset of his disease. The team asked the participant to attempt to make interjections such as 'aah', 'ooh' and 'hmm' and say made-up words. The BCI successfully produced these sounds, showing that it could generate speech without needing a fixed vocabulary.

Using the device, the participant spelt out words, responded to open-ended questions and said whatever he wanted, using some words that were not part of the decoder's training data. He told the researchers that listening to the synthetic voice produce his speech made him 'feel happy' and that it felt like his 'real voice'.

In other experiments, the BCI identified whether the participant was attempting to say a sentence as a question or as a statement. The system could also determine when he stressed different words in the same sentence and adjust the tone of his synthetic voice accordingly.

'We are bringing in all these different elements of human speech which are really important,' says Wairagkar. Previous BCIs could produce only flat, monotone speech. 'This is a bit of a paradigm shift in the sense that it can really lead to a real-life tool,' says Silvia Marchesotti, a neuroengineer at the University of Geneva in Switzerland. The system's features 'would be crucial for adoption for daily use for the patients in the future.'

This article is reproduced with permission and was first published on June 11, 2025.
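To make the timing claim above concrete, here is a hypothetical sketch of such a streaming loop: every 10 milliseconds a fresh window of neural activity is decoded straight into sound features and handed to a personalized synthesizer, instead of waiting for a completed word or sentence. The class and function names (read_neural_frame, AcousticDecoder, PersonalizedVocoder) are invented for illustration; they are not the components of the published system.

```python
# Hypothetical sketch of the streaming pipeline described above: a new window of
# neural activity is read every 10 ms and decoded directly into sound features,
# which a personalized synthesizer plays immediately. None of these class or
# function names come from the published system; they are stand-ins.
import time

FRAME_MS = 10  # decoding step reported in the article


class AcousticDecoder:
    """Stand-in for a trained model mapping one neural frame to sound features."""

    def decode(self, neural_frame):
        # Placeholder computation; a real decoder would run a neural network here.
        return {"energy": sum(neural_frame) / max(len(neural_frame), 1)}


class PersonalizedVocoder:
    """Stand-in for a synthesizer trained on pre-illness recordings of the user's voice."""

    def play(self, sound_features):
        pass  # would emit roughly 10 ms of audio in the user's own voice


def stream_speech(read_neural_frame, decoder, vocoder, run_seconds=1.0):
    """Decode sounds (not words or phonemes) frame by frame, in ~10 ms steps."""
    deadline = time.monotonic() + run_seconds
    while time.monotonic() < deadline:
        frame = read_neural_frame()           # latest 10 ms of multi-electrode activity
        vocoder.play(decoder.decode(frame))   # audio out almost immediately
        time.sleep(FRAME_MS / 1000.0)


if __name__ == "__main__":
    # 256 channels, matching the number of implanted electrodes in the article.
    stream_speech(lambda: [0.0] * 256, AcousticDecoder(), PersonalizedVocoder())
```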

Brain Implant Lets Man with ALS Speak and Sing with His 'Real Voice'

Scientific American

12-06-2025



A man with a severe speech disability is able to speak expressively and sing using a brain implant that translates his neural activity into words almost instantly. The device conveys changes of tone when he asks questions, emphasizes the words of his choice and allows him to hum a string of notes in three pitches.

The system, known as a brain–computer interface (BCI), used artificial intelligence (AI) to decode the participant's electrical brain activity as he attempted to speak. The device is the first to reproduce not only a person's intended words but also features of natural speech such as tone, pitch and emphasis, which help to express meaning and emotion. In a study, a synthetic voice that mimicked the participant's own spoke his words within 10 milliseconds of the neural activity that signalled his intention to speak. The system, described today in Nature, marks a significant improvement over earlier BCI models, which streamed speech within three seconds or produced it only after users finished miming an entire sentence.

'This is the holy grail in speech BCIs,' says Christian Herff, a computational neuroscientist at Maastricht University, the Netherlands, who was not involved in the study. 'This is now real, spontaneous, continuous speech.'

Real-time decoder

The study participant, a 45-year-old man, lost his ability to speak clearly after developing amyotrophic lateral sclerosis, a form of motor neuron disease, which damages the nerves that control muscle movements, including those needed for speech. Although he could still make sounds and mouth words, his speech was slow and unclear. Five years after his symptoms began, the participant underwent surgery to insert 256 silicon electrodes, each 1.5 mm long, in a brain region that controls movement. Study co-author Maitreyee Wairagkar, a neuroscientist at the University of California, Davis, and her colleagues trained deep-learning algorithms to capture the signals in his brain every 10 milliseconds. Their system decodes, in real time, the sounds the man attempts to produce rather than his intended words or the constituent phonemes, the subunits of speech that form spoken words.

'We don't always use words to communicate what we want. We have interjections. We have other expressive vocalizations that are not in the vocabulary,' explains Wairagkar. 'In order to do that, we have adopted this approach, which is completely unrestricted.'

The team also personalized the synthetic voice to sound like the man's own, by training AI algorithms on recordings of interviews he had done before the onset of his disease. The team asked the participant to attempt to make interjections such as 'aah', 'ooh' and 'hmm' and say made-up words. The BCI successfully produced these sounds, showing that it could generate speech without needing a fixed vocabulary.

Freedom of speech

Using the device, the participant spelt out words, responded to open-ended questions and said whatever he wanted, using some words that were not part of the decoder's training data. He told the researchers that listening to the synthetic voice produce his speech made him 'feel happy' and that it felt like his 'real voice'.

In other experiments, the BCI identified whether the participant was attempting to say a sentence as a question or as a statement. The system could also determine when he stressed different words in the same sentence and adjust the tone of his synthetic voice accordingly.

'We are bringing in all these different elements of human speech which are really important,' says Wairagkar. Previous BCIs could produce only flat, monotone speech. 'This is a bit of a paradigm shift in the sense that it can really lead to a real-life tool,' says Silvia Marchesotti, a neuroengineer at the University of Geneva in Switzerland. The system's features 'would be crucial for adoption for daily use for the patients in the future.'
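One way to picture the prosody layer described above: alongside the word decoder, separate estimates of "is this a question?" and "which word is stressed?" drive the pitch of the synthetic voice. The sketch below uses invented names and thresholds and is not the authors' method.

```python
# Toy sketch (invented names and thresholds, not the published method) of how
# separate prosody estimates could steer the synthetic voice: one decoder output
# scores whether the sentence is a question, another scores which word is
# stressed, and the pitch contour of the synthesized sentence is adjusted.
from dataclasses import dataclass


@dataclass
class ProsodyEstimate:
    is_question: bool
    stressed_word_index: int


def estimate_prosody(question_score, stress_scores):
    """Turn per-sentence decoder scores into discrete prosody decisions."""
    return ProsodyEstimate(
        is_question=question_score > 0.5,
        stressed_word_index=max(range(len(stress_scores)), key=stress_scores.__getitem__),
    )


def pitch_contour(words, prosody, base_hz=110.0):
    """Assign a target pitch per word: raise the stressed word, rise at the end of questions."""
    contour = [base_hz] * len(words)
    contour[prosody.stressed_word_index] *= 1.2  # emphasize the chosen word
    if prosody.is_question:
        contour[-1] *= 1.3  # question-final rise
    return contour


print(pitch_contour(["you", "did", "that"], estimate_prosody(0.8, [0.2, 0.1, 0.7])))
```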
