
Brain Implant Lets Man with ALS Speak and Sing with His 'Real Voice'
A man with a severe speech disability is able to speak expressively and sing using a brain implant that translates his neural activity into words almost instantly. The device conveys changes of tone when he asks questions, emphasizes the words of his choice and allows him to hum a string of notes in three pitches.
The system — known as a brain–computer interface (BCI) — used artificial intelligence (AI) to decode the participant's electrical brain activity as he attempted to speak. The device is the first to reproduce not only a person's intended words but also features of natural speech such as tone, pitch and emphasis, which help to express meaning and emotion.
In the study, a synthetic voice that mimicked the participant's own spoke his words within 10 milliseconds of the neural activity that signalled his intention to speak. The system, described today in Nature, marks a significant improvement over earlier BCI models, which streamed speech after a delay of around three seconds or produced it only after users had finished miming an entire sentence.
'This is the holy grail in speech BCIs,' says Christian Herff, a computational neuroscientist at Maastricht University in the Netherlands, who was not involved in the study. 'This is now real, spontaneous, continuous speech.'
Real-time decoder
The study participant, a 45-year-old man, lost his ability to speak clearly after developing amyotrophic lateral sclerosis (ALS), a form of motor neuron disease that damages the nerves controlling muscle movements, including those needed for speech. Although he could still make sounds and mouth words, his speech was slow and unclear.
Five years after his symptoms began, the participant underwent surgery to insert 256 silicon electrodes, each 1.5 mm long, into a brain region that controls movement. Study co-author Maitreyee Wairagkar, a neuroscientist at the University of California, Davis, and her colleagues trained deep-learning algorithms to capture the signals in his brain every 10 milliseconds. Their system decodes, in real time, the sounds the man attempts to produce rather than his intended words or the constituent phonemes — the subunits of speech that form spoken words.
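To make that decoding approach concrete, the sketch below mimics the kind of streaming loop the researchers describe: neural features are read in 10-millisecond frames and mapped directly to sound features that a vocoder could voice. It is only an illustration — the electrode count and frame rate come from the article, but the random 'decoder', the feature sizes and the function names are hypothetical stand-ins for the trained deep-learning models used in the study.

```python
# Minimal sketch of a streaming "brain-to-voice" loop, loosely modelled on the
# pipeline described above. The linear "decoder" and all sizes except the
# electrode count and frame length are hypothetical placeholders.
import numpy as np

N_ELECTRODES = 256        # number of implanted electrodes (as reported)
FRAME_MS = 10             # one neural-feature frame every 10 ms (as reported)
N_ACOUSTIC_FEATURES = 32  # hypothetical size of the decoded sound-feature vector

rng = np.random.default_rng(0)
# Stand-in for a trained deep-learning decoder: here just a random linear map.
decoder_weights = rng.normal(scale=0.1, size=(N_ACOUSTIC_FEATURES, N_ELECTRODES))

def read_neural_frame():
    """Stand-in for reading one 10-ms window of neural features from the array."""
    return rng.normal(size=N_ELECTRODES)

def decode_frame(features):
    """Map one frame of neural features to sound features (sounds, not words)."""
    return decoder_weights @ features

def synthesize(acoustic):
    """Stand-in for the vocoder that turns sound features into audible speech."""
    print(f"frame energy: {float(np.sum(acoustic ** 2)):.2f}")

# Decode and synthesize frame by frame, so speech is produced while the
# participant is still attempting to speak rather than after the sentence ends.
for _ in range(50 // FRAME_MS):   # simulate 50 ms of streaming
    synthesize(decode_frame(read_neural_frame()))
```

Because nothing in such a loop assumes a fixed vocabulary, decoding at the level of sounds rather than words is what allows interjections and made-up words to come through, as described below.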
'We don't always use words to communicate what we want. We have interjections. We have other expressive vocalizations that are not in the vocabulary,' explains Wairagkar. 'In order to do that, we have adopted this approach, which is completely unrestricted.'
The team also personalized the synthetic voice to sound like the man's own, by training AI algorithms on recordings of interviews he had done before the onset of his disease.
The team asked the participant to attempt to make interjections such as 'aah', 'ooh' and 'hmm' and say made-up words. The BCI successfully produced these sounds, showing that it could generate speech without needing a fixed vocabulary.
Freedom of speech
Using the device, the participant spelt out words, responded to open-ended questions and said whatever he wanted, using some words that were not part of the decoder's training data. He told the researchers that listening to the synthetic voice produce his speech made him 'feel happy' and that it felt like his 'real voice'.
In other experiments, the BCI identified whether the participant was attempting to say a sentence as a question or as a statement. The system could also determine when he stressed different words in the same sentence and adjust the tone of his synthetic voice accordingly. 'We are bringing in all these different elements of human speech which are really important,' says Wairagkar. Previous BCIs could produce only flat, monotone speech.
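As a rough illustration of how such prosody cues could be used, the snippet below applies a decoded question flag and a stressed-word index to a flat pitch contour before synthesis. The contour values, the 15% emphasis boost and the final-quarter rise are invented for the example and are not taken from the study.

```python
# Hedged sketch: shaping a synthetic pitch (F0) contour from decoded prosody cues.
import numpy as np

def apply_prosody(f0, word_spans, is_question, stressed_word):
    """Return a modified copy of a per-frame pitch contour (Hz)."""
    out = f0.copy()
    if stressed_word is not None:
        start, end = word_spans[stressed_word]
        out[start:end] *= 1.15          # raise pitch ~15% on the emphasized word
    if is_question:
        n_rise = max(1, len(out) // 4)  # rising intonation over the final quarter
        out[-n_rise:] *= np.linspace(1.0, 1.3, n_rise)
    return out

base_f0 = np.full(100, 120.0)                     # flat 120 Hz contour, 100 frames
spans = [(0, 25), (25, 50), (50, 75), (75, 100)]  # four "words" of 25 frames each
question_f0 = apply_prosody(base_f0, spans, is_question=True, stressed_word=1)
statement_f0 = apply_prosody(base_f0, spans, is_question=False, stressed_word=None)
print(question_f0[25:30], question_f0[-3:])       # emphasized word, then final rise
```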
'This is a bit of a paradigm shift in the sense that it can really lead to a real-life tool,' says Silvia Marchesotti, a neuroengineer at the University of Geneva in Switzerland. The system's features 'would be crucial for adoption for daily use for the patients in the future.'