Brain implant helps woman with paralysis speak with her own voice again

Yahoo | 02-04-2025

Researchers have developed a new method for intercepting neural signals from the brain of a person with paralysis and translating them into audible speech—all in near real-time. The result is a brain-computer interface (BCI) system similar to an advanced version of Google Translate, but instead of converting one language to another, it deciphers neural data and transforms it into spoken sentences.
Recent advances in machine learning have enabled researchers to train AI voice synthesizers on recordings of an individual's own voice, making the generated speech more natural and personalized. Patients with paralysis have already used BCIs to restore motor function, controlling computer mice and prosthetic limbs with their thoughts. This particular system addresses a more specific subset of patients who have also lost the capacity to speak. In testing, the paralyzed patient silently read full sentences of text, which were then converted into speech by the AI voice with a delay of less than 80 milliseconds.
Results of the study were published this week in the journal Nature Neuroscience by a team of researchers from the University of California, Berkeley and the University of California, San Francisco.
'Our streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses,' UC Berkeley professor and co-principal investigator of the study Gopala Anumanchipalli said in a statement. 'Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming. The result is more naturalistic, fluent speech synthesis.'
Researchers worked with a paralyzed woman named Ann, who lost her ability to speak following an unspecified accident. To collect neural data, the team implanted a 253-channel high-density electrocorticography (ECoG) array over the area of her brain responsible for speech motor control. They recorded her brain activity as she silently mouthed or mimed phrases displayed on a screen. Ann was ultimately presented with hundreds of sentences, all based on a limited vocabulary of 1,024 words. This initial data collection phase allowed researchers to begin decoding her thoughts.
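To make that data-collection step concrete, here is a minimal, hypothetical sketch of how prompted-sentence trials might be turned into supervised training pairs. The sampling rate, window and hop sizes, and function names are illustrative assumptions, not details from the study:

```python
import numpy as np

# Hypothetical illustration, not the UCSF/Berkeley team's code: slice a
# (channels, time) ECoG recording into overlapping windows, each labeled
# with the sentence the participant was silently miming.

N_CHANNELS = 253              # matches the array described in the article
SAMPLE_RATE = 200             # Hz, assumed for illustration
WINDOW_S, HOP_S = 1.0, 0.08   # 80 ms hop, echoing the reported latency

def make_training_pairs(ecog, sentence):
    """Return (features, text) pairs for one silently mimed trial."""
    win = int(WINDOW_S * SAMPLE_RATE)
    hop = int(HOP_S * SAMPLE_RATE)
    pairs = []
    for start in range(0, ecog.shape[1] - win + 1, hop):
        pairs.append((ecog[:, start:start + win], sentence))
    return pairs

# Example: one 4-second trial of a prompted sentence.
trial = np.random.randn(N_CHANNELS, 4 * SAMPLE_RATE)
pairs = make_training_pairs(trial, "you love me")
print(len(pairs), pairs[0][0].shape)  # 38 windows, each of shape (253, 200)
```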
'We are essentially intercepting signals where the thought is translated into articulation and in the middle of that motor control,' study co-author Cheol Jun Cho said in a statement. 'So what we're decoding is after a thought has happened, after we've decided what to say, after we've decided what words to use and how to move our vocal-tract muscles.'
The decoded neural data was then processed through a text-to-speech AI model trained on real voice recordings of Ann from before her injury. While various tools have long existed to help individuals with paralysis communicate, they are often too slow for natural, back-and-forth conversation. The late theoretical physicist Stephen Hawking, for example, used a computer and voice synthesizer to speak, but the system's limited interface allowed him to produce only 10 to 15 words per minute. More advanced BCI models have significantly improved communication speed, but they have still struggled with input lag. A previous version of this system, developed by the same research team, had an average delay of eight seconds between decoding neural data and producing speech.
Related: This cap is a big step towards universal, noninvasive brain-computer interfaces
This latest breakthrough reduced input delay to less than a second—an improvement researchers attribute to rapid advancements in machine learning across the tech industry in recent years. Unlike previous models, which waited for Ann to complete a full thought before translating it, this system 'continuously decodes' speech while simultaneously vocalizing it. For Ann, this means she can now hear herself speak a sentence in her own voice within a second of thinking it.
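As a rough illustration of the difference between sentence-level and continuous decoding, the sketch below simulates a streaming loop that vocalizes each decoded chunk while later chunks are still arriving. The decode_chunk() and synthesize() functions are stand-in placeholders, not the team's models:

```python
import queue
import threading
import time

# Hypothetical sketch of continuous ("streaming") decoding, not the
# published system: decode_chunk() and synthesize() are placeholders for
# the neural decoder and the voice model trained on Ann's old recordings.

CHUNK_S = 0.08  # decode a new chunk every 80 ms, per the reported latency

def decode_chunk(window):
    """Placeholder decoder: map one window of neural data to a word."""
    return window["mimed_word"]

def synthesize(word):
    """Placeholder synthesizer: 'speak' one decoded word immediately."""
    print(f"[voice] {word}", flush=True)

def streaming_loop(neural_stream):
    """Vocalize each decoded chunk while later chunks are still arriving."""
    out = queue.Queue()

    def speaker():
        while (word := out.get()) is not None:
            synthesize(word)

    t = threading.Thread(target=speaker)
    t.start()
    for window in neural_stream:        # chunks arrive one by one
        out.put(decode_chunk(window))   # speak without waiting for sentence end
        time.sleep(CHUNK_S)             # simulate real-time arrival
    out.put(None)                       # signal end of stream
    t.join()

# Simulated stream for the phrase shown in the clinical-trial video.
streaming_loop({"mimed_word": w} for w in "you love me".split())
```

The key design point this illustrates is that decoding and vocalization run concurrently, so output begins before the sentence is finished, unlike earlier models that buffered a full thought first.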
A video demonstration of the clinical trial shows Ann looking at the phrase 'you love me' on a screen in front of her. Moments later, the AI model—trained on her own voice—speaks the words aloud. Seconds after that, she successfully repeats the phrases 'so did you do it' and 'where did you get this?' Ann reportedly appreciated that the synthesized speech sounded like her own voice.
'Hearing her own voice in near-real time increased her sense of embodiment,' Anumanchipalli said.
This advancement comes as BCIs gain public recognition. Neuralink, founded by Elon Musk in 2016, has already implanted its BCI device in three human patients. The first, Noland Arbaugh, a 30-year-old man with quadriplegia, says the device has allowed him to control a computer mouse and play video games using only his thoughts. Since then, Neuralink has upgraded the system with more electrodes, which the company says should provide greater bandwidth and longer battery life. Neuralink also recently received a special designation from the Food and Drug Administration (FDA) to explore a similar device aimed at restoring eyesight. Meanwhile, Synchron, another leading BCI company, recently demonstrated that a patient living with ALS could operate an Apple Vision Pro mixed-reality headset using only neural inputs.
'Using this type of enhanced reality is so impactful and I can imagine it would be for others in my position or others who have lost the ability to engage in their day-to-day life,' a Synchron patient with ALS named Mark said in a statement. 'It can transport you to places you never thought you'd see or experience again.'
Though the field is mostly dominated by US startups, other countries are catching up. Just this week, a Chinese BCI company called NeuCyber NeuroTech announced it had inserted its own semi-invasive BCI chip into three patients over the past month. The company, according to Reuters, plans to implant its 'Beinao No.1' device into 10 more patients by the end of the year.
All of that said, it will still take time before BCIs can meaningfully restore conversational dialogue in day-to-day life for those who no longer have the capacity for speech. The California researchers say their next steps involve improving their interception methods and AI models to better reflect changes in vocal tone and pitch, two elements crucial for communicating emotion. They are also working to push their already low latency down even further.
'That's ongoing work, to try to see how well we can actually decode these paralinguistic features from brain activity,' UC Berkeley PhD student and paper co-author Kaylo Littlejohn said.


Related Articles

From spatial widgets to realistic Personas: All the visionOS updates Apple announced at WWDC

TechCrunch | 28 minutes ago

Apple's updates to visionOS 26, the operating system powering its mixed reality headset, build on last year's Apple Vision Pro spatial computer that blends digital content with the physical world. At WWDC, Apple announced a range of updates for both consumer and enterprise customers, from new spatial widgets and content to more realistic Personas and more.

Personalized spatial widgets

Apple's widgets offer personalized and useful information at a glance. With visionOS 26, they become spatial, integrating into your space. All widgets, including Calendar, are customizable, with a variety of options for frame width, color, and depth, and you can place them wherever you want. New widgets include a clock you can decorate, weather that adapts to the conditions outside near you, music for quick access to tunes, and photos that can transform into a panorama or a 'window to another space.'

Adding depth to 2D images

An update to the visionOS Photos app uses a new AI algorithm that leverages computational depth to create multiple perspectives from your 2D photos, bringing images to life. Apple says it will feel like you can 'lean right into them and look around.' Spatial browsing in Safari can also make web browsing more immersive: with certain supported articles, it can hide distractions and reveal inline photos that 'come alive as you scroll.' Developers can also add spatial browsing to their own apps.

Talking heads

Apple released Personas, AI avatars that represent you on video calls, on the Vision Pro as a beta feature last year. With visionOS 26, Apple says Personas 'more realistically represent you.' The new Personas take advantage of 'volumetric rendering and machine learning technology' to improve everything from how you look in full side profile to delivering more accurate-looking hair, eyelashes, and complexion. Personas are created entirely on-device in a 'matter of seconds,' Apple says.

Immerse together

visionOS 26 lets you and another headset-wearing friend watch a movie or play a spatial game together. The capability is also being marketed to enterprise clients for collaboration: 3D design software company Dassault Systèmes, for example, is leveraging it in its 3DLive app to visualize 3D designs in person and with remote colleagues. Apple also announced Logitech Muse, a spatial accessory built for the Vision Pro that enables precise input for drawing and collaborating in 3D, along with new ways to interact with collaboration apps like Spatial Analogue.

visionOS 26 also lets organizations easily share a common pool of devices among team members, securely saving your eye and hand data, vision prescription, and accessibility settings to your iPhone so you can quickly use a shared team device or a friend's Vision Pro as a guest. Apple said it would add more APIs so enterprises can create apps designed for visionOS, and a new 'for your eyes only' mode ensures that only those who have been given access can see confidential materials.

More Apple Intelligence features are coming to the Apple Vision Pro as well. visionOS 26 adds support for new languages including French, German, Italian, Japanese, Korean, and Spanish, along with English in Australia, Canada, India, Singapore, and the U.K. Users can now 'look to scroll' through apps and websites using just their eyes, unlock their iPhone while wearing the headset, and take calls relayed from their iPhone directly on the Apple Vision Pro.

iOS 26 Developer Beta Drops Today. Here's How to Download the Unreleased iPhone OS Today

CNET | 30 minutes ago

Apple just kicked off WWDC 25, and the very first developer beta of iOS 26 is almost here. This early release gives developers (and the most impatient enthusiasts) their first hands-on look at the major redesign and AI upgrades Apple previewed on stage at Apple Park this morning.

The biggest new feature is, of course, Liquid Glass, Apple's new cohesive design language across all of its devices. Live Translation is integrated into Messages and FaceTime to help you communicate across languages. Call Screening, an evolution of Live Voicemail, picks up unknown numbers on your behalf, asks why they're calling and shows a live transcript so you can decide whether to jump in or let it roll to voicemail. Its sidekick, Hold Assist, listens to the hold music for you and pings you the instant a real person comes on the line. And there's much more.

By jumping straight from iOS 18 to iOS 26, Apple is syncing its mobile OS naming with the rest of its platforms and signaling a generational leap rather than the usual annual tune-up. It's the biggest OS update since iOS 7, with enhancements to pretty much every part of the iPhone. If you want to try any of these features, you can do so starting today. Here's how to download iOS 26 when it goes live.

A quick warning before you download iOS 26

Yes, the iOS 26 developer beta is free, but remember that it's meant for developers, not day-to-day use. Early builds often carry bugs that can crash apps, drain your battery, overheat your phone and generally make your device feel sluggish. Unless you need to test software against Apple's next release, it's smarter to wait for the public beta, due next month, on your main iPhone.

Which iPhone models support iOS 26?

As long as you own an iPhone 11 or newer, you can download iOS 26. That means the iPhone XR/XS generation is out, while every A13 Bionic handset onward, including the forthcoming iPhone 17 models, is in. Apple Intelligence works only on compatible phones: any iPhone 16 model and the iPhone 15 Pro and Pro Max.

What you need to know before you download the iOS 26 developer beta

Before you get too excited and start installing the iOS 26 developer beta, there are a few things you need to do:

- Check your hardware first. The beta runs only on iPhone 11 and newer. If you want the headline Apple Intelligence features, you'll need an iPhone 15 Pro or Pro Max, any iPhone 16 model, or one of the forthcoming iPhone 17 models.
- Update to the latest public release. Make sure your phone is already on the most current stable build (right now that's iOS 18.5). This helps prevent issues that can arise from updating outdated software.
- Have a good Wi-Fi connection. You want the install to go flawlessly, which means a solid connection.
- Have enough space on your phone. You need at least several gigabytes free to download the iOS 26 developer beta.
- Archive a backup. A normal iCloud backup can be overwritten; you need one that can't. On a Mac, connect your iPhone, open Finder, select your device, hit Back Up Now, then go to Manage Backups, right-click the new backup and click Archive. On Windows, it's similar: open iTunes, click Back Up Now, locate the backup in Preferences and archive it.
- Know the escape hatch. If the beta breaks your phone for any reason (this has never happened to me), you'll have to put the phone in Recovery Mode, restore iOS 18.5, and then pull that archived backup back onto the device.

Once you've followed these steps, you're pretty much ready to go.

How to install the iOS 26 developer beta

Apple now lets anyone install developer betas without paying the $99 annual fee. Visit the Apple Developer site on the device you plan to update, open the ☰ menu, choose Account, and sign in with that device's Apple ID. Agree to the terms, tick the required boxes, and tap Submit. After you sign up, you should see the option to download the iOS 26 developer beta in your settings, and you can install it as an over-the-air update:

1. On your device, open Settings > General > Software Update > Beta Updates and choose the iOS 26 Developer Beta.
2. Go back and tap Download and Install under the new "iOS 26 Developer Beta" option that appears.
3. Punch in your passcode, accept the terms and conditions, and let the installer do its thing.

On a decent Wi-Fi connection, the download-and-reboot routine should take roughly 10 to 15 minutes. When your iPhone powers back on, you'll be running the iOS 26 developer beta.

Apple announces new AI features at WWDC 25

Yahoo | 36 minutes ago

Apple announced new artificial intelligence features at the Worldwide Developers Conference keynote on June 9. The company launched testing versions of live translation, visual search and a workout assistant. Apple Intelligence will also be a part of the Reminders, Messages and Apple Wallet apps.

"Last year, we took the first steps on a journey to bring users intelligence that's helpful, relevant, easy to use, and right where users need it, all while protecting their privacy," Craig Federighi, Apple's senior vice president of Software Engineering, said. "Now, the models that power Apple Intelligence are becoming more capable and efficient, and we're integrating features in even more places across each of our operating systems."

The company also said it will open its large language model to third-party developers. "App developers will be able to build on Apple Intelligence to bring users new experiences that are intelligent, available when they're offline, and that protect their privacy, using AI inference that is free of cost," the company said in a news release accompanying the announcement. Federighi also teased that updates to Siri would be coming later this year.

Related: Apple WWDC 2025: Live updates on keynote, AI and iOS news

The opening of the Foundation Models appears to be an attempt to spur the creation of new AI features and apps to help Apple catch up in the artificial intelligence market. "We think this will ignite a whole new wave of intelligent experiences in the apps users rely on every day," Federighi said. 9to5Mac noted before the conference that software announcements made last year rolled out slowly, and AI upgrades to Siri and Swift Assist never materialized.

"They had the big vision announcement last year. Not a lot of progress this year as they tried to build it. It is hard to build," Alex Kantrowitz, founder of the Big Technology podcast and newsletter, said on CNBC on Monday.

Apple is expected to partner with Alibaba on AI in China, according to Reuters, but that deal has been delayed because of the tariff showdown between President Donald Trump and China. There is 'growing confidence inside and outside China this AI launch will ultimately happen between Apple and Alibaba,' Wedbush analyst Dan Ives said in a note to investors on Monday, June 9.

This article originally appeared on USA TODAY: Apple Intelligence features announced for Messages, more at WWDC 25
