Latest news with #EdwardChang

Associated Press
11-08-2025
- Business
- Associated Press
Two Prominent Professors to Keynote 25th Annual Neurotech Leaders Forum
SAN FRANCISCO, Calif., Aug. 11, 2025 (SEND2PRESS NEWSWIRE) — Neurotech Reports, the publisher of the Neurotech Business Report newsletter, announced that two prominent figures in the field of neurotechnology will keynote the 2025 Neurotech Leaders Forum, which takes place November 5-6 in San Francisco. Edward Chang, chair of the department of neurological surgery at UC San Francisco, will discuss future applications of implanted brain sensing and stimulation devices. J. Thomas Mortimer, emeritus professor of biomedical engineering at Case Western Reserve University, will discuss his pioneering work developing implanted neurostimulation systems for chronic pain, diaphragm pacing, and bladder control.

The 25th annual event will also feature sessions devoted to the business of neuromodulation, the investment outlook for neurotechnology, the convergence of AI and BCI technologies, new realities in regulatory and reimbursement policies, and several other topics. Neurotech Reports editors James Cavuoto, Jeremy Koff, JoJo Platt, and Victor Pikov will moderate panel discussions with key industry leaders.

Several panelists and presenters were recently added to the agenda for the two-day conference. New speakers include Ryan Field, CEO of Kernel; Andreas Forland, CEO of Cognixion; and Oliver Armitage, newly appointed vice president of Axoft. Venture capital professionals participating in the meeting include Mir Imran, managing partner at InCube Ventures, and Lu Zhang, founder and managing partner of Fusion Fund. Among the new presenters this year are Emile Radyte, CEO of Samphire Neuroscience in the U.K., which has developed a wearable neuromodulation device for treatment of PMS and menstrual pain, and Tetiana Aleksandrova, CEO of Subsense, Inc., which is developing a nonsurgical nanoparticle-based BCI system.

The Platinum Sponsor at this year's event is Cirtec Medical. Micro Systems Technology is the Gold Sponsor. Silver sponsors include Velentium and Valtronic.

'As we celebrate our 25th year covering the neurotechnology industry, it's gratifying to have Prof. Mortimer, one of the founders of the field, and Prof. Chang, one of the most promising researchers and entrepreneurs, talk about the past and future of our industry,' said James Cavuoto, editor and publisher of Neurotech Reports.

Neurotech Reports has extended the early-bird registration deadline until September 12, 2025. It also still has openings for a small number of startup and early-stage firms that would like to present during one of two Entrepreneur Panels at the conference. To apply, or for more information, contact Neurotech Reports at 415 546 1259.

Image caption: Prof. Edward Chang, M.D., of UC San Francisco (L) and J. Thomas Mortimer, Ph.D., emeritus of Case Western Reserve University (R), will keynote the 2025 Neurotech Leaders Forum in San Francisco.

NEWS SOURCE: Neurotech Reports


Harvard Business Review
24-06-2025
- Business
- Harvard Business Review
Ensuring Boston Ballet Stays Relevant
Ming Min Hui, executive director of Boston Ballet, is unique in her field. As a young, Asian American woman with a Harvard Business School MBA and a background in finance, she has focused her tenure on ensuring the ballet company stays true to its art form while remaining relevant to its times. Hui had worked at Boston Ballet for eight years as chief of staff and chief financial officer before taking the helm. Now leading one of the foremost ballet companies in the United States, she confronts evolving demographics, shifting audience habits, and an increasingly challenging financial environment. Harvard Business School Assistant Professor Edward Chang and Hui join host Brian Kenny to discuss the case "Ming Min Hui at Boston Ballet." They explore how she balances the past, present, and future—and how these lessons translate from this nonprofit arts organization to any company, anywhere.


Irish Times
01-05-2025
- Health
- Irish Times
‘Great progress’ in the race to turn brainwaves into fluent speech
Neuroscientists are striving to give a voice to people unable to speak, in a fast-advancing quest to harness brainwaves to restore or enhance physical abilities. Researchers at universities across California, and companies such as New York-based Precision Neuroscience, are among those making headway towards generating naturalistic speech through a combination of brain implants and artificial intelligence.

Investment and attention have long been focused on implants that enable severely disabled people to operate computer keyboards, control robotic arms or regain some use of their own paralysed limbs. But some labs are making strides by concentrating on technology that converts thought patterns into speech.

'We are making great progress – and making brain-to-synthetic voice as fluent as chat between two speaking people is a major goal,' says Edward Chang, a neurosurgeon at the University of California, San Francisco. 'The AI algorithms we are using are getting faster, and we are learning with every new participant in our studies.'

Chang and colleagues, including from the University of California, Berkeley, last month published a paper in Nature Neuroscience detailing their work with a quadriplegic woman – with paralysed limbs and torso – who had not been able to speak for 18 years after suffering a stroke. She trained a deep-learning neural network by silently attempting to say sentences composed using 1,024 different words. The audio of her voice was created by streaming her neural data to a joint speech-synthesis and text-decoding model.

The technique reduced the lag between the patient's brain signals and the resultant audio from the eight seconds the group had achieved previously to one second – much closer to the 100-200 millisecond gap in normal speech. The system's median decoding speed was 47.5 words per minute, about a third the rate of normal conversation.

Many thousands of people a year could benefit from so-called voice prostheses. Their cognitive functions remain more or less intact, but they have suffered speech loss due to stroke, the neurodegenerative disorder ALS and other brain conditions. If successful, researchers hope the technique can be extended to help people who have difficulty vocalising because of conditions such as cerebral palsy or autism.

The potential of voice neuroprostheses is beginning to trigger interest among businesses. Precision Neuroscience claims to be capturing higher-resolution brain signals than academic researchers, since the electrodes of its implants are more densely packed. The company has worked with 38 patients and plans soon to collect data from more, providing a potential pathway to commercialisation.

Precision received regulatory clearance on April 17th to leave its sensors implanted for up to 30 days at a time. That would enable its scientists to train their system with what could within a year be the 'largest repository of high-resolution neural data that exists on planet Earth', says chief executive Michael Mager. The next step would be to 'miniaturise the components and put them in hermetically sealed packages that are biocompatible so they can be planted in the body forever', he says.
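The latency gain reported above comes from decoding in short increments rather than waiting for a whole utterance. As a rough, hypothetical sketch of that streaming idea only (the window length, frame rate, electrode count, and the stand-in decode_window function are assumptions, not details from the paper), it might look like this in Python:

```python
# Hypothetical sketch of streaming brain-to-voice decoding, as described
# in the article: rather than decoding a full sentence of neural data at
# once (roughly 8 s of lag in the group's earlier work), the decoder
# emits an audio chunk for each short window of frames as it arrives, so
# the lag drops to about one window. All sizes and the model stand-in
# are illustrative assumptions, not the authors' implementation.
import numpy as np

FRAME_RATE_HZ = 50    # assumed neural feature frame rate
WINDOW_FRAMES = 50    # decode every 1 s of frames (assumption)

def decode_window(frames: np.ndarray) -> np.ndarray:
    """Stand-in for the joint speech-synthesis and text-decoding model:
    maps one window of neural feature frames to a chunk of audio."""
    rng = np.random.default_rng(int(abs(frames).sum()) % 2**32)
    return rng.standard_normal(16_000)  # 1 s of placeholder 16 kHz audio

def stream_decode(neural_frames: np.ndarray):
    """Yield audio window by window, so playback can start after the
    first window instead of after the whole utterance."""
    for start in range(0, len(neural_frames), WINDOW_FRAMES):
        yield decode_window(neural_frames[start:start + WINDOW_FRAMES])

# An 8 s utterance: streaming makes the first audio available after ~1 s.
utterance = np.random.standard_normal((8 * FRAME_RATE_HZ, 253))
first_chunk = next(stream_decode(utterance))
print(f"first chunk ready after one window: {first_chunk.shape[0]} samples")
```

On this reading, the reported one-second lag corresponds roughly to the length of one decoding window plus the model's compute time.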
Elon Musk's Neuralink, the best-known brain-computer interface (BCI) company, has focused on enabling people with paralysis to control computers rather than giving them a synthetic voice.

An important obstacle to the development of brain-to-voice technology is the time patients take to learn how to use the system. A key unanswered question is how much the response patterns in the motor cortex – the part of the brain that controls voluntary actions, including speech – vary between people. If they remained very similar, machine-learning models trained on previous individuals could be used for new patients, says Nick Ramsey, a BCI researcher at University Medical Centre Utrecht. That would accelerate a process that today takes 'tens or hundreds of hours, generating enough data by showing a participant text and asking them to try to speak it'.

Ramsey says all brain-to-voice research focuses on the motor cortex, where neurons activate the muscles involved in speaking; there is no evidence that speech could be generated from other brain areas or by decoding inner thoughts. 'Even if you could, you wouldn't want people to hear your inner speech,' he adds. 'There are a lot of things I don't say out loud because they wouldn't be to my benefit or they might hurt people.'

The development of a synthetic voice as good as healthy speech could still be 'quite a ways away', says Sergey Stavisky, co-director of the neuroprosthetics lab at the University of California, Davis. His lab has demonstrated it can decode what someone is trying to say with about 98 per cent accuracy, he says. But the voice output isn't instantaneous, and it doesn't capture important speech qualities such as tone. It is unclear whether the recording hardware – electrodes – being used can enable the synthesis to match a healthy human voice, he adds.

Scientists need to develop a deeper understanding of how the brain encodes speech production, and better algorithms to translate neural activity into vocal outputs, says Stavisky. 'Ultimately a voice neuroprosthesis should provide the full expressive range of the human voice, so that for example they can precisely control their pitch and timing and do things like sing.' – Copyright The Financial Times Limited 2025
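A footnote on the transfer question Ramsey raises: a tiny numerical illustration (entirely hypothetical data and model, not from any of the labs mentioned) shows why cross-patient similarity would matter. A decoder warm-started from earlier participants can fit a new patient from far fewer calibration trials than one trained from scratch:

```python
# Hypothetical illustration of Ramsey's point: if motor-cortex response
# patterns were similar across people, a decoder pretrained on earlier
# participants (warm start) would need far less new-patient data than
# training from scratch. Data, model, and the 10% similarity gap are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
W_pooled = rng.standard_normal((64, 16))                 # mapping learned from past patients
W_new = W_pooled + 0.1 * rng.standard_normal((64, 16))   # a similar new patient

def make_data(W, n):
    X = rng.standard_normal((n, 64))                     # neural features
    return X, X @ W                                      # speech features

def fit(X, Y, W0=None, steps=300, lr=0.01):
    """Plain gradient descent on squared error, optionally warm-started."""
    W = np.zeros((64, 16)) if W0 is None else W0.copy()
    for _ in range(steps):
        W -= lr * X.T @ (X @ W - Y) / len(X)
    return W

X_small, Y_small = make_data(W_new, n=20)                # only 20 calibration trials
cold = fit(X_small, Y_small)                             # from scratch
warm = fit(X_small, Y_small, W0=W_pooled)                # from the pooled model

X_test, Y_test = make_data(W_new, n=500)
for name, W in [("from scratch", cold), ("warm start", warm)]:
    print(f"{name}: test MSE {np.mean((X_test @ W - Y_test) ** 2):.3f}")
```

The warm-started decoder generalises far better from the same 20 trials, which is the mechanism behind Ramsey's hoped-for speed-up from 'tens or hundreds of hours' of data collection.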


Fox News
16-04-2025
- Health
- Fox News
AI system restores speech for paralyzed patients using own voice
Researchers in California have achieved a significant breakthrough with an AI-powered system that restores natural speech to paralyzed individuals in real time, using their own voices, specifically demonstrated in a clinical trial participant who is severely paralyzed and cannot speak. This innovative technology, developed by teams at UC Berkeley and UC San Francisco, combines brain-computer interfaces (BCI) with advanced artificial intelligence to decode neural activity into audible speech. Compared to other recent attempts to create speech from brain signals, this new system is a major advancement. The system uses devices such as high-density electrode arrays that record neural activity directly from the brain's surface. It also works with microelectrodes that penetrate the brain's surface and non-invasive surface electromyography sensors placed on the face to measure muscle activity. These devices tap into the brain to measure neural activity, which the AI then learns to transform into the sounds of the patient's voice. The neuroprosthesis samples neural data from the brain's motor cortex, the area controlling speech production, and AI decodes that data into speech. According to study co-lead author Cheol Jun Cho, the neuroprosthesis intercepts signals where the thought is translated into articulation and, in the middle of that, motor control. One of the key challenges was mapping neural data to speech output when the patient had no residual vocalization. The researchers overcame this by using a pre-trained text-to-speech model and the patient's pre-injury voice to fill in the missing details. This technology has the potential to significantly improve the quality of life for people with paralysis and conditions like ALS. It allows them to communicate their needs, express complex thoughts and connect with loved ones more naturally. "It is exciting that the latest AI advances are greatly accelerating BCIs for practical real-world use in the near future," UCSF neurosurgeon Edward Chang said. The next steps include speeding up the AI's processing, making the output voice more expressive and exploring ways to incorporate tone, pitch and loudness variations into the synthesized speech. Researchers also aim to decode paralinguistic features from brain activity to reflect changes in tone, pitch and loudness. What's truly amazing about this AI is that it doesn't just translate brain signals into any kind of speech. It's aiming for natural speech, using the patient's own voice. It's like giving them their voice back, which is a game changer. It gives new hope for effective communication and renewed connections for many individuals. What role do you think government and regulatory bodies should play in overseeing the development and use of brain-computer interfaces? Let us know by writing us at For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Follow Kurt on his social channels: Answers to the most-asked CyberGuy questions: New from Kurt: Copyright 2025 All rights reserved.