
Brain-computer interfaces: Unlocking the potential of man and machine
Hit play on the player below to hear the podcast and follow along with the transcript beneath it. This transcript was automatically generated, and then edited for clarity in its current form. There may be some differences between the audio and the text.
Dana Taylor:
Hello, and welcome to The Excerpt. I'm Dana Taylor. Today is Sunday, July 6th, 2025. Planting a device into the human brain is the fodder of science fiction plots; the blockbuster franchise The Matrix comes to mind. Here in the real world, though, it's actually happening. While tiny computers have been implanted into fewer than 100 brains so far, their impact has been life-changing. Brain-computer interfaces, or BCIs, have done everything from allowing increased mobility to helping with speech. Could these devices become more mainstream and help the disabled do even more in the future? To dive into all these questions and more, I'm joined by Dr. Iahn Cajigas, a neurosurgeon at the University of Pennsylvania who has studied brain-computer interfaces and worked with patients using them for over three years. Thanks for joining me, Dr. Cajigas.
Dr. Iahn Cajigas:
Thank you for having me, Dana.
Dana Taylor:
For someone who's not familiar with these BCIs, can you catch us up on the landscape here? What kinds of things are they helping patients do right now?
Dr. Iahn Cajigas:
It's a very exciting field. I think the best way to think about brain-computer interface technology is really to think about what the brain does: understand what the inputs and outputs of the brain are, and then understand what can be damaged by injuries of the nervous system, because that's exactly what these devices aim to restore. As we all know, there are five main senses, right? Sight, smell, hearing, taste, and touch, and those are the inputs to the brain. And then we have the outputs: what can the brain do to interact with the world? That's really movement, muscles of the mouth, muscles of the hand to write, muscles of the leg to move. What brain-computer interfaces aim to do, therefore, is either help get signals into the brain to restore some of the senses that have been lost, or get signals out of the brain to re-enable patients to interact with the world.
Dana Taylor:
And without getting too technical, how do they work?
Dr. Iahn Cajigas:
The main language of the brain is really the electrical activity of individual neurons. By understanding what the neurons are trying to do and how that relates to the actions the individual is trying to perform, we're able to make a translation from the activity in the brain to the output. For example, if a patient is trying to reach with their arm to grab something, we can listen to the neurons in the motor cortex and how they're trying to recruit the muscles involved in that reach, and then tell a computer or a robotic arm to translate that activity into the movement of a cursor or a robotic arm that matches what the person is intending to do with their limb. It's really by creating a map that relates the electrical activity of the brain to the intended output that we're then able to restore that function.
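For readers who want a concrete picture of the kind of "map" described here, below is a minimal sketch in Python of one common approach in the research literature: a linear (ridge-regression) decoder that translates binned neural firing rates into an intended cursor velocity. Every number, channel count, and variable name in it is synthetic and hypothetical; this is an illustration of the general idea, not the decoder used by Dr. Cajigas's team.

```python
# Toy linear decoder: map binned neural firing rates to a 2-D cursor
# velocity. All data here are synthetic stand-ins for illustration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_samples, n_neurons = 5000, 96          # hypothetical 96-channel recording
true_map = rng.normal(size=(n_neurons, 2))

# Simulated firing rates and the cursor velocities they "encode".
firing_rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
cursor_velocity = firing_rates @ true_map + rng.normal(scale=2.0, size=(n_samples, 2))

# Fit the map from neural activity to intended movement.
decoder = Ridge(alpha=1.0).fit(firing_rates, cursor_velocity)

# At run time, each new window of neural activity becomes a velocity
# command for a cursor or robotic arm.
new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
vx, vy = decoder.predict(new_rates)[0]
print(f"decoded velocity command: ({vx:.2f}, {vy:.2f})")
```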
Dana Taylor:
I know there are several big-name brands in the space right now, including Elon Musk's Neuralink. Apple recently announced that it's also entering the space with tech that could one day allow a patient to control their iPhone. How do the devices functionally differ, or do they?
Dr. Iahn Cajigas:
The key is that there are different levels of granularity with which you can listen to the brain. You can listen to individual neurons, pairs of neurons, or a small number of neurons. That's typically done through invasive devices such as the Neuralink device, which has multiple threads implanted directly into the brain substance to pick up this electrical activity. So you can listen from very close, as in that case, or you can listen from further away and in less invasive ways. There are devices from other companies that sit at the brain surface but still penetrate the brain; they listen by interfacing directly with the brain matter and picking up neurons at the surface rather than deeper in the core of the brain. There are other companies whose implants listen on the surface of the brain without penetrating it at all. These are so-called electrocorticographic BCIs, also referred to as non-penetrating BCIs, as opposed to the penetrating ones.
And then there are others that listen from much further away, from inside a blood vessel, and try to pick up the activity in the neighborhood around, say, the motor cortex. So there are different ways to get at the electrical activity, and we don't know what the best way is or what's going to pan out in the long term. What you see is companies taking different strategies to listen to the brain at different levels of granularity to see which will be more successful. It's a really exciting time, because all of these companies are gearing up to implant these technologies in humans in the next few years, with some already having tens of patients implanted and others preparing for a first implant within the next year or two.
Dana Taylor:
You're working with patients who are using BCIs currently at the University of Pennsylvania. What kinds of things are your team studying with this group, and what's been the most surprising discovery?
Dr. Iahn Cajigas:
It's been very challenging to study BCIs over the last 25 years, primarily because it requires patients who have suffered a neurologic injury to be recruited into a study that then involves a permanent, invasive implant in the brain. And for the last two decades, we didn't really have technologies that could leave the laboratory. So with this small number of patients, some 60 to 70 in the last 25 years, what we've been able to do has been limited. We've learned a lot, but the pace of understanding the brain has been slow when you're learning from tens of patients. My lab's approach here at Penn has been this: as neurosurgeons, we have really unique access to the brain. I operate on the brain often for other reasons, such as Parkinson's disease or tremor, and I typically place electrodes into the brain to help patients with these disorders.
So the approach of my lab has been to give our patients the opportunity to work with us, as part of research, for about 20 to 30 minutes during their surgery. Right before we do the critical portions of their surgery, we place a temporary electrode over the motor cortex, where the hand area of the brain is, and we ask them to play games with us like rock, paper, scissors, or to do reaching tasks. Then we synchronize the activity on the brain surface with the activity of their movement. We actually place special sensors on their hand and body, the same things that are used in Hollywood for motion capture. And we've developed ways to synchronize the activity of the behavior with the electrical activity on the brain surface so that we can see what this relationship looks like in able-bodied individuals, not patients who are paralyzed.
What's really been interesting about doing this in healthy individuals is, one, how excited people are to contribute their time to potentially moving the field forward for others who have a neurologic injury. We've also had the opportunity to collaborate with one of the companies entering the BCI space that has a high-resolution noninvasive electrode. To look at the brain through such a high-resolution window as somebody is moving, and to see these incredible patterns of electrical activity, spirals, traveling waves, things we have never observed before at this level of detail, is just phenomenal. We are still in the process of understanding how these patterns relate to the behavior being performed and of developing methods to decode what people are intending to do with their arm and hand, like showing us a gesture or reaching to an object.
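As an illustration of what "synchronizing" motion-capture data with brain-surface recordings can involve, here is a minimal Python sketch that resamples a slower kinematic trace onto the timestamps of a faster neural recording. The sampling rates, signals, and variable names are invented for the example under simple assumptions (a shared clock, linear interpolation) and do not describe the lab's actual pipeline.

```python
# Toy alignment of two recordings sampled at different rates onto a
# shared clock, so each neural sample can be paired with the hand
# position at that moment. Rates and signals are made up for illustration.
import numpy as np

fs_neural, fs_mocap, duration = 1000.0, 120.0, 10.0   # Hz, Hz, seconds

t_neural = np.arange(0, duration, 1 / fs_neural)       # 10,000 timestamps
t_mocap = np.arange(0, duration, 1 / fs_mocap)         # 1,200 timestamps

neural = np.random.randn(t_neural.size)                # stand-in surface recording
hand_x = np.sin(2 * np.pi * 0.5 * t_mocap)             # stand-in hand trajectory

# Resample the slower motion-capture trace onto the neural timestamps
# by linear interpolation, yielding one (neural sample, hand position)
# pair per millisecond that downstream decoding could use.
hand_x_on_neural_clock = np.interp(t_neural, t_mocap, hand_x)

print(neural.shape, hand_x_on_neural_clock.shape)       # (10000,) (10000,)
```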
Dana Taylor:
What does access to this technology look like today? Is anyone in the movement disorder group eligible?
Dr. Iahn Cajigas:
For our surgeries, any patient undergoing surgery for their movement disorder can participate in this research, and again, it's only 20 to 30 minutes during their surgery. There are other centers using this technology in other contexts, such as temporarily implanting electrodes during epilepsy surgery, in somebody who may have had trauma, or intraoperatively to map parts of the brain that may be critical during a tumor resection. So there are many institutions using this less invasive approach to understand the brain through temporary access, but in our case, it's open to anyone undergoing surgery for their movement disorder.
Dana Taylor:
What are the biggest hurdles or issues that medical professionals are facing with the technology?
Dr. Iahn Cajigas:
One is patient expectations. You made a reference to the movie The Matrix, and there are expectations set by the lay press, the movies, and science fiction. What we are learning with these tools today about how the brain represents information related to movement, whether to help patients restore their ability to speak or to move their hands and give them some independence, autonomy, or communication, is just the beginning of that understanding. As this technology grows, we may be able to do more, but I think understanding where we are today is very important. Another limitation is going to be access. These are medical devices that will have a cost associated with them. As clinicians, we all aim to get patients the things that can help them, but there is going to be a problem of equitable access, and we need to be careful with that as a society and make sure we make this available to everybody who can benefit from it once the technology is mature enough.
Dana Taylor:
Are there any downsides for patients?
Dr. Iahn Cajigas:
Well, again, a lot of the patients who are candidates for this technology are quite debilitated. They might be in a locked-in state, where they are unable to speak or move, or they could be paralyzed from a spinal cord injury or a stroke. So they've already had a neurologic event that has affected their life. These are surgical tools, and all the implants I'm referring to are permanent. These are surgeries with small but inherent risks that need to be balanced on a patient-by-patient basis, things like infection, bleeding, or additional neurologic injury if something were to happen during the surgery. It really has to be a dialogue between the physician and the patient to find the best option for their condition to improve their quality of life.
Dana Taylor:
Some writers and journalists have written about this technology with an eye toward a future where even able-bodied people use BCIs to simply do more, think faster, access their subconscious, the stuff of science fiction, as you said. What are your thoughts on this as a medical professional?
Dr. Iahn Cajigas:
Yeah, as somebody who loves science fiction, these are the sorts of things that got me excited as a child about technology and the brain. But as a medical professional today, I think we have to understand the risks that can be involved with these procedures. BCI encompasses a large range of technologies. I can say that if we're talking about invasive BCIs, the ones that go into the nervous system directly, those carry inherent risks that may not be worthwhile for an able-bodied person, or that put them at risk of damaging something that isn't causing problems at the moment.
If you were young and wanted a brain implant, and something went wrong with the surgery, or there was a stroke at the time of the implant, that could be very debilitating and completely alter the trajectory of that young person's life. Now, if the safety profile of these devices improves over time to the point where that risk becomes minuscule, then that risk-benefit equation might change. But at the current iteration of this technology, I think we're very far away from a routine, simple intervention with no risk, which is really when things would take off for everybody wanting access to it.
Dana Taylor:
What's on the horizon for you and your team?
Dr. Iahn Cajigas:
The technology is so exciting. My main goal with our research is to re-enable folks who have paralysis from stroke or spinal cord injury to move their limbs or a proxy of their limbs, maybe a robotic arm, or exoskeletons that allow them to walk. We've been working on decoding gestures of the hand, fine finger movements, to perhaps allow folks to control a prosthetic limb if they're an amputee. I think that's where my research team will spend the majority of its time over the next five years: how do we take the signals from the brain related to movement, translate them into the actions that folks want to perform, and give them this ability to interact with their world? At the moment, we're focusing on upper extremity function.
Dana Taylor:
Dr. Cajigas, thank you so much for being on The Excerpt.
Dr. Iahn Cajigas:
My pleasure. Thank you for having me, Dana.
Dana Taylor:
Thanks to our senior producers, Shannon Rae Green and Kaely Monahan, for their production assistance. Our executive producer is Laura Beatty. Let us know what you think of this episode by sending a note to podcasts@usatoday.com. Thanks for listening. I'm Dana Taylor. We'll be back tomorrow morning with another episode of USA TODAY's The Excerpt.