Sarepta refuses to pull gene therapy despite FDA order

Yahoo · 2 days ago
Sarepta Therapeutics is refusing to pull its gene therapy Elevidys (delandistrogene moxeparvovec), despite a request from the US Food and Drug Administration (FDA).
In a statement issued on 18 July, shortly after the FDA requested the withdrawal of Elevidys, Sarepta said it will continue to ship the therapy to the ambulant population, as studies show no signs of new or changed safety signals. The company added that it first heard of the potential request earlier that day through media reports.
A statement from the company reads: 'At Sarepta, patient safety and well-being are always our top priority. We are committed to upholding the highest safety standards for all of our therapies.
"This guides every decision we make, as evidenced by our conservative decision to pause shipments of Elevidys for non-ambulant patients while we work with the FDA to update the label and evaluate the use of an enhanced immunosuppression regimen to mitigate the risk of acute liver failure (ALF).'
Trial of LGMD therapy also put on hold
The FDA has also placed a hold on Sarepta's gene therapy SRP-9004 for limb-girdle muscular dystrophy (LGMD) after a patient in an early-stage study died due to ALF.
Speaking about the patient death in the Phase I trial of SRP-9004, the company's statement added: 'We recognise that the death of any patient is heartbreaking, including the recent death of a 51-year-old non-ambulant LGMD patient.
"We also want to clarify that this tragic event occurred in a Phase I clinical trial for an investigational gene therapy called SRP-9004. SRP-9004 is a clinical-stage therapy that is intended to treat a different disease, is administered using a different dose, and is manufactured using a different process. The LGMD study participant who passed away was not treated with Elevidys, and the dosing for the SRP-9004 trial had concluded at the time of his death.'
This patient's death, which was reported to the FDA on 3 July, is the third to affect Sarepta's gene therapy programme; the first two deaths, earlier this year, were in patients treated with Elevidys and were also due to ALF. Both Elevidys and SRP-9004 are adeno-associated virus (AAV) gene therapies that use the same AAVrh74 serotype.
The FDA's Center for Biologics Evaluation and Research (CBER) director, Dr Vinay Prasad, said: 'Protecting patient safety is our highest priority, and the FDA will not allow products whose harms are greater than benefits. The FDA will halt any clinical trial of an investigational product if clinical trial participants would be exposed to an unreasonable and significant risk of illness or injury.'
This comes just days after Sarepta said it had agreed to update the black box warning on the Elevidys label to include ALF and acute liver injury (ALI). In the same announcement, Sarepta said it would cut 500 jobs as part of a company restructuring.
Speaking after the third patient death was announced, GlobalData healthcare analyst Momna Ali said the recent setbacks for Sarepta put the cell and gene therapy (CGT) sector at a 'crossroads'.
Ali said: 'Following Sarepta's announcement of a third patient death, a new label for Elevidys and them laying off 500 employees, or 36% of its workforce, shelving parts of its pipeline to save $420m, and shifting focus from gene therapy to siRNA programmes, this is going to put the CGT sector at a pivotal crossroad.
'While the scientific potential of CGT therapies remains extraordinary, this moment serves as a reminder of the complexity, cost, and responsibility involved. There is a lot of buzz around therapies 'beyond the pill'; however, for the landscape to keep evolving at the pace it has been in the last 3-5 years, there has to be greater transparency, patient safety, and sustainable innovation – otherwise, it'll be met with more setbacks.'
The news of the LGMD death came to light in media reports on 17 July. Sarepta's stock has since dropped 43.21%, from $22.54 at market open on 17 July to $12.80 at market open on 21 July.
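For reference, the quoted figure follows directly from the two opening prices; the brief check below is a minimal sketch of the arithmetic, using only the prices reported above (the calculation itself is illustrative and not part of the original report):

```python
# Sanity check of the reported share-price decline, using the opening prices
# quoted in the article (17 July and 21 July).
open_17_jul = 22.54  # USD at market open, 17 July
open_21_jul = 12.80  # USD at market open, 21 July

decline_pct = (open_17_jul - open_21_jul) / open_17_jul * 100
print(f"Decline: {decline_pct:.2f}%")  # prints "Decline: 43.21%", matching the reported figure
```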
The FDA also said it has revoked the platform technology designation for Sarepta's AAVrh74 Platform Technology.
"Sarepta refuses to pull gene therapy despite FDA order" was originally created and published by Pharmaceutical Technology, a GlobalData owned brand.
The information on this site has been included in good faith for general informational purposes only. It is not intended to amount to advice on which you should rely, and we give no representation, warranty or guarantee, whether express or implied as to its accuracy or completeness. You must obtain professional or specialist advice before taking, or refraining from, any action on the basis of the content on our site.

Related Articles

FDA panel on the use of antidepressants during pregnancy is alarming experts

Los Angeles Times · 34 minutes ago

The U.S. Food and Drug Administration is turning its attention to selective serotonin reuptake inhibitors, a class of antidepressant drugs long criticized by Health and Human Services Secretary Robert F. Kennedy Jr. On Monday morning, the FDA hosted a 10-person expert panel on the use during pregnancy of SSRIs, which include medications like fluoxetine (Prozac), sertraline (Zoloft) and citalopram (Celexa), among others. Nine of the panel's 10 members were researchers, doctors or psychologists who have previously questioned the safety of SSRIs publicly or spoken out against antidepressant use in general. Over the course of the discussion, several panel members cited studies that lacked appropriate controls, physicians not involved with the panel said. In other words, there was no way to be certain on the basis of the studies whether the observed health problems were caused by SSRIs, the underlying mood disorder or some other factor. Other participants described study findings inaccurately or incompletely, said outside experts. For example, few panelists considered the risks of SSRI use relative to the risks associated with untreated depression, which also contributes to poor outcomes for children and mothers. In the U.S., suicide is a leading cause of maternal death in the first year of a baby's life. An FDA spokesperson said the panel was part of the agency's 'broader efforts to apply rigorous, evidence-based standards to ingredient safety and modernize regulatory oversight' and did not respond to further queries about the agency's potential next steps. But healthcare professionals expressed concern that the panel could ultimately prevent women from getting the care they need. 'I was surprised and disappointed by the amount of misinformation that was presented,' said Dr. Katie Unverferth, a reproductive psychiatrist and medical director of UCLA's Maternal Mental Health Program. 'When we look at the body of data ... we find that there are no consistent associations [of] SSRIs with cardiac defects, pulmonary hypertension or neurodevelopmental issues in offspring,' she said, naming some of the harms panelists attributed to the drugs. 'This misinformation just creates intrusive thoughts. It's not helpful.' The panel included just one specialist in maternal mood disorders — Dr. Kay Roussos-Ross, an obstetrician-gynecologist and director of the Perinatal Mood Disorders Program at the University of Florida College of Medicine — who argued that SSRIs are for most patients a safe treatment option for serious mental health disorders in pregnancy. 'Mental health disorders are no different than medical disorders,' said Roussos-Ross. 'I want to stress that treating mental illness in pregnancy is not a luxury. It's a necessity,' she said. 'We're not asking [pregnant] women to not take their anti-hypertensives and risk death to them or their baby. We're not asking women to stop their diabetes medications. We should not be withholding SSRIs as a possible treatment for women who need it.' The FDA did not respond to questions about how experts were selected for the panel. Participant Dr. Josef Witt-Doerring runs a private clinic that helps patients wean off psychiatric medication. Another panelist, Dr. Roger McFillin, is a prolific podcaster and a skeptic of germ theory, the belief — widely held as a fundamental truth in medicine since the 19th century — that infectious diseases are caused by microorganisms. Panelist Dr. 
David Healy, a psychiatrist from Wales, made a number of confounding and misleading statements, insisting that 'mothers who are taking SSRIs in pregnancy have a 10-fold greater risk of having a baby with fetal alcohol syndrome' (that figure describes the subject population of a single 2011 study, not the general public). Healy also stated that 'any drug that causes birth defects will cause autism spectrum disorder also,' a claim that has no basis in any scientific research. Dr. David Urato, chief of maternal and fetal medicine at MetroWest Medical Center in Framingham, Mass., was the only panelist besides Roussos-Ross who cares directly for pregnant patients. He spoke forcefully on the potential harms the drugs pose to developing babies. 'Never before in human history have we chemically altered babies like this,' he said during the discussion. 'There is now more than enough evidence to support strong warnings from the FDA about how drugs disrupt fetal development and impact the moms.' Roussos-Ross argued that the increased risk of birth defects for babies exposed to SSRIs in pregnancy was statistically insignificant, and that children of mothers with untreated depression were more likely to have later behavioral problems than those of mothers who took medication for the disease. 'Having that [medication] not be available to women who need it would really be detrimental,' she said. At this, panel moderator Tracy Beth Høeg — a sports medicine doctor who is now a senior advisor for clinical sciences at the FDA — said, 'I'm going to do something unconventional. I'm sorry to play favorites, but Dr. Urato, I want you to weigh in now.' In response, Urato questioned the idea that depression can be alleviated with antidepressant medication at all. 'This idea about depression — [that it] can cause harm and therefore we treat [it] with these chemicals, and by getting the treatment we see improved outcomes — this is something we all would want. It's wishful thinking,' he said. 'But it's not actually what the data shows.' It was not clear to which data he was referring. In 2019, the most recent year for which data are available, one in every eight U.S. adults had a prescription for antidepressant medication. While the drugs don't work for all people with major depression, analyses of multiple studies have consistently found them to be significantly better than placebos at alleviating illness symptoms. The drugs have been a target of Kennedy's Make America Healthy Again movement, along with vaccines and food dyes. In his confirmation hearings and on podcast appearances, Kennedy has claimed — inaccurately — that the drugs are both linked to school shootings and harder to quit than heroin. There is no evidence for either claim. In February, President Trump placed Kennedy at the helm of the Make America Healthy Again Commission, a group tasked with, among other things, evaluating 'the prevalence of and threat posed by the prescription of selective serotonin reuptake inhibitors, antipsychotics, mood stabilizers, stimulants, and weight-loss drugs.' Healthcare professionals expressed frustration with the FDA's approach. 'There is already so much shame and stigma that surrounds these illnesses. There is also a lot of shame and stigma around taking medication during pregnancy or the postpartum period,' said Paige Bellenbaum, a perinatal mental health therapist and adjunct professor of social work at Hunter College. 'We are taking a giant step backwards in so many ways. 
This will reinstill the fear that was there to begin with [and] will ultimately result in the loss of life.' Alexandre Bonnin, an associate professor of pathology at USC, has studied the effects of prenatal SSRI exposure on the developing fetal brain for years. The most recent large studies in the field haven't found a statistically significant association between SSRIs and fetal harm, he said. 'Our finding, at least at the basic science level, suggests that the use of SSRIs in pregnancy can be beneficial if the mom is under major stress, anxiety or depression, because the maternal stress actually itself has many negative effects on fetal brain development,' he said.

FDA's artificial intelligence is supposed to revolutionize drug approvals. It's making up nonexistent studies.

CNN · 35 minutes ago

To hear health officials in the Trump administration talk, artificial intelligence has arrived in Washington to fast-track new life-saving drugs to market, streamline work at the vast, multibillion-dollar health agencies, and be a key assistant in the quest to slash wasteful government spending without jeopardizing their work. 'The AI revolution has arrived,' Health and Human Services Secretary Robert F. Kennedy Jr. has declared at congressional hearings in the past few months. 'We are using this technology already at HHS to manage health care data, perfectly securely, and to increase the speed of drug approvals,' he told the House Energy and Commerce Committee in June. The enthusiasm — among some, at least — was palpable. Weeks earlier, the US Food and Drug Administration, the division of HHS that oversees vast portions of the American pharmaceutical and food system, had unveiled Elsa, an artificial intelligence tool intended to dramatically speed up drug and medical device approvals. Yet behind the scenes, the agency's slick AI project has been greeted with a shrug — or outright alarm. Six current and former FDA officials who spoke on the condition of anonymity to discuss sensitive internal work told CNN that Elsa can be useful for generating meeting notes and summaries, or email and communique templates. But it has also made up nonexistent studies, known as AI 'hallucinating,' or misrepresented research, according to three current FDA employees and documents seen by CNN. This makes it unreliable for their most critical work, the employees said. 'Anything that you don't have time to double-check is unreliable. It hallucinates confidently,' said one employee — a far cry from what has been publicly promised. 'AI is supposed to save our time, but I guarantee you that I waste a lot of extra time just due to the heightened vigilance that I have to have' to check for fake or misrepresented studies, a second FDA employee said. Currently, Elsa cannot help with review work , the lengthy assessment agency scientists undertake to determine whether drugs and devices are safe and effective, two FDA staffers said. That's because it cannot access many relevant documents, like industry submissions, to answer basic questions such as how many times a company may have filed for FDA approval, their related products on the market or other company-specific information. All this raises serious questions about the integrity of a tool that FDA Commissioner Dr. Marty Makary has boasted will transform the system for approving drugs and medical devices in the US, at a time when there is almost no federal oversight for assessing the use of AI in medicine. 'The agency is already using Elsa to accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets,' the FDA said in a statement on its launch in June. But speaking to CNN at the FDA's White Oak headquarters this week, Makary says that right now, most of the agency's scientists are using Elsa for its 'organization abilities' like finding studies and summarizing meetings. The FDA's head of AI, Jeremy Walsh, admitted that Elsa can hallucinate nonexistent studies. 'Elsa is no different from lots of [large language models] and generative AI,' he told CNN. 'They could potentially hallucinate.' 
Walsh also said Elsa's shortcomings with responding to questions about industry information should change soon, as the FDA updates the program in the coming weeks to let users upload documents to their own libraries. Asked about mistakes Elsa is making , Makary noted that staff are not required to use the AI. 'I have not heard those specific concerns, but it's optional,' he said. 'They don't have to use Elsa if they don't find it to have value.' Challenged on how this makes the efficiency gains he has publicly touted when staff inside FDA have told CNN they must double-check its work, he said: 'You have to determine what is reliable information that [you] can make major decisions based on, and I think we do a great job of that.' The earliest iterations of Elsa were built from the backbone of an earlier AI model that the FDA had started to work on during the Biden administration, according to two sources familiar with the matter. The name was initially an acronym for Efficient Language System for Analysis and was among several pitches for names for the AI system, like 'RegulAItor.' Elsa eventually won out, though leadership ultimately decided against its longer title: A recent internal document seen by CNN says that now 'Elsa is just a name and is not an acronym.' Walsh and his team demonstrated the AI tool for CNN this week. The platform has a plain white interface with some brown accents. It welcomes the user with 'How can I help you?' above an entry field that says 'Ask Elsa anything,' much like other popular publicly used AI. The FDA has said that Elsa is designed to let regulators tap into secure internal documents, shortening reviews by quickly summarizing risky side effects and pulling in information about related products. During the demonstration, Elsa was asked to summarize the FDA's guidance on fatty liver disease and medicines that treat it. It pulled up the 10 papers from an internal FDA library that it said were the most relevant. When it was adopted in June, Makary boasted that Elsa's rollout had come 'ahead of schedule and under budget' after 'a very successful pilot program with FDA's scientific reviewers.' Walsh said those efforts came together in a matter of weeks. The agency leadership chose staff from across its various centers overseeing drugs, devices, food and animal medicines for a series of meetings in May. There, they gave feedback about what they needed from such a tool, potential challenges they saw and even some aesthetic choices, like Elsa's color palette and its name, according to an FDA employee who participated. Those who participated in the feedback meetings were dubbed Elsa 'champions' and sent to evangelize the platform in their various corners of the agency, with talking points and suggestions about how to demonstrate its use, according to two current FDA staff. Agency training on Elsa is voluntary, as is using the platform at all. Makary and Walsh told CNN that more than half of FDA staff have logged time in Elsa. But those who spoke to CNN said that the adoption has been weak in their areas of the agency — not many of their colleagues are using Elsa, or they are using it only on a very limited basis. Those who have used it say they have noticed serious problems. For example, it cannot reliably represent studies. 
If Elsa gives a one-paragraph summary of, say, 20 pages of research tied to a particular new drug, there is no way to know whether it misrepresents something or misses something that a human reviewer would have considered important, one FDA employee said. There is no way for Elsa to know what information from a lengthy study could be the most crucial for an expert, this employee believes. When Elsa is told it is incorrect — that a study it cites does not exist or that someone works at the FDA when they don't — it is usually 'apologetic,' one employee said. But in at least one instance shared with CNN — when that employee asked Elsa to generate something for a project — it insisted that research area was not in FDA's purview (it was). Employees who spoke to CNN have tested Elsa's knowledge by asking it questions like how many drugs of a certain class are authorized for children to use or how many drugs are approved with a certain label . In both cases, it returned wrong answers. One employee described Elsa miscounting the number of products with a particular label. When told it was wrong, the AI admitted that it made a mistake. 'But it still doesn't help you to answer the question,' that employee said. The algorithm then reminds users that it is only an AI assistant and they need to verify its work. Asked about errors, in addition to the hallucinations, Walsh said: 'Some of those responses don't surprise me at all. But what's important is … how we address those gaps in the capability' of Elsa and its users. Those include trainings and new features like the personal document libraries that will launch soon, he added. Walsh also said that a current feature of Elsa, where users can click over its summaries to see which parts of a document Elsa has cited, can act as a check to make sure it did not fabricate a study. However, this now applies only when Elsa is being used to pull internal documents. As of now, it cannot link to, for example, articles in a medical journal. And knowing whether those sources are, in fact, the most important is also up to the user and how they ask the questions, Walsh said. He also contended that the problem of Elsa's hallucinations can be mitigated by asking it more precise questions. Elsa is also improving, he insists. 'We're also seeing as the AI models get better, right, feedback gets better.' Talk of integrating artificial intelligence into US health agencies' work had been underway for some time before the second Trump administration jump-started efforts, but the speed with which Elsa came into use was unusual. Some experts would pinpoint the government's efforts to develop AI plans in earnest in 2018, when the Pentagon began evaluating its potential for national security. Part of that project was about looking into its use in health care too, said Dr. Hassan Tetteh, a thoracic surgeon and former US Navy captain who worked on the project in 2020. There were also early efforts from that Pentagon-led group to talk with international allies about AI standards and regulations, he added. In Europe, countries have worked together to stand up AI safeguards. In 2024, the European Union approved and implemented the AI Act, a law 'to protect fundamental rights, democracy, the rule of law' around risky AI use, including in health care, while promoting transformational AI models. These standards and protections do not exist in the US. 
A government working group formed during the Biden administration to look at establishing regulations on AI use, including in health care, was disbanded last year: Its mandate expired and was not renewed. Elsa arrived as Congress wrestled with how to approach laws on AI regulation. Although congressional committees have held hearings about AI risks like biased models and cybersecurity threats, Congress has passed no substantial legislation to regulate AI. In June, a bipartisan group of House members introduced legislation mostly focused on maintaining US dominance in the AI race; later that month, two senators introduced a bill trained on preventing American use of 'adversarial' AI from foreign governments, including China. Other efforts, such as a bill that would require testing and regulatory oversight for high-risk AI systems (much like the European standards), have stalled. An earlier version of the 'One Big Beautiful Bill,' President Donald Trump's expansive tax and spending bill, would have included Congress' first sweeping law on AI: a 10-year moratorium on the enforcement of state regulations of the technology. But the Senate struck the provision down. Trump, who has made AI development and investments a top priority in his second administration, has heralded a bright future for the technology. At an energy summit in Pennsylvania last week, he told attendees: 'We're here today because we believe that America's destiny is to dominate every industry and be the first in every technology, and that includes being the world's number one superpower in artificial intelligence.' Without federal regulations, it is hard to say what that superpower would look like. 'AI does a lot of stuff, but it's not magic,' said Dr. Jonathan Chen, an assistant professor of medicine at Stanford University who has studied the use of AI in clinical settings. It would be great if it could help experts sniff out data falsification or give rigorous analysis on patient safety, but 'those problems are much more nuanced' than what a machine can do, he said. 'It's really kind of the Wild West right now. The technology moves so fast, it's hard to even comprehend exactly what it is.'
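The citation feature Walsh describes, clicking through a summary to the passages it cites, is essentially a grounding check: confirming that text attributed to a source document actually appears in that document. The sketch below is a minimal, hypothetical illustration of that general idea; it is not the FDA's Elsa implementation, and the function and data names are invented.

```python
# Hypothetical sketch of a citation-grounding check: verify that every snippet a
# generated summary attributes to a source document actually appears in that document.
# Illustrative only; this is not how the FDA's Elsa tool is implemented.

def verify_citations(citations: dict[str, str], documents: dict[str, str]) -> list[str]:
    """Return a list of problems: cited documents that do not exist, or quoted
    text that cannot be found in the cited document."""
    problems = []
    for doc_id, quoted in citations.items():
        source = documents.get(doc_id)
        if source is None:
            problems.append(f"{doc_id}: cited document does not exist")
        elif quoted.lower() not in source.lower():
            problems.append(f"{doc_id}: quoted text not found in document")
    return problems

# Example with made-up data: one grounded citation and one fabricated one.
documents = {"guidance-001": "Patients with fatty liver disease should be monitored for elevated enzymes."}
citations = {
    "guidance-001": "monitored for elevated enzymes",
    "study-999": "a 40% response rate",
}
print(verify_citations(citations, documents))
# ['study-999: cited document does not exist']
```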
