Yahoo
4 days ago
- Business
- Yahoo
FDA deploys generative AI tool Elsa to transform agency operations in historic move
In a major step toward modernizing its operations, the U.S. Food and Drug Administration (FDA) on Monday launched Elsa, a generative artificial intelligence (AI) tool designed to help agency employees, including scientific reviewers and field investigators, work smarter and faster. Elsa is the regulator's first large-scale foray into generative AI and is being described as a turning point for the agency.

Built in a secure GovCloud environment, the tool gives staff the ability to quickly search and summarize internal documents without compromising sensitive research and data handled by FDA staff. More than a chatbot, the tool also allows FDA scientists and subject-matter experts to spend less time on tedious, repetitive tasks that often slow down the review process. The AI models do not train on information submitted by regulated industries, ensuring the confidentiality of proprietary research.

'Today marks the dawn of the AI era at the FDA with the release of Elsa. AI is no longer a distant promise but a dynamic force enhancing and optimizing the performance and potential of every employee,' the agency's Chief AI Officer, Jeremy Walsh, said. 'As we learn how employees are using the tool, our development team will be able to add capabilities and grow with the needs of employees and the agency.'

The agency said it is already using the tool to speed up clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets. FDA Commissioner Dr. Marty Makary said the launch came ahead of schedule and under budget due to strong collaboration among internal teams. 'Following a very successful pilot program with FDA's scientific reviewers, I set an aggressive timeline to scale AI agency-wide by June 30,' he said. 'Today's rollout of Elsa is ahead of schedule and under budget, thanks to the collaboration of our in-house experts across the centers.'
Elsa is a large language model–powered AI tool designed to assist with reading, writing, and summarizing. It can summarize adverse events to support safety profile assessments, perform faster label comparisons, and generate code to help develop databases for nonclinical applications. It is seen as the first step in a broader strategy to embed AI into FDA workflows.

'Prioritizing efficiency and responsibility, the FDA launched Elsa ahead of schedule using an all-center approach. Leaders and technologists across the agency collaborated, demonstrating the FDA's ability to transform its operations through AI,' the agency within the U.S. Department of Health and Human Services said in a release.

As employees use the tool, its developers plan to expand its capabilities to meet emerging needs. This includes improving usability, data processing, and generative-AI functions, and tailoring outputs to center-specific needs while maintaining strict information security and compliance with FDA policy. The agency-wide rollout was coordinated by Walsh, the agency's newly appointed Chief AI Officer, and Sridhar Mantha. Walsh previously led enterprise-scale technology deployments across federal health and intelligence agencies, and Mantha recently led the Office of Business Informatics in CDER.
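To give a concrete sense of the label-comparison task described above: at its simplest, comparing two versions of a drug label means surfacing only the lines that changed. The sketch below is purely illustrative and is not Elsa's actual implementation (which is not public); it uses Python's standard difflib on invented label excerpts to show the kind of output a reviewer would inspect.

```python
import difflib

# Hypothetical label excerpts -- invented for illustration only.
old_label = [
    "Indications: treatment of moderate hypertension.",
    "Dosage: 10 mg once daily.",
    "Warnings: may cause dizziness.",
]
new_label = [
    "Indications: treatment of moderate hypertension.",
    "Dosage: 20 mg once daily.",
    "Warnings: may cause dizziness.",
    "Warnings: avoid use with grapefruit juice.",
]

# unified_diff emits only changed lines plus surrounding context,
# so a reviewer sees the dosage change and the new warning directly.
diff = list(difflib.unified_diff(old_label, new_label,
                                 fromfile="label_v1", tofile="label_v2",
                                 lineterm=""))
for line in diff:
    print(line)
```

A generative tool would presumably go further (summarizing the clinical significance of each change), but the underlying comparison step is this kind of structured diff.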

Engadget
6 days ago
- Health
- Engadget
The FDA rolls out its own AI to speed up clinical reviews and scientific evaluations
The FDA has launched its generative AI tool, Elsa, agency-wide to help its employees with everything from clinical reviews to investigations. Sure, we're living in a time of widespread disinformation and pushback against science, but why not rush things through with AI? Elsa — yes, weirdly like the snow queen from Frozen — completed a "very successful pilot program with FDA's scientific reviewers." According to the FDA, the AI tool can help with reading, writing and summarizing everything from adverse events to assessments. Elsa can also do label comparisons and generate code. It's already being used to speed up clinical protocol reviews and scientific evaluations, along with finding "high-priority inspection targets."

Elsa should be a secure platform, the FDA states. It's not clear how exactly the agency trained Elsa, but the FDA claims it wasn't on "data submitted by regulated industry." The information lives in Amazon Web Services' GovCloud, which, again, should keep everything internal.

The FDA calls Elsa the first step in its AI journey. "Today marks the dawn of the AI era at the FDA with the release of Elsa. AI is no longer a distant promise but a dynamic force enhancing and optimizing the performance and potential of every employee," said FDA Chief AI Officer Jeremy Walsh. "As we learn how employees are using the tool, our development team will be able to add capabilities and grow with the needs of employees and the agency."


Axios
12-05-2025
- Business
- Axios
FDA's plan to roll out AI agencywide raises questions
The Food and Drug Administration is rolling out an aggressive plan to make generative AI a linchpin in its decision-making, part of a bid to get faster and leaner in evaluating drugs, foods, medical devices and diagnostic tests.

Why it matters: The plan raises urgent questions about what's being done to secure the vast amount of proprietary company data that's part of the process, and whether sufficient guardrails are in place.

Driving the news: The FDA is racing to roll out generative AI across all its centers to augment employees' work following a successful pilot, officials said. Commissioner Marty Makary has ordered immediate deployment, with all offices to run on a unified, secure system tied to internal data platforms by June 30. Leading the effort are newly appointed chief AI officer Jeremy Walsh, formerly chief technologist at Booz Allen Hamilton, and Sridhar Mantha, a longtime FDA data leader. Makary said the technology could slash tasks in the review process for new therapies from "days to just minutes."

The big picture: Trump's overhaul of federal AI policy — ditching Biden-era guardrails in favor of speed and dominance — has turned the government into a tech testing ground. With Musk leading the charge under an "AI-first" strategy, critics warn rushed rollouts at a range of agencies could compromise data security, automate important decisions, and put Americans at risk. The General Services Administration is piloting an AI chatbot to automate routine tasks, and the Social Security Administration plans to use AI software to transcribe applicant hearings. GSA officials said their tool has been in development for 18 months. Several experts told Axios the integration of AI at the FDA is a good move, but the speed of the rollout and lack of specifics raise multiple questions.
"There's been a lot of AI already happening across different centers [in the FDA] for a variety of different reasons, but there's never been a concerted effort," said former FDA commissioner Robert Califf. "I have nothing but enthusiasm tempered by caution about the timeline."

The industry would likely welcome anything that might get their drugs to market faster and temper cost increases, but a key question pharmaceutical companies will have is how the proprietary data they submit will be secured, said Mike Hinckle, an FDA compliance expert at K&L Gates. "While AI is still developing, harnessing it requires a thoughtful and risk-based approach with patients at the center. We're pleased to see the FDA taking concrete action to harness the potential of AI," PhRMA spokesperson Andrew Powaleny said in a statement.

Zoom in: Another key question is which models are being used to train the AI, and what inputs are being provided for specialized fine-tuning, Eric Topol, founder of the Scripps Research Translational Institute, told Axios. "The idea is good, but the lack of details and the perceived 'rush' is concerning," Topol said. Last week, Wired reported the FDA was in discussions with OpenAI about a project called cderGPT, which it said appears to be an AI tool for the Center for Drug Evaluation and Research (CDER). In response to questions from Axios, a Health and Human Services spokesperson did not confirm that, but said the technology was not meant to supplant humans. "Commissioner Makary has emphasized AI is a tool to support — not replace — human expertise," the spokesperson said. "When used responsibly, AI can enhance regulatory rigor by helping predict toxicities and adverse events for certain conditions."