
Latest news with #JinzhongLiu

The FDA Launches Its Generative-AI Tool, Elsa, Ahead Of Schedule

Gizmodo

5 days ago

  • Health


Generative artificial intelligence has found another home in the federal government. On Tuesday, the U.S. Food and Drug Administration announced the early launch of its own generative-AI tool, which it hopes will improve efficiency. The tool, nicknamed Elsa, is designed to assist employees with everything from scientific reviews to basic operations. The FDA had originally planned to launch by June 30, so Elsa arrived well ahead of schedule and under budget, according to an FDA statement.

It is not clear exactly what information Elsa was trained on, but the FDA says it did not use any 'data submitted by regulated industry' in order to protect sensitive research and information. Currently, Elsa houses its information in GovCloud, an Amazon Web Services environment designed for sensitive government data. As a language model, Elsa can help employees with reading, writing, and summarizing. In addition, the FDA said it can summarize adverse events, generate code for nonclinical applications, and more. Per the agency, Elsa is already being used to 'accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets.'

In a May press release announcing the completion of the FDA's first AI-assisted scientific review, FDA Commissioner Martin Makary said he was 'blown away' by Elsa's capabilities, which '[hold] tremendous promise in accelerating the review time for new therapies.' He added, 'We need to value our scientists' time and reduce the amount of non-productive busywork that has historically consumed much of the review process.' According to one scientist, Jinzhong Liu, the FDA's generative AI completed tasks in minutes that would otherwise take several days. In Tuesday's announcement, FDA Chief AI Officer Jeremy Walsh said, 'Today marks the dawn of the AI era at the FDA with the release of Elsa. AI is no longer a distant promise but a dynamic force enhancing and optimizing the performance and potential of every employee.'

Generative AI can certainly be a useful tool, but every tool has its drawbacks. With AI specifically, there has been an uptick in stories about hallucinations, in which a model produces outright false or misleading claims and statements. Although commonly associated with chatbots like ChatGPT, hallucinations can still pop up in federal AI models, where they can unleash even more chaos. Per IT Veterans, AI hallucinations typically stem from factors like biases in training data or a lack of fact-checking safeguards built into the model itself. Even with those in place, though, IT Veterans cautions that human oversight is 'essential to mitigate the risks and ensure the reliability of AI-integrated federal data streams.'

Ideally, the FDA has thoroughly considered Elsa's risks and taken measures to prevent mishaps. But expanding technology that demands human oversight is always concerning when federal agencies are in the midst of mass layoffs. At the beginning of April, the FDA laid off 3,500 employees, including scientists and inspection staff (although some of those layoffs were later reversed).

Time will reveal how Elsa ultimately performs. The FDA plans to expand its use throughout the agency as the tool matures, including data processing and generative-AI functions to 'further support the FDA's mission.'

The FDA Will Use AI to Accelerate Approving Drugs

Yahoo

May 11, 2025

  • Business


The Food and Drug Administration just announced that it will immediately start using AI across all of its centers, after completing a new generative AI pilot for scientific reviewers. Supposedly, the AI tool will speed up the FDA's drug review process by reducing the time its scientists have to spend on tedious, repetitive tasks. Given AI's track record of constantly hallucinating, though, these claims warrant plenty of scrutiny.

"This is a game-changer technology that has enabled me to perform scientific review tasks in minutes that used to take three days," said Jinzhong Liu, a deputy director in the FDA's Center for Drug Evaluation and Research (CDER), in a statement.

FDA Commissioner Martin Makary has directed all FDA centers to achieve full AI integration by June 30, an aggressive timeline of questionable feasibility. "By that date, all centers will be operating on a common, secure generative AI system integrated with FDA's internal data platforms," the agency said in its announcement.

The announcement comes just a day after Wired reported that the FDA and OpenAI were holding talks to discuss the agency's use of AI. Notably, the FDA's new statement makes no mention of OpenAI or its potential involvement. Behind the scenes, however, Wired's sources say that a team from the ChatGPT maker met with the FDA and two associates from Elon Musk's so-called Department of Government Efficiency multiple times in recent weeks to discuss a project called "cderGPT." The name is almost certainly a reference to the FDA's aforementioned CDER, which regulates drugs sold in the US.

This may have been a long time coming. Wired notes that the FDA sponsored a fellowship in 2023 to develop large language models for internal use. And according to Robert Califf, who served as FDA commissioner between 2016 and 2017, the agency's review teams have already been experimenting with AI for several years. "It will be interesting to hear the details of which parts of the review were 'AI assisted' and what that means," Califf told Wired. "There has always been a quest to shorten review times and a broad consensus that AI could help." The agency was considering using AI in other aspects of its operations, too. "Final reviews for approval are only one part of a much larger opportunity," Califf added.

Makary, who was appointed commissioner by President Donald Trump, has frequently expressed his enthusiasm for the technology. "Why does it take over ten years for a new drug to come to market?" he tweeted on Wednesday. "Why are we not modernized with AI and other things?"

The FDA news parallels a broader trend of AI adoption in federal agencies during the Trump administration. In March, OpenAI announced a version of its chatbot called ChatGPT Gov designed to be secure enough to process sensitive government information. Musk has pushed to fast-track the development of another AI chatbot for the US General Services Administration, while using the technology to try to rewrite the Social Security computer system.

Yet the risks of using the technology in a medical context are concerning, to say the least. Speaking to Wired, an ex-FDA staffer who has tested ChatGPT as a clinical tool pointed out the chatbot's proclivity for making up convincing-sounding lies, a problem that won't go away anytime soon. "Who knows how robust the platform will be for these reviewers' tasks," the former FDA employee told the magazine.

More on medical AI: Nonverbal Neuralink Patient Is Using Brain Implant and Grok to Generate Replies

FDA says it will be 'aggressive' in adopting artificial intelligence in its labs

Yahoo

May 8, 2025

  • Health


May 8 (UPI) -- The Food and Drug Administration on Thursday announced what it called an aggressive agency-wide artificial intelligence adoption timeline, as well as a new AI tool to help scientists spend less time on the tedious, repetitive tasks that can slow the review process.

The FDA made the announcement in a statement describing "an aggressive timeline to scale use of artificial intelligence internally across all FDA centers by June 30, 2025, following the completion of a new generative AI pilot for scientific reviewers." The FDA said it plans full AI integration with its internal data platforms.

"I was blown away by the success of our first AI-assisted scientific review pilot. We need to value our scientists' time and reduce the amount of non-productive busywork that has historically consumed much of the review process," FDA Commissioner Dr. Martin A. Makary said in a statement. "The agency-wide deployment of these capabilities holds tremendous promise in accelerating the review time for new therapies."

The FDA's Jinzhong Liu said AI saves substantial time on scientific review tasks. "This is a game-changer technology that has enabled me to perform scientific review tasks in minutes that used to take three days," Liu said in a statement. All FDA centers are being directed to deploy AI immediately in order to achieve full integration by the end of June.

AI is a transformative technology, but issues have arisen with its accuracy. In March, researchers reported finding that AI systems can seemingly "hallucinate" in a way similar to how some people do. University of Washington researchers found that AI systems pose hallucination risks with potentially life-altering consequences, so AI output must be checked by humans to verify its accuracy. In May 2023, tech experts issued a startling warning that AI poses a risk of extinction for humans. Hundreds of tech leaders signed onto a statement that said, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The FDA said Thursday it plans to expand AI across all FDA centers, using what it described as "a secure, unified platform." The integration is being coordinated by FDA Chief AI Officer Jeremy Walsh and Sridhar Mantha, and the FDA promised that the rollout will maintain "strict information security and compliance with FDA policy."
