Latest news with #CenterforDevicesandRadiologicalHealth
Yahoo
5 days ago
- Health
- Yahoo
FDA's AI tool for medical devices struggles with simple tasks
A new Food and Drug Administration AI tool that could speed up reviews and approvals of medical devices such as pacemakers and insulin pumps is struggling with simple tasks, according to two people familiar with it. The tool — which is still in beta testing — is buggy, doesn't yet connect to the FDA's internal systems and has issues when it comes to uploading documents or allowing users to submit questions, the people say. It's also not currently connected to the internet and can't access new content, such as recently published studies or anything behind a paywall.

The artificial intelligence, dubbed internally CDRH-GPT, is intended to help staffers at the agency's Center for Devices and Radiological Health, a division responsible for ensuring the safety of devices implanted in the body as well as essential tools like X-rays and CT scanners.

The division was among those affected by the sweeping mass layoffs at the Department of Health and Human Services earlier this year. While many of the device reviewers were spared, the agency eliminated much of the backend support that enables them to issue approval decisions on time.

The work of reviewers includes sifting through large amounts of data from animal studies and clinical trials. Depending on the applicant, it can take months or even over a year — a timeline an AI tool could feasibly help shorten. Experts, however, are concerned that the FDA's push toward AI could outpace what the technology is actually ready for.

Since taking over the agency on April 1, Commissioner Dr. Marty Makary has pushed to integrate artificial intelligence across the FDA's divisions. How this move into AI could affect the safety and effectiveness of drugs or medical devices hasn't been determined. Last month, Makary set a June 30 deadline for the AI rollout. On Monday, he said the agency was ahead of schedule. 
But the two people familiar with CDRH-GPT say that it still needs significant work and that FDA staff were already concerned about meeting the June deadline, at least in its original form.

'I worry that they may be moving toward AI too quickly out of desperation, before it's ready to perform,' said Arthur Caplan, the head of the medical ethics division at NYU Langone Medical Center in New York City. He stressed that reviewing medical devices accurately is essential, since people's lives depend on it. 'It still needs human supplementation,' Caplan said. AI 'is really just not intelligent enough yet to really probe the applicant or challenge or interact.'

The FDA directs all media inquiries to the Department of Health and Human Services. A spokesperson for HHS did not respond to a request for comment.

On Monday, Makary announced that a separate AI tool, called Elsa, had been rolled out to all FDA employees. Elsa is now intended for basic tasks agency-wide, such as summarizing data from adverse event reports. 'The first reviewer who used this AI assistant tool actually said that the AI did in six minutes what it would normally take him two to three days to do,' Makary said in an interview last week. 'And we're hoping that those increased efficiencies help. So I think we've got a bright future.'

The reality inside the agency is quite different, the same two sources said. While the concept is solid and a step in the right direction, they said, some staff feel it's being rushed and not yet ready for prime time. 'AI tools to help with certain tasks for reviewers and scientists seems reasonable given the potential utility of AI,' one of the people said. However, the person said they disagree with the 'aggressive roll out' and claims that it could reduce work 'by hours and days.'

To be sure, experts say, it's not uncommon for a company or government agency to launch a new product and then refine it through iterative updates over time. 
Staff have worked hard to get Elsa up and running, the people said, but it still can't handle some core functions and needs more development before it can support the agency's complex regulatory work. When staff tested the tool Monday with questions about FDA-approved products or other public information, it provided summaries that were either incorrect or only partially accurate, one of the people said. It's unclear, the people said, whether CDRH-GPT will eventually be integrated into Elsa or remain a standalone system.

Richard Painter, a law professor at the University of Minnesota and a former government ethics lawyer, said there are also concerns about potential conflicts of interest. He wondered whether there is a protocol in place to prevent any government official — such as an FDA reviewer using the technology — from having financial ties with companies that could benefit from AI. While the technology has existed for years, he said, it's still a new venture for the FDA.

'We need to make sure that the people involved in these decisions do not have a financial interest in the artificial intelligence companies that would get the contracts,' Painter said. 'A conflict of interest can greatly compromise the integrity and the reputation of a federal agency.'

Some at the FDA don't see AI as a solution to their overwhelming workloads — they see it as a sign that they may eventually be replaced. The FDA is 'already spread thin from the RIF [layoffs] and the steady loss of individuals while in a hiring freeze and no capacity to backfill,' one of the people said.