
What The FDA's Guidance On AI-Enabled Devices Gets Right And Wrong
Integrating AI into medical devices brings the industry a new level of innovation. The benefits can help both providers and patients but also introduce more risk.
The Food and Drug Administration (FDA), which has already authorized over 1,000 AI-enabled medical devices, issued draft guidance about the potential threats involved with these devices. Its draft guidance seeks to help device manufacturers build strong governance safeguards for AI. The FDA requested feedback, and it came in abundance.
So why is the draft guidance causing such a stir? And what's the next move?
The Guidance Focuses Too Much On Complex AI
AI's application in any technology is unique. In the medical device industry, it's even more so because every product works differently and has very specific use cases.
The biggest complaint I and others in my field have with the content is that it speaks to complex devices but not simpler ones. The FDA's guidance, from my perspective, isn't inclusive, possibly because the agency doesn't fully grasp the variety of products on the market. It may also hold the misconception that manufacturers only integrate AI into these complex products. I believe the FDA's historical data on approvals of AI-enabled devices likely tells a different story.
The reality is that any device, from infusion pumps to glucose monitors and diagnostic tools, can benefit from AI.
The devices that fit into the complex category are a niche group. These would include systems with deep learning capabilities. You'll see this mainly in imaging, diagnostics or some applications of predictive analytics.
Frameworks Should Align With The AI System Used
Because AI applications are unique to every device, cybersecurity risk varies as well, a fact the FDA's guidance doesn't address. Very rigid controls are certainly necessary for many devices, but not all. The result is that manufacturers are forced to follow guidance that isn't relevant, creating burdens for these companies.
The Advanced Medical Technology Association (AdvaMed) was a key voice in the feedback, stating, "Many of the recommendations in the AI Lifecycle draft guidance are not relevant for less complex AI-machine learning (ML) models."
This position makes complete sense to me. Regulatory bodies have never made approvals easy, but this guidance just doesn't align with what experts in the industry know to be true. AI can be a threat, but it's not a single type of threat, and not every application carries the same risk.
Commenters also noted that the FDA's existing guidance for device software functions is sufficient for simpler AI models.
FDA Doesn't Fully Address TPLC
The FDA's approach was to cover the total product life cycle (TPLC). That's essential because security must be addressed at every stage, and developers would need to apply this framework from start to finish.
From my view, the document omits quite a bit about the product life cycle. There's confusion about which compliance requirements apply to internally developed products. Without clarity, manufacturers are left with more questions and obstacles when preparing their premarket submissions.
Another flaw I see is that the guidance doesn't include post-market monitoring requirements. The document calls it a good practice but not a mandate, nor does it give any specifics, such as auditing or penetration testing.
The FDA's 2023 medical device cybersecurity update does include this. It requires manufacturers to have plans for ongoing monitoring of vulnerabilities and issuing patches to correct them.
What Does The Guidance Get Right?
While some stakeholders want the FDA to scrap everything and start over, some areas make sense to me. The document makes many mentions of transparency in the process, and offers guidance on it, which should be a tenet. Transparency is also present in the 2023 document, which requires a software bill of materials (SBOM).
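For readers unfamiliar with the format, an SBOM is a machine-readable inventory of the software components inside a device. A minimal sketch in the CycloneDX JSON format (the component names and versions here are hypothetical, for illustration only) might look like:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "inference-runtime",
      "version": "2.4.1",
      "purl": "pkg:generic/inference-runtime@2.4.1"
    },
    {
      "type": "library",
      "name": "openssl",
      "version": "3.0.13",
      "purl": "pkg:generic/openssl@3.0.13"
    }
  ]
}
```

Listing each component and its version this way lets providers and regulators quickly check a device against newly disclosed vulnerabilities.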
The document also explains the threats of AI and cites the 2023 guidance, aligning with its content around cybersecurity controls and risk management best practices. It goes on to describe how to combat AI cybersecurity risks using the 2023 framework.
What's Next?
The period for submitting comments has closed. The FDA can now integrate these concerns into new language before the guidance moves from draft to final. The changes it makes may be significant or minor.
There's no questioning that AI should have governance and guardrails. As much good as it can bring, it also heightens risk. Cyberattacks on medical devices are already common, and the additional layer of AI can make devices even more susceptible to these threats.
The guidance is imperfect. The topic of AI cybersecurity in medical devices is broad and has many nuances. Whatever the final document entails, it won't cover every scenario. What it should do is address as much as possible under the known use cases and risk levels because it can't be a one-size-fits-all approach.
Getting this right could usher in an era of more innovation in AI, and that's good news—as long as there are controls and transparency.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
