02-05-2025
Artificial intelligence and virtual care: Transforming healthcare delivery
Artificial Intelligence (AI) has the power to improve patient outcomes while driving down costs, and emerging AI systems have already changed doctor-patient interaction by making virtual visits and remote care significantly more convenient. But adopting any new technology requires regulatory compliance and a measured, thoughtful approach to ensure it delivers on its promises. At Intersect 2025, three leading minds in healthcare, information technology, and law sat down to share their personal views on the challenges of AI adoption and the regulatory landscape facing AI-enabled solutions.
Daniel Cody is a Health Care and Life Sciences Member at leading law firm Mintz, and he spoke about the pressing need for AI-driven solutions in medical care, saying, 'Hospitals are stressed, especially with ongoing threats to Medicaid and other programs. So, the twin goals of improving outcomes and reducing costs are universal.'
Cody went on to list key ways that AI is already improving the experience of providers and patients. 'Remote monitoring devices are more advanced, with AI capabilities. It's not just about helping folks with diabetes and chronic disease track their conditions but being predictive and giving information to their PCPs on a 24/7 basis. AI tools are also fantastic for helping radiologists evaluate images so they can diagnose and start treatment earlier.'
The tools we now call AI have actually been in use for years, giving organizations a long runway to find the ideal approach. 'Five years ago, AI was called clinical decision support,' says Adnan E. Hamid, Regional Vice President and Chief Information Officer at CommonSpirit Health. 'As one of the larger Catholic healthcare systems in the nation, CommonSpirit makes sure that when we select technology, it's human centric and mission centric. The goal is to not replace but augment the human interaction between the clinician and patient.'
To reach this goal, medical organizations must navigate an ever-evolving field of regulations. 'We have a systemwide UC AI Council and similar oversight committees, and a chief AI officer at each medical center. The UC AI Council sponsored the development of the UC Responsible AI Principles, and a publicly-available model risk assessment guide with procurement process questions built in. We offer an AI primer, and many of our education webinars are open to the public. Twenty UC policies connect to UC AI guidance, considering the many privacy and security requirements on the campus and health side,' says Noelle Vidal, Healthcare Compliance and Privacy Officer for the Office of the President of the University of California.
Regulations such as HIPAA are all-important when considering whether to use an AI tool, especially since the better-known apps add user data to their own algorithms. 'When ChatGPT was released, our providers were interested in the power of generative AI,' Hamid says. 'But if you enter patient information, it's no longer private but resides as part of the tool. To ensure nobody was accessing ChatGPT from our systems, we accelerated efforts to produce our own internal generative AI tool using Google Gemini on the back end. Data and security are our IT cornerstones.'
AI adds a new layer to assess. As Vidal says, 'A thorough assessment can take a while. Whenever we get a request, it goes through a multi-team scan for privacy, security, and other UC requirements, including the new AI assessment questions. An AI tool's use of data continues to evolve, which changes how impactful the tool will be. Every change in the technology could contradict what we negotiated in a prior contract with the same vendor. We've got different teams to rank the risk of using a tool. I know it frustrates our stakeholders who want to get every innovation in quickly, but we try to balance adoption with risk prevention.'
Ultimately, only the AI applications with the most practical uses will clear the vetting and regulatory process and change how practitioners improve the patient experience and the efficacy of healthcare.
'The targeted tools that solve real problems are going to win,' Cody says. 'They're going to ensure security and privacy compliance.' As noted by Hamid, 'the fastest way to get technology approved is to have a really good use case. If you can provide the details of how the tool will solve a problem, then users will complete that process faster. Ultimately, AI adoption is influenced by the structure and mission of the organization.'