The Data Tsunami In Today's Interoperable And AI World

David Lareau is CEO of Medicomp Systems, which makes medical data relevant, usable and actionable.

Artificial intelligence (AI) systems have seemingly taken healthcare, and perhaps the world, by storm, showing great promise for revolutionizing care, research, workflows and more. As the industry sorts out what is AI hype versus true potential, one thing about AI has become clear: It generates a massive amount of data, which for healthcare creates both opportunities and challenges.

Now that interoperability pipes are live and data is flowing between healthcare organizations, providers are managing a tsunami of incoming data. With AI and large language models (LLMs), it has become easier than ever to generate even more data, which means the problem has only worsened. And unlike a regular tsunami, healthcare's giant wave of incoming data is not a one-time event. The data will continue to accumulate.

What are healthcare organizations to do to keep from drowning in all the data? Despite early hopes that AI might solve data challenges, the bloom is now off that rose, and users are realizing that AI needs to be more reliable, accurate and trustworthy for critical healthcare tasks.

Accurate clinical records are vital for patient care, reimbursement and operational efficiency. When a physician creates a clinical note, they are responsible for ensuring the documentation is accurate and complete. With interoperability, we now have a wealth of additional clinical information coming from other clinicians, hospitals, labs, health information exchanges (HIEs) and other sources. While more information seems powerful on the surface, clinicians now need to determine whether the incoming data is accurate and which pieces are relevant for the patient in front of them. In other words, if clinicians are responsible for the incoming tsunami of data, how can they efficiently verify and manage it all?

Consider the impact of AI-assisted documentation tools, which aim to optimize workflows, reduce documentation time and relieve physician burnout. Their output, however, is only as good as the data being fed into them. If a healthcare organization builds AI initiatives on data from a repository filled with errors or gaps, the resulting output will be flawed.

Transparency is therefore essential when it comes to AI data. When documentation is created through AI, healthcare providers need to understand the source of truth behind an assessment or recommendation so they can quickly identify potential errors that might affect patient care.

Another limitation of conversational AI is its inability to create structured data. While these tools may be excellent at generating narrative text, they often fail to produce the structured clinical data needed for analytics, regulatory compliance and quality metrics.

AI-assisted documentation tools also rely heavily on summarization, which can be helpful when trying to quickly make sense of dozens of multipage medical records. LLMs, however, are trained to predict the next plausible word, with little clinical reasoning behind the output. To drive accuracy, these models should be trained on expertly curated clinical content using advanced algorithms.

AI is also imperfect at coding, especially when codes are derived from erroneous or incomplete documentation. Incorrect diagnosis and procedure codes can create downstream problems, including denied claims, inaccurate reimbursement and inappropriate follow-up care.
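To make the idea of structured, terminology-coded clinical data concrete, here is a minimal, hypothetical Python sketch. It is not Medicomp's product or any vendor's actual API; the class, field names, provenance strings and sample findings are illustrative assumptions. It shows how an AI-extracted finding could be stored as a discrete, coded record with provenance and a review flag, rather than as free text buried in a narrative note.

```python
# Illustrative sketch only: how a discrete, coded clinical finding might be represented.
# The structure and sample values are assumptions, not any specific system's format.
from dataclasses import dataclass

@dataclass
class CodedFinding:
    text: str               # narrative phrase the AI extracted
    system: str             # standard terminology, e.g., SNOMED CT
    code: str               # concept code within that terminology
    source: str             # provenance: which document or model produced it
    reviewed: bool = False  # has a clinician signed off on this item?

# Hypothetical output of an AI documentation tool after terminology mapping.
findings = [
    CodedFinding("type 2 diabetes mellitus", "SNOMED CT", "44054006",
                 source="discharge summary via HIE / LLM draft"),
    CodedFinding("essential hypertension", "SNOMED CT", "59621000",
                 source="ambulatory note / LLM draft"),
]

# Because the data is discrete and coded, review and quality checks become simple
# filters instead of free-text searches.
for finding in (f for f in findings if not f.reviewed):
    print(f"Needs clinician review: {finding.text} "
          f"({finding.system} {finding.code}) from {finding.source}")
```

In a sketch like this, the code and provenance fields are what allow analytics, quality reporting and clinician review to trace each data point back to its source, which is the kind of transparency described above.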
Healthcare enterprises using AI to generate content that becomes part of a patient's medical record should consider the following:

1. Who is responsible for the final review and approval of AI-generated content that becomes part of a medical record?

2. Where in the workflow does this happen, and how are changes or corrections made and communicated to all members of the care team?

3. How will the organization consult with clinical users to reach agreement on how information is presented, so that those who sign off on care plans can view the documentation and, if necessary, act upon the information presented to them?

4. What feedback loop will the organization put in place so users can alert system and workflow designers to issues with AI-generated content and suggest refinements and improvements?

To effectively manage today's tsunami of clinical data, especially in this era of interoperability and AI, clinicians need tools that can transform huge volumes of clinical data into a source of truth. This requires technologies that leverage trusted, curated clinical content to validate AI-generated outputs. To be useful for clinicians at the point of care, these tools should also sort and filter data and create structured, discrete information that is mapped to standard terminologies, value sets and reporting formats.

Healthcare's data tsunami is not going away. However, by embracing tools that pair with AI outputs to improve data quality, clinicians can be empowered with rapid access to accurate, actionable data that enhances patient care.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
