
A health care framework for patient-centric use of AI
Use of AI in sensitive fields like health care is understandably mired in misconceptions and practical challenges. At the outset, it's essential to recognise that AI should be seen as a tool that augments, not replaces, human expertise and supports the skills of clinicians. It's equally important to recognise that adoption of AI technologies raises valid concerns about operational efficiency, patient data privacy and algorithmic bias, in addition to broader ethical and regulatory challenges.
As AI develops newer, relevant use-cases for health care, the need of the hour for India is to proactively build a self-reliant framework and roadmap for the ethical and responsible use of AI. This can not only ease the adoption of emerging AI technologies across India's private and public health care, but also help address systemic challenges in health care systems. A clear roadmap for AI integration, with guardrails, can help both India's clinicians and patients benefit from the immense potential of modern AI.
In a field as dynamic and ever-evolving as health care, generative AI (GAI) provides critical support to clinicians by creating real-time learning opportunities. It facilitates differential diagnosis, especially in complex cases with co-morbidities, by deriving sharp insights from large databases to help clinicians and nurses make better-informed calls. GAI platforms built on Large Language Models (LLMs) can serve clinicians as tailored assistants that answer targeted queries at the point of care and pull up relevant answers quickly and efficiently, allowing clinicians to perform their tasks better through easier access to pertinent data and information. When working through a differential diagnosis, GAI solutions grounded in the latest evidence can function as a concentrated pool of knowledge for clinicians to rely on.
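To make this concrete, here is a minimal sketch of such a point-of-care query assistant, using the OpenAI Python client purely for illustration. The model name, the system prompt, and the idea of grounding answers in a vetted evidence excerpt are this example's assumptions, not a production clinical system.

# Illustrative sketch of a point-of-care query assistant built on an LLM.
# The model name, system prompt, and evidence-grounding scheme are
# hypothetical choices for this example, not a clinical product.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_clinical_query(question: str, evidence: str) -> str:
    """Answer a targeted clinician query, grounded in supplied evidence."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("You are a decision-support assistant for clinicians. "
                         "Answer only from the evidence provided and cite it; "
                         "say 'insufficient evidence' when unsure.")},
            {"role": "user",
             "content": f"Evidence:\n{evidence}\n\nQuestion: {question}"},
        ],
        temperature=0.0,  # deterministic, conservative answers
    )
    return response.choices[0].message.content

# Example: a differential-diagnosis query for a complex, co-morbid case
print(answer_clinical_query(
    "List differentials for acute dyspnoea in a diabetic patient with CKD.",
    "Excerpt from a vetted, up-to-date clinical reference goes here.",
))

Constraining the model to answer only from supplied, vetted evidence, rather than from its open-ended training data, is one simple way to keep such an assistant aligned with the "latest evidence" role described above.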
Modern health care systems are built on technological foundations, from data management and diagnostics to surgical support. AI is the natural next step to increase productivity and accuracy in health care by augmenting the skills of medical professionals. GAI can rapidly analyse and synthesise large volumes of medical literature to provide clinicians with the most relevant, reliable evidence for clinical decisions at the point of care. One of AI's key offerings is personalisation: it can generate patient-specific treatment recommendations and tailor its research to the unique needs of individual and population profiles. Health care outcomes, for the same conditions and diseases, can vary across regional and population groups. AI can help track and consolidate data for particular population and regional groups, allowing for improved prognostication: clinicians will be able to track trends from their own practice or areas, instead of relying on data that may not be best suited to their patient groups. This is particularly significant, as it can help India understand and analyse its population health metrics better. Such an approach can also empower patients to take a more active role in collaborative decision-making with their clinicians by giving them greater access to better-quality information.
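As a toy illustration of this kind of group-level outcome tracking, the short pandas sketch below aggregates hypothetical case records by condition, region, and age band. Every column name and value is invented for the example.

# Minimal sketch of tracking outcomes by region and population group,
# assuming a tabular record of past cases. All column names
# ("region", "age_band", "condition", "outcome_score") are hypothetical.
import pandas as pd

cases = pd.DataFrame({
    "region":        ["North", "North", "South", "South", "South"],
    "age_band":      ["40-60", "60+",   "40-60", "40-60", "60+"],
    "condition":     ["T2DM",  "T2DM",  "T2DM",  "T2DM",  "T2DM"],
    "outcome_score": [0.72,    0.55,    0.81,    0.78,    0.60],
})

# Group-level view: the same condition can show different outcome
# profiles across regional and demographic groups.
profile = (cases.groupby(["condition", "region", "age_band"])["outcome_score"]
                .agg(["mean", "count"])
                .rename(columns={"mean": "avg_outcome", "count": "n_cases"}))
print(profile)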
While GAI promises to be a gamechanger for improving individual and public health, any AI, as an emerging technology, poses questions of data privacy and of algorithmic bias that may skew outputs for diverse population segments.
As GAI needs voluminous datasets to train on for increased accuracy, confidential patient data must be handled carefully to prevent misuse and breaches, especially disclosure to third parties. To build trust in AI and its relevance for health care, there is an urgent need to demonstrate responsible data handling by AI systems to both the public and the medical community.
First, there is a need for clean, evidence-based data. Second, a clearly outlined framework for patient privacy is needed. To tap AI's potential to improve health care outcomes, it must be leveraged through a comprehensive framework that governs its usage and applicability. Creating such a framework requires multi-disciplinary collaboration between clinicians, data scientists, ethicists, and policymakers, as it has broad ramifications across medical, ethical, legal, and social lines.
At the outset, data privacy vulnerabilities must be thoroughly accounted for, ensuring that any data is used with consent, and that patient data is accurate, properly anonymised and shared only with authorised parties. Algorithms require continuous human oversight to detect and mitigate gender or historical biases. Additionally, operational issues such as transparency and accountability must be prioritised by organisations adopting and integrating AI into their existing systems. As regulation around AI is still evolving, a constant eye on regulatory compliance is necessary to stay up to date with legal requirements. Continuous monitoring and evaluation are key to ensuring that the use of GAI is both ethical and efficient.
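A minimal sketch of what consent checking and anonymisation can look like in code follows, assuming a simple salted-hash pseudonymisation scheme. The record fields and the scheme itself are illustrative; a real deployment would follow the applicable anonymisation standard and key-management policy.

# Illustrative sketch of consent checking and pseudonymisation before a
# patient record is shared with an AI system. Fields and the salted-hash
# scheme are hypothetical examples, not a compliance recipe.
import hashlib
import os

SALT = os.environ.get("PSEUDONYM_SALT", "change-me")  # keep out of source control

def pseudonymise(record: dict) -> dict:
    """Replace direct identifiers with a salted hash; drop name and consent flag."""
    if not record.get("consent_given", False):
        raise PermissionError("No recorded consent; record must not be shared.")
    token = hashlib.sha256((SALT + record["patient_id"]).encode()).hexdigest()[:16]
    shared = {k: v for k, v in record.items()
              if k not in ("patient_id", "name", "consent_given")}
    shared["pseudonym"] = token
    return shared

record = {"patient_id": "MRN-1043", "name": "A. Patel",
          "consent_given": True, "age": 58, "diagnosis": "T2DM"}
print(pseudonymise(record))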
This article is authored by Dr Arun Khemariya, senior clinical specialist, Elsevier India.
Related Articles


Time of India, 18-07-2025
The coming of agentic AI: The next era of human-machine synergy
Artificial Intelligence (AI) has travelled from the confines of research labs to every aspect of our daily lives. Over the past several decades, we have witnessed an extraordinary transformation: from rule-based systems to neural networks, from statistical AI to large language models (LLMs), and now to the threshold of Agentic AI, the trending buzzword, a paradigm in which machines can reason, plan, adapt, and act with increasing autonomy and human-like capability. This evolution has not only showcased the power of technological progress but has also continuously enriched human life in meaningful ways.

The evolution of AI

The story of AI began in the 1950s with the advent of symbolic AI: systems designed to reason using logic and handcrafted rules. While foundational, these early systems were rigid, unable to adapt to real-world complexity. The 1980s brought expert systems, which encoded domain knowledge explicitly. Though revolutionary for tasks like medical diagnosis and financial modelling, their maintenance proved unsustainable at scale.

The real shift came in the late 1990s and 2000s with the resurgence of machine learning. Instead of handcrafting intelligence, we began teaching machines to learn patterns from data. Algorithms like decision trees, support vector machines, and eventually deep learning architectures unlocked the ability to process images, speech, and text at scale. In 2012, a convolutional neural network achieved groundbreaking accuracy in image classification, marking the arrival of deep learning as a dominant force.

We saw a seismic shift in AI capabilities with the advent of transformer architectures, introduced in the seminal 2017 paper 'Attention Is All You Need.' This innovation enabled models to understand long-range language context, paving the way for Large Language Models (LLMs) capable of generating fluent, context-aware responses and performing tasks from summarization to reasoning. Landmark models like BERT revolutionized understanding through bidirectional context, while generative models like the GPT series demonstrated unprecedented abilities in content creation, dialogue, and code generation. This progress was driven by advanced algorithms, massive datasets, and exponentially growing computational power, catalysing the shift from narrow, task-specific AI to general-purpose systems with emergent intelligence and paving the way for Agentic AI.

The rise of agentic AI

Today, we stand at the edge of another monumental shift: the emergence of Agentic AI. Agentic AI systems exhibit autonomy, goal-oriented behaviour, memory, reasoning, and the ability to interact with and modify plans according to the real-world environment. They are built on the foundation of powerful LLMs, enhanced with capabilities for self-reflection, memory, and planning. These systems not only understand and generate language but can also evaluate their own actions and adapt their behaviour, enabling continuous improvement and proactive task execution.

Agentic AI's core capabilities:
1. Perception and awareness: understands real-world inputs across text, vision, and audio.
2. Reasoning and planning: makes strategic decisions, breaks down goals, and adapts through learning.
3. Autonomous execution: carries out tasks across systems, learns from feedback, and improves over time.

Together, LLM-driven intelligence and reflective agent architectures blur the line between tool and teammate, as the sketch below illustrates.
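As an illustration of these three capabilities, here is a minimal sketch of an agentic loop (perceive, plan, act, reflect) in Python. Every name in it is hypothetical: the plan, act, and reflect methods are stand-ins for an LLM planner, tool calls, and self-evaluation, not any particular agent framework.

# Minimal sketch of an agentic loop: perceive -> plan -> act -> reflect.
# Real agent frameworks add memory stores, tool schemas, and safety
# checks around this same skeleton; everything here is a stand-in.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)  # episodic memory of steps

    def perceive(self, observation: str) -> None:
        self.memory.append(("observation", observation))

    def plan(self) -> str:
        # Stand-in for an LLM call that decomposes the goal given memory.
        return f"next step toward: {self.goal} (given {len(self.memory)} memories)"

    def act(self, step: str) -> str:
        # Stand-in for a tool invocation (API call, search, code execution).
        return f"result of '{step}'"

    def reflect(self, result: str) -> bool:
        # Stand-in for self-evaluation; decides whether the goal is met.
        self.memory.append(("result", result))
        return len(self.memory) >= 4  # toy stopping rule

    def run(self, observation: str, max_steps: int = 5) -> None:
        self.perceive(observation)
        for _ in range(max_steps):
            result = self.act(self.plan())
            if self.reflect(result):
                break

agent = Agent(goal="summarise today's meetings and book follow-ups")
agent.run("calendar shows three meetings today")
print(agent.memory)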
These systems can proactively initiate tasks, collaborate, and continuously evolve, mirroring human-like cognitive flexibility and purpose-driven action.

Impact on human life

Each wave of AI advancement has expanded our collective capability. Symbolic AI gave us expert systems in finance and medicine. Machine learning unlocked personalization, powering recommendation engines, fraud detection, and predictive analytics. Deep learning brought breakthroughs in vision and speech, enabling virtual assistants, real-time translation, autonomous vehicles, and medical imaging.

Agentic AI takes this further by transforming how we interact with machines. Imagine an AI assistant that doesn't just draft your emails, but understands your calendar, reads context from past meetings, and autonomously books travel, schedules follow-ups, and flags opportunities, continuously learning from your preferences. In enterprise, Agentic AI will streamline complex workflows. In healthcare, it can serve as a tireless collaborator, synthesizing patient data, flagging anomalies and coordinating care across departments. In education, it will act as an always-available tutor, adjusting teaching strategies in real time to individual student needs. In scientific research, agentic systems can formulate hypotheses, run simulations, and interpret results at a speed and scale previously unimaginable.

The road ahead: Promise and responsibility

As we venture deeper into the era of Agentic AI, the possibilities are infinite. We foresee:
Cognitive Companions: agents capable of dialogue, empathy modelling, and proactive assistance.
Digital Workers: agents that execute complex business processes with minimal supervision.
Adaptive Interfaces: AI that adapts to user preferences and behaviours, providing intuitive and context-aware experiences.
Augmented Human Intelligence: seamless collaboration between human creativity and machine precision to solve grand challenges.
Multi-Agent Collaboration: agents working closely with other AI agents to solve complex problems through specialized expertise.

It is important to remember that agentic systems must be designed with rigorous safeguards, embedding transparency, fairness, interpretability, and alignment with human values. Robust testing, continuous red-teaming, and human-in-the-loop oversight will be vital to ensure trust and accountability.

Conclusion

The evolution of Agentic AI mirrors our growing understanding of both computation and cognition. More than building smarter machines, we are shaping a new interface between human intention and digital action. Agentic AI holds the promise of being our most powerful collaborator yet, one that understands, learns, and acts on our behalf. As we look ahead, let us embrace this transformative moment with optimism and responsibility. The future of Agentic AI is not just technological; it is deeply human.


Business Standard, 08-07-2025
Veeda Lifesciences announces partnership with Mango Sciences to bring AI innovation in clinical trials services
PRNewswire, Ahmedabad (Gujarat) [India], July 8: Veeda Lifesciences, a global contract research organization (CRO), has announced an investment in Mango Sciences, a Boston-based healthcare AI and data company. Through this investment, Veeda intends to leverage AI capabilities to enhance the speed, efficiency, and quality of clinical trials services across its expanded global network, enabling more diverse, efficient recruitment and globally representative patient selection. By leveraging Mango Sciences' AI-powered Querent™ platform, Veeda will use technology to automate patient identification with precision and expand its reach across Europe.

"Our partnership will transform Veeda into an AI-driven oncology drug development organization, meeting the growing demand for diversity in clinical trials, which is in line with the expectations of regulators and pharmaceutical companies. We will be one of only a few CROs focused on oncology with access to this proprietary technology," said Dr. Mahesh Bhalgat, Group CEO and Managing Director, Veeda Clinical Research Limited.

This partnership advances Veeda's objectives of investing in technology modernization and digital transformation to enhance operational efficiency and quality assurance across its clinical research operations. By deploying the AI-powered Querent™ platform, Veeda will streamline processes, improve data management capabilities, and ensure higher quality standards through broader representation of non-Caucasian populations. This will enhance clinical trial efficiency through advanced data analytics, improving patient matching, trial design, and monitoring while reducing costs and timelines.

Dr. Mohit Misra, Founder & CEO of Mango Sciences, added, "We are integrating Large Language Models (LLMs) and Generative AI into Querent™ to drive operational efficiencies and improve real-world evidence, ultimately identifying the right drug for the right patient."

Veeda has previously strengthened its position as a tech-enabled, AI-driven CRO through strategic investments and the acquisition of Health Data Specialists (Heads). This provides Veeda with access to oncology cohorts in India and Europe and with exclusive access to eligible patient pools, supporting Veeda's aim to maintain a competitive edge in the CRO sector while delivering services to pharmaceutical and biopharmaceutical clients globally.

About Veeda Lifesciences: Veeda Lifesciences (Veeda Clinical Research Limited) provides a comprehensive portfolio of services across various stages of the drug development value chain, supporting small and medium biotech and pharmaceutical companies with capabilities ranging from non-clinical and pre-clinical development to clinical pharmacology and clinical trials across different modalities.

About Mango Sciences: Mango Sciences is a healthcare data and AI technology company dedicated to transforming patient care through advanced data analytics and solutions. Founded by industry veterans with extensive experience in healthcare, life sciences, and data analytics, Mango Sciences is driven by a passion for improving patient representation and access to comprehensive healthcare.

Disclaimer: Veeda Clinical Research Limited (the "Company") is proposing, subject to receipt of requisite approvals, market conditions and other considerations, a public issue of its equity shares and is in the process of filing a draft red herring prospectus with the Securities and Exchange Board of India.
This document is not an offer of securities for sale in the United States. Securities may not be offered or sold in the United States absent registration or an exemption from registration. Any public offering of securities to be made in the United States will be made by means of a prospectus that may be obtained from the Company and will contain detailed information about the Company and management, as well as financial statements. The Company does not intend to register any part of the offering in the United States.


Time of India, 25-06-2025
AI chatbots like ChatGPT can be dangerous for doctors as well as patients, as ..., warns MIT Research
A new study from MIT researchers reveals that Large Language Models (LLMs) used for medical treatment recommendations can be swayed by nonclinical factors in patient messages, such as typos, extra spaces, missing gender markers, or informal and dramatic language. These stylistic quirks can lead the models to mistakenly advise patients to self-manage serious health conditions instead of seeking medical care. The inconsistencies caused by nonclinical language become even more pronounced in conversational settings where an LLM interacts with a patient, a common use case for patient-facing chatbots.

Published ahead of the ACM Conference on Fairness, Accountability, and Transparency, the research shows a 7-9% increase in self-management recommendations when patient messages are altered with such variations. The effect is particularly pronounced for female patients, with models making about 7% more errors and disproportionately advising women to stay home, even when gender cues are absent from the clinical context.

"This is strong evidence that models must be audited before use in health care, where they're already deployed," said Marzyeh Ghassemi, MIT associate professor and senior author. "LLMs take nonclinical information into account in ways we didn't previously understand." Lead author Abinitha Gourabathina, an MIT graduate student, noted that LLMs, often trained on medical exam questions, are used in tasks like assessing clinical severity, where their limitations are less studied. "There's still so much we don't know about LLMs," she said.

The study found that colorful language, like slang or dramatic expressions, had the greatest impact on model errors. Unlike LLMs, human clinicians were unaffected by these message variations in follow-up research. "LLMs weren't designed to prioritize patient care," Ghassemi added, urging caution in their use for high-stakes medical decisions. The researchers plan to further investigate how LLMs infer gender and design tests to capture vulnerabilities in other patient groups, aiming to improve the reliability of AI in health care.
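For readers who want to see the shape of the audit the study argues for, here is a minimal sketch. It assumes an LLM-backed triage function, triage_model, supplied by the auditor; the perturbation rate and the typo/space edits are illustrative stand-ins for the paper's message variations, not its actual protocol.

# Minimal sketch of a perturbation audit: compare a model's triage
# recommendation on a patient message before and after nonclinical edits
# (swapped characters, stray spaces). `triage_model` is a hypothetical
# stand-in for whatever LLM-backed classifier is being audited.
import random

random.seed(0)  # reproducible perturbations

def add_typos_and_spaces(text: str, rate: float = 0.05) -> str:
    """Inject character swaps and stray spaces without changing meaning."""
    chars = list(text)
    for i in range(len(chars) - 1):
        if random.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]  # swap typo
        if random.random() < rate:
            chars[i] = chars[i] + " "  # stray space
    return "".join(chars)

def audit(triage_model, messages: list[str]) -> float:
    """Fraction of messages whose recommendation flips under perturbation."""
    flips = sum(
        triage_model(m) != triage_model(add_typos_and_spaces(m))
        for m in messages
    )
    return flips / len(messages)

# Usage: audit(my_llm_triage_fn, held_out_patient_messages)

A flip rate well above zero on held-out messages would be the kind of signal the researchers describe: the model is reacting to style, not clinical content.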