A health care framework for patient-centric use of AI

Hindustan Times | 3 days ago

Artificial Intelligence (AI) technologies are developing at a much faster pace than regulation or industry standards can keep up with. With every new breakthrough and innovation, the potential of AI to transform the health care sector grows, whether through screening and diagnostics, public health and complex data analysis, or clinical decision-making. Generative AI (GAI) in particular has the potential to improve India's health care system by supporting clinical care, that is, by providing clinical decision support and deepening clinical expertise at the point of care through the processing of vast amounts of data and knowledge. Ultimately, this leads to better-informed decision-making and improved health outcomes.
Use of AI in sensitive fields like health care is understandably mired in misconceptions and practical challenges. At the outset, it is essential to recognise that AI should be seen as a tool that augments, rather than replaces, the expertise and skills of clinicians. It is equally important to recognise that the adoption of AI technologies raises valid concerns about operational efficiency, patient data privacy and algorithmic bias, in addition to broader ethical and regulatory challenges.
As newer, relevant use-cases of AI for health care emerge, the need of the hour for India is to proactively build a self-reliant framework and roadmap for the ethical and responsible use of AI. This can not only ease the adoption of emerging AI technologies across India's private and public health care sectors, but can also help address systemic challenges in health care delivery. A clear roadmap for AI integration, with guardrails, can help both India's clinicians and patients benefit from the immense potential of modern AI.
In a field as dynamic and ever-evolving as health care, GAI offers critical support to clinicians by creating real-time learning opportunities. It facilitates differential diagnosis, especially in complex cases with co-morbidities, by deriving sharp insights from large databases to help clinicians and nurses take better-informed calls. GAI platforms built on Large Language Models (LLMs) can be understood as tailored assistants that answer targeted queries at the point of care and pull up relevant answers quickly and efficiently, allowing clinicians to perform their tasks better through easier access to pertinent data and information. When working through a differential diagnosis for a patient, GAI solutions grounded in the latest evidence can function as a concentrated pool of knowledge for clinicians to rely on.
Modern health care systems are built on technological foundations, from data management and diagnostics to surgical support. AI is the natural next step to increase productivity and accuracy in health care by augmenting the skills of medical professionals. GAI can rapidly analyse and synthesise large volumes of medical literature to provide clinicians with the most relevant, reliable evidence for clinical decisions at the point of care. One of AI's key offerings is personalisation: it can generate patient-specific treatment recommendations and tailor its research to the unique needs of individual and population profiles. Health care outcomes for the same conditions and diseases can vary across regional and population groups, and AI can help track and consolidate data for particular groups. This allows for improved prognostication: clinicians will be able to better track trends from their own practice or area, instead of relying on data that may not be best suited to their patient groups. This is particularly significant, as it can help India understand and analyse its population health metrics better. Such an approach can also empower patients to take a more active role in collaborative decision-making with their clinicians by giving them greater access to better-quality information.
While GAI promises to be a game changer for improving individual and public health, the use of any AI, as an emerging technology, raises questions of data privacy and of algorithmic biases that may skew outputs for diverse population segments.
As GAI needs voluminous datasets to train on for increased accuracy, confidential patient data must be handled carefully to avoid misuse and breaches, especially disclosure to third parties. To build trust in AI and establish its relevance for health care, there is an urgent need to increase public and professional confidence in how AI systems handle data.
First, there is a need for clean, evidence-based data. Second, a clearly outlined framework for patient privacy is needed. To tap AI's potential to improve health care outcomes, it must be leveraged through a comprehensive framework that governs its usage and applicability. The creation of such a framework necessitates multi-disciplinary collaboration between clinicians, data scientists, ethicists, and policymakers, as it has broad ramifications across medical, ethical, legal, and social lines.
At the outset, data privacy vulnerabilities must be thoroughly accounted for, ensuring that any data is used with consent, and that patient data is accurate, properly anonymised and shared only with authorised parties. Algorithms require continuous human oversight to detect and mitigate gender or historical biases. Additionally, operational issues such as transparency and accountability must be prioritised by organisations adopting and integrating AI into their existing systems. As regulation around AI is still evolving, a constant eye on regulatory compliance is necessary to stay up to date with legal requirements. Continuous monitoring and evaluation are key to ensuring that the use of GAI is both ethical and efficient.
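As a rough sketch of what the anonymisation step described above can involve, the Python snippet below strips direct identifiers from a patient record and substitutes a salted pseudonym before the record is shared with any AI system; the field names and pseudonymisation scheme are illustrative assumptions, not a prescribed standard.

    import hashlib

    # Fields assumed (for illustration) to be direct identifiers in a patient record.
    DIRECT_IDENTIFIERS = {"name", "phone", "address", "national_id"}

    def anonymise(record: dict, salt: str) -> dict:
        """Return a copy of the record with direct identifiers removed and
        a salted pseudonym substituted for the patient identifier."""
        pseudonym = hashlib.sha256((salt + record["national_id"]).encode()).hexdigest()[:12]
        cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        cleaned["pseudonym"] = pseudonym
        return cleaned

    patient = {
        "name": "A. Sharma",            # illustrative record
        "national_id": "1234-5678-9012",
        "phone": "+91-9800000000",
        "address": "New Delhi",
        "age": 54,
        "diagnosis": "type 2 diabetes",
    }

    print(anonymise(patient, salt="clinic-specific-secret"))

Even so, such field-level scrubbing is only a starting point: quasi-identifiers and free-text notes usually need further review, which is why continuous oversight matters more than any one-off technical fix.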
This article is authored by Dr Arun Khemariya, senior clinical specialist, Elsevier India.


Related Articles


Copyright's tryst with generative AI

The Hindu | 18-05-2025

Copyright law has always been a product of technology. It was created in 1710 to deal with the consequences of the invention of the printing press, to protect publishers against unauthorised publication while encouraging learning, and to further their economic interests. Since its inception, copyright law has adapted itself to successive technologies, from the printing press to the photocopying machine, the recording device, and the Internet. At each stage, the law has worked its way around technology. Today, however, there is a belief that generative AI has the potential to upset copyright law. Such a debate is not new: it surfaces roughly every 20 years with each technological advance. So far, copyright law has been successful in forbidding the commercial reproduction of works protected by copyright; now, the law faces the task of prohibiting AI platforms from training on the works of creators. There is a shift in how copyright law is being used. In the past, the law dealt with copies of original works; now, it has to deal with the training of AI platforms on copyrighted material rather than with the reproduction of copies itself.

At a crossroads

Generative AI companies, specifically OpenAI, have found themselves at a crossroads with copyright law across countries. AI platforms employ a technique called Internet scraping, through which Large Language Models (LLMs) are trained on all available knowledge. For training purposes, a platform accesses both copyrighted and non-copyrighted content. Copyright infringement cases are being fought over subject matter such as literature, music, and photographs. Recently, the Federation of Indian Publishers as well as Asian News International initiated copyright infringement claims against OpenAI before the Delhi High Court for training the AI platform on the works of publishers without their prior consent. Similar cases are pending before American courts, where the respondents have invoked 'fair learning' and 'fair use in education' as exceptions provided by the U.S. Copyright Act. In these cases, OpenAI has developed an opt-out mechanism which allows publishers to opt out of dataset training. But this applies only to future, not past, training. In the ongoing case in India, Professor Dr. Arul George Scaria, amicus curiae, has suggested that the court should address whether unlearning the information drawn from content used during training is technically and practically feasible. Further, he has underscored the need to keep in mind the future of AI development in India; access to legitimate information, including copyrighted materials; and a direction from the court to OpenAI to address falsely attributed sources. Among other things, it has been argued that the Indian courts lack the competence to hear the case. Leaving that aside, LLM platforms may find themselves in uncharted territory in India, as the Indian Copyright Act adopts a different exception test and not the 'fair use' test established in the U.S. It adopts an enumerated approach, where the exact exceptions are already stated, the scope to manoeuvre is limited, and education exceptions are confined to classrooms and not beyond. In India, this could be used effectively by right-holders in their favour. However, the law could also potentially be used to prohibit access to books, much against the original purpose for which it was created.

The opt-out mechanism developed by OpenAI may also have a huge impact on the future of generative AI, as the efficiency of an AI system depends on the material it is trained on. If, in future, the technology is not trained on quality material, that could hold back budding AI platforms, which will not have the advantage that OpenAI has. The court should ensure a level playing field between generative AI with deep pockets and generative AI without deep pockets, so as to strike the right balance.

Solutions to the problem

The claims made by the parties have the potential to impact the core of creation, art, and copyright law, since any creation stands on the shoulders of its predecessors. Generative AI, like human creativity, functions by learning from existing creativity, which acts as nourishment for further creativity. Copyright law should not be turned on its head to prohibit future creators from having access to this benefit. Further, the arguments of the publishers in the case at hand could lead to human creation and machine creation being viewed differently in future, with different consequences for each. It is pertinent to remember that a human being is not expected to create without learning; at the same time, the law as it stands does not differentiate between human creation and machine creation. The foundational norms of copyright law offer solutions to the existing problem. Copyright in a work does not apply to the idea or information; it applies only to the expression of that information. As long as an AI platform only uses existing information for learning purposes, and does not copy the expression of the idea, it does not amount to infringement under the law. When AI does lift copyright-protected content, the existing norms of copyright law have their net in place to catch the infringement. The founding doctrine should not be compromised, in the best interests of creativity, as it acts as a bridge between generative AI and creativity.

AI models like ChatGPT and DeepSeek frequently exaggerate scientific findings, study reveals

Time of India | 15-05-2025

According to a new study published in the journal Royal Society Open Science, Large Language Models (LLMs) such as ChatGPT and DeepSeek often exaggerate scientific findings when summarising research papers. Researchers Uwe Peters from Utrecht University and Benjamin Chin-Yee from Western University and the University of Cambridge analysed 4,900 AI-generated summaries from ten leading LLMs. Their findings revealed that up to 73 percent of summaries contained overgeneralised or inaccurate conclusions. Surprisingly, the problem worsened when users explicitly prompted the models to prioritise accuracy, and newer models like ChatGPT 4 performed worse than older versions.

What are the findings of the study

The study assessed how accurately leading LLMs summarised abstracts and full-length articles from prestigious science and medical journals, including Nature, Science, and The Lancet. Over a period of one year, the researchers collected and analysed 4,900 summaries generated by AI systems such as ChatGPT, Claude, DeepSeek, and LLaMA. Six out of ten models routinely exaggerated claims, often by changing cautious, study-specific statements like 'The treatment was effective in this study' into broader, definitive assertions like 'The treatment is effective.' These subtle shifts in tone and tense can mislead readers into thinking that scientific findings apply more broadly than they actually do.

Why are these exaggerations happening

The tendency of AI models to exaggerate scientific findings appears to stem from both the data they are trained on and the behaviour they learn from user interactions. According to the study's authors, one major reason is that overgeneralisations are already common in scientific literature. When LLMs are trained on this content, they learn to replicate the same patterns, often reinforcing existing flaws rather than correcting them. Another contributing factor is user preference. Language models are optimised to generate responses that sound helpful, fluent, and widely applicable. As co-author Benjamin Chin-Yee explained, the models may learn that generalisations are more pleasing to users, even if they distort the original meaning. This results in summaries that may appear authoritative but fail to accurately represent the complexities and limitations of the research.

Accuracy prompts backfire

Contrary to expectations, prompting the models to be more accurate actually made the problem worse. When instructed to avoid inaccuracies, the LLMs were nearly twice as likely to produce summaries with exaggerated or overgeneralised conclusions than when given a simple, neutral prompt. 'This effect is concerning,' said Peters. 'Students, researchers, and policymakers may assume that if they ask ChatGPT to avoid inaccuracies, they'll get a more reliable summary. Our findings prove the opposite.'

Humans still do better

To compare AI and human performance directly, the researchers analysed summaries written by people alongside those generated by chatbots. The results showed that AI was nearly five times more likely to make broad generalisations than human writers. This gap underscores the need for careful human oversight when using AI tools in scientific or academic contexts.

Recommendations for safer use

To mitigate these risks, the researchers recommend using models like Claude, which demonstrated the highest generalisation accuracy in their tests. They also suggest setting LLMs to a lower "temperature" to reduce creative embellishments and using prompts that encourage past-tense, study-specific reporting. 'If we want AI to support science literacy rather than undermine it,' Peters noted, 'we need more vigilance and testing of these systems in science communication contexts.'
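As a rough sketch of how those recommendations could be applied in practice, the snippet below uses the OpenAI Python SDK to request a low-temperature, past-tense, study-specific summary; the model name, prompt wording, and placeholder abstract are illustrative assumptions rather than settings taken from the study.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    abstract = "..."  # the abstract text of the paper to be summarised

    response = client.chat.completions.create(
        model="gpt-4o",      # illustrative model choice
        temperature=0.2,     # low temperature to curb creative embellishment
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarise the study in the past tense, limiting every claim "
                    "to the population, setting and sample actually studied. "
                    "Do not generalise beyond what the abstract reports."
                ),
            },
            {"role": "user", "content": abstract},
        ],
    )

    print(response.choices[0].message.content)

Even with such settings, the study's findings suggest the output should still be checked against the source paper rather than taken at face value.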
