How safe AI is in healthcare depends on the humans of healthcare

The Hindu

Researchers at IIT-Madras and the Translational Health Science and Technology Institute in Faridabad are developing an artificial intelligence (AI) model that uses ultrasonography images to predict the age of a growing foetus. Called Garbhini-GA2, the model was trained on scans from about 3,500 pregnant women who had visited the Gurugram Civil Hospital in Haryana. Each scan was labelled with different parts of the foetus, its size, and its weight — measures that can be used to predict a foetus's age. After the training, team members tested the model with (unlabelled) scans from 1,500 pregnant women who had visited the same hospital and around 1,000 pregnant women who had visited the Christian Medical College Vellore. They found Garbhini-GA2 erred on the age of the foetus by only half a day. This is a significant improvement over the most common method today, which uses Hadlock's formula. Because the formula is based on data from Caucasian populations, it has been known to miss the age of the foetus in India by up to seven days, according to the IIT-Madras team. The team now plans to test its model on datasets from around India.

This is just a glimpse of how AI tools are quietly reshaping Indian healthcare. From foetal ultrasound dating and high-risk-pregnancy guidance to virtual autopsies and clinical chatbots, they are matching expert accuracy while accelerating workflows. Yet their promise comes entwined with the systemic challenges of data and automation bias, privacy, and weak regulation, often exacerbated by the sensitivities of the healthcare sector itself.

Helpful, but can get better

Almost half of all pregnancies in Indian women are high-risk pregnancies (HRPs), according to a 2023 study in the Journal of Global Health. In an HRP, there is a high chance of the mother or the newborn taking ill or dying. The conditions that cause these outcomes include severe anaemia, high blood pressure, pre-eclampsia, and hypothyroidism. The risks are higher for women with no formal education, those from rural areas, and those belonging to marginalised social groups.

Experts say routine monitoring is the best way to reduce maternal and perinatal mortality in HRPs. In rural areas, this task is often carried out by auxiliary nurse-midwives (ANMs), female health workers who are the first point of contact between a pregnant woman and the medical system. ANMs are trained by medical professionals to recognise HRPs and advise women on their options.

Mumbai-based NGO ARMMAN started such a training programme in 2021 in partnership with UNICEF and the Governments of Telangana and Andhra Pradesh. It has been training healthcare professionals, including ANMs, in 'end-to-end management of HRPs,' ARMMAN's director of innovation Amrita Mahale said. The NGO trains ANMs to track and manage HRPs through 'classroom training and digital learning,' Mahale said, adding that ANMs are also supported through a WhatsApp helpline 'for doubt-solving and hand-holding as they go through the learning content and apply it to real-life high-risk pregnancy cases.' When in doubt, ANMs are encouraged to reach out to their trainers with queries.

However, 'the trainers themselves are overworked and do not always prioritise responding to ANM queries,' Mahale said. So ARMMAN adopted an AI chatbot earlier this year. It recognises both text and voice-based queries from ANMs and responds in the same medium with clinically validated answers.
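
Conceptually, such a chatbot fields each query itself and hands over to a human only when it is unsure. The sketch below is a minimal, hypothetical rendering of that routing rule, not ARMMAN's actual system: the ask_model and notify_trainer hooks and the confidence threshold are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class BotReply:
    answer: str | None   # None when the model cannot produce an answer
    confidence: float    # model's self-reported confidence, 0 to 1

CONFIDENCE_FLOOR = 0.8   # hypothetical escalation threshold

def handle_query(query: str, ask_model, notify_trainer) -> str:
    """Answer an ANM's query directly when the model is confident;
    otherwise hand the query to a human trainer."""
    reply = ask_model(query)
    if reply.answer is None or reply.confidence < CONFIDENCE_FLOOR:
        notify_trainer(query)   # the human-in-the-loop takes over
        return "Your question has been forwarded to a trainer."
    return reply.answer

# Example wiring with stub callbacks:
print(handle_query(
    "Is a haemoglobin level of 9 g/dL safe in the third trimester?",
    ask_model=lambda q: BotReply(answer=None, confidence=0.0),
    notify_trainer=lambda q: print("Escalated:", q),
))
```
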
Medical professionals now 'act as humans-in-the-loop who step in when the chatbot cannot answer a question, or if the ANM is not satisfied with the chatbot's response,' Mahale said. Currently being tested with 100 ANMs, the chatbot has received '94% positive feedback' from its users, Mahale said. 'A domain expert has rated 91% of the answers to date as accurate and satisfactory.' But she also flagged a problem: 'The current lot of speech [recognition] models struggle with Indian languages, especially regional variations and accents.' This means the chatbot might fail to understand about 5% of the queries that are shared as voice notes rather than as text.

The kindest cut

Amar Jyoti Patowary heads the Department of Forensic Medicine at the North Eastern Indira Gandhi Regional Institute of Health and Medical Sciences. He is one of India's few 'virtual autopsy' experts.

Autopsies don't have a good public reputation. When Dr. Patowary and his team surveyed the relatives of 179 deceased people who had undergone an autopsy at the department, about 63% expressed fears of the body being mutilated and of delays in conducting funeral rites. Similar issues have been reported from rural Haryana, too.

In a virtual autopsy, or virtopsy, a body is scanned with CT and MRI machines to generate detailed images of its internal structures. Then, a computer creates a 3D image of the body. Physicians feed this image into convolutional neural networks (CNNs) — deep-learning models adept at extracting features from one set of images and using them to classify images in others. In 2023, researchers from Tohoku University in Japan built a CNN that could distinguish individuals who had died of drowning from those who had died of other causes using chest CT scans. The model was 81% accurate 'for cases in which resuscitation was performed and 92% for cases in which resuscitation was not attempted,' the authors wrote in their paper. In 2024, Swiss scientists developed a CNN that could say whether a person had died of a cerebral haemorrhage based on postmortem CT images.

While conventional autopsies take about 2.5 hours to complete, a virtopsy can be finished in about half an hour, Dr. Patowary said. In conventional autopsies, once the body has been dissected, a second dissection may be required if the first one has been inconclusive. This is harder to do. But virtopsies allow as many dissections as required, since the scans can be used to reconstruct the body again and again.

What virtopsies might miss, however, are 'small injuries in the soft tissue' as well as changes in the colour of tissues and organs and how the body and its fluids smell, all of which might indicate how a person died, Dr. Patowary cautioned. Yet he also expressed confidence that these challenges can be overcome by combining a virtopsy with a 'verbal autopsy' — checking with an accompanying relative or police officer for clinically relevant details — and a visual examination of the body and its cavities.

Access control

These cases indicate that the best use of AI might be as a healthcare professional's assistant. In 2019, MediBuddy, a digital healthcare company that provides online doctor consultations and other services, experimented with an AI bot that could chat with a patient, extract clinically relevant details from the conversation, and compile and present them to a doctor along with suggested diagnoses.
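
What such a pre-consultation step might look like can be sketched in a few lines. The example below is purely illustrative and not MediBuddy's implementation: the symptom lexicon and every function name are invented, and a production system would use a trained clinical language model rather than keyword matching.

```python
import re
from dataclasses import dataclass, field

# Toy symptom lexicon; a real system would rely on a trained
# clinical NLP model, not keyword lookup.
SYMPTOM_TERMS = {"fever", "cough", "headache", "fatigue", "nausea"}

@dataclass
class ConsultationSummary:
    symptoms: list[str] = field(default_factory=list)
    patient_statements: list[str] = field(default_factory=list)

def summarise_chat(transcript: list[tuple[str, str]]) -> ConsultationSummary:
    """Extract clinically relevant terms from a (speaker, text)
    transcript and compile them for the consulting doctor."""
    summary = ConsultationSummary()
    for speaker, text in transcript:
        if speaker != "patient":
            continue
        summary.patient_statements.append(text)
        for word in re.findall(r"[a-z]+", text.lower()):
            if word in SYMPTOM_TERMS and word not in summary.symptoms:
                summary.symptoms.append(word)
    return summary

chat = [
    ("bot", "What brings you in today?"),
    ("patient", "I have had a fever and a bad cough since Monday."),
]
print(summarise_chat(chat).symptoms)   # -> ['fever', 'cough']
```
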
Nine of the 15 doctors who tested this app said it was helpful, while the rest remained 'sceptical', said Krishna Chaitanya Chavati, MediBuddy's head of data science. He flagged data privacy as a key concern.

In India, digital personal information, including an individual's health information, is governed by the Information Technology Act, 2000, and the Digital Personal Data Protection (DPDP) Act, 2023. Neither Act specifically mentions AI technologies, although lawyers suggest the latter could apply to AI tools. Even then, the 'DPDP Act lacks clarity on AI-driven decision-making and accountability,' lawyers wrote in a May 2025 review.

To allay these concerns, Chavati said, strong data security protocols are necessary. At MediBuddy, the team has deployed a few, two of which are a personally identifiable information (PII) masking engine and role-based access. A masking engine is a programme that identifies and hides all personal information from specific algorithms, preventing unauthorised users from tracing the data back to a single individual. Role-based access ensures that no one individual within the company can access all of an individual's data, only the parts relevant to their work.

In the loop

Shivangi Rai, a lawyer who helped draft the National Public Health Bill and the Digital Information Security in Healthcare Bill, said 'automation bias' is another cause for concern. Rai is currently the deputy coordinator of the Centre for Health Equity, Law & Policy in Pune.

Automation bias is 'the tendency to overly trust and follow the suggestions made by an automated system, even if the suggestions are incorrect,' Rai said. It sets in when the 'human in the loop', such as a doctor, banks too much on the judgement of an AI-powered app 'rather than their own clinical judgement'.

In 2023, researchers from Germany and the Netherlands asked radiologists with different degrees of experience to evaluate mammograms (X-ray scans of the breasts) and assign them a BI-RADS score. BI-RADS is a standardised metric radiologists use to report how likely the tissue observed in a mammogram is to be malignant. The radiologists were told that an AI model would also parse each mammogram and assign a BI-RADS score. In truth, the researchers had no such model; they arbitrarily and secretly assigned scores to some mammograms. The researchers found that when the 'AI model' reported an incorrect score, the radiologists' own accuracy fell drastically. Even those with more than a decade of experience reported the correct BI-RADS scores in only 45.5% of such cases. The study's lead author said in 2023 that the researchers were surprised 'even highly experienced radiologists were adversely impacted by the AI system's judgments'.

For Rai, this study is evidence of a pressing need to train 'doctors on the limits of AI' and to constantly test and reassess 'AI tools being developed for and used in healthcare'.

India's rapid adoption of medical AI has illuminated a path to cheaper, faster, more equitable care. But algorithms inherit human fallibility while also further obfuscating it. If technology is to augment and not supplant ethical medicine, medical AI will need robust data governance, clinician training, and enforceable accountability.

Sayantan Datta is a faculty member at Krea University and an independent science journalist.
