
How AI is revolutionising modern obstetrics and gynaecology
Of the many fields in medicine that AI is already beginning to help, obstetrics and gynaecology seems to be on the cusp of a revolution.
How is this happening?
AI is increasingly being used in gynaecology for various applications such as diagnosis, treatment planning, and patient care, says Abha Majumdar, director and head of the Centre of IVF and Human Reproduction at Sir Ganga Ram Hospital, New Delhi. 'Currently, AI-powered tools are being used for tasks such as image analysis, predictive modelling, and personalised medicine. In the next 5-10 years, we can expect AI to become even more integral to gynaecology, with potential applications in areas including robotic-assisted surgery, fertility treatment optimisation, and menopause management.'
Sunita Tendulwadkar, president of the Federation of Obstetric and Gynaecological Societies of India, explains that AI has become like a trusty assistant. 'AI is the stethoscope of the 21st century,' she says. Used wisely, she adds, it can allow doctors to deliver safer, more personalised women's healthcare, from city hospitals to the last tribal hamlet. She notes, however, that 'while it amplifies our senses, it can never replace the mind or the heart of the physician'.
AI's applications
In one generation, says Dr. Tendulwadkar, doctors have moved from paper charts to algorithms that predict complications before they happen. Today, an AI model built from routine antenatal data can warn doctors of the risk of pre-eclampsia or postpartum haemorrhage weeks earlier than the old scoring systems, she explains.
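For readers curious what such a prediction model looks like under the hood, here is a minimal, purely illustrative sketch in Python: a logistic-regression classifier trained on synthetic 'antenatal' records. The feature names, thresholds and data are invented for demonstration and are not drawn from any of the clinical systems described above.

```python
# Illustrative only: a toy risk model on synthetic "antenatal" features.
# Feature names, thresholds and data are invented for demonstration;
# real obstetric models are trained and validated on clinical datasets.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic records: [maternal age, mean arterial pressure, BMI, prior hypertension (0/1)]
X = np.column_stack([
    rng.normal(30, 5, 1000),    # age in years
    rng.normal(90, 10, 1000),   # mean arterial pressure, mmHg
    rng.normal(26, 4, 1000),    # body-mass index
    rng.integers(0, 2, 1000),   # history of hypertension
])
# Synthetic labels: higher pressure and prior hypertension raise the (made-up) risk.
risk = 0.04 * (X[:, 1] - 90) + 1.2 * X[:, 3] - 1.5
y = (rng.random(1000) < 1 / (1 + np.exp(-risk))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# The probability of the positive class is the "risk score" a clinician might review.
new_patient = np.array([[34, 105, 31, 1]])
print(f"Estimated risk score: {model.predict_proba(new_patient)[0, 1]:.2f}")
```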
Beyond analysing data, ultrasound machines now come with 'auto-measure' buttons that capture standard foetal planes in seconds. She points out that systems such as SonoCNS can automatically segment the foetal heart or brain, label every chamber, and 'hand me precise biometrics while I'm still holding the probe'.
'Of course, I still review every image, but the heavy lifting is already done when I sit down to report,' she says.
In high-risk pregnancies, AI is also useful for interpreting ultrasound images to check for abnormalities and assess risks. It can help predict the chances of a preterm birth and assess the risk of complications such as high blood pressure and organ damage.
It can also help build personalised treatment planning for conditions such as PCOS and menopausal symptoms, says Dr. Majumdar.
Helai Gupta, head of the department of obstetrics and gynaecology, Artemis Daffodils, Delhi, says AI is definitely a 'welcome tool' in gynaecological practice, as it is far easier to rely on an AI bot to sift through huge amounts of literature while arriving at a diagnosis. Similarly, if you want to compute or collate large amounts of data, AI is a godsend, she adds. However, 'in practical clinical settings, AI has limited utility, at least in my field,' she says.
Bandana Sodhi, director, obstetrics and gynaecology, Fortis La Femme, Delhi, adds that AI can help reduce maternal and neonatal morbidity. 'Real-time AI-powered foetal monitoring continuously analyses foetal heart rate during labour and detects abnormalities promptly – leading to an 82 per cent reduction in stillbirths,' she says.
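As an illustration of the idea behind continuous monitoring, the sketch below scans a stream of foetal heart-rate readings and raises an alert when readings stay outside a normal band for a sustained period. The band, the persistence window and the simulated trace are simplified assumptions; real intrapartum CTG analysis uses far richer, clinically validated criteria.

```python
# Minimal sketch of streaming heart-rate checks; not a clinical CTG algorithm.
# The 110-160 bpm band, the 60-second persistence rule and the simulated trace
# are simplified assumptions for illustration only.
from collections import deque

NORMAL_LOW, NORMAL_HIGH = 110, 160   # commonly cited normal foetal heart-rate range (bpm)
WINDOW_SECONDS = 60                  # how long a deviation must persist before alerting

def monitor(fhr_stream, sample_interval_s=4):
    """Yield (time in seconds, mean bpm) when the rate stays out of range for a full window."""
    window = deque(maxlen=WINDOW_SECONDS // sample_interval_s)
    alerting = False
    for t, bpm in enumerate(fhr_stream):
        window.append(bpm)
        out_of_range = len(window) == window.maxlen and all(
            b < NORMAL_LOW or b > NORMAL_HIGH for b in window
        )
        if out_of_range and not alerting:
            alerting = True
            yield t * sample_interval_s, sum(window) / len(window)
        elif not out_of_range:
            alerting = False

# Simulated trace that drifts into bradycardia after two minutes.
trace = [140] * 30 + [95] * 30
for seconds, mean_bpm in monitor(trace):
    print(f"Alert at {seconds}s: mean FHR {mean_bpm:.0f} bpm outside the normal range")
```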
Is AI better at diagnosing cancer?
When it comes to mammograms, AI may be faster, but not necessarily better, says Dr. Gupta. She explains that mammograms are designed to be sensitive and can give false positives with borderline cases. 'While AI may diagnose or 'undiagnose', and may do it faster, it may lack the necessary empathy and human skill required to ask for a second scan or put together patient-specific personal information that may alter simple reporting,' she says.
In comparison to an ultrasound screening test, which detected abnormal results less than 5% of the time, an AI neural network accurately recognises nearly 100% of anomalies associated with ovarian cancer, according to a paper in Cancers, an online journal published by MDPI. Large, randomised trials from Sweden and the UK show AI catching up to 20% more cancers without increasing false alarms, says Dr. Tendulwadkar.
'The direction is clear: very soon the standard will be one radiologist plus an AI safety net instead of two humans double-reading every film,' she says.
AI in IVF
A couple in the United States who had an 'AI-assisted pregnancy' recently made headlines, but what is the science behind this? According to media reports, the couple, who had been trying to conceive for 18 years, underwent several rounds of in vitro fertilisation (IVF). In the IVF process, a woman's egg is removed and combined with sperm in a laboratory to create an embryo, which is then implanted in the womb. Their IVF attempts were unsuccessful, however, due to azoospermia, a rare condition in which a semen sample that would normally contain hundreds of millions of sperm has no measurable sperm at all. Finally, at the Columbia University Fertility Center in the US, after hours of fruitless, meticulous searching under a microscope for sperm in the husband's sample, AI was used, and it helped identify and recover three sperm, which were then used to fertilise the wife's eggs. The woman's pregnancy became the first enabled by this novel 'STAR' method. The baby is due in December.
AI is being used to identify the most viable oocytes and embryos, those with a high chance of leading to a pregnancy, as well as in selecting the correct timing of embryo transfer to the uterus.
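A toy sketch of this ranking idea is shown below: a placeholder scoring function stands in for a trained image model, and candidate embryos are ordered by their predicted viability. The `viability_score` function, the feature values and the embryo identifiers are all hypothetical, invented purely for illustration.

```python
# Hypothetical sketch of how a lab might rank candidates by a model score;
# `viability_score` stands in for a trained image model and is invented here.
from dataclasses import dataclass

@dataclass
class Embryo:
    embryo_id: str
    image_features: list  # e.g. measurements extracted from time-lapse imaging

def viability_score(features):
    """Placeholder for a trained model's predicted probability of implantation."""
    # A real system would run a validated model; here we just average toy features.
    return sum(features) / len(features)

def rank_embryos(candidates):
    """Return candidates ordered from highest to lowest predicted viability."""
    return sorted(candidates, key=lambda e: viability_score(e.image_features), reverse=True)

cohort = [
    Embryo("E1", [0.62, 0.71, 0.58]),
    Embryo("E2", [0.81, 0.77, 0.84]),
    Embryo("E3", [0.45, 0.52, 0.49]),
]
for embryo in rank_embryos(cohort):
    print(embryo.embryo_id, round(viability_score(embryo.image_features), 2))
```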
Nandita Palshetkar, medical director and IVF specialist at Bloom IVF, Lilavati Hospital, Mumbai, says, 'AI is giving us incredible new tools in IVF – helping us choose the best embryos, tailor treatments and improve success rates. It does not replace the doctor's judgement but supports it with sharper insights.'
Fertility consultant Ashwani Kale, director, Asha Kiran Hospital and Asha IVF Centre, Pune, adds that better embryo selection with AI may soon replace the need for invasive testing.
Risks and privacy concerns
Privacy and data concerns have also arisen when it comes to the use of AI. There is a critical need for responsible AI in gynaecology, says Dr. Majumdar, particularly when it comes to confidentiality. 'AI systems require access to sensitive patient data, which must be protected from unauthorised access and misuse. Patient consent is crucial for building trust in AI-powered healthcare solutions.'
Data shared on the internet lives forever, warns Dr. Gupta, reiterating the need to remove patient identifiers and private information to maintain confidentiality.
Dr. Tendulwadkar emphasises that AI tools have to be fine-tuned to Indian conditions and Indian women so that they respect 'our diversity in body habitus and disease patterns'.
Another potential threat Dr. Majumdar points to is the risk of bias in decision-making. 'If AI systems are trained on biased data, they may lead to unequal treatment and outcomes for certain patient populations,' she says.
(Satyen Mohapatra is a senior journalist based in New Delhi. satyenbabu@gmail.com)
Jaipur: The Centre has approved Rajasthan's project to screen for diabetic retinopathy (DR), a serious eye condition caused by diabetes that can lead to vision loss. The initiative aims to identify early signs of retinal damage and provide timely treatment to prevent complications. In July, the state health department launched MadhuNetr DR-AI, an artificial intelligence-based system for DR screening using fundus cameras. Fundus photography helps in documenting retinopathy and counselling patients by visually demonstrating the impact of the disease. "We started MadhuNetr DR-AI considering the rising burden of diabetes. Retinopathy is a major complication and can cause irreversible vision damage," said Dr. Sunil Singh, State Nodal Officer for Non-Communicable Diseases (NCDs). The AI system enables early detection by grading the severity of retinopathy. Based on the results, patients are referred to ophthalmologists for further treatment. Initially, the screening was launched in five locations- Pali, Jalore, Deshnok (Bikaner), Karauli, and Beawar, where fundus cameras were already available. With Central funding now approved, eight more screening centres will soon be established in govt hospitals across the state. by Taboola by Taboola Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like Các chỉ số toàn cầu đang biến động — Đã đến lúc giao dịch! IC Markets Tìm hiểu thêm Undo So far, 65 patients have been screened in the five operational centres, with 15 diagnosed with retinopathy and referred for advanced care. The health department is prioritising NCD prevention due to high prevalence rates. In May, it reported that 20% of the population above 30 years in the state suffers from diabetes, hypertension, or both—equating to 370 out of every 1,850 people in that age group. The department said 37% of the state's population is above the age of 30 years.