Latest news with #NEJMAI


Hans India
26-04-2025
- Health
AI algorithm can help identify high-risk heart patients: Study
New Delhi: A team of US researchers studying a type of heart disease known as hypertrophic cardiomyopathy (HCM) said they have calibrated an artificial intelligence (AI) algorithm to identify patients with the condition more quickly and specifically, and to flag them as high risk for greater attention during doctor's appointments.

The algorithm, known as Viz HCM, had previously been approved by the Food and Drug Administration (FDA) for the detection of HCM on an electrocardiogram (ECG). The Mount Sinai study, published in the journal NEJM AI, assigns numeric probabilities to the algorithm's findings. For example, while the algorithm might previously have said 'flagged as suspected HCM' or 'high risk of HCM,' the Mount Sinai study allows for interpretations such as, 'You have about a 60 percent chance of having HCM,' said Joshua Lampert, Director of Machine Learning at Mount Sinai Fuster Heart Hospital.

As a result, patients who had not previously been diagnosed with HCM may gain a better understanding of their individual disease risk, leading to a faster and more individualized evaluation, along with treatment that could prevent complications such as sudden cardiac death, especially in young patients.

'This is an important step forward in translating novel deep-learning algorithms into clinical practice by providing clinicians and patients with more meaningful information. Clinicians can improve their clinical workflows by ensuring the highest-risk patients are identified at the top of their clinical work list using a sorting tool,' said Lampert, Assistant Professor of Medicine (Cardiology, and Data-Driven and Digital Medicine) at the Icahn School of Medicine at Mount Sinai.

HCM affects one in 200 people worldwide and is a leading reason for heart transplantation. However, many patients don't know they have the condition until symptoms appear, by which point the disease may already be advanced.
'This study reflects pragmatic implementation science at its best, demonstrating how we can responsibly and thoughtfully integrate advanced AI tools into real-world clinical workflows,' said co-senior author Girish N Nadkarni, Chair of the Windreich Department of Artificial Intelligence and Human Health and Director of the Hasso Plattner Institute for Digital Health.
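The step described above, turning a raw "suspected HCM" flag into a statement like "about a 60 percent chance of having HCM", is, in general terms, probability calibration of a model's scores. The study's actual method is not detailed here; the sketch below uses Platt scaling (a logistic fit on held-out scores with confirmed diagnoses) purely as an illustration, with all scores and labels invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical held-out data: raw model scores plus confirmed diagnoses.
rng = np.random.default_rng(0)
scores_pos = rng.normal(2.0, 1.0, 200)   # invented scores for confirmed HCM cases
scores_neg = rng.normal(-1.0, 1.0, 800)  # invented scores for non-HCM cases
scores = np.concatenate([scores_pos, scores_neg]).reshape(-1, 1)
labels = np.concatenate([np.ones(200), np.zeros(800)])

# Platt scaling: fit a logistic curve mapping raw score -> probability.
calibrator = LogisticRegression().fit(scores, labels)

# A flagged patient's raw score becomes an interpretable probability, and a
# clinical worklist can be sorted so the highest-risk patients appear first.
raw = np.array([[1.2], [3.0], [0.1]])
probs = calibrator.predict_proba(raw)[:, 1]
order = np.argsort(-probs)  # indices sorted from highest to lowest risk
```

The same sorted probabilities are what would let a triage tool place the highest-risk patients at the top of a clinician's work list, as described in the article.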
Yahoo
03-04-2025
- Health
Can a Chatbot Be Your Therapist? A Study Found 'Amazing Potential' With the Right Guardrails
Your future therapist might be a chatbot, and you might see positive results, but don't start telling ChatGPT your feelings just yet. A new study by researchers at Dartmouth found a generative AI tool designed to act as a therapist led to substantial improvements for patients with depression, anxiety and eating disorders -- but the tool still needs to be closely watched by human experts. The study was published in March in the journal NEJM AI.

Researchers conducted a trial with 106 people who used Therabot, a smartphone app developed at Dartmouth over the past several years. It's a small sample, but the researchers said it's the first clinical trial of an AI therapy chatbot. The results show significant advantages, mainly because the bot is available 24 hours a day, which bridges the immediacy gap patients face with traditional therapy. However, researchers warn that generative AI-assisted therapy can be perilous if not done right.

"I think there's a lot yet for this space to evolve," said Nick Jacobson, the study's senior author and an associate professor of biomedical data science and psychiatry at Dartmouth. "It's really amazing the potential for personalized, scalable impact."

The 210 participants were sorted into two groups -- one group of 106 was allowed to use the chatbot, while the control group was left on a "waiting list." The participants were evaluated for their anxiety, depression or eating disorder symptoms using standardized assessments before and after the test period. For the first four weeks, the app prompted its users to engage with it daily. For the second four weeks, the prompts stopped, but people could still engage on their own. Study participants actually used the app, and the researchers said they were surprised by how much and how closely people communicated with the bot.
Surveyed afterward, participants reported a degree of "therapeutic alliance" -- trust and collaboration between patient and therapist -- similar to that for in-person therapists. The timing of interactions was also notable, with use spiking in the middle of the night and at other times when patients often experience concerns. Those are the hours when reaching a human therapist is particularly difficult.

"With Therabot, folks will access and did access it throughout the course of the trial in their daily life, in moments where they need it the most," Jacobson said. That included times when someone had difficulty getting to sleep at 2 a.m. because of anxiety, or the immediate aftermath of a difficult moment.

Patients' assessments afterward showed a 51% drop in symptoms for major depressive disorder, a 31% drop for generalized anxiety disorder and a 19% drop for eating disorders among patients at risk for those specific conditions.

"The people who were enrolled in the trial weren't just mild," Jacobson said. "The folks in the group were moderate to severe in depression, for example, as they started. But on average experienced a 50% reduction in their symptoms, which would go from severe to mild or moderate to nearly absent."

The research team didn't just choose 100-plus people who needed support, give them access to a large language model like OpenAI's ChatGPT and see what happened. Therabot was custom-built -- fine-tuned -- to follow specific therapy procedures. It was built to watch out for serious concerns, like indications of potential self-harm, and report them so a human professional could intervene when needed. Humans also tracked the bot's communications to reach out when the bot said something it shouldn't have. Jacobson said that during the first four weeks of the study, because of the uncertainty of how the bot would behave, he read every message it sent as soon as possible.
"I did not get a whole lot of sleep in the first part of the trial," he said.

Human interventions were rare, Jacobson said. Testing of earlier models two years ago showed more than 90% of responses were consistent with best practices. When the researchers did intervene, it was often when the bot offered advice outside a therapist's scope -- as when it tried to provide more general medical advice, like how to treat a sexually transmitted disease, instead of referring the patient to a medical provider. "Its actual advice was all reasonable, but that's outside the realm of care we would provide."

Therabot isn't your typical large language model; it was essentially trained by hand. Jacobson said a team of more than 100 people created a dataset using best practices on how a therapist should respond to actual human experiences. "Only the highest quality data ends up being part of it," he said. A general model like Google's Gemini or Anthropic's Claude, for example, is trained on far more data than just medical literature and may respond improperly.

The Dartmouth study is an early sign that specially built tools using generative AI can be helpful in some cases, but that doesn't mean any AI chatbot can be your therapist. This was a controlled study with human experts monitoring it, and there are dangers in trying this on your own. Remember that most general large language models are trained on oceans of data found on the internet. So, while they can sometimes provide some good mental health guidance, they also include bad information -- like how fictional therapists behaved, or what people posted about mental health on online forums. "There's a lot of ways they behave in profoundly unsafe ways in health settings," he said.

Even a chatbot offering helpful advice might be harmful in the wrong setting. Jacobson said if you tell a chatbot you're trying to lose weight, it will come up with ways to help you. But if you're dealing with an eating disorder, that may be harmful.
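The outcome figures quoted above, such as a 51% drop in depression symptoms, are percent reductions in standardized assessment scores from baseline to follow-up. A minimal sketch of that arithmetic; the scale and scores below are invented for illustration and are not the study's data:

```python
def percent_reduction(baseline: float, followup: float) -> float:
    """Percent drop in a symptom score from baseline to follow-up."""
    return 100.0 * (baseline - followup) / baseline

# Hypothetical example: a score in the severe range roughly halving,
# consistent with the "severe to mild or moderate" shift described above.
baseline_score = 20.0   # invented baseline score on some symptom scale
followup_score = 9.8    # invented follow-up score on the same scale
drop = percent_reduction(baseline_score, followup_score)  # roughly a 51% drop
```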
Many people are already using chatbots to perform tasks that approximate the work of a therapist. Jacobson says you should be careful. "There's a lot of things about it in terms of the way it's trained that very closely mirrors the quality of the internet," he said. "Is there great content there? Yes. Is there dangerous content there? Yes."

Treat anything you get from a chatbot with the same skepticism you would apply to an unfamiliar website, Jacobson said. Output from a generative AI tool may look polished, but it can be just as unreliable.

If you or someone you love are living with an eating disorder, contact the National Eating Disorder Association for resources that can help. If you feel like you or someone you know is in immediate danger, dial 988 to reach the Suicide & Crisis Lifeline, or text "NEDA" to 741741 to connect with the Crisis Text Line.


The Independent
27-03-2025
- Health
Groundbreaking study shows AI pregnancy scans better than traditional sonograms
Artificial intelligence is being hailed as a potential game-changer in prenatal care, cutting the time it takes to identify foetal abnormalities by almost half, according to a groundbreaking new study.

Researchers at King's College London and Guy's and St Thomas' NHS Foundation Trust found that, as well as being faster, AI is just as accurate as traditional methods, offering the potential to revolutionise the 20-week scan. The technology, tested in the first trial of its kind, could significantly reduce scan times, easing anxiety for expectant parents and freeing up sonographers to focus on potential problem areas.

Published in NEJM AI and funded by the National Institute for Health and Care Research (NIHR), the study revealed AI scans were 42 per cent faster than standard scans. The key to the AI's speed and accuracy lies in its ability to take thousands of snapshots of each foetal measurement, compared with the three typically taken by a sonographer. The AI also proved more reliable than human sonographers in taking these crucial measurements. This improved accuracy offers the potential for earlier detection of issues, allowing medical professionals to intervene sooner if required. The AI tool also altered the way the scan is performed, as sonographers no longer needed to pause, save images or take measurements during the scan.

The trial focused on looking for heart problems, but the researchers said the AI can help in looking for any abnormality. The study included 78 pregnant women and 58 sonographers. Each woman was scanned twice, once using the AI-assisted scanner and once without AI.

Dr Thomas Day, lead author of the study, said: 'Understandably, this 20-week scan can be a nerve-wracking time for parents as they're finding out the health of their unborn child.

'Our research has shown that AI-assisted scans are accurate, reliable and more efficient.
'We hope that using AI in these scans will free up precious time for sonographers to focus on patient care, making the experience more comfortable and reassuring for parents.'

Ashleigh Louison, a 36-year-old senior operations manager from northwest London, was one of those in the trial at St Thomas' Hospital. During her pregnancy, her son Lennox was diagnosed with heart disease. He needed lifesaving surgery within two weeks of his birth.

She said: 'Receiving an early diagnosis for Lennox was really important as it meant we could properly plan the road ahead. We immediately knew that he would likely need open heart surgery and that we would be staying in hospital for a few weeks after his birth. This gave us the chance to physically and mentally prepare for what was coming.

'I am so glad to have participated in this trial as I want to support anything that can help save children's lives through faster and earlier diagnoses of conditions. I know that some conditions can be hard to spot and so I'm excited at the prospect of using new technology that can help address this. If my participation in this trial ends up helping even just one family, then I'm all for it.'

The AI tool is now being rolled out more widely through a company called Fraiya -- a university-NHS spinout company from King's College London, Guy's and St Thomas' and King's College Hospital. Experts are also planning a larger trial.

Professor Mike Lewis, NIHR scientific director, said: 'The use of AI in healthcare has huge potential to impact patient care while saving time and money.'
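The reliability gain described above, from aggregating thousands of snapshots of each measurement rather than three, is a standard variance-reduction effect: combining many noisy measurements shrinks the scatter of the estimate. A minimal simulation of that effect; the measurement value, noise level and sample counts are invented for illustration and do not come from the trial:

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 50.0   # hypothetical "true" foetal measurement, in mm
noise_sd = 2.0      # invented per-snapshot measurement noise

# Repeat each protocol 2,000 times to see how the estimates scatter.
# Sonographer-style estimate: aggregate (median) of 3 snapshots.
few = np.median(rng.normal(true_value, noise_sd, size=(2_000, 3)), axis=1)
# AI-style estimate: aggregate of 1,000 snapshots per measurement.
many = np.median(rng.normal(true_value, noise_sd, size=(2_000, 1_000)), axis=1)

# The many-snapshot estimate scatters far less around the true value.
spread_few = few.std()
spread_many = many.std()
```

Both protocols are unbiased here; the difference is purely in how tightly the estimates cluster, which is one plausible mechanism behind the improved measurement reliability the article reports.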