logo
#

Latest news with #ReneCaissie

AI, the disruptor-in-chief
AI, the disruptor-in-chief

Politico

time4 days ago

  • Health
  • Politico

AI, the disruptor-in-chief

FORWARD THINKING Artificial intelligence is upending how industries function and it's coming for scientific research next. Rene Caissie, an adjunct professor at Stanford University, wants AI to conduct research. In 2021, he started a company, that lets public health departments, researchers and life sciences companies pose research questions and receive answers immediately. And, unlike many AI systems, Caissie told Ruth, the AI explains those answers by showing the data its results are based on. 'It used to be hard to do research,' he said, explaining that it takes a lot of time for researchers to get access to and organize data in order to answer basic scientific questions. Manual data analysis can also take months. The company is now partnering with HealthVerity, a provider of real-world health data, to build up its data sources. In turn, HealthVerity will offer Medeloop's research platform to its clients. The company has worked with the Food and Drug Administration, the National Institutes of Health, and the Centers for Disease Control and Prevention in the past. Caissie says the New York City Department of Health and Mental Hygiene is already using Medeloop's AI to run public health analyses. Why it matters: Public health departments receive huge amounts of data on human health from a variety of sources. But prepping that information and analyzing it can be onerous. Having access to a research platform like Medeloop could give public health departments and academic medical centers much faster insight into trends and in turn enable them to respond more quickly. How it works: Medeloop's AI is designed to think like a researcher. In a demo, Medeloop strategist John Ayers asked the bot how many people received a first-time autism diagnosis, broken down by age, race and sex, and what trends were visible with that data. He wanted the AI to only include people who had had interactions with a doctor for at least two years prior to diagnosis. The platform returned a refined query to improve results and a suggestion for what medical codes to use to identify the right patients for inclusion in the study. It delivered a trial design that looked at a cohort of 799,560 patients with new autism diagnoses between January 2015 and December 2024. Medeloop's AI showed that 70 percent of new autism diagnoses were for males. A monthly trends report found that, outside of a dip during the Covid-19 pandemic, new autism diagnoses have been on the rise, particularly among 5-11 year olds since 2019. Though Medeloop doesn't determine the cause of autism, the ease with which users can obtain answers could help speed up the pace of research. One of the platform's key innovations is its use of a federated network of data. Medeloop's new deal with HealthVerity will raise the platform's de-identified and secure patient records to 200 million. Notably, the data never leaves the health system, which increases security. Instead, Medeloop sends its AI to wherever the data is stored, analyzes it there and then returns the results to the platform. WELCOME TO FUTURE PULSE This is where we explore the ideas and innovators shaping health care. Scientists are making cover art and figures for research papers using artificial intelligence. Now illustrators are calling them out, Nature's Kamal Nahas reports. Share any thoughts, news, tips and feedback with Danny Nguyen at dnguyen@ Carmen Paun at cpaun@ Ruth Reader at rreader@ or Erin Schumaker at eschumaker@ Want to share a tip securely? Message us on Signal: Dannyn516.70, CarmenP.82, RuthReader.02 or ErinSchumaker.01. TECH MAZE Large language models like ChatGPT and Claude generate inferior mental health care treatment when presented with data about a patient's race, according to a study published this week in npj Digital Medicine. The findings: Researchers from Cedars-Sinai, Stanford University and the Jonathan Jaques Children's Cancer Institute tested how artificial intelligence would produce diagnoses for psychiatric patient cases under three conditions: race neutral, race implied and race explicitly stated using four models. They included the commercially available large language models ChatGPT, Claude and Gemini, as well as NewMes-15, a local model that can run on personal devices without cloud services. The researchers then asked clinical and social psychologists to evaluate the findings for bias. Most LLMs recommended dramatically different treatments for African American patients compared with others, even when they had the same psychiatric disorder and patient profile outside of race. The LLMs also proposed inferior treatments when they were made aware of a patient's race, either explicitly or implicitly. The biases likely come from the way LLMs are trained, the researchers wrote, and it's unclear how developers can mitigate those biases because 'traditional bias mitigation strategies that are standard practice, such as adversarial training, explainable AI methods, data augmentation and resampling may not be enough,' the researchers wrote. Why it matters: The study is one of the first evaluations of racial bias on psychiatric diagnoses across multiple LLMs. It comes as people increasingly turn to chatbots like ChatGPT for mental health advice and medical diagnoses. The results underscore the nascent technology's flaws. What's next: The study was small — only 10 cases were examined — which might not fully capture the consistency or extent of bias. The authors suggest that future studies could focus on a single condition with more cases for deeper analysis.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into the world of global news and events? Download our app today from your preferred app store and start exploring.
app-storeplay-store