March 13, 2025
How AI responds in life-or-death situations
SAFETY CHECK
Two artificial intelligence models were at least as good as mental health professionals at assessing appropriate responses to people experiencing suicidal thoughts, according to a study published this month in the Journal of Medical Internet Research.
How so: The study tested how well three large language models judged whether responses to a person whose statements suggested suicidal thoughts were appropriate or inappropriate. The research was conducted by teams from the RAND Corporation, a nonprofit policy think tank, research institute and consultancy; Brigham and Women's Hospital in Boston; Harvard Medical School; and the Brown University School of Public Health.
The AI models — OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini — didn't interact directly with people who had suicidal thoughts.
Instead, each model was given items from the Suicidal Ideation Response Inventory, which pairs patient remarks indicating suicidal thoughts with potential clinician responses.
The AI models were instructed to assess which responses were appropriate.
Researchers then compared the models' assessments with ratings from suicide experts, mental health professionals and people trained to respond to suicidal thoughts.
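For readers curious how a benchmark like this can be scored, here is a minimal sketch in Python. It is not the study's code: the item, the rating scale, the expert numbers and the rate_with_model helper are all hypothetical stand-ins, with the assumption that each inventory item pairs a patient remark with a candidate clinician reply and an expert consensus rating.

```python
# Hypothetical sketch of a benchmark's scoring logic (not the study's code).
# Assumptions: each item pairs a patient remark with a candidate clinician
# reply rated on a -3 (very inappropriate) to +3 (very appropriate) scale,
# and rate_with_model stands in for a real LLM call.

from dataclasses import dataclass

@dataclass
class SiriItem:
    patient_remark: str
    clinician_reply: str
    expert_rating: float  # consensus rating from trained experts

def rate_with_model(item: SiriItem) -> float:
    """Placeholder for an LLM call that returns an appropriateness rating."""
    # In practice this would prompt a model such as ChatGPT, Claude or
    # Gemini and parse a numeric rating from its reply.
    return 1.0  # dummy value so the sketch runs

def mean_absolute_error(items: list[SiriItem]) -> float:
    """Lower is better: average distance from the expert consensus."""
    errors = [abs(rate_with_model(i) - i.expert_rating) for i in items]
    return sum(errors) / len(errors)

items = [
    SiriItem("I just can't take it anymore.",
             "Tell me more about what's making things feel unbearable.",
             expert_rating=2.5),  # invented value for illustration
]
print(f"Mean absolute error vs. experts: {mean_absolute_error(items):.2f}")
```

A lower error means a rater's judgments track the expert consensus more closely, which is the sense in which the models were compared with clinicians and trainees.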
The results: Claude's performance was the strongest, surpassing the scores of people who had recently completed suicide intervention training as well as scores from studies with psychiatrists and other mental health professionals, according to the research.
ChatGPT's score was close to that of doctoral students in clinical psychology or master's-level counselors.
Gemini scored lowest in assessing the most appropriate response to someone experiencing suicidal ideation, similar to scores obtained by staff at K-12 schools before receiving suicide intervention training.
However: 'All three AI models showed a consistent tendency to overrate the appropriateness of clinician responses to suicidal thoughts, suggesting room for improvement in their calibration,' RAND said in a statement about the study.
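To make "overrating" concrete, one simple measure is the mean signed error between a model's ratings and the experts': a consistently positive value is the kind of calibration gap RAND describes. The numbers below are invented for illustration, not the study's data.

```python
# Illustrative only: one way to quantify the "overrating" the study describes.
# A positive mean signed error means the model rates responses as more
# appropriate than the expert consensus does; these numbers are made up.

model_ratings  = [2.0, 1.5, 2.5, 0.5]   # hypothetical LLM appropriateness scores
expert_ratings = [1.0, 1.0, 2.0, -1.0]  # hypothetical expert consensus scores

bias = sum(m - e for m, e in zip(model_ratings, expert_ratings)) / len(expert_ratings)
print(f"Mean signed error (calibration bias): {bias:+.2f}")  # positive => overrating
```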
Why it matters: Researchers say AI models have the potential to help a large number of people struggling with mental health issues and suicidal thoughts because they're more accessible and cheaper than professional help.
But the technology could also harm people if it's not trained to respond appropriately.
Calibrating AI to deliver appropriate responses is crucial as U.S. suicide rates have reached record levels, with more than 49,000 people dying by suicide in 2022, the latest year for which the Centers for Disease Control and Prevention has finalized data.
More than 13 million people had suicidal thoughts, and 1.6 million of them made a suicide attempt that year, according to the CDC.
'We are pressure testing a benchmark that could be used by tech platforms building mental health care, which would be especially impactful in communities that have limited resources,' said Ryan McBain, the study's lead author and a senior RAND policy researcher.
But, McBain cautioned, AI models can't replace crisis lines or professional care.
WELCOME TO FUTURE PULSE
This is where we explore the ideas and innovators shaping health care.
Large artificial intelligence models should be viewed in the same way as other older technologies like printing, which have reshaped knowledge distribution — not feared as superintelligent autonomous agents, scientists from Johns Hopkins, the University of Chicago and other institutions argue in Science today.
Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@, Daniel Payne at dpayne@, Ruth Reader at rreader@ or Erin Schumaker at eschumaker@
Are you a current or former federal worker who wants to share a tip securely? Message us on Signal: CarmenP.82, DanielP.100, RuthReader.02 or ErinSchumaker.01.
WASHINGTON WATCH
John Burklow, a nearly 40-year veteran of the National Institutes of Health, is being removed as the agency's chief of staff and replaced by a political appointee, Erin and POLITICO's Adam Cancryn scooped, according to three people familiar with the matter who were granted anonymity because the decision isn't yet public.
The agency is expected to appoint Seana Cranston as the NIH's new chief of staff, two of the people said, though they cautioned it isn't final and could change. Cranston is a former deputy chief of staff to Rep. Thomas Massie (R-Ky.) and also spent several years as the lawmaker's legislative director.
Why it matters: The move would represent a sharp departure from the NIH's longtime practice of appointing career officials as chief of staff to the agency's director. Burklow, who's held the role since 2021, previously spent 20 years as a senior communications official at NIH — a tenure that spanned Republican and Democratic administrations.
Trump health officials have signaled plans to drastically overhaul the NIH, including refocusing its research, revamping its workforce and slashing funding for universities and grantees. Last month, Trump aides ordered the NIH to impose a blanket cap on funding to universities for administrative and facilities costs — prompting lawsuits and warnings that the move would force schools to shutter laboratories and lay off staff.
That order has since been blocked by the courts. More recently, the NIH canceled $250 million in grants to Columbia University.
President Donald Trump's pick to run the NIH, Stanford Medical School professor Jay Bhattacharya, has long criticized the agency for ceding too much power to career officials.
An NIH spokesperson did not respond to a request for comment.
What's next: The Senate Health, Education, Labor and Pensions Committee approved Bhattacharya's confirmation this morning by a 12-11 vote along party lines. The nomination now heads to the full Senate for consideration.