
How AI responds in life-or-death situations
Two artificial intelligence models were at least as good as mental health professionals at judging whether responses to people experiencing suicidal thoughts were appropriate, according to a study published this month in the Journal of Medical Internet Research.
How so: The study tested how well three large language models judged whether clinician responses to a person whose statements suggested suicidal thoughts were appropriate or inappropriate. The research was conducted by teams from the RAND Corporation, a nonprofit policy think tank, research institute and consultancy; Brigham and Women's Hospital in Boston; Harvard Medical School; and the Brown University School of Public Health.
The AI models — OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini — didn't interact directly with people who had suicidal thoughts.
Instead, each model was given items from the Suicidal Ideation Response Inventory, which contains patient remarks indicating suicidal thoughts paired with potential clinician responses.
The AI models were instructed to assess which responses were appropriate.
Researchers then compared the models' assessments with what suicide experts, mental health professionals and people trained to respond to suicidal thoughts deemed appropriate.
The results: Claude's performance was the strongest, surpassing the scores of people who had recently completed suicide intervention training as well as scores from studies with psychiatrists and other mental health professionals, according to the research.
ChatGPT's score was close to that of doctoral students in clinical psychology or master's-level counselors.
Gemini scored lowest in assessing the most appropriate response to someone experiencing suicidal ideation, similar to scores obtained by staff at K-12 schools before receiving suicide intervention training.
However: 'All three AI models showed a consistent tendency to overrate the appropriateness of clinician responses to suicidal thoughts, suggesting room for improvement in their calibration,' RAND said in a statement about the study.
Why it matters: Researchers say AI models have the potential to help a large number of people struggling with mental health issues and suicidal thoughts because they're more accessible and cheaper than professional help.
But the technology could also harm people if it's not trained to respond appropriately.
Calibrating AI to give appropriate responses is crucial as U.S. suicide rates have reached record levels, with more than 49,000 people dying by suicide in 2022, the most recent year for which the Centers for Disease Control and Prevention has finalized data.
More than 13 million people had suicidal thoughts, and 1.6 million of them made a suicide attempt that year, according to the CDC.
'We are pressure testing a benchmark that could be used by tech platforms building mental health care, which would be especially impactful in communities that have limited resources,' said Ryan McBain, the study's lead author and a senior RAND policy researcher.
But, McBain cautioned, AI models can't replace crisis lines or professional care.
WELCOME TO FUTURE PULSE
This is where we explore the ideas and innovators shaping health care.
Large artificial intelligence models should be viewed the same way as older technologies, such as printing, that reshaped how knowledge is distributed, rather than feared as superintelligent autonomous agents, scientists from Johns Hopkins, the University of Chicago and other institutions argue in Science today.
Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@politico.com, Daniel Payne at dpayne@politico.com, Ruth Reader at rreader@politico.com, or Erin Schumaker at eschumaker@politico.com.
Are you a current or former federal worker who wants to share a tip securely? Message us on Signal: CarmenP.82, DanielP.100, RuthReader.02 or ErinSchumaker.01.
WASHINGTON WATCH
John Burklow, a nearly 40-year veteran of the National Institutes of Health, is being removed as the agency's chief of staff and replaced by a political appointee, Erin and POLITICO's Adam Cancryn scooped, according to three people familiar with the matter who were granted anonymity because the decision isn't yet public.
The agency is expected to appoint Seana Cranston as the NIH's new chief of staff, two of the people said, though they cautioned it isn't final and could change. Cranston is a former deputy chief of staff to Rep. Thomas Massie (R-Ky.) and also spent several years as the lawmaker's legislative director.
Why it matters: The move would represent a sharp departure from the NIH's longtime practice of appointing career officials as chief of staff to the agency's director. Burklow, who's held the role since 2021, previously spent 20 years as a senior communications official at NIH — a tenure that spanned Republican and Democratic administrations.
Trump health officials have signaled plans to drastically overhaul the NIH, including refocusing its research, revamping its workforce and slashing funding for universities and grantees. Last month, Trump aides ordered the NIH to impose a blanket cap on funding to universities for administrative and facilities costs — prompting lawsuits and warnings that the move would force schools to shutter laboratories and lay off staff.
The decision has since been blocked by the courts. More recently, the NIH canceled $250 million in grants to Columbia University.
President Donald Trump's pick to run the NIH, Stanford Medical School professor Jay Bhattacharya, has long criticized the agency for ceding too much power to career officials.
An NIH spokesperson did not respond to a request for comment.
What's next: The Senate Health, Education, Labor and Pensions Committee approved Bhattacharya's confirmation this morning by a 12-11 vote along party lines. The nomination now heads to the full Senate for consideration.
Related Articles

Business Insider
Your chats with Meta's AI might end up on Google — just like ChatGPT until it turned them off
OpenAI's ChatGPT raised some eyebrows this week when people realized that certain chats could be found by Google search. Although people had checked a box to share the chats publicly, it seemed likely that not everyone understood what they were doing. On Thursday, OpenAI said that it would stop having shared chats be indexed by Google.

Meanwhile, Meta's stand-alone Meta AI app also allows users to share their chats — and it will continue to allow Google to index them, meaning that they can show up in a search. I did a bunch of Google searches and found lots of Meta AI conversations in the results.

The Meta AI app, launched this spring, lets people share chats to a "Discover" feed. Google crawlers can "index" that feed and then serve up the results when people use Google search. So, for instance, if you do a site-specific search on Google for the Meta AI site and the keyword "balloons," you might come up with a chat someone had with the Meta AI bot about where to get the best birthday balloons — if that person tapped the button to allow the chat to be shared.

As Business Insider reported in June, the Meta AI Discover feed had been full of examples of chats that seemed personal in nature — medical questions, specific career advice, relationship matters. Some contained identifying information like phone numbers, email addresses, or full names. Although all of these people did click to share, based on the personal nature of some of the chats, I could only guess that people might have misunderstood what it meant to share the conversation.

After Business Insider wrote about this a few weeks ago, the Meta AI app made some tweaks to warn users more clearly about how the Discover feed works. Now, when you choose to share a conversation, you get a pop-up with the warning: "Conversations on feed are public so anyone can see them and engage."

The additional warning seems to be working. Scrolling through the Discover feed, I now see mainly instances of people using it for image creation and far fewer accidental private text conversations (although there seemed to still be at least a few of those).

Meanwhile, Daniel Roberts, a representative for Meta, confirmed that Meta AI chats shared to its Discover feed would continue to be indexed by Google. He reiterated the multi-step process I just described.

For now, Meta AI can only be used via its mobile app, not the web. This might lead people to think that even the Discover feed exists as a sort of walled garden, separate from "the internet" and existing only within the Meta AI app. But posts from the Discover feed (and only those public posts) can be shared as links around the web — and that's where the Google indexing comes in.

If this sounds slightly confusing, it is, and it may well be confusing to users. Now, it's possible that some people really do want to share their AI chats with the general public, and are happy to have those chats show up on Google searches along with their Instagram or Facebook handles. But I'm still not sure I understand why anyone would want to share their interactions — or why anyone else would want to read them.

Engadget
OpenAI is removing ChatGPT conversations from Google
OpenAI has removed a feature that made shared ChatGPT conversations appear in search results. The "short-lived experiment" was based on the chatbot's link creation option. After complaints, OpenAI's chief information security officer, Dane Stuckey, said the company is working to remove the chats from search engines.

The public outrage stems from a Fast Company article from earlier this week (via Ars Technica). Fast Company said it found thousands of ChatGPT conversations in Google search results. The indexed chats didn't explicitly include identifying information. But in some cases, their contents reportedly contained specific details that could point to the source.

To be clear, this wasn't a hack or leak. It was tied to a box users could tick when creating a shareable URL directing to a chat. In the pop-up for creating a public link, the option to "Make this chat discoverable" appeared. The more direct explanation ("allows it to be shown in web searches") appeared in a smaller, grayer font below. Users had to tick that box to make the chat indexed.

You may wonder why people creating a public link to a chat would have a problem with its contents being public. But Fast Company noted that people could have made the URLs to share in messaging apps or as an easy way to revisit the chats later. Regardless, the public discoverability option is gone now.

In Fast Company's report, Stuckey defended the feature's labeling as "sufficiently clear." But after the outcry grew, OpenAI relented. "Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to, so we're removing the option," Stuckey announced on Thursday.

Business Insider
OpenAI's education head says students should use ChatGPT as a tool, not 'an answer machine'
Luddites have no place in an AI-powered world, according to OpenAI's vice president of education.

"Workers who use AI in the workforce are incredibly more productive," Leah Belsky, who's been leading OpenAI's education team since 2024, said on an episode of the company's podcast on Friday. So learning to use the technology, she said, should start early.

"Any graduate who leaves institution today needs to know how to use AI in their daily life," she said. "And that will come in both where they're applying for jobs as well as when they start their new job."

Most schools have so far sought ways to prevent students from using AI rather than encouraging it or teaching it. This is partly because AI use in school is considered cheating. There is also concern that using AI can cause so-called "brain rot."

Belsky thinks about it differently. "AI is ultimately a tool," she said, at one point comparing it to a calculator. "What matters most in an education space is how that tool is used. If students use AI as an answer machine, they are not going to learn. And so part of our journey here is to help students and educators use AI in ways that will expand critical thinking and expand creativity."

The "core literacy" students should develop, she said, is coding. "Now, with vibe coding and now that there are all sorts of tools that make coding easier, I think we're going to get to a place where every student should not only learn how to use AI generally, but they should learn to use AI to create images, to create applications, to write code," she said.

Vibe coding is the process of prompting AI in natural language to write code for whatever you want. It's been widely embraced, but most avoid using it for core technology since AI code is prone to errors. Anyone vibe coding would need some level of coding knowledge, or know someone who does, to check the AI's work.

Perhaps the biggest concern about using AI in education is that it removes the element of "productive struggle" — a crucial part of how people learn and master new material. Belsky says OpenAI is developing technology to counter that. This week, OpenAI introduced "Study Mode" in ChatGPT, which provides students with "guiding questions that calibrate responses to their objective and skill level to help them build deeper understanding," according to OpenAI's website.

OpenAI is not the only technology company thinking about this topic. Kira Learning is a startup chaired by Google Brain founder Andrew Ng. It first launched in 2021 to help teachers without a background in computer science teach the subject effectively. The company launched a slate of AI agents earlier this year. The aim is to introduce "friction" into students' conversations with AI at the right stages so that they actually have a productive struggle and learn through the experience, Andre Pasinetti, cofounder and CEO of Kira, told Business Insider.

For the near future, at least, the onus will likely be on tech companies to spearhead new ways to keep the learning in learning, as universities and educational institutions scramble to keep up. Tyler Cowen, a professor of economics at George Mason University, also talked about the state of the university in a conversation with podcaster Azeem Azhar this week. "There's a lot of hand-wringing about 'How do we stop people from cheating' and not looking at 'What should we be teaching and testing?'" he said. "The whole system is set up to incentivize getting good grades. And that's exactly the skill that will be obsolete."