Exclusive: How Claude became an emotional support bot
People who talk to Anthropic's Claude chatbot about emotional issues tend to grow more positive as the conversation unfolds, according to new Anthropic research shared exclusively with Axios.
Why it matters: Having a trusted confidant available 24/7 can make people feel less alone, but chatbots weren't designed for emotional support.
Bots have displayed troubling tendencies, like reinforcing delusional behavior or encouraging self-harm, that are especially problematic for young people or adults struggling with their mental health.
Driving the news: Anthropic released new research Thursday that explores how users turn to its chatbot for support and connection and what happens when they do.
While anecdotes of users turning to AI bots like Claude and ChatGPT for emotional support are widespread, the report is Anthropic's first formal acknowledgment of this kind of use.
What they're saying:"We find that when people come to Claude for interpersonal advice, they're often navigating transitional moments — figuring out their next career move, working through personal growth, or untangling romantic relationships," per the report.
The report calls these interactions with chatbots "affective use," defined roughly as personal exchanges with Claude motivated by emotional or psychological needs.
Zoom in: The report found evidence that users don't necessarily turn to chatbots deliberately looking for love or companionship, but some conversations evolve that way.
"We also noticed that in longer conversations, counseling or coaching conversations occasionally morph into companionship — despite that not being the original reason someone reached out," per the report.
"As these conversations with Claude progress, we found that the person's expressed sentiment often becomes more positive," Anthropic societal impacts researcher Miles McCain told Axios. "And while we can't claim that these shifts represent lasting emotional benefits, the absence of clear negative spirals is reassuring."
Researchers behind the report told Axios that the results are preliminary and that "expressed sentiment" is a limited measure.
By the numbers: Anthropic's data suggests AI companionship isn't replacing the real thing anytime soon. Most people still use Claude for work tasks and content creation.
A relatively small share of interactions with Claude (2.9%) constituted "affective use," a finding that echoes previous research from OpenAI.
Companionship and roleplay combined accounted for 0.5% of conversations.
Romantic or sexual roleplay — which Claude's training actively discourages — was less than 0.1%, according to the report.
What they did: Anthropic analyzed user behavior with Clio, a tool it launched last year that works like Google Trends — aggregating chats while stripping out identifying details.
Clio anonymizes and aggregates Claude chats to keep specific conversations private while revealing broader trends. This is similar to the way Google tracks what people are searching for without revealing personal search histories (or giving humans access to them).
The research excluded conversations focused on content creation, such as writing stories, fictional dialogues or blog posts.
Among conversations that included roleplaying, Anthropic says the researchers analyzed only "meaningful interactive" chats, meaning those with four or more human messages.
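The screening rules described above are simple to illustrate. Below is a minimal, hypothetical Python sketch — not Anthropic's actual Clio code, and with invented field names and topic labels — showing how conversations might be filtered and counted in aggregate under those rules: drop content-creation chats, keep roleplay chats only if they contain four or more human messages, and tally topics across clusters rather than per user.

```python
from collections import Counter

# Hypothetical conversation record: a topic label plus a list of messages,
# each tagged with a sender role ("human" or "assistant").
# These structures are illustrative, not Anthropic's actual Clio schema.
CONTENT_CREATION_TOPICS = {"story_writing", "fictional_dialogue", "blog_post"}


def is_affective_candidate(conversation: dict) -> bool:
    """Apply the screening rules described in the report: exclude
    content-creation chats, and keep roleplay chats only if they are
    'meaningful interactive' (four or more human messages)."""
    if conversation["topic"] in CONTENT_CREATION_TOPICS:
        return False
    human_messages = sum(
        1 for m in conversation["messages"] if m["role"] == "human"
    )
    if conversation["topic"] == "roleplay" and human_messages < 4:
        return False
    return True


def aggregate_by_topic(conversations: list[dict]) -> Counter:
    """Count qualifying conversations per topic cluster, without
    inspecting any individual user's history over time."""
    return Counter(c["topic"] for c in conversations if is_affective_candidate(c))


# Toy example: only the career-advice chat survives the filters.
sample = [
    {"topic": "career_advice", "messages": [{"role": "human"}] * 5},
    {"topic": "roleplay", "messages": [{"role": "human"}, {"role": "assistant"}]},
    {"topic": "blog_post", "messages": [{"role": "human"}] * 3},
]
print(aggregate_by_topic(sample))  # Counter({'career_advice': 1})
```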
Yes, but: While the internet is full of people claiming that they've cut costs on therapy by turning to a chatbot, there's plenty of evidence that bots make particularly bad therapists because they're so eager to please users.
Anthropic says it didn't study extreme usage patterns or how chatbots can reinforce delusions or conspiracy theories, which the company admits is worthy of a separate study.
To keep chats private, the researchers only looked at clusters of conversations from multiple users and didn't analyze individual users' conversations over time, which makes it difficult to study emotional dependency, Anthropic notes.
"We need large, rigorous trials that are of longer duration, because if you just relieve a person's anxiety, stress or depression on a very short term basis, that's not what we're after," physician-researcher Eric Topol told Axios. "We're after durable benefits. So I'm confident that we're going to get there, but we're not there yet."
Zoom out: Anthropic, founded by former OpenAI staff, pitches Claude as a more responsible alternative to ChatGPT.
"Safety is deeply ingrained in everything that we do, and it underpins all of our work," Alexandra Sanderford, Anthropic's head of safeguards, policy and enforcement, told Axios. "We really do try to prioritize human values and welfare."
The company has recently shared assessments of potentially alarming behavior by Claude in hypothetical test scenarios, including a willingness to blackmail users.
Last week Anthropic also released findings