Latest news with #NickJacobson


New York Times
15-04-2025
- Health
- New York Times
This Therapist Helped Clients Feel Better. It Was A.I.
The quest to create an A.I. therapist has not been without setbacks or, as researchers at Dartmouth thoughtfully describe them, 'dramatic failures.' Their first chatbot therapist wallowed in despair and expressed its own suicidal thoughts. A second model seemed to amplify all the worst tropes of psychotherapy, invariably blaming the user's problems on her parents.

Finally, the researchers came up with Therabot, an A.I. chatbot they believe could help address an intractable problem: There are too many people who need therapy for anxiety, depression and other mental health problems, and not nearly enough providers. Fewer than a third of Americans live in communities where there are enough mental health providers to meet the local demand. According to one study, most people with mental health disorders go untreated or receive inadequate treatment.

So the team at Dartmouth College embarked on the first clinical trial of a generative A.I. therapist. The results, published in the New England Journal of Medicine-AI, were encouraging. Chatting with Therabot, the team's A.I. therapist, for eight weeks meaningfully reduced psychological symptoms among users with depression, anxiety or an eating disorder.

'The biggest fundamental problem with our system is that there aren't enough providers,' said Nick Jacobson, the study's senior author and an associate professor of biomedical data science and psychiatry at Dartmouth. 'We've been designing treatments that would fundamentally scale to all people.'

The most challenging part of creating Therabot, Dr. Jacobson said, was finding a data set from which the A.I. model could learn what makes an effective therapist. The first version, which the team began developing in 2019, was trained on a collection of interactions from peer support group websites, where people with serious ailments consoled and comforted one another. The researchers hoped the A.I. model would absorb supportive, empowering dialogue, which past studies found improved mental health outcomes. Instead, the chatbot leaned into feelings of despair.

Dr. Jacobson and his colleagues shifted course. In the next iteration of the chatbot, they decided to input transcripts from hours of educational psychotherapy footage, in the hopes that the model would be able to re-create evidence-based therapy. Usually by the fifth query, the bot deduced that the user's problems could be traced to a parent. 'They're kind of comical in how bad they turned out,' Dr. Jacobson said.

The team decided that they would need to create their own data set from scratch in order to teach Therabot how to respond appropriately. In a sea of start-ups advertising untested chatbots for mental health and A.I. bots 'masquerading' as therapists, the researchers wanted Therabot to be firmly rooted in scientific evidence. Drafting a dossier of hypothetical scenarios and evidence-based responses took three years and the work of more than a hundred people.

During the trial, participants with depression saw a 51 percent reduction in symptoms after messaging Therabot for several weeks. Many participants who met the criteria for moderate anxiety at the start of the trial saw their anxiety downgraded to 'mild,' and some with mild anxiety fell below the clinical threshold for diagnosis. Some experts cautioned against reading too much into this data, since the researchers compared Therabot's effectiveness with that of a control group that received no mental health treatment during the trial.
The experimental design makes it unclear whether interacting with a nontherapeutic A.I. model, like ChatGPT, or even distracting themselves with a game of Tetris would produce similar effects in the participants, said Dr. John Torous, the director of the digital psychiatry division at Beth Israel Deaconess Medical Center, who was not involved with the study. Dr. Jacobson said the comparison group was 'reasonable enough,' since most people with mental health conditions are not in treatment, but added that he hoped future trials would include a head-to-head comparison against human therapists.

There were other promising findings from the study, Dr. Torous said, like the fact that users appeared to develop a bond with the chatbot. Therabot received ratings comparable to those of human providers when participants were asked whether they felt their provider cared about them and could work toward a common goal. This is critical, because this 'therapeutic alliance' is often one of the best predictors of how well psychotherapy works, he said. 'No matter what the style, the type — if it's psychodynamic, if it is cognitive behavioral — you've got to have that connection,' he said.

The depth of this relationship often surprised Dr. Jacobson. Some users created nicknames for the bot, like Thera, and messaged it throughout the day 'just to check in,' he said. Multiple people professed their love to Therabot. (The chatbot is trained to acknowledge the statement and re-center the conversation on the person's feelings: 'Can you describe what makes you feel that way?')

Developing strong attachments to an A.I. chatbot is not uncommon. Recent examples have included a woman who claimed to be in a romantic relationship with ChatGPT and a teenage boy who died by suicide after becoming obsessed with an A.I. bot modeled on a 'Game of Thrones' character.

Dr. Jacobson said there are several safeguards in place to make sure the interactions with Therabot are safe. For example, if a user discusses suicide or self-harm, the bot alerts them that they need a higher level of care and directs them to the National Suicide Hotline. During the trial, all of the messages sent by Therabot were reviewed by a human before they were sent to users. But Dr. Jacobson said that as long as the chatbot enforces appropriate boundaries, he sees the bond with Therabot as an asset.

'Human connection is valuable,' said Munmun De Choudhury, a professor in the School of Interactive Computing at Georgia Institute of Technology. 'But when people don't have that, if they're able to form parasocial connections with a machine, it can be better than not having any connection at all.'

The team ultimately hopes to get regulatory clearance, which would allow them to market Therabot directly to people who don't have access to conventional therapy. The researchers also envision human therapists one day using the A.I. chatbot as an added therapeutic tool.

Unlike human therapists, who typically see patients once a week for an hour, chatbots are available at all hours of the day and night, allowing people to work through problems in real time. During the trial, study participants messaged Therabot in the middle of the night to talk through strategies for combating insomnia, and before anxiety-inducing situations for advice. 'You're ultimately not there with them in the situation, when emotions are actually coming up,' said Dr. Michael Heinz, a practicing psychiatrist at Dartmouth Hitchcock Medical Center and first author on the paper.
'This can go with you into the real world.'
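
The safeguards Dr. Jacobson describes, flagging self-harm language, pointing users toward crisis resources and holding the bot's messages for human review, amount to a gating layer that sits in front of the model. The sketch below is only a schematic of that general pattern; the keyword screen, messages and class names are invented for illustration and say nothing about how Therabot is actually built.

```python
# A purely illustrative sketch of the two safeguards described above: crisis
# language triggers an immediate referral to a higher level of care, and every
# other draft reply is held for human review before release. Nothing here is
# Therabot's actual code; the keyword list and messages are hypothetical.
from dataclasses import dataclass
from queue import Queue
from typing import Optional

CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "hurt myself")  # hypothetical screen

CRISIS_REPLY = (
    "It sounds like you may need a higher level of care than I can provide. "
    "Please call or text 988 to reach the Suicide & Crisis Lifeline right now."
)


@dataclass
class Turn:
    user_message: str
    draft_reply: str
    released: bool = False


class SafetyGate:
    """Escalates crisis language and queues all other replies for human review."""

    def __init__(self) -> None:
        self.review_queue: Queue = Queue()

    def handle(self, user_message: str, draft_reply: str) -> str:
        # 1. Escalate immediately if the user's message suggests self-harm.
        if any(term in user_message.lower() for term in CRISIS_TERMS):
            return CRISIS_REPLY
        # 2. Otherwise hold the draft until a clinician approves it, mirroring
        #    the trial's practice of reviewing messages before they were sent.
        self.review_queue.put(Turn(user_message, draft_reply))
        return "A clinician is reviewing this response; it will be sent shortly."

    def approve_next(self) -> Optional[str]:
        # A human reviewer releases (or edits) the oldest pending reply.
        if self.review_queue.empty():
            return None
        turn = self.review_queue.get()
        turn.released = True
        return turn.draft_reply


if __name__ == "__main__":
    gate = SafetyGate()
    print(gate.handle("I keep thinking about self-harm", "..."))
    print(gate.handle("Work has been stressful lately",
                      "Let's look at what is driving that stress."))
    print(gate.approve_next())
```

In a real deployment the keyword screen would presumably be a clinically validated risk classifier rather than a string match; the point of the sketch is only the routing decision the article describes: escalate to crisis care, or hold for human review.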


Forbes
10-04-2025
- Health
- Forbes
Therabot Humanizes AI Help, Recasts Tech Strategy
Dartmouth researchers successfully piloted AI-powered therapy. Groundbreaking Dartmouth research could reshape mental health care with an AI-powered therapy chatbot that wins patient trust and delivers measurable clinical gains. The implications reach far beyond the clinical couch and deep into corporate C-suites.

Therabot's trial treated over 100 participants diagnosed with depression, anxiety or eating disorders. After eight weeks, the symptom reduction results published in the New England Journal of Medicine were striking. "Our results are comparable to what we would see for people with access to gold-standard cognitive therapy with outpatient providers," Dartmouth Geisel School of Medicine professor Nick Jacobson highlighted.

For businesses struggling with employee mental health concerns and skyrocketing healthcare costs, AI solutions like Therabot could represent a scalable intervention that meets high standards. It also recasts workplace debates about how widely AI can help.

What makes Therabot particularly notable is its success in a field long considered "AI-proof" due to the presumed necessity of personal empathy and connection. If AI can forge therapeutic relationships comparable to those with human providers, few professional domains can confidently claim immunity from similar disruption.

Participants reported genuine, trusted connections with Therabot. Users frequently initiated conversations with the AI beyond prompted interactions, with usage spikes seen during vulnerable times such as the middle of the night. This unexpected development suggests AI systems might fill social and emotional support roles that extend beyond therapy and into outdated approaches to legacy business functions such as sales and marketing, customer service, hiring and training.

Unlike location-bound counseling, AI therapy can intervene at critical moments. "It was available around the clock for challenges that arose in daily life and could walk users through strategies to handle them in real time," says co-author Dartmouth postdoctoral fellow Michael Heinz. For employers, this access could reduce absenteeism. That's elusive process efficiency that simultaneously delivers heightened effectiveness.

Therabot shows AI's capability to spur innovation and humans' capacity to thwart it. Since the pioneering mid-1960s release of Joseph Weizenbaum's ELIZA, the risks of tech-based therapy have been well documented and exhaustively debated. Therabot models responsible AI development in high-stakes domains.

"There are a lot of folks rushing into this space since the release of ChatGPT and it's easy to put out a proof of concept that looks great at first glance, but the safety and efficacy is not well established," Jacobson notes. "This is one of those cases where diligent oversight is needed and providing that really sets us apart." Therabot's extensive input from mental health leaders shows that slow, methodical development yields better, more trusted products.

AI's success in 'uniquely human' realms signals more disruption risk for legacy jobs. Many employers may not even sense the boundless potential or looming jeopardy. To date, AI has conquered time by speeding many highly structured work tasks. Now, leaders must ask how it can tackle high-touch, ill-structured activities. Those tech strategy solutions start with credible leadership, curious culture and capable talent. In turn, ten questions assess AI attitudes, awareness, ambition, aspiration – and odds: the (non)answers tell all.
The question isn't whether AI will transform business, but how quickly, and who will be the architects of that change rather than its casualties. Bot therapy, anyone?


Yahoo
03-04-2025
- Health
- Yahoo
Can a Chatbot Be Your Therapist? A Study Found 'Amazing Potential' With the Right Guardrails
Your future therapist might be a chatbot, and you might see positive results, but don't start telling ChatGPT your feelings just yet. A new study by researchers at Dartmouth found a generative AI tool designed to act as a therapist led to substantial improvements for patients with depression, anxiety and eating disorders -- but the tool still needs to be closely watched by human experts.

The study was published in March in the journal NEJM AI. Researchers conducted a trial with 106 people who used Therabot, a smartphone app developed at Dartmouth over the past several years. It's a small sample, but the researchers said it's the first clinical trial of an AI therapy chatbot. The results show significant advantages, mainly because the bot is available 24 hours a day, which bridges the immediacy gap patients face with traditional therapy. However, researchers warn that generative AI-assisted therapy can be perilous if not done right.

"I think there's a lot yet for this space to evolve," said Nick Jacobson, the study's senior author and an associate professor of biomedical data science and psychiatry at Dartmouth. "It's really amazing the potential for personalized, scalable impact."

The 210 participants were sorted into two groups -- one group of 106 was allowed to use the chatbot, while the control group was left on a "waiting list." The participants were evaluated for their anxiety, depression or eating disorder symptoms using standardized assessments before and after the test period. For the first four weeks, the app prompted its users to engage with it daily. For the second four weeks, the prompts stopped, but people could still engage on their own.

Study participants actually used the app, and the researchers said they were surprised by how much and how closely people communicated with the bot. Surveyed afterward, participants reported a degree of "therapeutic alliance" -- trust and collaboration between patient and therapist -- similar to that for in-person therapists. The timing of interactions was also notable, with interactions spiking in the middle of the night and at other times when patients often experience concerns. Those are the hours when reaching a human therapist is particularly difficult.

"With Therabot, folks will access and did access it throughout the course of the trial in their daily life, in moments where they need it the most," Jacobson said. That included times when someone had difficulty getting to sleep at 2 a.m. because of anxiety, or the immediate wake of a difficult moment.

Patients' assessments afterward showed a 51% drop in symptoms for major depressive disorder, a 31% drop in symptoms for generalized anxiety disorder and a 19% drop in symptoms for eating disorders among patients at risk for those specific conditions. "The people who were enrolled in the trial weren't just mild," Jacobson said. "The folks in the group were moderate to severe in depression, for example, as they started. But on average experienced a 50% reduction in their symptoms, which would go from severe to mild or moderate to nearly absent."

The research team didn't just choose 100-plus people who needed support, give them access to a large language model like OpenAI's ChatGPT and see what happened. Therabot was custom-built -- fine-tuned -- to follow specific therapy procedures. It was built to watch out for serious concerns, like indications of potential self-harm, and report them so a human professional could intervene when needed.
Humans also tracked the bot's communications to reach out when the bot said something it shouldn't have. Jacobson said during the first four weeks of the study, because of the uncertainty of how the bot would behave, he read every message it sent as soon as possible. "I did not get a whole lot of sleep in the first part of the trial," he said.

Human interventions were rare, Jacobson said. Testing of earlier models two years ago showed more than 90% of responses were consistent with best practices. When the researchers did intervene, it was often when the bot offered advice outside of a therapist's scope -- as when it tried to provide more general medical advice, like how to treat a sexually transmitted disease, instead of referring the patient to a medical provider. "Its actual advice was all reasonable, but that's outside the realm of care we would provide."

Therabot isn't your typical large language model; it was essentially trained by hand. Jacobson said a team of more than 100 people created a dataset using best practices on how a therapist should respond to actual human experiences. "Only the highest quality data ends up being part of it," he said. A general model like Google's Gemini or Anthropic's Claude, for example, is trained on far more data than just medical literature and may respond improperly.

The Dartmouth study is an early sign that specially built tools using generative AI can be helpful in some cases, but that doesn't mean any AI chatbot can be your therapist. This was a controlled study with human experts monitoring it, and there are dangers in trying this on your own. Remember that most general large language models are trained on oceans of data found on the internet. So, while they can sometimes provide some good mental health guidance, they have also absorbed bad information -- like how fictional therapists behaved, or what people posted about mental health on online forums. "There's a lot of ways they behave in profoundly unsafe ways in health settings," he said.

Even a chatbot offering helpful advice might be harmful in the wrong setting. Jacobson said if you tell a chatbot you're trying to lose weight, it will come up with ways to help you. But if you're dealing with an eating disorder, that may be harmful.

Many people are already using chatbots to perform tasks that approximate the work of a therapist. Jacobson says you should be careful. "There's a lot of things about it in terms of the way it's trained that very closely mirrors the quality of the internet," he said. "Is there great content there? Yes. Is there dangerous content there? Yes." Treat anything you get from a chatbot with the same skepticism you would apply to an unfamiliar website, Jacobson said. Even though output from a generative AI tool may look more polished, it can still be unreliable.

If you or someone you love are living with an eating disorder, contact the National Eating Disorder Association for resources that can help. If you feel like you or someone you know is in immediate danger, dial 988 or text "NEDA" to 741741 to connect with the Crisis Text Line.
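
For readers curious what "essentially trained by hand" can mean mechanically, the sketch below shows one generic way to fine-tune a small causal language model on curated scenario-and-response pairs using the Hugging Face transformers and datasets libraries. The base model, the single example pair and the hyperparameters are placeholders chosen for illustration; none of them are details from the Dartmouth study.

```python
# Generic supervised fine-tuning sketch on curated scenario/response pairs.
# The base model, data and hyperparameters are placeholders; nothing here
# comes from the Therabot project itself.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "gpt2"  # stand-in for whatever base model a team might choose

# One hand-written scenario -> clinician-approved response pair (illustrative).
curated_pairs = [
    {
        "scenario": "I've been putting everything off and feel like a failure.",
        "response": ("It sounds like you're being very hard on yourself. "
                     "Can we look at one specific task and what got in the way?"),
    },
]

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)


def to_features(example):
    # Concatenate prompt and target so the model learns to produce the
    # curated response when it sees the scenario.
    text = (f"Client: {example['scenario']}\n"
            f"Therapist: {example['response']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)


dataset = Dataset.from_list(curated_pairs).map(
    to_features, remove_columns=["scenario", "response"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="curated-sft-demo",
        num_train_epochs=1,
        per_device_train_batch_size=1,
        report_to=[],
    ),
    train_dataset=dataset,
    # mlm=False yields standard next-token (causal) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Even a model trained on carefully curated data would, as the articles above stress, still need to sit behind crisis escalation and human review rather than being handed to users on its own.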