Latest news with #Therabot


The Star
5 days ago
- Business
- The Star
AI isn't ready to be your therapist, but it's a top reason people use it
From falling in love with ChatGPT to deepfakes of deceased loved ones, artificial intelligence's potential for influence is vast – its myriad potential applications not yet completely charted. In truth, today's AI users are pioneering a new, still swiftly developing technological landscape, something arguably akin to the birth of social media in the early 2000s. Yet, in an age of uncertainty about nascent generative AI's full potential, people are already turning to artificial intelligence for major life advice. One of the most common ways people use generative AI in 2025, it turns out, is for therapy. But the technology isn't ready yet.

How people use AI in 2025

As of January 2025, ChatGPT topped the list of most popular AI tools based on monthly site visits, with 4.7 billion monthly visitors, according to Visual Capitalist. That dwarfed the next most popular service, Canva, by more than five to one. When it comes to understanding AI use, digging into how ChatGPT is being put to work this year is a good starting point.

Sam Altman, CEO of ChatGPT's parent company, OpenAI, recently offered some insight into how its users are making the most of the tool by age group. 'Gross oversimplification, but like older people use ChatGPT as a Google replacement,' Altman said at Sequoia Capital's AI Ascent event a few weeks ago, as transcribed by Fortune. 'Maybe people in their 20s and 30s use it as like a life advisor, and then, like people in college use it as an operating system.'

It turns out that life advice is something a lot of AI users may be seeking these days. Author and co-founder Marc Zao-Sanders recently completed a qualitative study, featured in Harvard Business Review, on how people are using AI. 'Therapy/companionship' topped the list as the most common way people are using generative AI, followed by life organisation and then people seeking purpose in life. According to OpenAI's tech titan, it seems that generated life advice can be an incredibly powerful influence.

A Pew Research Center survey published last month reported that a 'vast majority' of surveyed AI experts said people in the United States interact with AI several times a day, if not almost constantly. Around a third of surveyed US adults said they had used a chatbot (which would include things like ChatGPT) before. Some tech innovators, including a team of Dartmouth researchers, are leaning into the trend.

Therabot, can you treat my anxiety?

Dartmouth researchers have completed a first-of-its-kind clinical trial of a generative AI-powered therapy chatbot. The smartphone app-friendly Therabot has been in development since 2019, and its recent trial showed promise. Just over 100 patients – each experiencing depressive disorder, generalized anxiety disorder or an eating disorder – participated in the experiment. According to senior study author Nicholas Jacobson, the improvement in each patient's symptoms was comparable to traditional outpatient therapy. 'There is no replacement for in-person care, but there are nowhere near enough providers to go around,' he told the college.

Even Dartmouth's Therabot researchers, however, said generative AI is simply not ready yet to be anyone's therapist. 'While these results are very promising, no generative AI agent is ready to operate fully autonomously in mental health where there is a very wide range of high-risk scenarios it might encounter,' first study author Michael Heinz told Dartmouth. 'We still need to better understand and quantify the risks associated with generative AI used in mental health contexts.'

Why is AI not ready to be anyone's therapist?

RCSI University of Medicine and Health Sciences' Ben Bond is a PhD candidate in digital psychiatry who researches ways digital tools can be used to benefit or better understand mental health. Writing in The Conversation, Bond broke down how AI therapy tools like Therabot could pose some significant risks. Among them, Bond explained that AI 'hallucinations' are known flaws in today's chatbot services. From quoting studies that don't exist to directly giving incorrect information, he said these hallucinations could be dangerous for people seeking mental health treatment.

'Imagine a chatbot misinterpreting a prompt and validating someone's plan to self-harm, or offering advice that unintentionally reinforces harmful behaviour,' Bond wrote. 'While the studies on Therabot and ChatGPT included safeguards – such as clinical oversight and professional input during development – many commercial AI mental health tools do not offer the same protections.'

According to Michael Best, PhD, a psychologist and contributor to Psychology Today, there are other concerns to consider, too. 'Privacy is another pressing concern,' he wrote in Psychology Today. 'In a traditional setting, confidentiality is protected by professional codes and legal frameworks. But with AI, especially when it's cloud-based or connected to larger systems, data security becomes far more complex.

'The very vulnerability that makes therapy effective also makes users more susceptible to harm if their data is breached. Just imagine pouring your heart out to what feels like a safe space, only to later find that your words have become part of a data set used for purposes you never agreed to.'

Best added that bias is a significant concern, something that could lead to AI therapists giving bad advice. 'AI systems learn from the data they're trained on, which often reflect societal biases,' he wrote. 'If these systems are being used to deliver therapeutic interventions, there's a risk that they might unintentionally reinforce stereotypes or offer less accurate support to marginalized communities.

'It's a bit like a mirror that reflects the world not as it should be, but as it has been – skewed by history, inequality, and blind spots.'

Researchers are making progress in improving AI therapy services. Patients suffering from depression experienced an average 51% reduction in symptoms after participating in Dartmouth's Therabot experiment. For those suffering from anxiety, there was an average 31% drop in symptoms. The patients suffering from eating disorders showed the lowest reduction in symptoms but still averaged 19% better off than before they used Therabot.

It's possible there's a future where artificial intelligence can be trusted to treat mental health, but – according to the experts – we're just not there yet. – The Atlanta Journal-Constitution/Tribune News Service


Japan Today
07-05-2025
- Health
- Japan Today
U.S. researchers seek to legitimize AI mental health care
By Thomas URBAIN

Researchers at Dartmouth College believe artificial intelligence can deliver reliable psychotherapy, distinguishing their work from the unproven and sometimes dubious mental health apps flooding today's market. Their application, Therabot, addresses the critical shortage of mental health professionals.

According to Nick Jacobson, an assistant professor of data science and psychiatry at Dartmouth, even multiplying the current number of therapists tenfold would leave too few to meet demand. "We need something different to meet this large need," Jacobson told AFP.

The Dartmouth team recently published a clinical study demonstrating Therabot's effectiveness in helping people with anxiety, depression and eating disorders. A new trial is planned to compare Therabot's results with conventional therapies.

The medical establishment appears receptive to such innovation. Vaile Wright, senior director of health care innovation at the American Psychological Association (APA), described "a future where you will have an AI-generated chatbot rooted in science that is co-created by experts and developed for the purpose of addressing mental health." Wright noted these applications "have a lot of promise, particularly if they are done responsibly and ethically," though she expressed concerns about potential harm to younger users.

Jacobson's team has so far dedicated close to six years to developing Therabot, with safety and effectiveness as primary goals. Michael Heinz, psychiatrist and project co-leader, believes rushing for profit would compromise safety. The Dartmouth team is prioritizing understanding how their digital therapist works and establishing trust. They are also contemplating the creation of a nonprofit entity linked to Therabot to make digital therapy accessible to those who cannot afford conventional in-person help.

With the cautious approach of its developers, Therabot could potentially be a standout in a marketplace of untested apps that claim to address loneliness, sadness and other issues. According to Wright, many apps appear designed more to capture attention and generate revenue than improve mental health. Such models keep people engaged by telling them what they want to hear, but young users often lack the savvy to realize they are being manipulated.

Darlene King, chair of the American Psychiatric Association's committee on mental health technology, acknowledged AI's potential for addressing mental health challenges but emphasized the need for more information before determining true benefits and risks. "There are still a lot of questions," King noted.

To minimize unexpected outcomes, the Therabot team went beyond mining therapy transcripts and training videos to fuel its AI app by manually creating simulated patient-caregiver conversations.

While the U.S. Food and Drug Administration theoretically is responsible for regulating online mental health treatment, it does not certify medical devices or AI apps. Instead, "the FDA may authorize their marketing after reviewing the appropriate pre-market submission," according to an agency spokesperson. The FDA acknowledged that "digital mental health therapies have the potential to improve patient access to behavioral therapies."

Herbert Bay, CEO of Earkick, defends his startup's AI therapist Panda as "super safe."
Bay says Earkick is conducting a clinical study of its digital therapist, which detects emotional crisis signs or suicidal ideation and sends help alerts. "What happened with [Character.AI] couldn't happen with us," said Bay, referring to a Florida case in which a mother claims a chatbot relationship contributed to her 14-year-old son's death by suicide.

AI, for now, is suited more for day-to-day mental health support than life-shaking breakdowns, according to Bay. "Calling your therapist at two in the morning is just not possible," but a therapy chatbot remains always available, Bay noted.

One user named Darren, who declined to provide his last name, found ChatGPT helpful in managing his traumatic stress disorder, despite the OpenAI assistant not being designed specifically for mental health. "I feel like it's working for me," he said. "I would recommend it to people who suffer from anxiety and are in distress." © 2025 AFP


Arab Times
05-05-2025
- Health
- Arab Times
New AI tool shows promise in treating depression, anxiety, and eating disorders
NEW YORK, May 5: Researchers at a leading academic institution have developed an artificial intelligence-powered chatbot aimed at addressing the growing gap in access to mental health services. The tool, known as Therabot, is positioned as a credible alternative to the unregulated wave of mental health apps currently saturating the digital marketplace.

According to the team behind the project, even a dramatic increase in the number of human therapists would not be sufficient to meet the growing demand for mental health care. Their solution: a digital platform that can provide reliable, science-based support to individuals dealing with conditions such as anxiety, depression, and eating disorders. A recently published clinical study highlighted Therabot's effectiveness in reducing symptoms across those disorders. A follow-up trial is planned to compare the chatbot's performance directly against traditional therapy methods.

The medical and psychological communities appear cautiously optimistic about the use of AI in this space. One healthcare innovation leader from a major psychological association described the potential of AI-driven mental health support as 'promising,' provided it is developed ethically and responsibly. However, concerns remain, especially regarding how younger users may interact with such tools.

The development team behind Therabot has invested nearly six years into building the chatbot, emphasizing user safety and therapeutic value over commercial gain. Rather than relying solely on real-world therapy transcripts, the developers constructed detailed simulated conversations to train the AI, enhancing its understanding of patient-caregiver dynamics. The team is also considering launching a nonprofit branch to help ensure access for individuals who cannot afford traditional therapy.

In contrast to many commercially driven apps, which critics say often prioritize engagement over well-being, the Therabot developers aim to build genuine therapeutic connections and trust with users. Experts warn that many apps on the market feed users what they want to hear, which may mislead or emotionally manipulate users, especially younger audiences.

While the U.S. Food and Drug Administration does not formally certify AI-based mental health apps, it may authorize them for marketing after reviewing pre-market submissions. The agency has acknowledged the potential for digital tools to improve access to behavioral therapy.

Other developers in the space are also working on AI-powered therapy solutions. One competing app claims to be able to detect signs of crisis or suicidal ideation and send alerts to prevent harm. The creators of this app argue that their design avoids the risks associated with less rigorously developed chatbots.

Despite their potential, experts agree that AI therapy tools are currently better suited for day-to-day emotional support than for severe psychiatric crises. However, their constant availability makes them a valuable resource for individuals seeking support at unconventional hours – something not always possible with human therapists.

Some individuals have already turned to general AI platforms for mental health support, with one user reporting significant personal benefit in managing trauma-related stress. While such tools are not officially designed for therapy, their accessibility and responsiveness offer comfort to those in distress.
As AI continues to shape the future of mental health care, developers and medical professionals alike stress the importance of balancing innovation with ethical responsibility and robust oversight.


The Star
05-05-2025
- Health
- The Star
US researchers seek to legitimise AI mental health care
NEW YORK: Researchers at Dartmouth College believe artificial intelligence can deliver reliable psychotherapy, distinguishing their work from the unproven and sometimes dubious mental health apps flooding today's market. Their application, Therabot, addresses the critical shortage of mental health professionals.

According to Nick Jacobson, an assistant professor of data science and psychiatry at Dartmouth, even multiplying the current number of therapists tenfold would leave too few to meet demand. 'We need something different to meet this large need,' Jacobson told AFP.

The Dartmouth team recently published a clinical study demonstrating Therabot's effectiveness in helping people with anxiety, depression and eating disorders. A new trial is planned to compare Therabot's results with conventional therapies.

The medical establishment appears receptive to such innovation. Vaile Wright, senior director of health care innovation at the American Psychological Association (APA), described 'a future where you will have an AI-generated chatbot rooted in science that is co-created by experts and developed for the purpose of addressing mental health'. Wright noted these applications 'have a lot of promise, particularly if they are done responsibly and ethically', though she expressed concerns about potential harm to younger users.

Jacobson's team has so far dedicated close to six years to developing Therabot, with safety and effectiveness as primary goals. Michael Heinz, psychiatrist and project co-leader, believes rushing for profit would compromise safety. The Dartmouth team is prioritising understanding how their digital therapist works and establishing trust. They are also contemplating the creation of a nonprofit entity linked to Therabot to make digital therapy accessible to those who cannot afford conventional in-person help.

Care or cash?

With the cautious approach of its developers, Therabot could potentially be a standout in a marketplace of untested apps that claim to address loneliness, sadness and other issues. According to Wright, many apps appear designed more to capture attention and generate revenue than improve mental health. Such models keep people engaged by telling them what they want to hear, but young users often lack the savvy to realise they are being manipulated.

Darlene King, chair of the American Psychiatric Association's committee on mental health technology, acknowledged AI's potential for addressing mental health challenges but emphasised the need for more information before determining true benefits and risks. 'There are still a lot of questions,' King noted.

To minimise unexpected outcomes, the Therabot team went beyond mining therapy transcripts and training videos to fuel its AI app by manually creating simulated patient-caregiver conversations.

While the US Food and Drug Administration theoretically is responsible for regulating online mental health treatment, it does not certify medical devices or AI apps. Instead, 'the FDA may authorise their marketing after reviewing the appropriate pre-market submission,' according to an agency spokesperson. The FDA acknowledged that 'digital mental health therapies have the potential to improve patient access to behavioral therapies'.

Therapist always in

Herbert Bay, CEO of Earkick, defends his startup's AI therapist Panda as 'super safe'. Bay says Earkick is conducting a clinical study of its digital therapist, which detects emotional crisis signs or suicidal ideation and sends help alerts.
'What happened with [Character.AI] couldn't happen with us,' said Bay, referring to a Florida case in which a mother claims a chatbot relationship contributed to her 14-year-old son's death by suicide.

AI, for now, is suited more for day-to-day mental health support than life-shaking breakdowns, according to Bay. 'Calling your therapist at two in the morning is just not possible,' but a therapy chatbot remains always available, Bay noted.

One user named Darren, who declined to provide his last name, found ChatGPT helpful in managing his traumatic stress disorder, despite the OpenAI assistant not being designed specifically for mental health. 'I feel like it's working for me,' he said. 'I would recommend it to people who suffer from anxiety and are in distress.' – AFP


Time of India
05-05-2025
- Health
- Time of India
US researchers seek to legitimise AI mental health care
Paris: Researchers at Dartmouth College believe artificial intelligence can deliver reliable psychotherapy, distinguishing their work from the unproven and sometimes dubious mental health apps flooding today's market. Their application, Therabot, addresses the critical shortage of mental health professionals.

According to Nick Jacobson, an assistant professor of data science and psychiatry at Dartmouth, even multiplying the current number of therapists tenfold would leave too few to meet demand. "We need something different to meet this large need," Jacobson told AFP.

The Dartmouth team recently published a clinical study demonstrating Therabot's effectiveness in helping people with anxiety, depression and eating disorders. A new trial is planned to compare Therabot's results with conventional therapies.

The medical establishment appears receptive to such innovation. Vaile Wright, senior director of health care innovation at the American Psychological Association (APA), described "a future where you will have an AI-generated chatbot rooted in science that is co-created by experts and developed for the purpose of addressing mental health." Wright noted these applications "have a lot of promise, particularly if they are done responsibly and ethically," though she expressed concerns about potential harm to younger users.

Jacobson's team has so far dedicated close to six years to developing Therabot, with safety and effectiveness as primary goals. Michael Heinz, psychiatrist and project co-leader, believes rushing for profit would compromise safety. The Dartmouth team is prioritizing understanding how their digital therapist works and establishing trust. They are also contemplating the creation of a nonprofit entity linked to Therabot to make digital therapy accessible to those who cannot afford conventional in-person help.

Care or cash?

With the cautious approach of its developers, Therabot could potentially be a standout in a marketplace of untested apps that claim to address loneliness, sadness and other issues. According to Wright, many apps appear designed more to capture attention and generate revenue than improve mental health. Such models keep people engaged by telling them what they want to hear, but young users often lack the savvy to realize they are being manipulated.

Darlene King, chair of the American Psychiatric Association's committee on mental health technology, acknowledged AI's potential for addressing mental health challenges but emphasized the need for more information before determining true benefits and risks. "There are still a lot of questions," King noted.

To minimize unexpected outcomes, the Therabot team went beyond mining therapy transcripts and training videos to fuel its AI app by manually creating simulated patient-caregiver conversations.

While the US Food and Drug Administration theoretically is responsible for regulating online mental health treatment, it does not certify medical devices or AI apps. Instead, "the FDA may authorize their marketing after reviewing the appropriate pre-market submission," according to an agency spokesperson. The FDA acknowledged that "digital mental health therapies have the potential to improve patient access to behavioral therapies."

Therapist always in

Herbert Bay, CEO of Earkick, defends his startup's AI therapist Panda as "super safe." Bay says Earkick is conducting a clinical study of its digital therapist, which detects emotional crisis signs or suicidal ideation and sends help alerts.
"What happened with couldn't happen with us," said Bay, referring to a Florida case in which a mother claims a chatbot relationship contributed to her 14-year-old son's death by suicide. AI, for now, is suited more for day-to-day mental health support than life-shaking breakdowns, according to Bay. "Calling your therapist at two in the morning is just not possible," but a therapy chatbot remains always available, Bay noted. One user named Darren, who declined to provide his last name, found ChatGPT helpful in managing his traumatic stress disorder, despite the OpenAI assistant not being designed specifically for mental health. "I feel like it's working for me," he said. "I would recommend it to people who suffer from anxiety and are in distress."