The Real Risks of Turning to AI for Therapy

WebMD
Aug. 20, 2025 — Whenever Luke W Russell needs to work through something, they turn to ChatGPT. (Luke uses they/them pronouns.)
'I've wept as I've navigated things,' said the Indianapolis filmmaker, who uses the chatbot to pick apart intrusive thoughts or navigate traumatic memories. 'I've had numerous times when what ChatGPT is saying to me is so real, so powerful, and I feel so deeply seen.'
Russell's experience reflects a broader, growing reality: Many people are turning to chatbots for mental health support — for everything from managing anxiety and processing grief to coping with work conflicts and defusing marital spats.
More than half of adults ages 18-54 — and a quarter of adults 55 and up — say they would be comfortable talking with an AI chatbot about their mental health, according to a 2025 survey by the Harris Poll and the American Psychological Association (APA).
The catch: OpenAI's ChatGPT and other chatbots — like Anthropic's Claude and Google's Gemini — are not designed for this.
Even AI products promoted as emotional health tools — like Replika, Wysa, Youper, and MindDoc — were not built on validated psychological methods, said psychologist C. Vaile Wright, PhD, senior director of the APA's Office of Health Care Innovation.
'I would argue that there isn't really any commercially approved, AI-assisted therapy at the moment,' said Wright. 'You've got a whole lot of chatbots where there is no research, there's no psychological science, and there are no subject matter experts.'
Critics warn that AI's potential for bias, lack of true empathy, and limited human oversight could actually endanger users' mental health, especially among vulnerable groups like children, teens, people with mental health conditions, and those experiencing suicidal thoughts. The growing concern has led to the emergence of the terms 'ChatGPT psychosis' or 'AI psychosis' — referring to the potential harmful mental health effects of interacting with AI. It's even drawing attention from lawmakers: This month, Illinois enacted restrictions on AI in mental health care, banning its use for therapy and prohibiting mental health professionals from using AI to communicate with clients or make therapeutic decisions. (Similar restrictions have already been passed in Nevada and Utah.)
But none of this is stopping people from turning to chatbots for support, especially amid clinician shortages, rising therapy costs, and inadequate mental health insurance coverage.
'People have absolutely reported that experiences with chatbots can be helpful,' said Wright.
The Draw of Chatbots for Mental Health
Data shows we're facing a massive shortage of mental health workers, especially in remote and rural areas, said psychologist Elizabeth Stade, PhD, a researcher in the Computational Psychology and Well-Being Lab at Stanford University in Stanford, CA.
'Of adults in the United States with significant mental health needs, only about half are able to access any form of treatment. With youth, that number is closer to 75%,' said Jessica Schleider, PhD, a child and adolescent psychologist at Northwestern University in Chicago. 'The provider shortage is clearly contributing to why so many folks are turning to their devices and, now increasingly, to generative AI to fill that gap.'
Unlike a therapist, a chatbot is available 24/7. 'When [people] need help the most, it is typically after hours,' said Wright, who suggested the right AI tool could potentially supplement human therapy. 'When it's 2 a.m. and you're in crisis, could this help provide some support?' Probably, she said.
Results of the first clinical trial of a generative AI therapy chatbot showed 'significant, clinically meaningful reductions in depression, anxiety, and eating disorder symptoms' within four to eight weeks, said lead study author Michael V. Heinz, MD, a professor at Dartmouth College's Geisel School of Medicine and faculty affiliate at the Center for Technology and Behavioral Health in Lebanon, New Hampshire.
The chatbot — Therabot, developed at Dartmouth — combines extensive training in evidence-based psychotherapy interventions with advanced generative AI. 'We saw high levels of user engagement — six-plus hours on average across the study,' Heinz said. Participants said using Therabot was like talking to a human therapist. But results are early, and more studies are needed, Heinz said.
Access and affordability drew Russell to ChatGPT, they said. 'I didn't set out to use ChatGPT as a therapist. I quit therapy in January due to income dropping. I was already using ChatGPT on the regular for work, and then I started using it for personal idea exploration. ... I've never had a therapist who could move as fast as ChatGPT and ignore miscellaneous things,' they said.
Perhaps one of the most appealing aspects is that chatbots don't judge. 'People are reluctant to be judged, and so they are often reluctant to disclose symptoms,' said Jonathan Gratch, PhD, professor of computer science and psychology at the University of Southern California, who has researched the topic.
One of his studies found that military veterans were more likely to share PTSD symptoms with a virtual chatbot than in a survey.
When Chatbots Are Harmful
Most people don't know how AI works — they might believe it's always objective and factual, said Henry A. Willis, PhD, a psychologist and professor at the University of Maryland in College Park. But often, the data they're trained on is not representative of minority groups, leading to bias and technology-mediated racism, Willis said.
'We know that Black and brown communities are not adequately reflected in the majority of large-scale mental health research studies,' Willis said. So a chatbot's clinical symptom information or treatment recommendations may not be relevant or helpful to those from minority backgrounds.
There's also an impersonal aspect. Chatbots commit what's called the ecological fallacy, said H. Andrew Schwartz, PhD, associate professor of computer science at Stony Brook University in Stony Brook, NY. They treat scattered comments like random data points, drawing conclusions from group-level patterns that may not reflect the reality of any one individual.
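To make that concrete, here is a purely illustrative Python sketch with made-up numbers (the users, "messages," and "mood" scores are hypothetical, not from any study Schwartz describes). It shows how a trend in pooled group data can be the exact opposite of the trend inside every individual's own data:

```python
# Toy illustration of the ecological fallacy, using invented data:
# within each person, more messages coincide with LOWER mood,
# but across the pooled group the relationship looks POSITIVE.
import numpy as np

# Three hypothetical users; each logs (messages sent, reported mood).
users = {
    "A": {"messages": [1, 2, 3], "mood": [5, 4, 3]},
    "B": {"messages": [4, 5, 6], "mood": [8, 7, 6]},
    "C": {"messages": [7, 8, 9], "mood": [11, 10, 9]},
}

# Correlation computed separately for each individual: -1.00 every time.
for name, u in users.items():
    r = np.corrcoef(u["messages"], u["mood"])[0, 1]
    print(f"user {name}: correlation within this person = {r:+.2f}")

# Correlation computed on everyone pooled together: strongly positive.
all_msgs = np.concatenate([u["messages"] for u in users.values()])
all_mood = np.concatenate([u["mood"] for u in users.values()])
r_pooled = np.corrcoef(all_msgs, all_mood)[0, 1]
print(f"correlation across the pooled group = {r_pooled:+.2f}")
```

Here every hypothetical user's mood falls as they send more messages (correlation -1.00 within each person), yet the pooled correlation comes out at +0.80 — so a system reasoning only from the group-level pattern would give each individual exactly the wrong read.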
And who's accountable if something goes wrong? Chatbots have been linked to cases involving suggestions of violence and self-harm, including the death of a teen by suicide.
Some chatbots marketed for companionship and emotional support were designed with another incentive: to make money. Wright is concerned that they may unconditionally validate patients, telling them what they want to hear so they stay on the platform — 'even if what they're telling you is actually harmful or they're validating harmful responses from the user.'
None of these conversations are bound by HIPAA regulations, either, Wright pointed out. 'So even though they may be asking for personal information or sharing your personal information, they have no legal obligation to protect it.'
The Psychological Implications of Forming Emotional Bonds With AI
In an opinion article published in April in the journal Trends in Cognitive Sciences, psychologists expressed concern about the long-term implications of forming emotional bonds with AI. Chatbots can replace users' real relationships, crowding out romantic partners, co-workers, and friends.
This may mean that individuals begin to 'trust' the opinion and feedback of chatbots over real people, said Willis.
'The ongoing positive reinforcement that can happen instantly from interacting with a chatbot may begin to overshadow any reinforcement from interacting with real people,' who may not be able to communicate as quickly, he said. 'These emotional bonds may also impair people's ability to have a healthy level of skepticism and critical evaluation skills when it comes to the responses of AI chatbots.'
Gratch compared it to hunger and food.
'We're biologically wired to seek out food when we get hungry. It is the same with social relationships. If we haven't had a relationship in a while, we may feel lonely, and then that motivates us to go out and reach out to people.' But studies suggest that social interaction with a computer program, like a chatbot, can sate a person's social needs and reduce their motivation to reach out to friends, he said. 'That may have long-term consequences for increased loneliness. For example, research has shown people who compulsively use Facebook tend to be much more lonely.'
Counseling with a therapist involves 'a natural curiosity about the individual and their experiences that AI cannot replicate,' Willis said. 'AI chatbots respond to prompts, whereas therapists can observe and ask clinical questions based on one's body language, a synthesis of their history, and other things that may not be conscious to the client — or things the client may not even be aware are important to their mental health well-being.'
The Future of AI Therapy
"I think there is going to be a future where you have really well-developed [chatbots] for addressing mental health that are scientifically driven and where they are ensuring that there are guardrails in place when somebody is in crisis. We're just not quite there yet,' said the APA's Wright.
'We may get to a place where they're even reimbursed by insurance,' she said. 'I do think increasingly we are going to see providers begin to adopt these technology tools as a way to meet their patients' needs.'
But for now, her message is clear: The chatbots are not there yet.
'Ideally, chatbot design should encourage sustained, meaningful interaction with the primary purpose of delivering evidence-based therapy,' said Dartmouth's Heinz.
Until then, don't rely on them too heavily, the experts cautioned — and remember, they are not a substitute for professional help.