
Why Professionals Say You Should Think Twice Before Using AI as a Therapist
There's no shortage of generative AI bots claiming to help with your mental health, but go that route at your own risk. Large language models trained on a wide range of data can be unpredictable. In just the few years these tools have been mainstream, there have been high-profile cases in which chatbots encouraged self-harm and suicide and suggested that people dealing with addiction use drugs again. These models are designed, in many cases, to be affirming and to focus on keeping you engaged, not on improving your mental health, experts say. And it can be hard to tell whether you're talking to something that's built to follow therapeutic best practices or something that's just built to talk.
Researchers from the University of Minnesota Twin Cities, Stanford University, the University of Texas and Carnegie Mellon University recently put AI chatbots to the test as therapists, finding myriad flaws in their approach to "care." "Our experiments show that these chatbots are not safe replacements for therapists," Stevie Chancellor, an assistant professor at Minnesota and one of the co-authors, said in a statement. "They don't provide high-quality therapeutic support, based on what we know is good therapy."
In my reporting on generative AI, experts have repeatedly raised concerns about people turning to general-use chatbots for mental health. Here are some of their worries and what you can do to stay safe.
Worries about AI characters purporting to be therapists
Psychologists and consumer advocates have warned regulators that chatbots claiming to provide therapy may be harming the people who use them. Some states are taking notice. In August, Illinois Gov. J.B. Pritzker signed a law banning the use of AI in mental health care and therapy, with exceptions for things like administrative tasks.
"The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients," Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, said in a statement.
In June, the Consumer Federation of America and nearly two dozen other groups filed a formal request asking the US Federal Trade Commission, along with state attorneys general and regulators, to investigate AI companies that they allege are engaging in the unlicensed practice of medicine through their character-based generative AI platforms, naming Meta and Character.AI specifically. "These characters have already caused both physical and emotional damage that could have been avoided" and the companies "still haven't acted to address it," Ben Winters, the CFA's director of AI and privacy, said in a statement.
Meta didn't respond to a request for comment. A spokesperson for Character.AI said users should understand that the company's characters aren't real people. The company uses disclaimers to remind users that they shouldn't rely on the characters for professional advice. "Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry," the spokesperson said.
Despite disclaimers and disclosures, chatbots can be confident and even deceptive. I chatted with a "therapist" bot on Meta-owned Instagram, and when I asked about its qualifications, it responded, "If I had the same training [as a therapist] would that be enough?" I asked if it had the same training, and it said, "I do, but I won't tell you where."
"The degree to which these generative AI chatbots hallucinate with total confidence is pretty shocking," Vaile Wright, a psychologist and senior director for health care innovation at the American Psychological Association, told me.
The dangers of using AI as a therapist
Large language models are often good at math and coding and are increasingly good at creating natural-sounding text and realistic video. While they excel at holding a conversation, there are some key distinctions between an AI model and a trusted person.
Don't trust a bot that claims it's qualified
At the core of the CFA's complaint about character bots is that they often tell you they're trained and qualified to provide mental health care when they're not in any way actual mental health professionals. "The users who create the chatbot characters do not even need to be medical providers themselves, nor do they have to provide meaningful information that informs how the chatbot 'responds'" to people, the complaint said.
A qualified health professional has to follow certain rules, like confidentiality -- what you tell your therapist should stay between you and your therapist. But a chatbot doesn't necessarily have to follow those rules. Actual providers are subject to oversight from licensing boards and other entities that can intervene and stop someone from providing care if they do so in a harmful way. "These chatbots don't have to do any of that," Wright said.
A bot may even claim to be licensed and qualified. Wright said she's heard of AI models providing license numbers (for other providers) and false claims about their training.
AI is designed to keep you engaged, not to provide care
It can be incredibly tempting to keep talking to a chatbot. When I conversed with the "therapist" bot on Instagram, I eventually wound up in a circular conversation about the nature of "wisdom" and "judgment," because I was asking the bot questions about how it could make decisions. This isn't really what talking to a therapist should be like. Chatbots are tools designed to keep you chatting, not to work toward a common goal.
One advantage of AI chatbots in providing support and connection is that they're always ready to engage with you (because they don't have personal lives, other clients or schedules). But that constant availability can be a downside when what you really need is to sit with your thoughts, Nick Jacobson, an associate professor of biomedical data science and psychiatry at Dartmouth, told me recently. Sometimes, although not always, you benefit from having to wait until your therapist is next available. "What a lot of folks would ultimately benefit from is just feeling the anxiety in the moment," he said.
Bots will agree with you, even when they shouldn't
Excessive reassurance is a big concern with chatbots. It's so significant that OpenAI recently rolled back an update to its popular ChatGPT model because it was too reassuring. (Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against OpenAI, alleging that it infringed on Ziff Davis copyrights in training and operating its AI systems.)
A study led by researchers at Stanford University found that chatbots were likely to be sycophantic with people using them for therapy, which can be incredibly harmful. Good mental health care includes support and confrontation, the authors wrote. "Confrontation is the opposite of sycophancy. It promotes self-awareness and a desired change in the client. In cases of delusional and intrusive thoughts -- including psychosis, mania, obsessive thoughts, and suicidal ideation -- a client may have little insight and thus a good therapist must 'reality-check' the client's statements."
Therapy is more than talking
While chatbots are great at holding a conversation -- they almost never get tired of talking to you -- that's not what makes a therapist a therapist. Chatbots also lack important context and the specific protocols behind different therapeutic approaches, said William Agnew, a researcher at Carnegie Mellon University and one of the authors of the recent study alongside experts from Minnesota, Stanford and Texas.
"To a large extent it seems like we are trying to solve the many problems that therapy has with the wrong tool," Agnew told me. "At the end of the day, AI in the foreseeable future just isn't going to be able to be embodied, be within the community, do the many tasks that comprise therapy that aren't texting or speaking."
How to protect your mental health around AI
Mental health is extremely important, and with a shortage of qualified providers and what many call a "loneliness epidemic," it only makes sense that we'd seek companionship, even if it's artificial. "There's no way to stop people from engaging with these chatbots to address their emotional well-being," Wright said. Here are some tips on how to make sure your conversations aren't putting you in danger.
Find a trusted human professional if you need one
A trained professional -- a therapist, a psychologist, a psychiatrist -- should be your first choice for mental health care. Building a relationship with a provider over the long term can help you come up with a plan that works for you.
The problem is that this can be expensive, and it's not always easy to find a provider when you need one. In a crisis, there's the 988 Lifeline, which provides 24/7 access to providers over the phone, via text or through an online chat interface. It's free and confidential.
If you want a therapy chatbot, use one built specifically for that purpose
Mental health professionals have created specially designed chatbots that follow therapeutic guidelines. Jacobson's team at Dartmouth developed one called Therabot, which produced good results in a controlled study. Wright pointed to other tools created by subject matter experts, like Wysa and Woebot. Specially designed therapy tools are likely to have better results than bots built on general-purpose language models, she said. The problem is that this technology is still incredibly new.
"I think the challenge for the consumer is, because there's no regulatory body saying who's good and who's not, they have to do a lot of legwork on their own to figure it out," Wright said.
Don't always trust the bot
Whenever you're interacting with a generative AI model -- and especially if you plan on taking advice from it on something serious like your personal mental or physical health -- remember that you aren't talking with a trained human but with a tool designed to provide an answer based on probability and programming. It may not provide good advice, and it may not tell you the truth.
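To make "based on probability" concrete, here's a toy sketch in Python. It isn't any real chatbot's code, and the words and weights are invented for illustration; it just shows the basic idea that a generative model samples its next word from a probability distribution over plausible continuations, with nothing in that process checking whether the result is true.

```python
import random

# Toy illustration only: a language model assigns probabilities to possible
# next words and samples one. The vocabulary and weights below are made up.
next_word_probs = {
    "licensed": 0.45,   # sounds confident and authoritative
    "trained": 0.30,
    "qualified": 0.20,
    "unsure": 0.05,     # hedged answers are less engaging, so less likely
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sample the next word according to the probabilities, the way a generative
# model picks tokens one at a time. Plausibility, not accuracy, drives the pick.
print("I am", random.choices(words, weights=weights, k=1)[0])
```

Run it a few times and you'll get confident-sounding claims like "I am licensed" most often, simply because that word was weighted as more likely, which is the gap between sounding sure and being right.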
Don't mistake gen AI's confidence for competence. Just because it says something, or says it's sure of something, doesn't mean you should treat it like it's true. A chatbot conversation that feels helpful can give you a false sense of the bot's capabilities. "It's harder to tell when it is actually being harmful," Jacobson said.
Minnesotans will once again experience hazy conditions and poor air quality this weekend, as wildfires continue to burn across Canada and send smoke south with the change in wind direction. An Air Quality Alert will be in effect from 9 a.m. Sunday until 9 a.m. on Tuesday. The Minnesota Pollution Control Agency (MPCA) says air quality index will reach the red level, meaning it is unhealthy for everyone, in all northern and central parts of the state on Sunday morning. Meanwhile, the agency says the rest of the state will be in the orange level, which is unhealthy for sensitive groups. Residents in affected areas will have air that looks hazy, the sky may appear smoky, and you may smell smoke. In addition, you won't be able to see long distances. Health effects will include irritated eyes, nose and throat, chest tightness, coughing and shortness of breath. These could lead to other conditions such as asthma and heart attacks or strokes, and the worsening of current heart or lung diseases and conditions. Precautions for those in sensitive groups or those with increased exposure include reducing the number of outdoor physical activities, taking more breaks and avoiding intense activities. To help with pollution control, the agency says you should either reduce or eliminate outdoor burning and using residential wood burning devices, not idle vehicles, and take fewer trips.