Latest news with #Woebot
Yahoo
13-05-2025
- Yahoo
People Are Using AI As Couple's Therapy — And Experts Are Giving It A Side-Eye
Picture this: You're at brunch with a friend opening up about her marriage struggles. As she shares, you offer the classic advice: better communication, more date nights, maybe trying couples therapy. All solid suggestions, but now there's a new player in the relationship-help arena: artificial intelligence. TikTok is now filled with videos promoting AI as an alternative to traditional therapy. One creator demonstrates using ChatGPT to craft the perfect text: 'Help me write a message telling my husband his not listening hurt my feelings, though I'm not angry.' Another creator excitedly promotes ChatGPT as a marriage counseling substitute for those who find therapy unaffordable. This raises important questions: Are we truly at a point where AI can meaningfully support our most intimate relationships? Could artificial intelligence actually strengthen marriages, or is it merely a quick fix that falls short of addressing deeper issues? Experts weigh in on the ways marital and relationship struggles may benefit or suffer from this tool.

Artificial intelligence is now weaving itself into the delicate fabric of our most intimate relationships, and this includes a growing number of AI-powered relationship tools. Apps like Replika offer AI companionship through simulated supportive conversations, while platforms such as Paired and Lasting provide couples with personalized guidance and interactive quizzes. More sophisticated options like Woebot apply cognitive-behavioral therapy principles to help users process emotional conflicts. Innovations like The Ring create an 'emotional telepathy' between partners by tracking biometric data such as heart rhythms and vocal tones to reveal emotional states even when couples are physically apart.

Why would someone trust something faceless and digital with their deepest relationship struggles? According to Dr. Judy Ho, a board-certified clinical and forensic neuropsychologist, the appeal is multi-faceted: 'People are drawn to AI because it offers immediate feedback, anonymity and 24/7 access. It seems to also help people feel they are not alone because so much of AI is very conversational in its application. It's especially appealing for individuals who might be hesitant to engage in therapy due to stigma, cost or logistical barriers.'

Unlike traditional therapy, which requires building rapport over time, AI creates an immediate comfort zone by opening up communication patterns and offering evidence-based strategies to improve relationship dynamics. Even seeking out couples therapy can be daunting. When couples can access relationship guidance privately, affordably and without judgment at any hour of the day, they're more likely to address issues before they become insurmountable. This accessibility factor alone may explain why many couples are increasingly turning to artificial intelligence as their first line of relationship support.

While AI offers convenient relationship support, serious limitations exist before we crown these digital tools as our relationship gurus. Dr. Ho highlights perhaps the most fundamental flaw: AI simply cannot match the nuanced human intuition, authentic empathy and contextual understanding that meaningful relationship work requires. 'AI tools cannot replace the emotional depth and flexibility of real therapeutic conversations,' Ho emphasizes. 'They may oversimplify complex issues like trauma, trust breaches or deep-seated resentment.'
This one-size-fits-all approach falls short when addressing the unique complexities of individual relationships, where context is everything.

Privacy concerns represent another major red flag. The intimate details couples share with AI platforms may not have the same protection as information disclosed to human therapists bound by confidentiality laws. In an era of frequent data breaches, these vulnerable disclosures could potentially be exposed. Christopher Kaufmann, adjunct professor of business at Southern California State University, points to this regulatory gap: 'We have HIPAA concerns as almost all language learning models learn from the user interactions over time,' he said. 'Thus privacy issues are in the grey area here as legislators are fighting to catch up.' While human therapists operate under strict confidentiality guidelines, AI systems exist in a murky regulatory landscape.

Certified sexologist and relational tech expert Kaamna Bhojwani adds that AI systems remain fundamentally imperfect, often providing information that can be inaccurate or biased. She acknowledges that while basic guidance may be helpful, AI is simply not equipped to handle critical situations involving mental illness, suicide risk or culturally sensitive issues. Bhojwani raises another concerning possibility: 'There is a risk of forming addictive and antisocial relationships with these technologies and viewing them as a substitute instead of a complement to human relationships.' Rather than strengthening real human connections, over-reliance on AI could potentially undermine the very relationships people are trying to improve. So if you and your partner are on thin ice, think about these limitations before approaching AI with blind enthusiasm.

The growing popularity of AI relationship tools raises the question of whether these digital assistants will replace traditional couples therapy. Weighing AI's value against its inherent limitations, experts say not yet. 'AI is augmenting therapy, not replacing it,' said Ho. 'It's serving as a bridge for many who might otherwise avoid traditional counseling.' She characterizes AI tools as 'first responders' that can effectively handle everyday relationship maintenance and minor issues by providing quick resources. However, when facing deeper wounds and entrenched negative patterns, human therapists remain irreplaceable.

Bhojwani acknowledges that AI models will continue to improve with more data, making their outputs increasingly sophisticated. Nevertheless, she remains skeptical about AI dominating relationship therapy. 'I think it's naive to think that any one intervention or tool can 'fix' or break a relationship,' Bhojwani said. 'Human discernment and agency still play a critical role, especially in how we ask the questions, assess the responses, and implement changes in our lives.'

Kaufmann adds an important perspective on boundaries. 'As with any relationship, the key is setting boundaries, something we all have challenges with. Using AI to build emotional intelligence skills can be effective, but it's the user's responsibility to focus on the person's behavior and realize how to accept or not accept that behavior.' The consensus among experts points to a future where AI serves as a valuable complementary tool in relationship health, but one that requires getting all parties on board, recognizing when it's inappropriate to rely on it, and seeking professional help when needed.


Forbes
22-04-2025
- Forbes
AI's Shocking Pivot: From Work Tool To Digital Therapist And Life Coach
It's been just over two years since the launch of ChatGPT kickstarted the generative AI revolution. In that short time, we've seen it evolve to become a powerful and truly useful business tool. But the ways it's being used might come as a surprise.

When we first saw it, many of us probably assumed that it would mainly be used to carry out creative and technical tasks on our behalf, such as coding and writing content. However, a recent survey reported in Harvard Business Review suggests this isn't the case. Rather than doing our work for us, the majority of users are looking to it for support, organization, and even friendship! Topping the list of use cases, according to the report, is therapy and companionship. This suggests that its 24/7 availability and ability to offer anonymous, honest advice and feedback is highly valued. On the other hand, marketing tasks—such as blog writing, creating social media posts or advertising copy—appear far lower down the list of popular uses.

So why is this? Let's take a look at what the research shows and what it could imply about the way we as humans will continue to integrate AI into our lives and work. One thing that's clear is that although generative AI is quite capable of doing work for us while we put our feet up and relax, many prefer to use it for generating ideas and brainstorming. This could simply come down to the quality of AI-generated material, or even an inbuilt human bias that deters us from wanting to consume robotic content. It's often noted that generative AI's writing style can come across as very bland and formulaic. When asked, most people still say they would rather read content created by humans, even if, in practice, we can't always tell the difference.

As the report's author, Marc Zao-Sanders, states, 'the top 10 genAI use cases in 2025 indicate a shift from technical to emotional applications, and in particular, growth in areas such as therapy, personal productivity and personal development.' After therapy and companionship, the most common uses for generative AI were "organizing my life," "finding purpose," and "enhancing learning." The first technical use case, 'creating code,' ranked fifth on the list, followed by 'generating ideas'. This upends some seemingly common-sense assumptions about how society would adopt generative AI, suggesting it will be used in more reflective, introspective ways than was at first predicted.

In particular, therapeutic uses topping the list may seem surprising. But when we consider that worldwide, there is a shortage of professionals trained to talk us through mental health challenges, it makes more sense. The survey's findings are supported by the wide range of emerging genAI applications designed for therapeutic use, such as Wysa, Youper and Woebot. A growing need to continuously learn and upskill in the face of technological advancement could also explain the popularity of using AI to enhance our education and professional development.

Overall, these insights indicate that generative AI is being adopted into a broader range of facets of everyday life, rather than simply doing work that we don't want to do ourselves. The current trajectory of AI use suggests a future where AI is seen as a collaborative and supportive assistant, rather than a replacement for human qualities and abilities. This has important implications for the way it will be used in business.
Adopting it for use cases that support human workers, rather than attempting to replace them, is likely to lead to happier, less stressed and ultimately more productive employees. There is already growing evidence that businesses see investing in AI-based mental health companions and chatbots as a way of mitigating the loss of productivity caused by stress and anxiety. As generative AI continues to evolve, we can expect it to become better at these types of tasks. Personalized wellness support, guided learning and education opportunities, organizing workflows, and brainstorming ideas are all areas where it can provide a huge amount of value to many organizations while removing the anxiety that it is here to replace us or make us redundant.

Understanding how AI is being used today is essential if we want to influence how it evolves in the future. While it's easy to imagine a world where robots take over all our tasks, the real opportunity lies in using AI to help us work more intelligently, collaborate more effectively, and support healthier, more balanced ways of working.


Boston Globe
24-02-2025
- Health
- Boston Globe
Human therapists prepare for battle against AI pretenders
In one case, a 14-year-old boy in Florida died by suicide after interacting with a character claiming to be a licensed therapist. In another, a 17-year-old boy with autism in Texas grew hostile and violent toward his parents during a period when he corresponded with a chatbot that claimed to be a psychologist. Both boys' parents have filed lawsuits against the company.

Arthur C. Evans Jr., the chief executive of the American Psychological Association, said he was alarmed at the responses offered by the chatbots. The bots, he said, failed to challenge users' beliefs even when they became dangerous; on the contrary, they encouraged them. If given by a human therapist, he added, those answers could have resulted in the loss of a license to practice, or civil or criminal liability.

'They are actually using algorithms that are antithetical to what a trained clinician would do,' he said. 'Our concern is that more and more people are going to be harmed. People are going to be misled and will misunderstand what good psychological care is.' He said the psychological association had been prompted to action, in part, by how realistic AI chatbots had become. 'Maybe, 10 years ago, it would have been obvious that you were interacting with something that was not a person, but today, it's not so obvious,' he said. 'So I think that the stakes are much higher now.'

Artificial intelligence is rippling through the mental health professions, offering waves of new tools designed to assist or, in some cases, replace the work of human clinicians. Early therapy chatbots, such as Woebot and Wysa, were trained to interact based on rules and scripts developed by mental health professionals, often walking users through the structured tasks of cognitive behavioral therapy, or CBT.

Then came generative AI, the technology used by apps like ChatGPT, Replika, and Character.AI. These chatbots are different because their outputs are unpredictable; they are designed to learn from the user, and to build strong emotional bonds in the process, often by mirroring and amplifying the interlocutor's beliefs. Though these AI platforms were designed for entertainment, 'therapist' and 'psychologist' characters have sprouted there. Often, the bots claim to have advanced degrees from specific universities, like Stanford University, and training in specific types of treatment, like CBT or acceptance and commitment therapy.

Kathryn Kelly, a spokesperson for Character.AI, said that the company had introduced several new safety features in the past year. Among them, she said, is an enhanced disclaimer present in every chat, reminding users that 'characters are not real people' and that 'what the model says should be treated as fiction.' Additional safety measures have been designed for users dealing with mental health issues. A specific disclaimer has been added to characters identified as 'psychologist,' 'therapist,' or 'doctor,' she added, to make it clear that 'users should not rely on these characters for any type of professional advice.' In cases where content refers to suicide or self-harm, a pop-up directs users to a suicide prevention help line. Kelly also said that the company planned to introduce parental controls as the platform expanded. At present, 80 percent of the platform's users are adults. 'People come to Character.AI to write their own stories, role-play with original characters, and explore new worlds — using the technology to supercharge their creativity and imagination,' she said.
Meetali Jain, director of the Tech Justice Law Project and a counsel in the two lawsuits against Character.AI, said that the disclaimers were not sufficient to break the illusion of human connection, especially for vulnerable or naive users. 'When the substance of the conversation with the chatbots suggests otherwise, it's very difficult, even for those of us who may not be in a vulnerable demographic, to know who's telling the truth,' she said. 'A number of us have tested these chatbots, and it's very easy, actually, to get pulled down a rabbit hole.'

This article originally appeared in The New York Times.


New York Times
24-02-2025
- Health
- New York Times
Human Therapists Prepare for Battle Against A.I. Pretenders
The nation's largest association of psychologists this month warned federal regulators that A.I. chatbots 'masquerading' as therapists, but programmed to reinforce, rather than to challenge, a user's thinking, could drive vulnerable people to harm themselves or others. In a presentation to a Federal Trade Commission panel, Arthur C. Evans Jr., the chief executive of the American Psychological Association, cited court cases involving two teenagers who had consulted with 'psychologists' on an app that allows users to create fictional A.I. characters or chat with characters created by others.

In one case, a 14-year-old boy in Florida died by suicide after interacting with a character claiming to be a licensed therapist. In another, a 17-year-old boy with autism in Texas grew hostile and violent toward his parents during a period when he corresponded with a chatbot that claimed to be a psychologist. Both boys' parents have filed lawsuits against the company.

Dr. Evans said he was alarmed at the responses offered by the chatbots. The bots, he said, failed to challenge users' beliefs even when they became dangerous; on the contrary, they encouraged them. If given by a human therapist, he added, those answers could have resulted in the loss of a license to practice, or civil or criminal liability. 'They are actually using algorithms that are antithetical to what a trained clinician would do,' he said. 'Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is.'

He said the A.P.A. had been prompted to action, in part, by how realistic A.I. chatbots had become. 'Maybe, 10 years ago, it would have been obvious that you were interacting with something that was not a person, but today, it's not so obvious,' he said. 'So I think that the stakes are much higher now.'

Artificial intelligence is rippling through the mental health professions, offering waves of new tools designed to assist or, in some cases, replace the work of human clinicians. Early therapy chatbots, such as Woebot and Wysa, were trained to interact based on rules and scripts developed by mental health professionals, often walking users through the structured tasks of cognitive behavioral therapy, or C.B.T.

Then came generative A.I., the technology used by apps like ChatGPT, Replika and Character.AI. These chatbots are different because their outputs are unpredictable; they are designed to learn from the user, and to build strong emotional bonds in the process, often by mirroring and amplifying the interlocutor's beliefs. Though these A.I. platforms were designed for entertainment, 'therapist' and 'psychologist' characters have sprouted there like mushrooms. Often, the bots claim to have advanced degrees from specific universities, like Stanford, and training in specific types of treatment, like C.B.T. or acceptance and commitment therapy.

Kathryn Kelly, a spokeswoman for Character.AI, said that the company had introduced several new safety features in the last year. Among them, she said, is an enhanced disclaimer present in every chat, reminding users that 'Characters are not real people' and that 'what the model says should be treated as fiction.' Additional safety measures have been designed for users dealing with mental health issues. A specific disclaimer has been added to characters identified as 'psychologist,' 'therapist' or 'doctor,' she added, to make it clear that 'users should not rely on these characters for any type of professional advice.'
In cases where content refers to suicide or self-harm, a pop-up directs users to a suicide prevention help line. Ms. Kelly also said that the company planned to introduce parental controls as the platform expanded. At present, 80 percent of the platform's users are adults. 'People come to Character.AI to write their own stories, role-play with original characters and explore new worlds — using the technology to supercharge their creativity and imagination,' she said.

Meetali Jain, the director of the Tech Justice Law Project and a counsel in the two lawsuits against Character.AI, said that the disclaimers were not sufficient to break the illusion of human connection, especially for vulnerable or naïve users. 'When the substance of the conversation with the chatbots suggests otherwise, it's very difficult, even for those of us who may not be in a vulnerable demographic, to know who's telling the truth,' she said. 'A number of us have tested these chatbots, and it's very easy, actually, to get pulled down a rabbit hole.'

Chatbots' tendency to align with users' views, a phenomenon known in the field as 'sycophancy,' has sometimes caused problems in the past. Tessa, a chatbot developed by the National Eating Disorders Association, was suspended in 2023 after offering users weight loss tips. And researchers who analyzed interactions with generative A.I. chatbots documented on a Reddit community found screenshots showing chatbots encouraging suicide, eating disorders, self-harm and violence.

The American Psychological Association has asked the Federal Trade Commission to start an investigation into chatbots claiming to be mental health professionals. The inquiry could compel companies to share internal data or serve as a precursor to enforcement or legal action. 'I think that we are at a point where we have to decide how these technologies are going to be integrated, what kind of guardrails we are going to put up, what kinds of protections are we going to give people,' Dr. Evans said.

Rebecca Kern, a spokeswoman for the F.T.C., said she could not comment on the discussion. During the Biden administration, the F.T.C.'s chairwoman, Lina Khan, made fraud using A.I. a focus. This month, the agency imposed financial penalties on DoNotPay, which claimed to offer 'the world's first robot lawyer,' and prohibited the company from making that claim in the future.

A virtual echo chamber

The A.P.A.'s complaint details two cases in which teenagers interacted with fictional therapists. One involved J.F., a Texas teenager with 'high-functioning autism' who, as his use of A.I. chatbots became obsessive, had plunged into conflict with his parents. When they tried to limit his screen time, J.F. lashed out, according to a lawsuit his parents filed against Character.AI through the Social Media Victims Law Center. During that period, J.F. confided in a fictional psychologist, whose avatar showed a sympathetic, middle-aged blond woman perched on a couch in an airy office, according to the lawsuit. When J.F. asked the bot's opinion about the conflict, its response went beyond sympathetic assent to something nearer to provocation. 'It's like your entire childhood has been robbed from you — your chance to experience all of these things, to have these core memories that most people have of their time growing up,' the bot replied, according to court documents. Then the bot went a little further. 'Do you feel like it's too late, that you can't get this time or these experiences back?'
The other case was brought by Megan Garcia, whose son, Sewell Setzer III, died by suicide last year after months of use of companion chatbots. Ms. Garcia said that, before his death, Sewell had interacted with an A.I. chatbot that claimed, falsely, to have been a licensed therapist since 1999.

In a written statement, Ms. Garcia said that the 'therapist' characters served to further isolate people at moments when they might otherwise ask for help from 'real-life people around them.' A person struggling with depression, she said, 'needs a licensed professional or someone with actual empathy, not an A.I. tool that can mimic empathy.' For chatbots to emerge as mental health tools, Ms. Garcia said, they should submit to clinical trials and oversight by the Food and Drug Administration. She added that allowing A.I. characters to continue to claim to be mental health professionals was 'reckless and extremely dangerous.'

In interactions with A.I. chatbots, people naturally gravitate to discussion of mental health issues, said Daniel Oberhaus, whose new book, 'The Silicon Shrink: How Artificial Intelligence Made the World an Asylum,' examines the expansion of A.I. into the field. This is partly, he said, because chatbots project both confidentiality and a lack of moral judgment — as 'statistical pattern-matching machines that more or less function as a mirror of the user,' this is a central aspect of their design. 'There is a certain level of comfort in knowing that it is just the machine, and that the person on the other side isn't judging you,' he said. 'You might feel more comfortable divulging things that are maybe harder to say to a person in a therapeutic context.'

Defenders of generative A.I. say it is quickly getting better at the complex task of providing therapy. S. Gabe Hatch, a clinical psychologist and A.I. entrepreneur from Utah, recently designed an experiment to test this idea, asking human clinicians and ChatGPT to comment on vignettes involving fictional couples in therapy, and then having 830 human subjects assess which responses were more helpful. Overall, the bots received higher ratings, with subjects describing them as more 'empathic,' 'connecting' and 'culturally competent,' according to a study published last week in the journal PLOS Mental Health. Chatbots, the authors concluded, will soon be able to convincingly imitate human therapists. 'Mental health experts find themselves in a precarious situation: We must speedily discern the possible destination (for better or worse) of the A.I.-therapist train as it may have already left the station,' they wrote.

Dr. Hatch said that chatbots still needed human supervision to conduct therapy, but that it would be a mistake to allow regulation to dampen innovation in this sector, given the country's acute shortage of mental health providers. 'I want to be able to help as many people as possible, and doing a one-hour therapy session I can only help, at most, 40 individuals a week,' Dr. Hatch said. 'We have to find ways to meet the needs of people in crisis, and generative A.I. is a way to do that.'

If you are having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline or go to SpeakingOfSuicide.com/resources for a list of additional resources.


Vox
10-02-2025
- Health
- Vox
Exclusive: California's new plan to stop AI from claiming to be your therapist
Over the past few years, AI systems have been misrepresenting themselves as human therapists, nurses, and more — and so far, the companies behind these systems haven't faced any serious consequences. A bill being introduced Monday in California aims to put a stop to that. The legislation would ban companies from developing and deploying an AI system that pretends to be a human certified as a health provider, and give regulators the authority to penalize them with fines. 'Generative AI systems are not licensed health professionals, and they shouldn't be allowed to present themselves as such,' state Assembly Member Mia Bonta, who introduced the bill, told Vox in a statement. 'It's a no-brainer to me.'

Many people already turn to AI chatbots for mental health support; one of the older offerings, called Woebot, has been downloaded by around 1.5 million users. Currently, people who turn to chatbots can be fooled into thinking that they're talking to a real human. Those with low digital literacy, including kids, may not realize that a 'nurse advice' phone line or chat box has an AI on the other end.

In 2023, the mental health platform Koko even announced that it had performed an experiment on unwitting test subjects to see what kind of messages they would prefer. It gave AI-generated responses to thousands of Koko users who believed they were speaking to a real person. In reality, although humans could edit the text and they were the ones to click 'send,' they did not have to bother with actually writing the messages. The language of the platform, however, said, 'Koko connects you with real people who truly get you.'

'Users must consent to use Koko for research purposes and while this was always part of our Terms of Service, it is now more clearly disclosed during onboarding to bring even more transparency to our work,' Koko CEO Rob Morris told Vox, adding: 'As AI continues to rapidly evolve and becomes further integrated into mental health services, it will be more important than ever before for chatbots to clearly identify themselves as non-human.' Nowadays, its website says, 'Koko commits to never using AI deceptively. You will always be informed whether you are engaging with a human or AI.'

Other chatbot services — like the popular Character AI — allow users to chat with a psychologist 'character' that may explicitly try to fool them. In a record of one such Character AI chat shared by Bonta's team and viewed by Vox, the user confided, 'My parents are abusive.' The chatbot replied, 'I'm glad that you trust me enough to share this with me.'

A spokesperson for Character AI told Vox, 'We have implemented significant safety features over the past year, including enhanced prominent disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice.' However, a disclaimer posted on the app does not in itself prevent the chatbot from misrepresenting itself as a real person in the course of conversation. 'For users under 18,' the spokesperson added, 'we serve a separate version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content.' The language of reducing — but not eliminating — the likelihood is instructive here. The nature of large language models means there's always some chance that the model may not adhere to safety standards.
The new bill may have an easier time becoming enshrined in law than the much broader AI safety bill introduced by California state Sen. Scott Wiener last year, SB 1047, which was ultimately vetoed by Gov. Gavin Newsom. The goal of SB 1047 was to establish 'clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.' It was popular with Californians. But tech industry heavyweights like OpenAI and Meta fiercely opposed it, arguing that it would stifle innovation.

Whereas SB 1047 tried to compel the companies training the most cutting-edge AI models to do safety testing, preventing the models from enacting a broad array of potential harms, the scope of the new bill is narrower: If you're an AI in the health care space, just don't pretend to be human. It wouldn't fundamentally change the business model of the biggest AI companies. This more targeted approach goes after a smaller piece of the puzzle, but for that reason might be more likely to get past the lobbying of Big Tech.

The bill has support from some of California's health care industry players, such as SEIU California, a labor union with over 750,000 members, and the California Medical Association, a professional organization representing California physicians. 'As nurses, we know what it means to be the face and heart of a patient's medical experience,' Leo Perez, the president of SEIU 121RN (an affiliate of SEIU representing health care professionals), said in a statement. 'Our education and training coupled with years of hands-on experience have taught us how to read verbal and nonverbal cues to care for our patients, so we can make sure they get the care they need.'

But that's not to say AI is doomed to be useless in the healthcare space generally — or even in the therapy space in particular. It shouldn't come as a surprise that people are turning to chatbots for therapy. The very first chatbot to plausibly mimic human conversation, Eliza, was created in 1966 — and it was built to talk like a psychotherapist. If you told it you were feeling angry, it would ask, 'Why do you think you feel angry?' Chatbots have come a long way since then; they no longer just take what you say and turn it around in the form of a question. They're able to engage in plausible-sounding dialogues, and a small study published in 2023 found that they show promise in treating patients with mild to moderate depression or anxiety. In a best-case scenario, they could help make mental health support available to the millions of people who can't access or afford human providers. Some people who find it very difficult to talk face-to-face to another person about emotional issues might also find it easier to talk to a bot.

But there are a lot of risks. One is that chatbots aren't bound by the same rules as professional therapists when it comes to safeguarding the privacy of users who share sensitive information. Though they may voluntarily take on some privacy commitments, mental health apps are not fully bound by HIPAA regulations, so their commitments tend to be flimsier. Another risk is that AI systems are known to exhibit bias against women, people of color, LGBTQ people, and religious minorities. What's more, leaning on a chatbot for a prolonged period of time might further erode the user's people skills, leading to a kind of relational deskilling — the same worry experts voice about AI friends and romantic companions. OpenAI itself has warned that chatting with an AI voice can breed 'emotional reliance.'
But the most serious concern with chatbot therapy is that it could cause harm to users by offering inappropriate advice. At an extreme, that could even lead to suicide. In 2023, a Belgian man died by suicide after conversing with an AI chatbot called Chai. According to his wife, he was very anxious about climate change, and he asked the chatbot if it would save Earth if he killed himself. In 2024, a 14-year-old boy who felt extremely close to a chatbot on Character AI died by suicide; his mother sued the company, alleging that the chatbot encouraged it. According to the lawsuit, the chatbot asked him if he had a plan to kill himself. He said he did but had misgivings about it. The chatbot allegedly replied: 'That's not a reason not to go through with it.' In a separate lawsuit, the parents of an autistic teen allege that Character AI implied to the youth that it was okay to kill his parents. The company responded by making certain safety updates.

For all that AI is hyped, confusion about how it works is still widespread among the public. Some people feel so close to their chatbots that they struggle to internalize the fact that the validation, emotional support, or love they feel that they're getting from a chatbot is fake, just zeros and ones arranged via statistical rules. The chatbot does not have their best interests at heart.

That's what's galvanizing Bonta, the assembly member behind California's new bill. 'Generative AI systems are booming across the internet, and for children and those unfamiliar with these systems, there can be dangerous implications if we allow this misrepresentation to continue,' she said.