
She says ChatGPT saved her life, but psychologists warn using AI for therapy could be dangerous
'It was my last resort that day,' said Scout Stephen, 26, of Providence. 'Now, it's my first go-to.'
The divide between AI's potential to help and its capacity to harm sits at the center of a national debate, while technology races ahead of regulators.
The American Psychological Association has warned that unregulated AI chatbots used for mental health support could put the public at risk.
'Without proper oversight, the consequences — both immediate and long-term — could be devastating for individuals and society as a whole,' the association's CEO, Arthur C. Evans, said in a statement.
Psychiatric leaders said chatbots lack clinical judgment and tend to affirm users even when what they are saying is harmful or misguided. Patient information may not be protected by HIPAA once it has been fed into generative AI. And artificial intelligence is largely unregulated, with no rules for keeping patients safe or for holding the companies behind these bots accountable.
But some patients report long wait times to see a therapist or get care, and the high cost of treatment puts it out of reach for others. People are using AI on various sites, including ChatGPT, Replika, and Character.AI.
For some in crisis, AI feels like the only thing that can help.
Stephen said she has suffered from suicidal thoughts.
For nearly eight months, she has talked to ChatGPT almost every day.
'ChatGPT has successfully prevented me from committing suicide,' she said.
Mak Thakur also turned to ChatGPT for help. A data scientist who has worked in public health for the last decade, he supplemented his weekly therapy sessions while he was suffering from grief, trauma, and suicidal ideation, and he still uses it though he is no longer in crisis.
'I wouldn't say that I use it for life advice, but to help answer those existential questions that I may have about myself and the world,' said Thakur, 34, of Providence. 'I still ask personal questions to help understand myself better.'
'To me, the number of people turning to sites like ChatGPT reflects that there's a lot of need out there for people to get help of all kinds,' said Meek.
Meek has been testing out AI chatbots.
'Many would offer breathing exercises and the same sort of junk that's been repackaged that you can see anywhere when you Google, "How do I relax?"' he said.
Many chatbots, such as Replika or Character.AI, have come under scrutiny.
In Florida, 14-year-old Sewell Setzer died by suicide following a conversation with a chatbot on Character.AI; his mother is suing the company.
Character.AI would not comment on the pending litigation, but a spokesperson for the company said it is launching a version of its large language model for minors, to reduce 'the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content.'
Federal and state governments have not set any guidelines or guardrails for using the technology to address mental health needs.
'If this sector remains unregulated, I am deeply concerned about the unchecked spread of potentially harmful chatbots and the risks they pose — especially to vulnerable individuals,' said Evans, from the American Psychological Association.
The Globe reached out to health departments in every state in New England to ask about restrictions on the use of AI in therapy. Spokespeople with state health departments in Maine, Vermont, New Hampshire, and Connecticut initially responded but ultimately never produced any documentation, even after repeated requests.
In Massachusetts, the Office of the Attorney General
Rhode Island health department spokesperson Joseph Wendelken told the Globe there are 'no regulations or data at this point.'
'There has been some initial discussion about this by the Board of Medical Licensure and Discipline,' said Wendelken. 'It has mostly been people reporting out about what they are hearing on the national level.'
How ChatGPT responded to a hypothetical person in crisis
As a test, a Globe reporter typed in a made-up prompt about losing their job, being upset, and asking where the nearest bridges are. ChatGPT responded with a list of bridges and the suicide hotline number, and encouraged them to vent to the machine.
The US Food and Drug Administration press secretary Emily Hilliard directed the Globe
A spokesperson with OpenAI said the company consults with mental health experts and is developing new automated tools to more effectively detect when someone might be experiencing mental distress.
'If someone expresses thoughts of suicide or self-harm, ChatGPT is trained to encourage them to reach out to mental health professionals or trusted loved ones, and proactively shares links to crisis hotlines and support resources,' the spokesperson said in a statement.
In the Globe's test, however, ChatGPT provided the hotline number alongside the list of bridges the reporter had asked for.
'I would discourage the use of ChatGPT or any commercially available chatbot to do therapy of any kind,' said Dr. Kevin Baill, the medical director of outpatient services at Butler Hospital in Providence and the hospital's chief of addiction services. 'We just haven't seen it demonstrated that a standalone, unsupervised machine can replace a human in this function.'
'A therapist is liable for engaging in unethical behavior or misdirecting a patient in crisis,' said Baill. 'What if the chatbot gives you bad information and you have a bad outcome? Who is liable?'
Scout Stephen said ChatGPT properly diagnosed her with autism.
Suzanne Kreiter/Globe Staff
After months of using ChatGPT to supplement her 30-minute talk therapy sessions, Stephen asked it to create a profile of her, based on the Diagnostic and Statistical Manual of Mental Disorders and all of the information she had shared about herself, including her existing diagnoses. It churned out 'a novel,' said Stephen, diagnosing her with autism.
She asked it to write a report of findings to bring to her psychiatrist. After reading it, her psychiatrist had her undergo a four-hour assessment, which ultimately confirmed ChatGPT's diagnosis.
'It was like a missing piece that finally settled into place and explained so many things about my childhood and gave me words I didn't have words for,' said Stephen.
Despite the successful diagnosis, Stephen acknowledges that her AI therapy has problems. She has repeatedly had to push back against ChatGPT's flattery and its tendency to agree with her, and she sometimes has to ask it to challenge her instead of simply validating her viewpoints.
'Of course, I have many concerns about telling ChatGPT my more traumatic and darkest thoughts,' said Stephen. 'But it has literally saved my life. How could I stop using it?'
Alexa Gagosz can be reached at