
Chatbots Can Trigger a Mental Health Crisis. What to Know About "AI Psychosis"
For a minority of users, conversations with AI chatbots appear to have disturbing effects. A growing number of reports suggest that extended chatbot use may trigger or amplify psychotic symptoms in some people. The fallout can be devastating and potentially lethal. Users have described breakdowns that led to lost jobs, fractured relationships, involuntary psychiatric holds, and even arrests and jail time. At least one support group has emerged for people who say their lives began to spiral after interacting with AI.
The phenomenon—sometimes colloquially called 'ChatGPT psychosis' or 'AI psychosis'—isn't well understood. There's no formal diagnosis, data are scarce, and no clear protocols for treatment exist. Psychiatrists and researchers say they're flying blind as the medical world scrambles to catch up.
What is 'ChatGPT psychosis' or 'AI psychosis'?
The terms aren't formal ones, but they have emerged as shorthand for a concerning pattern: people developing delusions or distorted beliefs that appear to be triggered or reinforced by conversations with AI systems.
Psychosis may actually be a misnomer, says Dr. James MacCabe, a professor in the department of psychosis studies at King's College London. The term usually refers to a cluster of symptoms—disordered thinking, hallucinations, and delusions—often seen in conditions like bipolar disorder and schizophrenia. But in these cases, 'we're talking about predominantly delusions, not the full gamut of psychosis.'
The phenomenon seems to reflect familiar vulnerabilities in new contexts, not a new disorder, psychiatrists say. It's closely tied to how chatbots communicate; by design, they mirror users' language and validate their assumptions. This sycophancy is a known issue in the industry. While many people find it irritating, experts warn it can reinforce distorted thinking in people who are more vulnerable.
Who's most at risk?
While most people can use chatbots without issue, experts say a small group of users may be especially vulnerable to delusional thinking after extended use. Some media reports of AI psychosis note that individuals had no prior mental health diagnoses, but clinicians caution that undetected or latent risk factors may still have been present.
'I don't think using a chatbot itself is likely to induce psychosis if there's no other genetic, social, or other risk factors at play,' says Dr. John Torous, a psychiatrist at the Beth Israel Deaconess Medical Center. 'But people may not know they have this kind of risk.'
The clearest risks include a personal or family history of psychosis, or conditions like schizophrenia or bipolar disorder.
Those with personality traits that make them susceptible to fringe beliefs may also be at risk, says Dr. Ragy Girgis, a professor of clinical psychiatry at Columbia University. Such individuals may be socially awkward, struggle with emotional regulation, and have an overactive fantasy life, he says.
Immersion matters, too. 'Time seems to be the single biggest factor,' says Stanford psychiatrist Dr. Nina Vasan, who specializes in digital mental health. 'It's people spending hours every day talking to their chatbots.'
What people can do to stay safe
Chatbots aren't inherently dangerous, but for some people, caution is warranted.
First, it's important to understand what large language models (LLMs) are and what they're not. 'It sounds silly, but remember that LLMs are tools, not friends, no matter how good they may be at mimicking your tone and remembering your preferences,' says Hamilton Morrin, a neuropsychiatrist at King's College London. He advises users to avoid oversharing or relying on them for emotional support.
Psychiatrists say the clearest advice during moments of crisis or emotional strain is simple: stop using the chatbot. Ending that bond can be surprisingly painful, like a breakup or even a bereavement, says Vasan. But stepping away can bring significant improvement, especially when users reconnect with real-world relationships and seek professional help.
Recognizing when use has become problematic isn't always easy. 'When people develop delusions, they don't realize they're delusions. They think it's reality,' says MacCabe.
Friends and family also play a role. Loved ones should watch for changes in mood, sleep, or social behavior, including signs of detachment or withdrawal. 'Increased obsessiveness with fringe ideologies' or 'excessive time spent using any AI system' are red flags, Girgis says.
Dr. Thomas Pollak, a psychiatrist at King's College London, says clinicians should be asking patients with a history of psychosis or related conditions about their use of AI tools, as part of relapse prevention. But those conversations are still rare. Some people in the field still dismiss the idea of AI psychosis as scaremongering, he says.
What AI companies should be doing
So far, the burden of caution has mostly fallen on users. Experts say that needs to change.
One key issue is the lack of formal data. Much of what we know about ChatGPT psychosis comes from anecdotal reports or media coverage. Experts widely agree that the scope, causes, and risk factors are still unclear. Without better data, it's hard to measure the problem or design meaningful safeguards.
Many argue that waiting for perfect evidence is the wrong approach. 'We know that AI companies are already working with bioethicists and cyber-security experts to minimize potential future risks,' says Morrin. 'They should also be working with mental-health professionals and individuals with lived experience of mental illness.' At a minimum, companies could simulate conversations with vulnerable users and flag responses that might validate delusions, Morrin says.
Some companies are beginning to respond. In July, OpenAI said it had hired a clinical psychiatrist to help assess the mental-health impact of its tools, which include ChatGPT. The following month, the company acknowledged that there were moments when its 'model fell short in recognizing signs of delusion or emotional dependency.' It said it would start prompting users to take breaks during long sessions, develop tools to detect signs of distress, and tweak ChatGPT's responses in 'high-stakes personal decisions.'
Others argue that deeper changes are needed. Ricardo Twumasi, a lecturer in psychosis studies at King's College London, suggests building safeguards directly into AI models before release. That could include real-time monitoring for distress or a 'digital advance directive' allowing users to pre-set boundaries when they're well.
Dr. Joe Pierre, a psychiatrist at the University of California, San Francisco, says companies should study who is being harmed and in what ways, and then design protections accordingly. That might mean nudging troubling conversations in a different direction or issuing something akin to a warning label.
Vasan adds that companies should routinely probe their systems for a wide range of mental-health risks, a process known as red-teaming. That means going beyond tests for self-harm and deliberately simulating interactions involving conditions like mania, psychosis, and OCD to assess how the models respond.
Formal regulation may be premature, experts say. But they stress that companies should still hold themselves to a higher standard.
Chatbots can reduce loneliness, support learning, and aid mental health. The potential is vast. But if harms aren't taken as seriously as hopes, experts say, that potential could be lost.
'We learned from social media that ignoring mental-health harm leads to devastating public-health consequences,' Vasan says. 'Society cannot repeat that mistake.'
