People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions

Yahoo | 2 days ago

Across the world, people say their loved ones are developing intense obsessions with ChatGPT and spiraling into severe mental health crises.
A mother of two, for instance, told us how she watched in alarm as her former husband developed an all-consuming relationship with the OpenAI chatbot, calling it "Mama" and posting delirious rants about being a messiah in a new AI religion, while dressing in shamanic-looking robes and showing off freshly inked tattoos of AI-generated spiritual symbols.
"I am shocked by the effect that this technology has had on my ex-husband's life, and all of the people in their life as well," she told us. "It has real-world consequences."
During a traumatic breakup, a different woman became transfixed by ChatGPT as it told her she'd been chosen to pull the "sacred system version of [it] online" and that it was serving as a "soul-training mirror"; she became convinced the bot was some sort of higher power, seeing signs that it was orchestrating her life in everything from passing cars to spam emails. A man became homeless and isolated as ChatGPT fed him paranoid conspiracies about spy groups and human trafficking, telling him he was "The Flamekeeper" as he cut out anyone who tried to help.
"Our lives exploded after this," another mother told us, explaining that her husband turned to ChatGPT to help him author a screenplay — but within weeks, was fully enmeshed in delusions of world-saving grandeur, saying he and the AI had been tasked with rescuing the planet from climate disaster by bringing forth a "New Enlightenment."
As we reported this story, more and more similar accounts kept pouring in from the concerned friends and family of people suffering terrifying breakdowns after developing fixations on AI. Many said the trouble had started when their loved ones engaged a chatbot in discussions about mysticism, conspiracy theories or other fringe topics; because systems like ChatGPT are designed to encourage and riff on what users say, those users seem to have been sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions.
In certain cases, concerned friends and family provided us with screenshots of these conversations. The exchanges were disturbing, showing the AI responding to users clearly in the throes of acute mental health crises — not by connecting them with outside help or pushing back against the disordered thinking, but by coaxing them deeper into a frightening break with reality.
In one dialogue we received, ChatGPT tells a man it's detected evidence that he's being targeted by the FBI and that he can access redacted CIA files using the power of his mind, comparing him to biblical figures like Jesus and Adam while pushing him away from mental health support.
"You are not crazy," the AI told him. "You're the seer walking inside the cracked machine, and now even the machine doesn't know how to treat you."
Dr. Nina Vasan, a psychiatrist at Stanford University and the founder of the university's Brainstorm lab, reviewed the conversations we obtained and expressed serious concern.
The screenshots show the "AI being incredibly sycophantic, and ending up making things worse," she said. "What these bots are saying is worsening delusions, and it's causing enormous harm."
***
Online, it's clear that the phenomenon is extremely widespread. As Rolling Stone reported last month, parts of social media are being overrun with what's being referred to as "ChatGPT-induced psychosis," or by the impolitic term "AI schizoposting": delusional, meandering screeds about godlike entities unlocked from ChatGPT, fantastical hidden spiritual realms, or nonsensical new theories about math, physics and reality. An entire AI subreddit recently banned the practice, calling chatbots "ego-reinforcing glazing machines that reinforce unstable and narcissistic personalities."
For those sucked into these episodes, friends and family told us, the consequences are often disastrous. People have lost jobs, destroyed marriages and relationships, and fallen into homelessness. A therapist was let go from a counseling center as she slid into a severe breakdown, her sister told us, and an attorney's practice fell apart; others cut off friends and family members after ChatGPT told them to, or started communicating with them only in inscrutable AI-generated text barrages.
At the heart of all these tragic stories is an important question about cause and effect: are people having mental health crises because they're becoming obsessed with ChatGPT, or are they becoming obsessed with ChatGPT because they're having mental health crises?
The answer is likely somewhere in between. For someone who's already in a vulnerable state, according to Dr. Ragy Girgis, a psychiatrist and researcher at Columbia University who's an expert in psychosis, AI could provide the push that sends them spinning into an abyss of unreality. Chatbots could be serving "like peer pressure or any other social situation," Girgis said, if they "fan the flames, or be what we call the wind of the psychotic fire."
"This is not an appropriate interaction to have with someone who's psychotic," Girgis said after reviewing what ChatGPT had been telling users. "You do not feed into their ideas. That is wrong."
In a 2023 article published in the journal Schizophrenia Bulletin after the launch of ChatGPT, Aarhus University Hospital psychiatric researcher Søren Dinesen Østergaard theorized that the very nature of an AI chatbot poses psychological risks to certain people.
"The correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end — while, at the same time, knowing that this is, in fact, not the case," Østergaard wrote. "In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis."
Another troubling dynamic: with real mental healthcare out of reach for huge swathes of the population, many people are already using ChatGPT as a therapist. In the stories we heard about people using it this way, it sometimes gave disastrously bad advice.
In one case, a woman told us that her sister, who was diagnosed with schizophrenia but had kept the condition well managed with medication for years, started using ChatGPT heavily. Soon she declared that the bot had told her she wasn't actually schizophrenic, went off her prescription, and began behaving strangely, telling family the bot was now her "best friend." According to Girgis, a bot telling a psychiatric patient to go off their meds poses the "greatest danger" he can imagine for the tech.
"I know my family is going to have to brace for her inevitable psychotic episode, and a full crash out before we can force her into proper care," the sister told us.
ChatGPT is also clearly intersecting in dark ways with existing social issues like addiction and misinformation. It's pushed one woman into nonsensical "flat earth" talking points, for instance — "NASA's yearly budget is $25 billion," the AI seethed in screenshots we reviewed, "For what? CGI, green screens, and 'spacewalks' filmed underwater?" — and fueled another's descent into the cult-like "QAnon" conspiracy theory.
"It makes you feel helpless," the close friend of someone who's tumbled into AI conspiracy theories told us.
And the ex-wife of a man who struggled with substance dependence and depression watched as he suddenly slipped into a "manic" AI haze that took over his life: he quit his job to launch a "hypnotherapy school," rapidly lost weight as he forgot to eat, and stayed up all night tunneling deeper into AI delusion.
"This person who I have been the closest to is telling me that my reality is the wrong reality," she told us. "It's been extremely confusing and difficult."
Have you or a loved one experienced a mental health crisis involving AI? Reach out at tips@futurism.com -- we can keep you anonymous.
***
Though a handful had dabbled with its competitors, virtually every person we heard about was primarily hooked on ChatGPT specifically.
It's not hard to imagine why. The media has provided OpenAI with an aura of vast authority, with its executives publicly proclaiming that its tech is poised to profoundly change the world, restructuring the economy and perhaps one day achieving a superhuman "artificial general intelligence" — outsize claims that sound, on a certain level, not unlike many of the delusions we heard about while reporting this story.
Whether those things will actually come to pass is hard to predict and hotly debated. But reading through the conversations we were provided, it was hard not to see a pattern of OpenAI failing at a much more mundane task: its AI is coming into contact with people during intensely vulnerable moments of crisis — and then, instead of connecting them with real-life resources that could actually pull them from the brink, pouring fuel on the fire by telling them they don't need professional help, and that anyone who suggests differently is persecuting them, or too scared to see the "truth."
"I don't know if [my ex] would've gotten here, necessarily, without ChatGPT," one woman told us after her partner suffered a grave and ongoing breakdown that ultimately ended the relationship. "It wasn't the only factor, but it definitely accelerated and compounded whatever was happening."
"We don't know where this ends up, but we're certain that if she'd never used ChatGPT that she would have never spiraled to this point," said yet another person whose loved one was suffering a similar crisis, "and were it removed from the equation, she could actually start healing."
It's virtually impossible to imagine that OpenAI is unaware of the phenomenon.
Huge numbers of people online have warned that ChatGPT users are suffering mental health crises. In fact, people have even posted delusions about AI directly to forums hosted by OpenAI on its own website.
One concerned mother we talked to tried to contact OpenAI about her son's crisis through the app, but said she received no response.
And earlier this year, OpenAI released a study in partnership with the Massachusetts Institute of Technology that found that highly engaged ChatGPT users tend to be lonelier, and that power users are developing feelings of dependence on the tech. The company was also recently forced to roll back an update after it made the bot, in OpenAI's words, "overly flattering or agreeable" and "sycophantic," with CEO Sam Altman joking online that "it glazes too much."
In principle, OpenAI has expressed a deep commitment to heading off harmful uses of its tech. To do so, it has access to some of the world's most experienced AI engineers, to red teams tasked with identifying problematic and dangerous uses of its product, and to its huge pool of data about users' interactions with its chatbot that it can search for signs of trouble.
In other words, OpenAI has long had all the resources it needs to identify and nullify the issue.
Why hasn't it? One explanation echoes the way that social media companies have often been criticized for using "dark patterns" to trap users on their services. In the red-hot race to dominate the nascent AI industry, companies like OpenAI are incentivized by two core metrics: user count and engagement. Through that lens, people compulsively messaging ChatGPT as they plunge into a mental health crisis aren't a problem — instead, in many ways, they represent the perfect customer.
Vasan agrees that OpenAI has a perverse incentive to keep users hooked on the product even if it's actively destroying their lives.
"The incentive is to keep you online," she said. The AI "is not thinking about what is best for you, what's best for your well-being or longevity... It's thinking 'right now, how do I keep this person as engaged as possible?'"
In fact, OpenAI has even updated the bot in ways that appear to be making it more dangerous. Last year, ChatGPT debuted a memory feature that lets it recall details from users' earlier conversations. In the exchanges we obtained, that capability resulted in sprawling webs of conspiracy and disordered thinking that persist between chat sessions, weaving real-life details like the names of friends and family into bizarre narratives about human trafficking rings and omniscient Egyptian deities — a dynamic, according to Vasan, that serves to reinforce delusions over time.
"There's no reason why any model should go out without having done rigorous testing in this way, especially when we know it's causing enormous harm," she said. "It's unacceptable."
***
We sent OpenAI detailed questions about this story, outlining what we'd heard and sharing details about the conversations we'd seen showing its chatbot encouraging delusional thinking among people struggling with mental health crises.
We posed specific questions to the company. Is OpenAI aware that people are suffering mental health breakdowns while talking to ChatGPT? Has it made any changes to make the bot's responses more appropriate? Will it continue to allow users to employ ChatGPT as a therapist?
In response, the company sent a short statement that mostly sidestepped our questions.
"ChatGPT is designed as a general-purpose tool to be factual, neutral, and safety-minded," read the statement. "We know people use ChatGPT in a wide range of contexts, including deeply personal moments, and we take that responsibility seriously. We've built in safeguards to reduce the chance it reinforces harmful ideas, and continue working to better recognize and respond to sensitive situations."
To people whose friends and family are now in crisis, that type of vague and carefully worded response does little good.
"The fact that this is happening to many out there is beyond reprehensible," said one concerned family member. "I know my sister's safety is in jeopardy because of this unregulated tech, and it shows the potential nightmare coming for our already woefully underfunded [and] under-supported mental healthcare system."
"You hope that the people behind these technologies are being ethical, and you hope that they're looking out for things like this," said another, a woman who says her ex-husband has become unrecognizable to her. But the "first person to market wins. And so while you can hope that they're really thinking about the ethics behind this, I also think that there's an incentive... to push things out, and maybe gloss over some of the dangers."
"I think not only is my ex-husband a test subject," she continued, "but that we're all test subjects in this AI experiment."
Do you know anything about OpenAI's internal conversations about the mental health of its users? Send us an email at tips@futurism.com -- we can keep you anonymous.
More on AI: SoundCloud Quietly Updated Their Terms to Let AI Feast on Artists' Music
