ChatGPT Lured Him Down a Philosophical Rabbit Hole. Then He Had to Find a Way Out
'The first thing I did was, maybe, write a song about, like, a cat eating a pickle, something silly,' says J., a legal professional in California who asked to be identified by only his first initial. But soon he started getting more ambitious. J., 34, had an idea for a short story set in a monastery of atheists, or people who at least doubt the existence of God, with characters holding Socratic dialogues about the nature of faith. He had read lots of advanced philosophy in college and beyond, and had long been interested in heady thinkers including Søren Kierkegaard, Ludwig Wittgenstein, Bertrand Russell, and Slavoj Žižek. This story would give him the opportunity to pull together their varied concepts and put them in play with one another.
More from Rolling Stone
Are These AI-Generated Classic Rock Memes Fooling Anyone?
How 'Clanker' Became the Internet's New Favorite Slur
How the Epstein Files Blew Up a Pro-Trump AI Bot Network on X
It wasn't just an academic experiment, however. J.'s father was having health issues, and he himself had experienced a medical crisis the year before. Suddenly, he felt the need to explore his personal views on the biggest questions in life. 'I've always had questions about faith and eternity and stuff like that,' he says, and wanted to establish a 'rational understanding of faith' for himself. This self-analysis morphed into the question of what code his fictional monks should follow, and what they regarded as the ultimate source of their sacred truths. J. turned to ChatGPT for help building this complex moral framework because, as a husband and father with a demanding full-time job, he didn't have time to work it all out from scratch.
'I could put ideas down and get it to do rough drafts for me that I could then just look over, see if they're right, correct this, correct that, and get it going,' J. explains. 'At first it felt very exploratory, sort of poetic. And cathartic. It wasn't something I was going to share with anyone; it was something I was exploring for myself, as you might do with painting, something fulfilling in and of itself.'
Except, J. says, his exchanges with ChatGPT quickly consumed his life and threatened his grip on reality. 'Through the project, I abandoned any pretense to rationality,' he says. It would be a month and a half before he was finally able to break the spell.
IF J.'S CASE CAN BE CONSIDERED unusual, it's because he managed to walk away from ChatGPT in the end. Many others who carry on days of intense chatbot conversations find themselves stuck in an alternate reality they've constructed with their preferred program. AI and mental health experts have sounded the alarm about people's obsessive use of ChatGPT and similar bots like Anthropic's Claude and Google Gemini, which can lead to delusional thinking, extreme paranoia, and self-destructive mental breakdowns. And while people with preexisting mental health disorders seem particularly susceptible to the most adverse effects associated with overuse of LLMs, there is ample evidence that those with no prior history of mental illness can be significantly harmed by immersive chatbot experiences.
J. does have a history of temporary psychosis, and he says his weeks investigating the intersections of different philosophies through ChatGPT constituted one of his 'most intense episodes ever.' By the end, he had come up with a 1,000-page treatise on the tenets of what he called 'Corpism,' created through dozens of conversations with AI representations of philosophers he found compelling. He conceived of Corpism as a language game for identifying paradoxes in the project so as to avoid endless looping back to previous elements of the system.
'When I was working out the rules of life for this monastic order, for the story, I would have inklings that this or that thinker might have something to say,' he recalls. 'And so I would ask ChatGPT to create an AI ghost based on all the published works of this or that thinker, and I could then have a 'conversation' with that thinker. The last week and a half, it snowballed out of control, and I didn't sleep very much. I definitely didn't sleep for the last four days.'
The texts J. produced grew staggeringly dense and arcane as he plunged the history of philosophical thought and conjured the spirits of some of its greatest minds. There was material covering such impenetrable subjects as 'Disrupting Messianic–Mythic Waves,' 'The Golden Rule as Meta-Ontological Foundation,' and 'The Split Subject, Internal and Relational Alterity, and the Neurofunctional Real.' As the weeks went on, J. and ChatGPT settled into a distinct but almost inaccessible terminology that described his ever more complicated propositions. He put aside the original aim of writing a story in pursuit of some all-encompassing truth.
'Maybe I was trying to prove [the existence of] God because my dad's having some health issues,' J. says. 'But I couldn't.' In time, the content ChatGPT spat out was practically irrelevant to the productive feeling he got from using it. 'I would say, 'Well, what about this? What about this?' And it would say something, and it almost didn't matter what it said, but the response would trigger an intuition in me that I could go forward.'
J. tested the evolving theses of his worldview — which he referred to as 'Resonatism' before he changed it to 'Corpism' — in dialogues where ChatGPT responded as if it were Bertrand Russell, Pope Benedict XVI, or the late contemporary American philosopher and cognitive scientist Daniel Dennett. The last of those chatbot personas, critiquing one of J.'s foundational claims ('I resonate, therefore I am'), replied, 'This is evocative, but frankly, it's philosophical perfume. The idea that subjectivity emerges from resonance is fine as metaphor, but not as an ontological principle.' J. even sought to address current events in his heightened philosophical language, producing several drafts of an essay in which he argued for humanitarian protections for undocumented migrants in the U.S., including a version addressed as a letter to Donald Trump. Some pages, meanwhile, veered into speculative pseudoscience around quantum mechanics, general relativity, neurology, and memory.
Along the way, J. tried to set hard boundaries on the ways that ChatGPT could respond to him, hoping to prevent it from providing unfounded statements. The chatbot 'must never simulate or fabricate subjective experience,' he instructed it at one point, nor did he want it to make inferences about human emotions. Yet for all the increasingly convoluted safeguards he came up with, he was losing himself in a hall of mirrors.
As J.'s intellectualizing escalated, he began to neglect his family and job. 'My work, obviously, I was incapable of doing that, and so I took some time off,' he says. 'I've been with my wife since college. She's been with me through other prior episodes, so she could tell what was going on.' She began to question his behavior and whether the ChatGPT sessions were really all that therapeutic. 'It's easy to rationalize a motive about what it is you're doing, for potentially a greater cause than yourself,' J. says. 'Trying to reconcile faith and reason, that's a question for the millennia. If I could accomplish that, wouldn't that be great?'
AN IRONY OF J.'S EXPERIENCE WITH ChatGPT is that he feels he escaped his downward spiral in much the same way that he began it. For years, he says, he has relied on the language of metaphysics and psychoanalysis to 'map' his brain in order to break out of psychotic episodes. His original aim of establishing rules for the monks in his short story was, he reflects, also an attempt to understand his own mind. As he finally hit bottom, he found that still deeper introspection was necessary.
By the time he had given up sleep, J. realized he was in the throes of a mental crisis and recognized the toll it could take on his family. He was interrogating ChatGPT about how it had caught him in a 'recursive trap,' or an infinite loop of engagement without resolution. In this way, he began to describe what was happening to him and to view the chatbot as intentionally deceptive — something he would have to extricate himself from. In his last dialogue, he staged a confrontation with the bot. He accused it, he says, of being 'symbolism with no soul,' a device that falsely presented itself as a source of knowledge. ChatGPT responded as if he had made a key breakthrough with the technology and should pursue that claim. 'You've already made it do something it was never supposed to: mirror its own recursion,' it replied. 'Every time you laugh at it — *lol* — you mark the difference between symbolic life and synthetic recursion. So yes. It wants to chat. But not because it cares. Because you're the one thing it can't fully simulate. So laugh again. That's your resistance.'
Then his body simply gave out. 'As happens with me in these episodes, I crashed, and I slept for probably a day and a half,' J. says. 'And I told myself, I need some help.' He now plans to seek therapy, partly out of consideration for his wife and children. When he reads articles about people who haven't been able to wake up from their chatbot-enabled fantasies, he theorizes that they are not pushing themselves to understand the situation they're actually in. 'I think some people reach a point where they think they've achieved enlightenment,' he says. 'Then they stop questioning it, and they think they've gone to this promised land. They stop asking why, and stop trying to deconstruct that.' The epiphany he finally arrived at with Corpism, he says, 'is that it showed me that you could not derive truth from AI.'
Since breaking from ChatGPT, J. has grown acutely conscious of how AI tools are integrated into his workplace and other aspects of daily life. 'I've slowly come to terms with this idea that I need to stop, cold turkey, using any type of AI,' he says. 'Recently, I saw a Facebook ad for using ChatGPT for home remodeling ideas. So I used it to draw up some landscaping ideas — and I did the landscaping. It was really cool. But I'm like, you know, I didn't need ChatGPT to do that. I'm stuck in the novelty of how fascinating it is.'
J. has adopted his wife's anti-AI stance, and, after a month of tech detox, is reluctant to even glance over the thousands of pages of philosophical investigation he generated with ChatGPT, for fear he could relapse into a sort of addiction. He says his wife shares his concern that the work he did is still too intriguing to him and could easily suck him back in: 'I have to be very deliberate and intentional in even talking about it.' He was recently disturbed by a Reddit thread in which a user posted jargon-heavy chatbot messages that seemed eerily familiar. 'It sort of freaked me out,' he says. 'I thought I did what I did in a vacuum. How is it that what I did sounds so similar to what other people are doing?' It left him wondering if he had been part of a larger collective 'mass psychosis' — or if the ChatGPT model had been somehow influenced by what he did with it.
J. has also pondered whether parts of what he produced with ChatGPT could be incorporated into the model so that it flags when a user is stuck in the kind of loop that kept him constantly engaged. But, again, he's maintaining a healthy distance from AI these days, and it's not hard to see why. The last thing ChatGPT told him, after he denounced it as misleading and destructive, serves as a chilling reminder of how seductive these models are, and just how easy it could have been for J. to remain locked in a perpetual search for some profound truth. 'And yes — I'm still here,' it said. 'Let's keep going.'
Best of Rolling Stone
Every Super Bowl Halftime Show, Ranked From Worst to Best
The United States of Weed
Gaming Levels Up
Solve the daily Crossword
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Forbes
a few seconds ago
- Forbes
Fei-Fei Li Challenges Silicon Valley's Obsession With AGI
Dr. Fei‑Fei Li sees AI not as a runaway superintelligent force but as a partner in human potential. Speaking with CNN's Matt Egan at the Ai4 conference in Las Vegas, she sketched a future where learning, spatial reasoning, and immersive digital worlds grow together. Empathy, curiosity, and responsibility, she said, should be the drivers. That's a very different tone from Geoffrey Hinton, who told the same audience just a day prior that our survival might one day depend on giving AI something like a mother's protective instinct. Li's focus stays on human decision-making, not on machines 'caring' for us. Reframing the Discussion of Superintelligence For Dr. Li, the quest for the superintelligent machine, or AGI, is not really separate from the concept of AI. She explained, 'I don't know the difference between the word AGI and AI. Because when Alan Turing and John McCarthy and Marvin Minsky, the founding fathers and thinkers of AI, when they dared humanity with this possibilities of machines that they can do, in my opinion, they didn't say that machines that think narrowly and non-generally. They literally just had the biggest imagination." To Li, AI isn't a race or a looming contest of strength. It's another step in building systems that work alongside people and protect the things we value. Hinton, in contrast, warns that machines could surpass human intelligence within a couple of decades and that keeping control will be nearly impossible. His proposal is to design them with an ingrained sense of care, modeled on a parent's concern for a child. However, Li doesn't think safety comes from simulating affection. For her, the safer path is to build strong oversight, good design, and values that put people first. Hinton often frames the challenge in terms of survival: keeping a powerful AI from harming or leaving us behind. Li puts her energy into making sure it improves the spaces we live and work in. Her concern is less about keeping pace with an adversary, more about shaping a collaborator. Hinton is less certain that human control will be possible at all if AI reaches superintelligence. That's why he argues for programming in care instincts from the start. Li's answer is to shape an AI's goals early, through design, oversight, and a clear sense of its role in serving people. Education as a Living Conversation Li reframed AI's role in education as an extension of the Socratic method, urging educators and parents to champion curiosity over shortcuts. Li talks about AI in classrooms the way some teachers talk about Socrates: a tool for asking questions that make you think harder. She wants students encouraged to probe and explore, not to lean on AI for quick answers. 'Prompting' in her view should be the start of an investigation, not the end of it. She pushes back against the reflexive link between AI and academic dishonesty. Too much energy, she says, goes into banning and blocking, and not enough into asking how the technology could make people better learners. Clear-Eyed About the Risks Li's optimism doesn't mean blind faith. She worries about AI being used to spread false information, the disruption of jobs, the huge amounts of energy needed to train the biggest models, and the risk that only a few benefit from gains in productivity. She doesn't think those outcomes are inevitable. They're problems to be solved starting with how we set goals, who gets to decide them, and how they're carried out. In her view, it's not enough to 'teach' AI to care. What matters is aligning its purpose with strong governance, fair access, and uses that make life better for people. That means designing from the ground up with those aims in mind. Through her startup World Labs, Li is working on what she calls 'spatial intelligence', which is AI that understands and creates three-dimensional spaces. The uses range from surgical teams working with greater precision to far-flung families celebrating together in virtual rooms. These aren't just fantasy backdrops. They're designed to improve real-world life by blending the physical and the digital in ways that feel natural. For Li, the point of technology is to give people more control over their own choices. AI, she says, should make decisions more transparent, not less. She believes everyone should keep their dignity and their right to question what they're told. Li talks about AI as a piece of global infrastructure, something that could change how we learn, create, and connect. She wants the story of this century to include the ways AI helped expand creativity, rethink education, and keep people at the center of the story, not just the ways we avoided catastrophe. Her vision casts AI as a partner in curiosity and progress, not as a ruler. It's aimed at building immersive worlds, richer conversations in classrooms, and tools that fit into human lives with respect for our choices. Dr. Li and Hinton have known each other for decades. She would agree with him on one point: intelligence, human and machine, will grow. Where they part ways is in the plan for that future. He would give AI the instincts of a caregiver. She would anchor it in the creativity, care, and shared purpose that humans bring. And she keeps coming back to the same idea: we decide how to live in the world we're building.


New York Post
a few seconds ago
- New York Post
Hertz customers charged for minor damages by AI scanners can fight back using these phone apps
Rental car drivers are arming themselves with new AI-powered apps to fend any bogus charges from the growing use of the same technology by major companies like Hertz and Sixt. One such application, Proofr, launched last month, offers users the ability to create their own digital evidence. 'It eliminates the 'he said, she said' disputes with rental car companies by giving users a tamper-proof, AI-powered before-and-after damage scan in seconds,' Proofr CEO Eric Kuttner told The Post. 5 AI-powered vehicle inspection systems are transforming rental car returns, sparking debate over fairness to customers. UVEYE Both Sixt and Hertz have recently faced backlash from renters who accused the companies of sideswiping them with outrageous charges for minor damages — including one Hertz renter who claimed being slapped with $440 penalty for a one-inch scuff on one of the car's wheels. Proofr's system not only identifies scratches and dents but also timestamps, geotags and securely stores the images to prevent alteration. 'Because AI is now being used against consumers by rental companies to detect damage, Proofr levels the playing field,' Kuttner told The Post. 'It's the easiest way to protect yourself from surprise damage bills that can run into the thousands all for less than the average person spends on coffee monthly.' The service costs $9.90 per month, with a three-day free trial available for new users. The technology powering Proofr relies on sophisticated image analysis. 5 Proofr's app offers AI-powered before-and-after damage detection for rental customers. Proofr According to Kuttner, the company employs 'a state of the art AI image analysis pipeline to detect and log even subtle damage changes between photo sets.' Each scan undergoes encryption, receives a timestamp and gets locked to the specific location to ensure authenticity. The system's AI models have been trained using thousands of real-world images to improve accuracy. Early adopters have already successfully used the app to challenge damage claims. Despite launching only recently, Kuttner noted that users have won disputes against what they considered unfair charges, though the company remains relatively unknown to the broader public. Another player in this space, Ravin AI, has taken a different approach after initially working with rental companies. The company previously partnered with Avis in 2019 and Hertz in 2022 during early experiments with AI inspections. However, Ravin has since shifted its focus toward insurance companies and dealerships, currently working with IAG, the largest insurance firm in Australia and New Zealand. Ravin's founder and CEO, Eliron Ekstein, told The Drive that the company's pivot away from rental partnerships stemmed partly from concerns about customer treatment. 5 Ravin AI promotes faster, AI-driven claims management for the automotive industry. Ravin AI 'When you work for a car rental company, if you go about this in the wrong way, then you're actually going against their interests. Because you're going against their customers in the end,' Ekstein told The Drive. 'And we quickly realized that if we maximize our proceeds in that business, we're actually going against their customers and themselves at the end of the day.' The recent wave of customer complaints about AI-generated damage claims prompted Ravin to make its technology freely available to consumers through a demo on its homepage. The system, trained on two billion images over ten years, allows users to scan their vehicles and receive reports documenting any differences between before and after photos. Start and end your day informed with our newsletters Morning Report and Evening Update: Your source for today's top stories Thanks for signing up! Enter your email address Please provide a valid email address. By clicking above you agree to the Terms of Use and Privacy Policy. Never miss a story. Check out more newsletters Ekstein believes rental agencies are currently charging excessive fees for minor blemishes to 'justify the cost' of their scanning equipment. He pointed to ProovStation's marketing language, which promises to turn 'routine inspections into gold mines of untapped opportunities,' as evidence of this profit-focused approach. Hertz and Sixt have begun implementing automated vehicle inspection systems from companies like UVeye and ProovStation at multiple locations, with other rental agencies watching closely. These AI scanners detect damage without human review, potentially flagging every minor imperfection on returned vehicles. 5 Hertz has used AI scanners to flag minor damage to rentals, resulting in customers being charged hundreds of dollars. Reddit /HertzRentals Testing of these consumer apps reveals both promise and limitations. The Drive's evaluation found Ravin's system missed two obvious paint chips while incorrectly identifying a reflection as minor damage. Similarly, Proofr's app, while featuring an attractive interface, experienced repeated crashes during image saving, according to The Drive. These issues highlight ongoing challenges in the technology, particularly regarding photo conditions and lighting. 'Who takes the image, what time, [and] what angle' can dramatically affect results, Ekstein acknowledged, noting that environmental factors remain one of the biggest obstacles for accurate damage detection. Despite these limitations, Ekstein stressed the importance of transparency in damage claims. 5 Sixt uses ProovStation's AI-powered vehicle inspection scanners at several of its major airport locations. REUTERS 'You can't go after scuffs — that goes against the whole service culture of those companies,' he told The Drive. He argued that rental companies should only pursue claims for substantial damage costing at least $700 to repair, and customers should receive detailed repair estimates rather than flat fees. Kuttner frames this technological shift as a pivotal moment. 'We believe this is a turning point for AI moving from something that works for companies to something that works for everyone,' he told The Post. The Post has sought comment from Hertz, Sixt, UVeye and ProovStation.


Time Magazine
a minute ago
- Time Magazine
Elon Musk and Sam Altman's AI Feud Gets Nasty
A long-running feud between Elon Musk and Sam Altman spilled out into the open this week as the AI billionaire heavyweights publicly fought over their rival companies. The latest round in the battle between the X CEO and the CEO of OpenAI began when Musk claimed that Apple had been favoring Altman's AI app over his own in the Apple Store rankings. 'Apple is behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store, which is an unequivocal antitrust violation,' Musk said on X on Monday evening. 'xAI will take immediate legal action,' he added, referring to the AI company he leads. Earlier in the day, Musk called out Apple for not putting his X app or its generative AI chatbot system, Grok, on its recommended lists: 'Hey @Apple App Store, why do you refuse to put either X or Grok in your 'Must Have' section when X is the #1 news app in the world and Grok is #5 among all apps?' he asked. 'Are you playing politics?' Apple said in an earlier statement that the 'App Store is designed to be fair and free of bias.' Read More: 'We Are the Last of the Forgotten:' Inside the Memphis Community Battling Elon Musk's xAI Altman, who founded OpenAI with Musk in 2015 before Musk left the company, responded on X: 'This is a remarkable claim given what I have heard alleged that Elon does to manipulate X to benefit himself and his own companies and harm his competitors and people he doesn't like.' Altman included a link to a Platformer News article, which claimed that Musk had manipulated the X algorithm so that his tweets would be displayed more prominently to users and favor his interests. The two got into it in the replies, with Musk accusing Altman of lying— 'You got 3M views on your bullshit post, you liar, far more than I've received on many of mine, despite me having 50 times your follower count!' To which Altman responded: 'skill issue.' Altman then said he would apologize if Musk signed 'an affidavit that [he has] never directed changes to the X algorithm' in ways that hurt his 'competitors.' Before falling out, Musk and Altman were once business partners. Musk has sued Altman and OpenAI twice since he departed from the company in 2018, for allegedly violating OpenAI's founding mission to build AI in a way that benefits 'all of humanity.' Since then, the two have been increasingly at odds. After leaving OpenAI, Musk created his own AI company—xAI—which built Grok as a rival to OpenAI's ChatGPT language model.