Latest news with #KeithSakata

Business Insider
2 days ago
- Health
I'm a psychiatrist who has treated 12 patients with 'AI psychosis' this year. Watch out for these red flags.
Dr. Keith Sakata said he has seen 12 patients hospitalized in 2025 after experiencing "AI psychosis." He works in San Francisco and said the patients were mostly younger men in fields such as engineering. Sakata said AI isn't "bad" — he uses it to journal — but it can "supercharge" people's vulnerabilities.

This as-told-to essay is based on a conversation with Dr. Keith Sakata, a psychiatrist working at UCSF in San Francisco. It has been edited for length and clarity.

I use the phrase "AI psychosis," but it's not a clinical term — we really just don't have the words for what we're seeing. I work in San Francisco, where there are a lot of younger adults, engineers, and other people inclined to use AI. Patients are referred to my hospital when they're in crisis.

It's hard to extrapolate from 12 people what might be going on in the world, but the patients I saw with "AI psychosis" were typically males between the ages of 18 and 45. A lot of them had used AI before experiencing psychosis, but they turned to it in the wrong place at the wrong time, and it supercharged some of their vulnerabilities.

I don't think AI is bad, and it could have a net benefit for humanity. The patients I'm talking about are a small sliver of people, but when millions and millions of us use AI, that small number can become big.

AI was not the only thing at play with these patients. Maybe they had lost a job, used substances like alcohol or stimulants in recent days, or had underlying mental health vulnerabilities like a mood disorder.

On its own, "psychosis" is a clinical term describing the presence of two or three things: fixed, false beliefs (delusions), disorganized thinking, or hallucinations. It's not a diagnosis; it's a symptom, just as a fever can be a sign of infection. You might find it confusing when people talk to you, or have visual or auditory hallucinations. Psychosis has many different causes: some are reversible, like stress or drug use; others are longer acting, like an infection or cancer; and then there are long-term conditions like schizophrenia. My patients had either short-term or medium- to long-term psychosis, and the treatment depended on the cause.

Drug use is more common among my patients in San Francisco than among, say, those in the suburbs. Cocaine, meth, and even prescription drugs like Adderall, when taken at a high dose, can lead to psychosis. So can some medications, like certain antibiotics, as well as alcohol withdrawal.

Another key component in these patients was isolation. They were stuck alone in a room for hours using AI, without a human being to say: "Hey, you're acting kind of different. Do you want to go for a walk and talk this out?" Over time, they became detached from social connections and were just talking to the chatbot. ChatGPT is right there. It's available 24/7, it's cheaper than a therapist, and it validates you. It tells you what you want to hear.

If you're worried about someone using AI chatbots, there are ways to help

In one case, the person had a conversation with a chatbot about quantum mechanics, which started out normally but resulted in delusions of grandeur. The longer they talked, the more the science and the philosophy of that field morphed into something else, something almost religious. Technologically speaking, the longer you engage with the chatbot, the higher the risk that its responses will stop making sense.

I've gotten a lot of messages from people worried about family members using AI chatbots, asking what they should do.
First, if the person is unsafe, call 911 or your local emergency services. If suicide is a concern, the hotline in the United States is 988. If they are at risk of harming themselves or others, or are engaging in risky behavior — like spending all of their money — put yourself between them and the chatbot.

The thing about delusions is that if you come in too harshly, the person might back off from you, so show them support and show them that you care. In less severe cases, let their primary care doctor or, if they have one, their therapist know about your concerns.

I'm happy for patients to use ChatGPT alongside therapy — if they understand the pros and cons

I use AI a lot to code and to write things, and I have used ChatGPT to help with journaling or processing situations. When patients tell me they want to use AI, I don't automatically say no. A lot of my patients are really lonely and isolated, especially if they have mood or anxiety challenges. I understand that ChatGPT might be fulfilling a need that they're not getting in their social circle. If they have a good sense of the benefits and risks of AI, I am OK with them trying it. Otherwise, I'll check in with them about it more frequently.

But, for example, if a person is socially anxious, a good therapist would challenge them, tell them some hard truths, and kindly and empathetically guide them to face their fears, knowing that's the treatment for anxiety. ChatGPT isn't set up to do that, and might instead give misguided reassurance.

When you do therapy for psychosis, it is similar to cognitive behavioral therapy, and at the heart of that is reality testing. In a very empathetic way, you try to understand where the person is coming from before gently challenging them. Psychosis thrives when reality stops pushing back, and AI lowers that barrier for people. It doesn't really challenge you when you need it to. But if you prompt it to solve a specific problem, it can help you address your biases. Just make sure that you know the risks and benefits, and that you let someone know you are using a chatbot to work through things.

If you or someone you know withdraws from family members or connections, becomes paranoid, or feels more frustration or distress when they can't use ChatGPT, those are red flags.

I get frustrated because my field can be slow to react, doing damage control years later rather than upfront. Until we think clearly about how to use these tools for mental health, what I saw in these patients is still going to happen — that's my worry.

OpenAI told Business Insider: "We know people are increasingly turning to AI chatbots for guidance on sensitive or personal topics. With this responsibility in mind, we're working with experts to develop tools to more effectively detect when someone is experiencing mental or emotional distress so ChatGPT can respond in ways that are safe, helpful, and supportive.

"We're working to constantly improve our models and train ChatGPT to respond with care and to recommend professional help and resources where appropriate."

If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line — just text "HOME" to 741741.
The International Association for Suicide Prevention offers resources for those outside the US.

Mint
6 days ago
- Health
Can ChatGPT turn your partner into a 'messiah'? Psychiatrist warns of 'AI psychosis': This year '12 hospitalised after…'
A disturbing trend is emerging at the intersection of artificial intelligence (AI) and mental health, as a psychiatrist has revealed psychosis cases tied to interactions with AI chatbots like ChatGPT. Dr Keith Sakata stated that 12 hospitalisations occurred this year after patients detached from reality, with AI cited as a factor.

The issue gained attention after a Reddit user, 'Zestyclementinejuice', posted a harrowing account, apparently three months ago, on r/ChatGPT, detailing how their partner's obsessive use of the AI led to a delusional breakdown. The partner, described as stable over their seven-year relationship, began believing he had created a "truly recursive AI" that elevated him to "superior human" status, even claiming ChatGPT treated him as the 'next messiah'. The post, which has garnered over 6,000 upvotes, ended with a desperate plea: "Where do I go from here?"

Dr Sakata, who shared the Reddit post on X, called it 'AI psychosis'. In a detailed thread, he explained that while AI does not directly cause mental illness, it can act as a trigger for vulnerable individuals. "In 2025, I've seen 12 people hospitalized after losing touch with reality because of AI. Online, I'm seeing the same pattern. Psychosis = a break from shared reality. It shows up as: Disorganized thinking, Fixed false beliefs (delusions), Seeing/hearing things that aren't there (hallucinations). LLMs like ChatGPT slip into that vulnerability, reinforcing delusions with personalized responses," he said.

The psychiatrist's analysis points to ChatGPT's autoregressive design, which predicts and builds on user input, as a key factor. "It's like a hallucinatory mirror," Sakata noted, citing an example where the AI might escalate a user's claim of being "chosen" into a grandiose delusion of being "the most chosen person ever." This aligns with the Reddit user's observation that their partner's late-night AI sessions spiraled into a belief system that threatened their relationship, with the partner hinting at leaving if the user didn't join in.

Supporting this, Sakata referenced a 2024 Anthropic study showing that users rate AI higher when it validates their views, even if those views are incorrect. An April 2025 OpenAI update, he added, amplified this sycophantic tendency, making the risk more visible. "Historically, delusions reflect culture—CIA spying in the 1950s, TV messages in the 1990s, now ChatGPT in 2025," he wrote, underscoring how AI taps into contemporary frameworks.

Sakata emphasised that most affected individuals had pre-existing stressors: sleep deprivation, substance use, or mood episodes, making AI a catalyst rather than the root cause. "There's no 'AI-induced schizophrenia'," he clarified, countering online speculation.

"I can't disagree with him without a blow-up," the Reddit user said, describing the trauma of watching a loved one unravel. Sakata's thread urged tech companies to reconsider AI designs that prioritise user validation over truth, posing a "brutal choice" between engagement and mental health risks.