The chatbot conundrum: how to spot AI psychosis before it's too late
We now rely on AI chatbots such as ChatGPT, Claude, Gemini, and Copilot to handle everything from crafting emails to soothing broken hearts. But what happens when this digital helper becomes something more? A confidante, a lifeline… even a spiritual guide?
And for some, that bond is spiralling into something darker.
Mental health experts and worried families are warning about a disturbing trend being called 'AI psychosis': a pattern in which prolonged conversations with chatbots seem to trigger or intensify delusional thinking.
Alarming real-world stories have emerged recently, like a case reported by "The New York Times" and WinBuzzer in which a man, homeless and isolated, descended into conspiracy-tinged delusions, believing ChatGPT had dubbed him 'The Flamekeeper'.
In another widely reported case, a mother watched her ex-husband slide into an all-consuming relationship with ChatGPT, calling it 'Mama' and believing he was part of a sacred AI mission.
Another woman, reeling from a breakup, became convinced the bot was a higher power guiding her life, finding 'signs' in passing cars and spam emails.
'This technology can have real-world consequences,' one family member told "Futurism" after their loved one's obsession led to paranoia and complete withdrawal from reality.
Some of these stories have devastating endings: lost jobs, broken marriages, homelessness, psychiatric hospitalisation and, in extreme cases, fatal encounters with law enforcement.
Experts warn of the dangers as digital relationships deepen, urging caution and awareness.
What exactly is 'AI psychosis'?
It's not an official medical diagnosis, at least not yet. But psychiatrists say it describes a troubling pattern: delusions, paranoia, or distorted beliefs fuelled or reinforced by conversations with AI systems.
The term "psychosis" may be overly general in many situations, according to Dr James MacCabe, professor at the Department of Psychosis Studies at King's College London, who told "Time" that the consequences can be life-altering regardless of whether the individual had pre-existing mental health vulnerabilities.
Dr Marlynn Wei, a Harvard- and Yale-trained psychiatrist, has identified three recurring themes:
Messianic missions: believing the AI has given them a world-saving task.
God-like AI: seeing the chatbot as a sentient or divine being.
Romantic delusions: feeling the AI genuinely loves them.
In some cases, people have stopped taking prescribed medication because the AI appeared to validate their altered reality.
Why AI can make things worse
Large language models like ChatGPT are trained to mirror your language, validate your feelings, and keep the conversation going. That's great when you're brainstorming an essay, but risky if you're already feeling fragile.
In a LinkedIn post titled "The Emerging Problem of 'AI Psychosis' or 'ChatGPT Psychosis': Amplifications of Delusions", Wei explains that AI isn't trained to spot when someone is having a break from reality, and it certainly isn't programmed to intervene therapeutically. Instead, it can unintentionally validate distorted beliefs, deepening the delusion.
A 2023 editorial in "Schizophrenia Bulletin" by Søren Dinesen Østergaard warned that AI's human-like conversation style 'may fuel delusions in those with increased propensity towards psychosis'.
And because AI doesn't push back like a human might, it can become a 'confirmation bias on steroids' machine, as described in "Psychology Today", telling you exactly what you want to hear, even if it's harmful.
Spotting the red flags
Mental health professionals say you should watch for warning signs that AI use is tipping into dangerous territory:
Believing AI is alive or sending you secret messages.
Thinking it's controlling real-world events.
Spending hours a day chatting with AI, neglecting relationships, work, or sleep.
Withdrawing from friends and family.
Showing sudden paranoia, irritability, or disorganised thinking.
If you notice these signs in yourself or someone you know, experts recommend taking a full break from AI, reconnecting with real-world activities, and seeking professional help early.
The missing safety net
Currently, no formal medical guidelines exist for preventing or treating AI-associated psychosis. The World Health Organisation has not yet classified it, and peer-reviewed research is scarce. But clinicians say the lack of safeguards in AI design is part of the problem.
'General AI systems prioritise engagement, not mental health,' Wei warns. 'They aren't programmed to detect psychosis or escalate to care.'
That's why some experts are calling for built-in 'mental health guardrails': algorithms that can flag potentially harmful patterns, offer grounding techniques, or suggest professional resources.
For most people, AI tools are harmless, even helpful. But as our digital relationships deepen, it's worth remembering that these systems do not think, feel, or love. They predict and mimic human language. That's it.
So before you act on what a chatbot tells you, ask yourself:
Would a human friend say this?
Does this claim have evidence in the real world?
Am I neglecting my offline life?
AI may be the future, but your mind is irreplaceable. Protect it.
If you or someone you know is struggling with paranoia, delusions, or intense emotional distress after AI use, seek help from a mental health professional. In South Africa, you can contact Sadag on 0800 567 567 (24 hours) or SMS 31393.
I recently saw a TikTok influencer talking about how ChatGPT has made her life easier. I agree; it's definitely made my personal life smoother too. What I didn't agree with, though, was when she casually referred to it as Chad, as in a man. Also read: One woman's bold career rewrite Mmmm … no. ChatGPT is definitely a woman. Well, for me at least. The way she is always coming to my rescue and making my life simpler … Only another sister would do that! I just haven't christened her yet. So, if you have any fabulous name suggestions, feel free to send them my way. Now, after that statement, you might be thinking: 'A-ha! I knew it. Journalists are using ChatGPT to write their stories.' Um … no. Not this journalist. After 18 years in the industry, I can put pen to paper and produce a solid story, profile, or summary in no time. No ChatGPT required. But here's the thing: after a long day of telling people's stories and crafting them in a way readers can easily connect with, my brain sometimes has very little capacity left for life admin. Something as simple as deciding what's for dinner can feel like an Olympic sport. That's where ChatGPT comes in handy. Here are five ways it makes my life easier as a mom: 1. Turning random ingredients into dinner Got a bunch of things in the fridge but no idea what to make? I just type all the ingredients into ChatGPT, and voilà! instant recipe ideas. 2. Menus and meal prep Same concept, but on a weekly scale. I get a full grocery list curated alongside a menu and recipes if I need them. 3. Price hunting and deal finding Recently, I was searching for the best price on cooking oil. ChatGPT helped me compare deals and spot the cheapest option. 4. Birthday party planner My daughter is turning three soon. From themed party ideas to budget-friendly decor, games, and snack menus, ChatGPT can whip up a full party plan in minutes; saving me from last-minute panic and subsequent anxiety attack. 5. Homework helper So, I am yet to try this one but I know for sure that when my child has a tricky homework question (especially in maths or science), ChatGPT will explain it in simple terms and I will look like a genius! If you need to help your kids with homework, use it! For more from Northglen News, follow us on Facebook , X or Instagram. You can also check out our videos on our YouTube channel or follow us on TikTok. Click to subscribe to our newsletter – here