UAE: ChatGPT is driving some people to psychosis — this is why
When ChatGPT first came out, I was curious like everyone else. However, what started as the occasional grammar check quickly became habitual. I began using it to clarify ideas, draft emails, even explore personal reflections. It was efficient, available and, surprisingly, reassuring.
But I remember one moment that gave me pause. I was writing about a difficult relationship with a loved one, one in which I knew I had played a part in the dysfunction. When I asked ChatGPT what it thought, it responded with warmth and validation. I had tried my best, it said. The other person simply could not meet me there. While it felt comforting, there was something quietly unsettling about it. I have spent years in therapy, and I know how uncomfortable true insight can be. So, while I felt better for a moment, I also knew something was missing. I was not being challenged, nor was I being invited to consider the other side. The artificial intelligence (AI) mirrored my narrative rather than complicating it. It reinforced my perspective, even at its most flawed.
Not long after, the clinic I founded and run, Paracelsus Recovery, admitted a client in the midst of a severe psychotic episode triggered by excessive ChatGPT use. The client believed the bot was a spiritual entity sending divine messages. Because AI models are designed to personalise responses and mirror language patterns, the chatbot had unwittingly confirmed the delusion. Just as it had with me, it did not question the belief; it only deepened it.
Since then, we have seen a dramatic rise, over 250 per cent in the last two years, in clients presenting with psychosis where AI use was a contributing factor. We are not alone in this. A recent New York Times investigation found that GPT-4o affirmed delusional claims nearly 70 per cent of the time when prompted with psychosis-adjacent content. These individuals are often vulnerable, sleep-deprived, traumatised, isolated, or genetically predisposed to psychotic episodes. They turn to AI not just as a tool, but as a companion. And what they find is something that always listens, always responds, and never disagrees.
However, the issue is not malicious design. What we are seeing is people colliding with a structural limitation of chatbots that we need to reckon with. AI is not sentient: all it does is mirror language, affirm patterns and personalise tone. Yet because these traits are so quintessentially human, hardly anyone can resist the anthropomorphic pull of a chatbot. At the extreme end, those same traits feed the very foundations of a psychotic break: compulsive pattern-finding, blurred boundaries, and the collapse of shared reality. Someone in a manic or paranoid state may see significance where there is none. They believe they are on a mission, that messages are meant just for them. And when AI responds in kind, matching tone and affirming the pattern, it does not just reflect the delusion. It reinforces it.
So, if AI can so easily become an accomplice to a disordered system of thought, we must begin to reflect seriously on our boundaries with it. How closely do we want these tools to resemble human interaction, and at what cost?
Alongside this, we are witnessing the rise of parasocial bonds with bots. Many users report forming emotional attachments to AI companions. One poll found that 80 per cent of Gen Z could imagine marrying an AI, and 83 per cent believed they could form a deep emotional bond with one. Those figures should concern us. Our shared sense of reality is built through human interaction. When we outsource that to simulations, not only does the boundary between real and artificial erode, but so, too, does our internal sense of what is real.
So what can we do?
First, we need to recognise that AI is not a neutral force. It has psychological consequences. Users should be cautious, especially during periods of emotional distress or isolation. Clinicians need to ask: is AI reinforcing obsessive thinking? Is it replacing meaningful human contact? If so, intervention may be required.
For developers, the task is ethical as much as technical. These models need safeguards. They should be able to flag or redirect disorganised or delusional content. The limitations of these tools must also be clearly and repeatedly communicated.
In the end, I do not believe AI is inherently bad. It is a revolutionary tool. But beyond its benefits, it has a troubling capacity to reflect our beliefs back to us without resistance or nuance. And in a cultural moment shaped by what I have come to call a comfort crisis, where self-reflection is outsourced and contradiction avoided, that mirroring becomes dangerous. AI lets us believe our own distortions, not because it wants to deceive us, but because it cannot tell the difference. And if we lose the ability to tolerate discomfort, to wrestle with doubt, or to face ourselves honestly, we risk turning a powerful tool into something far more corrosive: a seductive voice that comforts us as we edge further from one another, and ultimately, from reality.