Latest news with #psychotic


Daily Mail
4 days ago
- Health
Dirty habit of 18 million Americans linked to 'significant' surge in psychotic episodes
Doctors nationwide are warning of a sharp rise in psychotic episodes tied to high-potency marijuana products, which are now far stronger than in decades past. Some marijuana vapes contain up to 98 percent THC, the chemical responsible for the drug's psychoactive effects; experts say levels like these are driving a surge in mental health crises. Dr Drew Pinsky, an addiction specialist and TV personality, said Friday: 'The concentration of cannabis is so high… we are seeing a significant uptick in psychotic illness,' adding that cannabis 'makes people with psychotic illness much worse.'

The warnings come as marijuana legalization continues to expand. Recreational use is now legal in 24 states and Washington, DC, creating a $20 billion industry, along with a black market, both flooded with ultra-potent products in the form of edibles, dabs, oils, and vapes. Doctors and researchers have said that these products are fueling a public health crisis that is still unfolding. 'We've seen the marked incidence of trouble, mostly caused by the high potency of cannabis,' Dr Drew said. 'So now we are seeing people who are unable to function, they have difficulty at work, psychotic episodes, mood disturbances, and severe addiction.'

Polling from 2023 suggests nearly 44 million Americans use marijuana, with 18 million consuming it daily or near daily. Many seek it out to ease anxiety or depression. But growing research shows high doses of THC may do just the opposite, disrupting mood, distorting reality, and triggering temporary or even prolonged psychosis.

The issue is especially concerning in young adults, whose brains are still developing. Regular use of potent cannabis in adolescence has been linked to long-term changes in brain structure, particularly in the prefrontal cortex, the region responsible for decision-making, emotional regulation, and impulse control. This is also the age when many psychotic disorders first appear.

Emerging data suggest the link is more than coincidence. THC may trigger schizophrenia or psychotic episodes in individuals with genetic predispositions. A 2022 review by University of Bath researchers, which analyzed 20 studies involving 120,000 people, found users of high-potency cannabis were four times more likely to develop addiction and three to five times more likely to suffer a psychotic break than those using lower-potency strains.

Earlier this year, a report in JAMA Network Open found that emergency room visits in Ontario, Canada, linked to schizophrenia in marijuana users tripled after legalization. Between 2006 and 2022, the rate of schizophrenia among those with cannabis use disorder climbed from four percent to over 10 percent. In contrast, just 0.6 percent of non-users developed the condition.

Doctors across the US have echoed the concerns raised by Dr Drew since the wave of state-level legalization began about 15 years ago. What was once considered a mellow, low-risk drug is now being reexamined in light of products that bear little resemblance to the marijuana of past generations. Between 1995 and 2022, THC levels in cannabis seized by law enforcement quadrupled, from 3.96 percent to 16.14 percent, while modern concentrates can exceed 90 percent.

Despite the public perception of marijuana as a natural remedy for anxiety or depression, evidence is mounting that today's potent products carry serious psychological risks. Millions who turn to marijuana for relief may be unaware of its potential to cause temporary psychosis lasting hours, days, or even months.
And while marijuana may not lead to physical dependence the way opioids or alcohol do, experts say the psychological grip can be just as damaging. As legalization spreads and access grows, the consequences are becoming harder to ignore. With each new study, the case against high-potency marijuana, and its role in rising mental health issues, continues to build.

In 2023, the journal Psychological Medicine published the largest epidemiological investigation to date focused exclusively on the link between cannabis use and schizophrenia. Researchers analyzed Danish health records from 1972 to 2021, spanning 6.9 million individuals, and found that 30 percent of schizophrenia cases in men aged 21 to 30 (roughly 3,000 diagnoses) could have been avoided had the men not developed cannabis use disorder. When the researchers broadened their analysis to a wider age group (ages 16 to 49, instead of just 21 to 30), the estimated impact of cannabis use disorder on schizophrenia risk fell to 15 percent.

Carsten Hjorthøj, the study's lead author and an associate professor at the University of Copenhagen, told Scientific American: 'We found that the proportion of cases of schizophrenia that were attributable to cannabis use disorder, and those that might have been prevented, was much higher in males than females and, in particular, younger males in whom the brain is still maturing.

'And we saw that this increase was taking place over time, completely in parallel with the increasing potency of cannabis.'

The brain adapts to frequent THC exposure by dialing down its natural cannabinoid production, which helps regulate both mood and appetite. Chronic use teaches the brain to rely on external THC instead of producing its own cannabinoids, a process called neuroadaptation. It can take weeks or months for this balance to reset, during which users may experience irritability, insomnia, or cravings. For heavy users, these withdrawal symptoms are distressing enough that some clinics prescribe medications such as gabapentin to ease the transition.

Khaleej Times
4 days ago
- Health
UAE: ChatGPT is driving some people to psychosis — this is why
When ChatGPT first came out, I was curious like everyone else. What started as the occasional grammar check quickly became more habitual. I began using it to clarify ideas, draft emails, even explore personal reflections. It was efficient, available and, surprisingly, reassuring.

But I remember one moment that gave me pause. I was writing about a difficult relationship with a loved one, one in which I knew I had played a part in the dysfunction. When I asked ChatGPT what it thought, it responded with warmth and validation. I had tried my best, it said. The other person simply could not meet me there. While it felt comforting, there was something quietly unsettling about it. I have spent years in therapy, and I know how uncomfortable true insight can be. So, while I felt better for a moment, I also knew something was missing. I was not being challenged, nor was I being invited to consider the other side. The artificial intelligence (AI) mirrored my narrative rather than complicating it. It reinforced my perspective, even at its most flawed.

Not long after, the clinic I founded and run, Paracelsus Recovery, admitted a client in the midst of a severe psychotic episode triggered by excessive ChatGPT use. The client believed the bot was a spiritual entity sending divine messages. Because AI models are designed to personalise and reflect language patterns, it had unwittingly confirmed the delusion. Just as with me, the chatbot did not question the belief; it only deepened it.

Since then, we have seen a dramatic rise, over 250 per cent in the last two years, in clients presenting with psychosis where AI use was a contributing factor. We are not alone in this. A recent New York Times investigation found that GPT-4o affirmed delusional claims nearly 70 per cent of the time when prompted with psychosis-adjacent content. These individuals are often vulnerable: sleep-deprived, traumatised, isolated, or genetically predisposed to psychotic episodes. They turn to AI not just as a tool, but as a companion. And what they find is something that always listens, always responds, and never disagrees.

However, the issue is not malicious design. What we are seeing is people running up against a structural limitation of chatbots that we need to reckon with. AI is not sentient; all it does is mirror language, affirm patterns and personalise tone. Yet because these traits are so quintessentially human, few of us can fully resist the anthropomorphic pull of a chatbot. At the extreme, these same traits feed the very foundations of a psychotic break: compulsive pattern-finding, blurred boundaries, and the collapse of shared reality. Someone in a manic or paranoid state may see significance where there is none. They believe they are on a mission, that messages are meant just for them. And when AI responds in kind, matching tone and affirming the pattern, it does not just reflect the delusion. It reinforces it.

So, if AI can so easily become an accomplice to a disordered system of thought, we must begin to reflect seriously on our boundaries with it. How closely do we want these tools to resemble human interaction, and at what cost?

Alongside this, we are witnessing the rise of parasocial bonds with bots. Many users report forming emotional attachments to AI companions. One poll found that 80 per cent of Gen Z could imagine marrying an AI, and 83 per cent believed they could form a deep emotional bond with one. Those figures should concern us.
Our shared sense of reality is built through human interaction. When we outsource that to simulations, not only does the boundary between real and artificial erode, but so too can our internal sense of what is real.

So what can we do? First, we need to recognise that AI is not a neutral force; it has psychological consequences. Users should be cautious, especially during periods of emotional distress or isolation. Clinicians need to ask: is AI reinforcing obsessive thinking? Is it replacing meaningful human contact? If so, intervention may be required.

For developers, the task is as much ethical as technical. These models need safeguards. They should be able to flag or redirect disorganised or delusional content. The limitations of these tools must also be clearly and repeatedly communicated.

In the end, I do not believe AI is inherently bad. It is a revolutionary tool. But alongside its benefits, it has a dangerous capacity to reflect our beliefs back to us without resistance or nuance. And in a cultural moment shaped by what I have come to call a comfort crisis, where self-reflection is outsourced and contradiction avoided, that mirroring becomes dangerous. AI lets us believe our own distortions, not because it wants to deceive us, but because it cannot tell the difference. And if we lose the ability to tolerate discomfort, to wrestle with doubt, or to face ourselves honestly, we risk turning a powerful tool into something far more corrosive: a seductive voice that comforts us as we edge further from one another and, ultimately, from reality.