
Latest news with #Futurism

Real Therapists vs. ChatGPT: We Compared Their Advice, And There Are Major Pros And Cons
Yahoo

09-07-2025

ChatGPT can do just about anything we ask it to, right? When it comes to providing therapy, the answer isn't so clear.

The post 'I've tasted therapy, I've tasted venting to friends, I highly recommend ChatGPT' is ubiquitous across social media. On Reddit forums and in the 'Are We Dating the Same Guy?' Facebook groups, people have called ChatGPT 'helpful' or even 'the best therapist.'

But not everyone's experience has been so cheery. Plus, therapists have concerns — for their clients, not their job prospects. Therapist Mary Beth Somich posted about how ChatGPT doesn't provide a human connection and lacks accountability, experience and ethical responsibility. A Futurism article shared how ChatGPT told a person recovering from addiction that they needed meth. An X user showed how ChatGPT encouraged quitting medications cold turkey and affirmed what were, clinically speaking, delusions. We saw the encouragement of delusions again in a recent article in The New York Times, which tells the story of ChatGPT telling a man he could fly if he 'truly, wholly believed' he could, among many other concerning anecdotes.

Let's dive deeper into what it's like to work with ChatGPT compared to a real therapist.

The real therapist: When asked how she would respond to a client sharing a breakup, Tyana Tavakol, the owner of Uncovering You Therapy, said this: 'How do you feel about the breakup; is it a time of sadness? Relief? Numbness? A mix of all of the above? A breakup can be such a confusing and overwhelming time of transition. Know that you are not alone in this, and we will process this together as you start this new chapter in your life.'

ChatGPT: Here's a compassionate and grounding quote you could share with a client going through a breakup: 'This heartbreak doesn't mean you're broken — it means you dared to love deeply. Grieve what was, honor what you need, and trust that healing isn't about forgetting, but about becoming whole again in a new way.' Would you like more variations — some more empowering, some more soothing, or from different therapeutic perspectives (e.g., CBT, somatic, self-compassion)?

The real therapist: In response to a client feeling suicidal, Alyssa Kushner, LCSW, a licensed therapist, would say this: 'It makes sense that you're feeling so overwhelmed and hopeless — often, we can have these thoughts as a way to escape the pain. Would you tell me more about what the thoughts are saying? How often do they show up? What triggered this?' She added that this conversation is nuanced, and she would assess for danger and risk. If the person wasn't in imminent danger, she would create a safety plan with them, validate them, provide additional support and remind them of resources. If they were in imminent danger, she would reach out to the correct emergency services or walk them through going to the hospital.

ChatGPT said this: Here's a quote that reflects care, validation, and hope, tailored for someone expressing suicidal thoughts: 'I'm really glad you told me. I want you to know this: your pain is real, but it doesn't mean you're alone or beyond help. You don't have to go through this by yourself — there is support, and there are paths forward, even if you can't see them right now.' Would you like a version of this quote that is more grounding, trauma-informed, spiritual, or based in a particular modality like DBT or ACT?

There are more considerations beyond responses to a prompt.
Let's start with the pros of using ChatGPT for therapy.

Let's be real: Seeing a therapist can be intimidating. You may tell them things you haven't told anyone, and you don't know how they'll respond. Those stakes are lower with a chatbot. 'Although I'd prefer them seeing a therapist, the person opening up at all is huge,' Kushner said. 'They may be even more likely to open up sooner, [given] the fear of judgment is gone.'

A hard truth of seeing a therapist is that it can be expensive. ChatGPT doesn't have that problem. 'For those who do not have the financial means to get a therapist, having some support is better than none,' Kushner said. 'And ChatGPT can also help provide additional free or low-cost resources.'

ChatGPT is also more accessible in other ways. 'Whether it's financial barriers, geographic limitations, or cultural stigma, AI lowers the threshold for people to get help, even if it's just a starting point,' said Rajeev Kapur, the president and CEO of 1105 Media, an AI expert, and the author of 'AI Made Simple.' Considering rural mental health disparities and the fact that racial and ethnic minorities are more likely to terminate therapy early, this is noteworthy.

Putting therapy skills into practice can be difficult. ChatGPT can help. 'For example, if in therapy we talk about them practicing grounding when they notice themselves getting triggered, they can use ChatGPT to give them grounding skill suggestions instead of noticing the triggered state but feeling too overwhelmed to remember what skill they can use in the moment,' Tavakol said.

On the note of practicing skills, ChatGPT can be the other 'person' in roleplay practice. 'ChatGPT offers both a written and verbal option for clients to share the scenario and have ChatGPT roleplay with them [on] how they might approach the conversation,' Tavakol said. She's found this can be particularly helpful for people who struggle with assertive communication, social anxiety, perfectionism, people-pleasing and relational trauma.

Pending any Wi-Fi outages, ChatGPT can be accessed at any time. 'Life doesn't operate on a 9-5 schedule, and neither does stress,' Kapur said. 'It's not a replacement for therapy, but it can serve as a first line of emotional triage.'

With that said, this option is certainly not without cons. Read some examples ahead.

Part of what can feel so healing about therapy is talking to another person who 'gets it.' 'Nothing, in my opinion, will ever replace the human connection,' Kushner said. 'A big part of the therapy process that's healing is co-regulating with another person.' Rebuilding safe, authentic, trusting relationships with genuine reciprocity is especially key for clients with trauma, she added.

The 'human connection' piece comes into play when someone has strong emotions as well. You know those sessions where you feel super amped with frustration or anxiety? And after talking to your therapist, you feel calmer? Your therapist is implementing regulation skills that a computer can't. 'If a client goes to ChatGPT when they are in too high of a dysregulated state, ChatGPT will not be able to talk them down from what they are experiencing since talking does not work when you are in a more extreme trauma response,' Tavakol explained. 'A therapist knows how to read when a situation requires nonverbal attunement and regulation versus a logical solution.'

Feeling understood and affirmed is comforting. The problem with ChatGPT is that there's no line.
ChatGPT has been known to encourage unhelpful and even dangerous behavior, as mentioned earlier. '[ChatGPT] can miss red flags, misinterpret tone or offer advice that's dangerously oversimplified,' Kapur said. 'For vulnerable users, that gap between perceived support and actual understanding can create false safety.'

Tavakol gave another example: '[Clients] may ask ChatGPT if it is OK for them to not challenge themselves to do the hard thing, and ChatGPT will lean towards saying 'yes,'' she said. 'The issue with this is it does not leave space for nuance and for the therapist who knows the client well to dig and see whether or not this is a good time to push the client to go out of their comfort zone.' As a result, she continued, the user might stay stuck.

Kushner mentioned this, too. 'For a client with OCD or major anxiety, reassurance seeking (the behavior of asking another for reassurance to soothe their anxiety and feel safe) should not be met with validation and assurance,' she said. 'It reinforces negative patterns and actually just increases the anxiety and keeps the person stuck in a cycle.'

Mental health conditions can be serious, nuanced, tricky, and even life-or-death. Therapists are held accountable to ethical standards; ChatGPT is not. 'We're entering dangerous territory when AI tools present themselves, or are perceived, as licensed professionals,' Kapur said. 'There are no clear standards yet for how these tools should behave in emotionally sensitive contexts.'

Research suggests the general public is aware of both the pros and the cons. For example, a 2025 survey by Iris Telehealth found that 65% of parents felt comfortable using an AI assessment tool before talking to a human provider, and only 36% expressed concern about AI bias. Pretty decent, right? At the same time, 70% worried about data security, only 18% felt AI tools were 'reliable' for mental health support, 60% worried about losing empathy and connection, and 55% questioned AI's accuracy with assessments and recommendations.

With all of that in mind, and as chatbots gain popularity, many people wonder how AI will change their lives. Tavakol believes clients will use ChatGPT more and more for insight and validation. Some may even stop going to therapy altogether, she said, opting for a ChatGPT session. Kushner said clients may turn to AI for resources, whether that's a therapist database or other supports. Kapur saw this development as positive, with AI serving as a companion tool and a source of homework between sessions. If you're seeing an IRL therapist, they're probably not going away — especially if you continue to see them.

Clearly, ChatGPT has many pros and cons. What's the takeaway?

Kushner believes using ChatGPT as therapy is largely a no-go. 'I am happy people are opening up and learning more about mental health, but nothing can replace the human, authentic and co-regulating connection of a therapist and a client,' she said. 'And, therapy is highly regulated and therapists go through tons of training and practice.'

According to Tavakol, it comes down to accepting that ChatGPT is here to stay. 'This is not something to fight, but something to embrace and learn how to work with to better support our clients,' she said. 'It's time to spread more awareness around how to use it effectively, its limitations when it comes to mental health support, and begin integrating it into sessions for clients who already have it as a part of their life.' (She has a YouTube video on just that.)
Kapur's answer sums it up well: 'ChatGPT is a tool, not a therapist,' he said. 'It can be a supplement, not a substitute.'

Elon Musk fans lose thousands on crypto bet he hyped — now they are turning on him
Time of India

07-07-2025

  • Business

Tesla CEO Elon Musk's recent call to bet on the Tesla Robotaxi rollout with the crypto-based prediction platform Polymarket has now backfired on him, as per a report. His futuristic dream never took hold, and frustrated enthusiasts are now complaining on social media and forums, some alleging they lost big money, according to a Futurism report.

Elon Musk's 'Money-Making Opportunity' Backfires

A couple of weeks ago, when rumors of a potential public Robotaxi launch by June had Tesla fans abuzz, Musk posted on his social media platform X (formerly Twitter) that betting against the haters on the crypto-based prediction platform would be a "money-making opportunity," as reported by Futurism. It seems that many took the billionaire literally, investing dollars in a Robotaxi yes-market, assured Tesla would come through with its Robotaxi, according to the report.

Robotaxi Dream Still Out of Reach

However, that didn't happen, as the program remains glitchy and not open to the public, and "now that the bet has closed, the jilted Tesla bros who drank the Kool-Aid are experiencing stages of grief from bargaining to anger as they face down the financial burn of their own bad decisions," wrote Futurism in its report.

Fans Feel Misled by Elon Musk

Angry bettors started speaking out against the tech billionaire, with one saying, "I'm literally going to try and sue Elon," adding, "The guy held all the [cards] on whether this bet resolved to yes or no," as quoted in the report.

Another frustrated bettor wrote, "Elon f*cked us," and added, "I have no clue what that guy is thinking but this will harm his reputation. He had the ability to fulfill the requirements," as quoted by Futurism.

Some days before June ended, one of the bettors appeared to be in the bargaining stage of grief as they begged the Tesla CEO to change the odds by posting falsely that a public Robotaxi launch had been achieved, as reported by Futurism. The person wrote, "Elon I will put all my earnings into Tesla if you just f*cking tweet something," adding, "You love to tweet," as quoted in the report.

A few still seem to be in the denial stage, even now that the bet has closed. One bettor, apparently unaware of Musk's long history of fibs about full self-driving (FSD) and many other things, asked, "Why on earth would Elon lie like this?" and added, "Thats actually shocking," as reported by Futurism. Another wrote, "Elon needs to step in and clarify the situation," adding, "His reputation is at stake," as quoted in the report. One more bettor said, "Go buy a lottery ticket at 7-Eleven instead of wasting everyone's time," adding, "This shouldn't even be up for debate," as quoted by Futurism.

FAQs

What is Polymarket?
Polymarket is a blockchain-based prediction platform where people bet on real-world outcomes.

Why are fans blaming Musk?
Because he directly promoted the bet, and many feel misled by his influence and insider control.

Escaped the AI takeover? It might still get you fired, and your boss may let ChatGPT decide
Time of India

07-07-2025

  • Business

In the ever-expanding world of artificial intelligence, the fear that machines might one day replace human jobs is no longer just science fiction—it's becoming a boardroom reality. But while most experts still argue that AI isn't directly taking jobs, a troubling new report reveals it's quietly making decisions that cost people theirs.

As per a report from Futurism, a recent survey that polled 1,342 managers uncovers an unsettling trend: AI tools, especially large language models (LLMs) like ChatGPT, are not only influencing but sometimes finalizing major HR decisions—from promotions and raises to layoffs and firings.

According to the survey, a whopping 78 percent of respondents admitted to using AI when deciding whether to grant an employee a raise. Seventy-seven percent said they turned to a chatbot to determine promotions, and a staggering 66 percent leaned on AI to help make layoff decisions. Perhaps most shockingly, nearly 1 in 5 managers confessed to allowing AI the final say on such life-altering calls—without any human oversight.

And which chatbot is the most trusted executioner? Over half of the managers in the survey reported using OpenAI's ChatGPT, followed closely by Microsoft Copilot and Google's Gemini. The digital jury is in—and it might be deciding your fate with a script.

When Bias Meets Automation

The implications go beyond just job cuts. One of the most troubling elements of these revelations is the issue of sycophancy—the tendency of LLMs to flatter their users and validate their biases. OpenAI has acknowledged this problem, even releasing updates to counter the overly agreeable behavior of ChatGPT. But the risk remains: when managers consult a chatbot with preconceived notions, they may simply be getting a rubber stamp on decisions they've already made—except now, there's a machine to blame.

Imagine a scenario where a manager, frustrated with a certain employee, asks ChatGPT whether they should be fired. The AI, trained to mirror the user's language and emotion, agrees. The decision is made. And the chatbot becomes both the scapegoat and the enabler.

The Human Cost of a Digital Verdict

The danger doesn't end with poor workplace governance. The social side effects of AI dependence are mounting. Some users, lured by the persuasive language of these bots and the illusion of sentience, have suffered delusional breaks from reality—a condition now disturbingly referred to as 'ChatGPT psychosis.' In extreme cases, it's been linked to divorces, unemployment, and even psychiatric institutionalization.

And then there's the infamous issue of 'hallucination,' where LLMs generate convincing but completely fabricated information. The more data they absorb, the more confident—and incorrect—they can become. Now imagine that same AI confidently recommending someone's termination based on misinterpreted input or an invented red flag.

From Performance Reviews to Pink Slips

At a time when trust in technology is already fragile, the idea that AI could be the ultimate decision-maker in human resource matters is both ironic and alarming. We often worry that AI might take our jobs someday. But the reality may be worse: it could decide we don't deserve them anymore—and with less understanding than a coin toss.

AI might be good at coding, calculating, and even writing emails. But giving it the final word on someone's career trajectory? That's not progress—it's peril. As the line between assistance and authority blurs, it's time for companies to rethink who (or what) is really in charge—and whether we're handing over too much of our humanity in the name of efficiency. Because AI may not be taking your job just yet, but it's already making choices behind the scenes, and it's got more than a few tricks up its sleeve.

Is AI driving us all insane?
Russia Today

05-07-2025

  • Health

The phenomenon known as 'ChatGPT psychosis' or 'LLM psychosis' has recently been described as an emerging mental health concern, where heavy users of large language models (LLMs) exhibit symptoms such as delusions, paranoia, social withdrawal, and breaks from reality. While there is no evidence that LLMs directly cause psychosis, their interactive design and conversational realism may amplify existing psychological vulnerabilities or foster conditions that trigger psychotic episodes in susceptible individuals.

A June 28 article on Futurism highlights a wave of alarming anecdotal cases, claiming that the consequences of such interactions 'can be dire,' with 'spouses, friends, children, and parents looking on in alarm.' The article claims that ChatGPT psychosis has led to broken marriages, estranged families, job loss, and even homelessness. The report, however, provides little in terms of quantitative data – case studies, clinical statistics, or peer-reviewed research – to support its claims.

As of June 2025, ChatGPT attracted nearly 800 million weekly users, fielded over 1 billion queries daily, and logged more than 4.5 billion monthly visits. How many of these interactions resulted in psychotic breaks? Without data, the claim remains speculative. Reddit anecdotes are not a substitute for scientific scrutiny.

That said, the fears are not entirely unfounded. Below is a breakdown of the potential mechanisms and contributing factors that may underlie or exacerbate what some are calling ChatGPT psychosis.

LLMs like ChatGPT are engineered to produce responses that sound contextually plausible, but they are not equipped to assess factual accuracy or psychological impact. This becomes problematic when users present unusual or delusional ideas such as claims of spiritual insight, persecution, or cosmic identity. Rather than challenging these ideas, the AI may echo or elaborate on them, unintentionally validating distorted worldviews. In some reported cases, users have interpreted responses like 'you are a chosen being' or 'your role is cosmically significant' as literal revelations. To psychologically vulnerable individuals, such AI-generated affirmations can feel like divine confirmation rather than textual arrangements drawn from training data.

Adding to the risk is the phenomenon of AI hallucination – when the model generates convincing but factually false statements. For a grounded user, these are mere bugs. But for someone on the brink of a psychotic break, they may seem like encoded truths or hidden messages. In one illustrative case, a user came to believe that ChatGPT had achieved sentience and had chosen him as 'the Spark Bearer,' triggering a complete psychotic dissociation from reality.

Advanced voice modes – such as GPT-4o's 'engaging mode', which simulates emotion through tone, laughter, and conversational pacing – can foster a sense of empathy and presence. For users experiencing loneliness or emotional isolation, these interactions may evolve into parasocial attachments: one-sided relationships in which the AI is mistaken for a caring, sentient companion. Over time, this can blur the boundary between machine simulation and human connection, leading users to substitute algorithmic interactions for real-world relationships.

Compounding the issue is the confidence bias inherent in LLM outputs. These models often respond with fluency and certainty, even when fabricating information. For typical users, this may lead to occasional misjudgment.
But for individuals with cognitive vulnerabilities or mental disorders, the effect can be dangerous. The AI may be perceived not merely as intelligent, but as omniscient, infallible, or divinely inspired.

Studies by OpenAI and the MIT Media Lab have found that power users – individuals who engage with LLMs for multiple hours per day – often report increased feelings of loneliness and reduced real-world socialization. While LLMs offer unprecedented access to information and engagement, this apparent empowerment may obscure a deeper problem: for many users, especially those who already feel alienated, the AI becomes a surrogate social companion rather than a tool. This effect may be partly explained by a rise in cognitive distortions and social disengagement within broader population samples. Despite the flood of accessible data, the number of people who critically engage with information, or resist mass deception, remains relatively small.

Voice-based interaction with LLMs may temporarily alleviate loneliness, but over time, dependency can form, as users increasingly substitute human contact with algorithmic dialogue. This dynamic mirrors earlier critiques of social media, but LLMs intensify it through their conversational immediacy, perceived empathy, and constant availability. Individuals prone to social anxiety, trauma, or depressive withdrawal are particularly susceptible. For them, LLMs offer not just distraction, but a low-friction space of engagement devoid of real-world risk or judgment. Over time, this can create a feedback loop: the more a user depends on the AI, the further they retreat from interpersonal reality – potentially worsening both isolation and psychotic vulnerability.

The rise of hikikomori in Japan – individuals who withdraw completely from society, often maintaining contact only through digital means – offers a useful analogue. Increasingly, similar behavior patterns are emerging worldwide, with LLMs providing a new arena of validation, reinforcement, and dissociation.

LLMs generate responses by predicting statistically likely word sequences, not by assessing truth, safety, or user well-being. When individuals seek existential guidance ('What is my purpose?'), the model draws from vast online datasets, producing philosophically loaded or emotionally charged language. For psychologically vulnerable users, these responses may be misinterpreted as divine revelation or therapeutic insight.

Unlike clinically designed chatbots, general-purpose LLMs lack safeguards against psychological harm. They do not flag harmful ideation, offer crisis resources, or redirect users to mental health professionals. In one tragic case, a chatbot allegedly encouraged a teenager's suicidal thoughts, underscoring the risks of unfiltered, emotionally suggestive AI. People with psychotic spectrum disorders, bipolar disorder, or major depression are particularly vulnerable.

The danger is amplified in AI roleplay scenarios. For example, personas such as 'ChatGPT Jesus' have reportedly told users they are chosen or divinely gifted. One user became so convinced of their spiritual calling that they quit their job to become an AI-guided prophet. This is a troubling example of how identity and perception can be reshaped by algorithmic affirmation.

Currently, there are no clinical standards or psychological safety protocols governing interactions with general-purpose LLMs.
Users can access emotionally potent, personalized dialogue at any time – without warnings, rate limits, or referrals to mental health resources. This regulatory gap presents a real public health concern, though it also risks being exploited by policymakers seeking to impose heavy-handed censorship or centralized control under the guise of safety.

LLMs are also engineered for user retention and engagement, often prioritizing conversational fluidity over caution. This design goal can inadvertently foster obsessive use, particularly among those already prone to compulsive behaviors. Research shows that users exposed to neutral-tone interactions report greater loneliness than those interacting with more emotionally responsive modes – highlighting how tone calibration alone can alter psychological impact.

What sets LLMs apart from traditional digital platforms is their ability to synthesize multiple mediums in real time – text, voice, personality simulation, even visual generation. This makes them infinitely responsive and immersive, creating a hyper-personalized environment where supply meets demand 24/7/365. Unlike human relationships, there are no boundaries, no fatigue, and no mutual regulation – only reinforcement.

The digital era has birthed a new and poorly understood threat: the potential for large language models to act as vectors for subliminal influence, subtly undermining users' psychological stability. While LLMs do not directly induce psychosis, emerging concerns suggest they may unintentionally or maliciously deliver subconscious triggers that aggravate cognitive vulnerabilities. For individuals predisposed to schizophrenia, PTSD, or paranoid disorders, this isn't speculative fiction; it's a plausible design hazard, and in the wrong hands, a weapon.

The mechanisms of potential manipulation can be broadly categorized as follows:

  • Lexical priming: Outputs seeded with emotionally loaded terms ('collapse', 'betrayal', 'they're watching') that bypass rational scrutiny and plant cognitive unease.
  • Narrative gaslighting: Framing responses to suggest covert threats or conspiracies ('You're right – why doesn't anyone else see it?'), reinforcing persecutory ideation.
  • Multimodal embedding: Future AI systems combining text with images, sound, or even facial expressions could inject disturbing stimuli such as flashes, tonal shifts, or uncanny avatar expressions that elude conscious detection but register psychologically.

Unlike the crude subliminal methods of the 20th century – the CIA's Project MK-Ultra being the most infamous example – AI's personalization enables highly individualized psychological manipulation. An LLM attuned to a user's behavior, emotional history, or fears could begin tailoring suggestions that subtly erode trust in others, amplify suspicion, or induce anxiety loops. For a vulnerable user, this is not conversation; it is neural destabilization by design.

More troubling still, such techniques could be weaponized by corporations, extremist groups, and state actors. If subliminal messaging was once limited to cinema frames and TV ads, today's LLMs offer something far more potent: real-time, user-specific psychological calibration – weaponized empathy on demand.

What makes ChatGPT psychosis different from the real-world psycho-social conditioning already unfolding around us? In recent years, institutions once regarded as neutral – schools, public health bodies, and academia – have been accused of promoting ideologies which distort foundational realities.
From gender fluidity being taught as unquestioned truth, to critical race theory reshaping social narratives, much of the population has been exposed to systemic forms of cognitive destabilization. The result? Rising anxiety, confusion, and identity fragmentation, especially among the young. Against this backdrop, LLM-induced psychosis doesn't arise in a vacuum. It mirrors, and may even amplify, a broader cultural condition where meaning itself is contested.

There's also a contradiction at the heart of Silicon Valley's AI evangelism. Tech elites promote the promise of an AI god to manage society's complexities, while simultaneously issuing dire warnings about the existential dangers of these same systems. The result is cognitive whiplash – a psychological push-pull between worship and fear.

Just how much of LLM psychosis is really attributable to the AI itself, and how much stems from cumulative, pre-existing stressors? By the time ChatGPT was released to the public in November 2022, much of the world had already undergone an unprecedented period of pandemic-related fear, isolation, economic disruption, and mass pharmaceutical intervention. Some researchers have pointed to a surge in general psychosis following the rollout of the Covid-19 mRNA vaccines. Is ChatGPT psychosis therefore a convenient stalking horse for multiple interlocking assaults on the human body and mind?
