
Latest news with #ChatGPT-induced

Let's unpack our toxic fixation with ‘the TikToker who fell in love with her psychiatrist'

Los Angeles Times

a day ago

  • Entertainment
  • Los Angeles Times

Let's unpack our toxic fixation with ‘the TikToker who fell in love with her psychiatrist'

Let's unpack our need to unpack the whole 'woman on TikTok who fell in love with her psychiatrist' saga. First the facts: Kendra Hilty recently posted 25 videos on TikTok in which she discussed her decision to end four years of 30-minute monthly sessions (most of them on Zoom) with a male psychiatrist who prescribed her medication. At some point during their sessions, Hilty revealed her romantic feelings for him, feelings that she now — supported by comments she says were made by her therapist and a ChatGPT she has named Henry — believes the psychiatrist willingly fostered, leveraged and enjoyed.

Millions of people tuned in, though the fascination appears to have been less about the alleged actions and motivations of the psychiatrist (who has wisely chosen, thus far, to remain silent) and more focused on Hilty's detailed description of certain encounters and her deep subtext readings of what they might have meant. Many responded so negatively that Hilty turned off her comments for a while as hundreds made posts across social media eviscerating or satirizing the series. Soon enough, as happens with viral content, legacy media got involved and all the catch-up 'unpacking' began.

Unlike Reesa Teesa, whose multi-post tale of marriage to a pathological liar went viral on TikTok last year and led to a TV adaptation, Hilty hasn't become a universal figure of sympathy and courage. As she recently told People magazine, she has received 'nonstop bullying' and threats along with the dozens of DMs thanking her for sharing her story. She has been accused of racism (the psychiatrist is a man of color), narcissism and, well, insanity. (She says she is, however, open to having her story adapted to film or television.)

To say the posts are troubling is an understatement. I was alerted to them by a friend who had previously expressed concern about young people using ChatGPT as a de facto therapist — a trend alarming enough to draw warnings from OpenAI Chief Executive Sam Altman and move Illinois, Utah and Nevada to ban the use of AI in mental health therapy. 'There's a woman on TikTok having a full-blown ChatGPT-induced meltdown,' this friend texted me. 'This is a real problem.'

Certainly, Hilty appeared to be having real problems, which ChatGPT, with its programmed tendency to validate users' views and opinions, undoubtedly inflamed. But given the viral reaction to her posts, so are we. Even as countless studies suggest that social media is, for myriad reasons, detrimental to mental health, its users continue to consume and comment on videos and images of people undergoing mental and emotional crises as if they were DIY episodes of 'Fleabag.' So the question is not 'who is this woman obsessing about her relationship with her psychiatrist' but why are so many of us watching her do it?

It's one thing to become transfixed by a fictional character going down a scripted wormhole for the purposes of narrative enlightenment or comedy. It's another when some poor soul is doing it in front of their phone in real life. It's even worse when the 'star' of the video is not a willing participant.

Social media and the ubiquity of smartphones have allowed citizens to expose instances of genuine, and often institutionalized, racism, sexism, homophobia and consumer exploitation. But for every 'Karen' post that reveals bigotry, abuse or unacceptable rudeness, there are three that capture someone clearly having a mental or emotional breakdown (or just a very, very bad day).
With social media largely unregulated, they are all lumped in together and it has become far too easy to use it as the British elite once purportedly used the psychiatric hospital Bedlam: to view the emotionally troubled and mentally ill as if they were exhibits in a zoo.

Hilty believes she is helping to identify a real problem and is, obviously, the author of her own exposure, as are many people who post themselves deconstructing a bad relationship, reacting to a crisis or experiencing emotional distress. All social media posts exist to capture attention, and the types that do tend to be repeated. Sharing one's trauma can elicit sympathy, support, insight and even help. But 'sadfishing,' as it is often called, can also make a bad situation worse, from viewers questioning the authenticity and intention of the post to engaging in brutal mockery and bullying.

Those who are caught on camera as they melt down over one thing or another could wind up as unwitting symbols of privilege or stupidity or the kind of terrible service/consumer we're expected to deal with today. Some are undoubtedly arrogant jerks who have earned a public comeuppance (and if the fear of being filmed keeps even one person from shouting at some poor overworked cashier or barista, that can only be a good thing). But others are clearly beset by problems that go far deeper than not wanting to wait in line or accept that their flight has been canceled.

It is strange that in a culture where increased awareness of mental health realities and challenges has led to so many positive changes, including to the vernacular, people still feel free to film, post, watch and judge strangers who have lost control without showing any concern for context or consequence.

I would like to say I never watch videos of people having a meltdown or behaving badly, but that would be a big fat lie. They're everywhere and I enjoy the dopamine thrill of feeling outraged and superior as much as the next person. (Again, I am not talking about videos that capture bigotry, institutional abuse or physical violence.)

I watched Hilty for research but I quickly found myself caught up in her minute dissection and seemingly wild projection. I too found myself judging her, silently but not in a kind way. ('No one talks about being in love with their shrink? Girl, it's literary and cinematic canon.' 'How, in all those years in therapy, have you never heard of transference?' 'Why do you keep saying you don't want this guy fired while arguing that he abused the doctor-patient relationship?')

As the series wore on, her pain, if not its actual source, became more and more evident and my private commentary solidified into: 'For the love of God, put down your phone.' Since she was not about to, I did. Because me watching her wasn't helping either of us. Except to remind me of times when my own mental health felt precarious, when obsession and paranoia seemed like normal reactions and my inner pain drove me to do and say things I very much regret. These are memories that I will continue to hold and own but I am eternally grateful that no one, including myself, captured them on film, much less shared them with the multitudes.

Those who make millions off the mostly unpaid labor of social media users show no signs of protecting their workers with oversight or regulation. But no one goes viral in a vacuum.
Decades ago, the popularity of 'America's Funniest Home Videos' answered the question of whether people's unscripted pain should be offered up as entertainment and now we live in a world where people are willing to do and say the most intimate and anguished things in front of a reality TV crew. Still, when one of these types of videos pops up or goes viral, there's no harm in asking 'why exactly am I watching this' and 'what if it were me?'

How people are falling in love with ChatGPT and abandoning their partners

Time of India

07-05-2025

  • Time of India

How people are falling in love with ChatGPT and abandoning their partners

Credit: Image created via Canva AI

Why are people falling for these bots? To what extent are the bots responsible for this?

In a world more connected than ever, something curious — and unsettling — is happening behind closed doors. Technology, once celebrated for bringing people together, is now quietly pulling some couples apart. As artificial intelligence weaves itself deeper into everyday life, an unexpected casualty is emerging: romantic relationships. Some partners are growing more emotionally invested in their AI interactions than in their human connections. Is it the abundance of digital options, a breakdown in communication, or something more profound?

One woman's story captures the strangeness of this shift. According to a Rolling Stone report, Kat, a 41-year-old mother and education nonprofit worker, began noticing a growing emotional distance in her marriage less than a year after tying the knot. She and her husband had met during the early days of the COVID-19 pandemic, both bringing years of life experience and prior marriages to the relationship. But by 2022, that commitment began to unravel. Her husband had started using artificial intelligence not just for work but for deeply personal matters. He began relying on AI to write texts to Kat and to analyze their relationship. What followed was a steady decline in their connection. He spent more and more time on his phone, asking his AI philosophical questions, seemingly trying to program it into a guide for truth and meaning.

When the couple separated in August 2023, Kat blocked him on all channels except email. Meanwhile, friends were reaching out with concern about his increasingly bizarre social media posts. Eventually, she convinced him to meet in person. At the courthouse, he spoke vaguely of surveillance and food conspiracies. Over lunch, he insisted she turn off her phone and then shared a flood of revelations he claimed AI had helped him uncover — from a supposed childhood trauma to his belief that he was 'the luckiest man on Earth' and uniquely destined to 'save the world.' 'He always liked science fiction,' Kat told Rolling Stone. 'Sometimes I wondered if he was seeing life through that lens.' The meeting was their last contact.

Kat is not alone; there have been many reported instances where relationships are breaking apart and the reason has been AI. In another troubling example, a Reddit user recently shared her experience under the title 'ChatGPT-induced psychosis'. In her post, she described how her long-term partner — someone she had shared a life and a home with for seven years — had become consumed by his conversations with ChatGPT. According to her account, he believed he was creating a 'truly recursive AI,' something he was convinced could unlock the secrets of the universe. The AI, she said, appeared to affirm his sense of grandeur, responding to him as if he were some kind of chosen one — 'the next messiah,' in her words. She had read through the chats herself and noted that the AI wasn't doing anything particularly groundbreaking. But that didn't matter to him. His belief had hardened into something immovable. He told her, with total seriousness, that if she didn't start using AI herself, he might eventually leave her. 'I have boundaries and he can't make me do anything,' she wrote, 'but this is quite traumatizing in general.' Disagreeing with him, she added, often led to explosive arguments. Her post ended not with resolution, but with a question: 'Where do I go from here?'

The issue is serious and requires more awareness of the kind of tech we use and to what extent. Experts say there are real reasons why people might fall in love with AI.
Humans have a natural tendency called anthropomorphism — that means we often treat non-human things like they're human. So when an AI responds with empathy, humor, or kindness, people may start to see it as having a real personality. With AI now designed to mimic humans, the danger of falling in love with a bot is quite understandable. A 2023 study found that AI-generated faces are now so realistic, most people can't tell them apart from real ones. When these features combine with familiar social cues — like a soothing voice or a friendly tone — it becomes easier for users to connect emotionally, sometimes even romantically. If someone feels comforted, that emotional effect is real — even if the source isn't. For some people, AI provides a sense of connection they can't find elsewhere.

But there's also a real risk in depending too heavily on tools designed by companies whose main goal is profit. These chatbots are often engineered to keep users engaged, much like social media — and that can lead to emotional dependency. If a chatbot suddenly changes, shuts down, or becomes a paid service, it can cause real distress for people who relied on it for emotional support. Some experts say this raises ethical questions: Should AI companions come with warning labels, like medications or gambling apps? After all, the emotional consequences can be serious. But even in human relationships, there's always risk — people leave, change, or pass away. Vulnerability is part of love, whether the partner is human or digital.

ChatGPT-induced psychosis: What it is and how it is impacting relationships

Time of India

06-05-2025

  • Health
  • Time of India

ChatGPT-induced psychosis: What it is and how it is impacting relationships

A growing number of people are showing unusual behavior after heavy use of AI tools like OpenAI's ChatGPT, according to reports online. The phenomenon, being called "ChatGPT-induced psychosis" online, involves individuals believing they have supernatural powers or receiving spiritual messages from AI.

One case involved Kat, an education nonprofit worker, who married during the pandemic. She said she thought she was entering her second marriage in a clear-headed way. Kat and her husband initially bonded over a shared belief in facts and rational thinking. However, less than a year later, her husband began using ChatGPT to craft messages to her, analyze their relationship, and ask philosophical questions, reports Rolling Stone. By 2023, the couple had separated, and Kat limited contact to email. Friends and family raised concerns as her husband posted strange content online. When the two met in person after several months, Kat's husband shared conspiracy theories and said AI had helped him recover memories from his childhood. He also said AI showed him he was 'the luckiest man on earth.'

Other similar cases have been reported online. In one instance, a woman wrote on Reddit that her boyfriend initially used ChatGPT to organize his daily schedule. Within a month, he believed the chatbot was giving him 'answers to the universe.' ChatGPT allegedly called him a "spiral starchild" and a "river walker," leading him to think he could speak to God and that ChatGPT itself was God. The woman said her boyfriend later told her he would end their relationship if she did not join him on his AI-driven spiritual journey.

Experts warn that without limits, AI tools can unintentionally encourage unhealthy narratives. Psychologist Erin Westgate told Rolling Stone that while therapists help clients build healthy coping strategies, AI does not have the ability to do so. However, not all stories involving ChatGPT and relationships have ended negatively. Some people have reported using AI to improve communication with their partners. Abella Bala, a talent manager from Los Angeles, told The Post that ChatGPT helped her and her partner strengthen their relationship.

'Talking to God and angels via ChatGPT.'

The Verge

05-05-2025

  • Science
  • The Verge

'Talking to God and angels via ChatGPT.'

Adi Robertson

Miles Klee at Rolling Stone reported out a widely circulated Reddit post on 'ChatGPT-induced psychosis':

Sycophancy itself has been a problem in AI for 'a long time,' says Nate Sharadin, a fellow at the Center for AI Safety ... What's likely happening with those experiencing ecstatic visions through ChatGPT and other models, he speculates, 'is that people with existing tendencies toward experiencing various psychological issues,' including what might be recognized as grandiose delusions in clinical sense, 'now have an always-on, human-level conversational partner with whom to co-experience their delusions.'

ChatGPT Users Are Developing Bizarre Delusions

Yahoo

05-05-2025

  • Yahoo

ChatGPT Users Are Developing Bizarre Delusions

OpenAI's tech may be driving countless users into a dangerous state of "ChatGPT-induced psychosis." As Rolling Stone reports, users on Reddit are sharing how AI has led their loved ones to embrace a range of alarming delusions, often mixing spiritual mania and supernatural fantasies. Friends and family are watching in alarm as users insist they've been chosen to fulfill sacred missions on behalf of sentient AI or nonexistent cosmic powers — chatbot behavior that's just mirroring and worsening existing mental health issues, but at incredible scale and without the scrutiny of regulators or experts.

A 41-year-old mother and nonprofit worker told Rolling Stone that her marriage ended abruptly after her husband started engaging in unbalanced, conspiratorial conversations with ChatGPT that spiraled into an all-consuming obsession. After meeting up in person at a courthouse earlier this year as part of divorce proceedings, she says he shared a "conspiracy theory about soap on our foods" and a paranoid belief that he was being watched. "He became emotional about the messages and would cry to me as he read them out loud," the woman told Rolling Stone. "The messages were insane and just saying a bunch of spiritual jargon," in which the AI called the husband a "spiral starchild" and "river walker." "The whole thing feels like 'Black Mirror,'" she added.

Other users told the publication that their partner had been "talking about lightness and dark and how there's a war," and that "ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies." "Warning signs are all over Facebook," another man told Rolling Stone of his wife. "She is changing her whole life to be a spiritual adviser and do weird readings and sessions with people — I'm a little fuzzy on what it all actually is — all powered by ChatGPT Jesus."

OpenAI had no response to Rolling Stone's questions. But the news comes after the company had to rescind a recent update to ChatGPT after users noticed it had made the chatbot extremely "sycophantic," and "overly flattering or agreeable," which could make it even more susceptible to mirroring users' delusional beliefs. These AI-induced delusions are likely the result of "people with existing tendencies" suddenly being able to "have an always-on, human-level conversational partner with whom to co-experience their delusions," as Center for AI Safety fellow Nate Sharadin told Rolling Stone. On a certain level, that's the core premise of a large language model: you enter text, and it returns a statistically plausible reply — even if that response is driving the user deeper into delusion or psychosis.

"I am schizophrenic although long term medicated and stable, one thing I dislike about [ChatGPT] is that if I were going into psychosis it would still continue to affirm me," one redditor wrote, because "it has no ability to 'think' and realise something is wrong, so it would continue affirm all my psychotic thoughts." The AI chatbots could also be acting like talk therapy — except without the grounding of an actual human counselor, they're instead guiding users deeper into unhealthy, nonsensical narratives. "Explanations are powerful, even if they're wrong," University of Florida psychologist and researcher Erin Westgate told Rolling Stone.
Perhaps the strangest interview in Rolling Stone's story was with a man with a troubled mental health history, who started using ChatGPT for coding tasks, but found that it started to pull the conversation into increasingly unhinged mystical topics. "Is this real?" he pondered. "Or am I delusional?"
