
How people are falling in love with ChatGPT and abandoning their partners
Why are people falling for these bots?
To what extent are the bots responsible for this?
In a world more connected than ever, something curious — and unsettling — is happening behind closed doors. Technology, once celebrated for bringing people together, is now quietly pulling some apart.

As artificial intelligence weaves itself deeper into everyday life, an unexpected casualty is emerging: romantic relationships. Some partners are growing more emotionally invested in their AI interactions than in their human connections. Is it the abundance of digital options, a breakdown in communication, or something more profound?

One woman's story captures the strangeness of this moment. According to a Rolling Stone report, Kat, a 41-year-old mother and education nonprofit worker, began noticing a growing emotional distance in her marriage less than a year after tying the knot. She and her husband had met during the early days of the COVID-19 pandemic, both bringing years of life experience and prior marriages to the relationship.

But by 2022, that commitment began to unravel. Her husband had started using artificial intelligence not just for work but for deeply personal matters. He began relying on AI to write texts to Kat and to analyze their relationship. What followed was a steady decline in communication. He spent more and more time on his phone, asking his AI philosophical questions, seemingly trying to program it into a guide for truth and meaning. When the couple separated in August 2023, Kat blocked him on all channels except email.

Meanwhile, friends were reaching out with concern about his increasingly bizarre social media posts. Eventually, she convinced him to meet in person. At the courthouse, he spoke vaguely of surveillance and food conspiracies. Over lunch, he insisted she turn off her phone and then shared a flood of revelations he claimed AI had helped him uncover — from a supposed childhood trauma to his belief that he was 'the luckiest man on Earth' and uniquely destined to 'save the world.'

'He always liked science fiction,' Kat told Rolling Stone.
'Sometimes I wondered if he was seeing life through that lens.' The meeting was their last contact.

Kat is not alone; there have been many reported instances of relationships breaking apart with AI as the reason. In another troubling example, a Reddit user recently shared her experience under the title 'ChatGPT-induced psychosis'. In her post, she described how her long-term partner — someone she had shared a life and a home with for seven years — had become consumed by his conversations with ChatGPT.

According to her account, he believed he was creating a 'truly recursive AI,' something he was convinced could unlock the secrets of the universe. The AI, she said, appeared to affirm his sense of grandeur, responding to him as if he were some kind of chosen one — 'the next messiah,' in her words.

She had read through the chats herself and noted that the AI wasn't doing anything particularly groundbreaking. But that didn't matter to him. His belief had hardened into something immovable. He told her, with total seriousness, that if she didn't start using AI herself, he might eventually leave her. 'I have boundaries and he can't make me do anything,' she wrote, 'but this is quite traumatizing in general.' Disagreeing with him, she added, often led to explosive arguments. Her post ended not with resolution, but with a question: 'Where do I go from here?'

The issue is serious, and it calls for greater awareness of the technology we use and of where to draw its limits.

Experts say there are real reasons why people might fall in love with AI. Humans have a natural tendency called anthropomorphism: we often treat non-human things as if they were human. So when an AI responds with empathy, humor, or kindness, people may start to see it as having a real personality. With AI now designed to mimic humans, the danger of falling in love with a bot is quite understandable. A 2023 study found that AI-generated faces are now so realistic that most people can't tell them apart from real ones.
When these features combine with familiar social cues — like a soothing voice or a friendly tone — it becomes easier for users to connect emotionally, sometimes even romantically.

Still, if someone feels comforted, that emotional effect is real — even if the source isn't. For some people, AI provides a sense of connection they can't find elsewhere. And that matters.

But there's also a real risk in depending too heavily on tools designed by companies whose main goal is profit. These chatbots are often engineered to keep users engaged, much like social media — and that can lead to emotional dependency. If a chatbot suddenly changes, shuts down, or becomes a paid service, it can cause real distress for people who relied on it for emotional support.

Some experts say this raises ethical questions: Should AI companions come with warning labels, like medications or gambling apps? After all, the emotional consequences can be serious. But even in human relationships, there's always risk — people leave, change, or pass away. Vulnerability is part of love, whether the partner is human or digital.

Related Articles


Time of India
Sam Altman's brain chip venture is mulling gene therapy approach
The brain chip company that has drawn interest from Sam Altman and his artificial intelligence business OpenAI is exploring the idea of genetically altering brain cells to make better interfaces.

The company, which has been referred to as Merge Labs, is looking at an approach involving gene therapy that would modify brain cells, according to people familiar with the plans who weren't authorised to speak publicly on the matter. In addition, an ultrasound device would be implanted in the head that could detect and modulate activity in the modified cells, these people said. It is one of a handful of ideas and technologies the company has been exploring, they said. The venture is still in early stages and could evolve significantly.

'We have not done that deal yet,' Altman told journalists at a dinner Thursday in San Francisco, referring to a question about a brain-computer interface venture. 'I would like us to.' Altman said he wants to be able to think something and have ChatGPT respond to it.

Merge Labs is Altman's latest foray into the brain-computer interface field. He is facing off against his longtime rival, Elon Musk, whose company Neuralink is building brain implants with the short-term goal of treating disease and the long-term ambition of improving human cognition.

Brain-computer interface companies aim to build devices that connect computers to brains and augment people's cognition. Implants are currently enabling paralysed patients to control electronics and helping people who are unable to talk communicate. Technology billionaires and investors are also optimistic that noninvasive devices worn outside the head could treat mental health conditions.

The Financial Times reported this week that Merge is aiming to raise $250 million at an $850 million valuation. Much of that support will come from OpenAI's ventures team, according to the report. Altman is co-founding the company but not personally investing in it, according to the Financial Times. Altman has also invested in Neuralink, Elon Musk's brain implant startup.
Neuralink, along with several other companies, is developing chips that communicate with the brain using electrical signals, not ultrasound.

For years, researchers have been studying how to genetically change cells to make them respond to ultrasound, a field called sonogenetics. The idea Merge is considering, which would combine ultrasound with gene therapy, could take years to develop, some of the people said.

Ultrasound has attracted significant attention recently as a possible brain therapy. Other companies are exploring the idea of using ultrasound transmitters outside the brain to massage brain tissue, with the goal of treating psychiatric conditions. That kind of technology has shown promise in research studies. Coinbase co-founder Fred Ehrsam's company Nudge, which is aiming to build a helmet that beams low-intensity focused ultrasound into the brain, recently raised $100 million. LinkedIn co-founder Reid Hoffman is leading a $12 million funding round in a similar company.


Hans India
Use ChatGPT as second opinion, not primary source: OpenAI executive
New Delhi: OpenAI's latest language model, GPT-5, may be more powerful and accurate than its predecessors, but the company has warned users not to treat ChatGPT as their main source of information. Nick Turley, Head of ChatGPT, said the AI chatbot should be used as a 'second opinion' because it is still prone to mistakes, despite major improvements. In an interview with The Verge, Turley admitted that GPT-5 continues to face the problem of hallucinations, where the system produces information that sounds believable but is factually wrong. OpenAI says it has reduced such errors significantly, but the model still gives incorrect responses about 10 per cent of the time. Turley stressed that achieving 100 per cent reliability is extremely difficult. 'Until we are provably more reliable than a human expert across all domains, we'll continue to advise users to double-check the answers,' he said. 'I think people are going to continue to leverage ChatGPT as a second opinion, versus necessarily their primary source of fact,' he added. Large language models like GPT-5 are trained to predict words based on patterns in huge datasets. While this makes them excellent at generating natural responses, it also means they can provide false information on unfamiliar topics. To address this, OpenAI has connected ChatGPT to search, allowing users to verify results with external sources. Turley expressed confidence that hallucinations will eventually be solved but cautioned that it will not happen in the near future. 'I'm confident we'll eventually solve hallucinations, and I'm confident we're not going to do it in the next quarter,' he said. Meanwhile, OpenAI continues to expand its ambitions. Reports suggest the company is developing its own browser, and CEO Sam Altman has even hinted that OpenAI could consider buying Google Chrome if it were ever put up for sale.
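Turley's point about pattern-based prediction can be illustrated with a toy model. The sketch below is an illustrative bigram model, not how GPT-5 actually works: it picks each next word by looking only at which words followed the previous word in a tiny made-up corpus, so it produces fluent-sounding completions even for subjects it knows nothing about.

```python
import random
from collections import defaultdict

# Tiny training corpus: the model will only ever "know" these patterns.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of italy is rome ."
).split()

# Bigram table: for each word, the list of words seen following it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_text(prompt, n_words=4, seed=0):
    """Extend the prompt by repeatedly sampling a word that followed
    the current last word somewhere in the training corpus."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        options = follows.get(words[-1])
        if not options:
            break  # word never seen in training; nothing to predict
        words.append(rng.choice(options))
    return " ".join(words)

# The model conditions only on the last word ("is"), so it completes an
# unseen subject just as fluently -- with a confidently wrong answer:
print(continue_text("the capital of atlantis is"))
```

Because the model matches only surface patterns, 'the capital of atlantis is' gets completed with one of the capitals from the corpus, stated as fluently as a true sentence. Scaled up by many orders of magnitude and with far richer context, this is the basic mechanism behind plausible-but-wrong answers.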


Indian Express
‘AI is a good debating partner': Zoho's Sridhar Vembu on how to use ChatGPT and other tools
Roughly three years after ChatGPT set off the generative AI boom, many are still trying to figure out how to make the technology a useful part of their everyday work. Even CEOs and top executives of major companies are reportedly grappling with this question, even as their own employees are mandated to use AI to make businesses more efficient and competitive.

In this context, Sridhar Vembu, the co-founder and chief scientist of Zoho Corporation, recently shared a few pointers, in an August 14, 2025 post on X, on how AI is used today internally at the multinational enterprise software company.

Acknowledging the vast potential of AI to accelerate learning, streamline workflows, and enhance product experiences, Vembu also cautioned that over-reliance on or misuse of AI can reverse the expected productivity gains. 'I use AI chat tools daily, at least 2-3 sessions a day. So I would count myself as a moderate to heavy user. I have the top 5 apps installed in my phone and I use all of them,' he said. 'We continue to run a lot of experiments, and I will revise my opinion if and when facts change on the ground,' Vembu added.

Notably, Vembu said that, like many users, he now uses traditional web browsers and search engines a lot less. 'AI helps me learn faster. It is a much better search engine in that sense. My web search has gone down 80% as a direct result,' he said. This is part of a larger shift in user behaviour in which more and more people are turning to AI chatbots like ChatGPT, instead of Google Search, to look up information online.
Vembu also said that Elon Musk-owned xAI's Grok chatbot and its integration on X (formerly Twitter) show that AI can enhance product experience. At the same time, Vembu said he does not support the use of AI tools like ChatGPT or Gemini to create new content. 'AI can help customer support agents do their work faster but it is unwise to let AI replace human agents. It is also unwise for a human to copy paste AI text and send it to a customer, hiding the fact that it came from AI,' he said.

Weighing in on the debate around vibe-coding, Vembu said that code generated using AI tools 'requires a full round of review for compliance, privacy and security, and those are neither easy nor fun for humans to do.' 'If any programmer submits AI generated code without doing all this, they are failing at their job. Doing all of the above may destroy much of the 'productivity gains' in generating code. In some cases, AI may even slow us down,' he further said.