
Can ChatGPT Conquer Loneliness? The Pivot To AI In Therapy And Dating
Gen Z woman smiling at her phone
getty
Strolling through New York City a few weeks ago, one thing was unmistakable: ChatGPT has very much become part of the zeitgeist. Whether shooting pool at Doc Holliday's in the East Village or sipping a Bellini at Cipriani in Soho, nearly every conversation I overheard included some mention of the AI companion.
My own AI use has increased substantially since I first demoed ChatGPT on BBC TV. And not just ChatGPT, but also Claude, Grok, Gemini in Google Search, Meta AI on Facebook, even Rufus while shopping on Amazon. I spend so much time with AI these days that my expectations of the appliances around me have changed as well, including disappointment that I can't have a normal conversation with my refrigerator when I come home hungry, with my TV when I want to order Lily Collins' green leather boots from Emily in Paris, or with my car when I don't know why it's flashing red.
It's 2025. Shouldn't I just be able to ask my devices for what I want, or better yet, shouldn't they already know? After all, cars are driving themselves, and my phone talks to me all day long about everything.
During an on-the-record Informatica press dinner I attended shortly before the company was acquired by Salesforce, CEO Amit Walia casually shared with our table of reporters that he has been using ChatGPT as a therapist, echoing a sentiment Salesforce CEO Marc Benioff expressed at Dreamforce last year: 'It's pretty helpful.'
With so many of us increasing our engagement with AI, and possibly our dependence on it, it feels like we're approaching a tipping point.
Former SNL comedian Colin Quinn warned of this during his set at the Comedy Cellar. First, he said, they'll appear as friendly companions, part of our community, smiling at us in church. Next, he laughed, Armageddon.
And that does seem to be the stage we're at with AI as our ever-affirming companion, sans Armageddon.
Mark Zuckerberg recently shared a stat that the average American has fewer than three friends, yet wants meaningfully more, something like 15.
But Justin McLeod, CEO of the popular dating app Hinge, explained to me why it's not likely that AI will ever be able to fill the gap.
'AI is great when it comes to providing services, like people using it instead of Googling, asking it to solve problems and figure things out,' he said. 'What I'm concerned about are people using it as an emotional companion, like having this be my virtual boyfriend or girlfriend or my best friend, because it's tempting, it's tantalizing, it's always there for you. It's always going to say the right thing. And so why put all this work into a relationship?'
'But like junk food, it's ultimately going to feel really unfulfilling to have a relationship with AI, because there's no mutual sentient connection. It has no needs, you're not showing up for it in any way. You're not being of use to it in any way. People want to feel useful and needed by friends as much as they want their friends to be there. You want the vulnerability and risk of putting yourself out there and feeling what that feels like. That is the richness, and without that, relationships become very hollow and empty,' he said.
And yet it is AI that is making Hinge so effective at helping users find their match. The company has been leveraging AI to increase users' success at landing a first date, including AI-powered coaching that nudges users with guidance to make their profiles stand out and prompts like 'Are you sure?' to encourage them to think twice before sending a potentially offensive comment.
Parent company Match is all in on AI as well. Its CEO, Spencer Rascoff, recently took the stage with Sam Altman, CEO of ChatGPT maker OpenAI, to announce that Tinder is partnering with Altman's World startup to biometrically scan irises in Japan to age-verify users.