People Are Using AI As Couple's Therapy — And Experts Are Giving It A Side-Eye
Picture this: You're at brunch with a friend opening up about her marriage struggles. As she shares, you offer the classic advice: better communication, more date nights, maybe trying couples therapy. All solid suggestions, but now there's a new player in the relationship-help arena: artificial intelligence.
TikTok is now filled with videos promoting AI as an alternative to traditional therapy. One creator demonstrates using ChatGPT to craft the perfect text: 'Help me write a message telling my husband his not listening hurt my feelings, though I'm not angry.' Another creator excitedly promotes ChatGPT as a marriage counseling substitute for those who find therapy unaffordable.
This raises important questions: Are we truly at a point where AI can meaningfully support our most intimate relationships? Could artificial intelligence actually strengthen marriages, or is it merely a quick fix that falls short of addressing deeper issues? Experts weigh in on the ways marital and relationship struggles may benefit or suffer from this tool.
Artificial intelligence is now weaving itself into the delicate fabric of our most intimate relationships, aided by a growing number of AI-powered relationship tools.
Apps like Replika offer AI companionship through simulated supportive conversations, while platforms such as Paired and Lasting provide couples with personalized guidance and interactive quizzes. More sophisticated options like Woebot apply cognitive-behavioral therapy principles to help users process emotional conflicts. Innovations like The Ring create an 'emotional telepathy' between partners by tracking biometric data such as heart rhythms and vocal tones to reveal emotional states even when couples are physically apart.
Why would someone trust something faceless and digital with their deepest relationship struggles?
According to Dr. Judy Ho, board-certified clinical and forensic neuropsychologist, the appeal is multi-faceted: 'People are drawn to AI because it offers immediate feedback, anonymity and 24/7 access. It seems to also help people feel they are not alone because so much of AI is very conversational in its application. It's especially appealing for individuals who might be hesitant to engage in therapy due to stigma, cost or logistical barriers.'
Unlike traditional therapy, which requires building rapport over time, AI creates an immediate comfort zone, helping users open up about communication patterns and offering evidence-based strategies to improve relationship dynamics. Even seeking out couples therapy can be daunting.
When couples can access relationship guidance privately, affordably and without judgment at any hour of the day, they're more likely to address issues before they become insurmountable. This accessibility factor alone may explain why many couples are increasingly turning to artificial intelligence as their first line of relationship support.
While AI offers convenient relationship support, serious limitations should give us pause before we crown these digital tools as our relationship gurus. Dr. Ho highlights perhaps the most fundamental flaw: AI simply cannot match the nuanced human intuition, authentic empathy and contextual understanding that meaningful relationship work requires.
'AI tools cannot replace the emotional depth and flexibility of real therapeutic conversations,' Ho emphasizes. 'They may oversimplify complex issues like trauma, trust breaches or deep-seated resentment.' This one-size-fits-all approach falls short when addressing the unique complexities of individual relationships, where context is everything.
Privacy concerns represent another major red flag. The intimate details couples share with AI platforms may not have the same protection as information disclosed to human therapists bound by confidentiality laws. In an era of frequent data breaches, these vulnerable disclosures could potentially be exposed.
Christopher Kaufmann, adjunct professor of business at Southern California State University, points to this regulatory gap: 'We have HIPAA concerns as almost all language learning models learn from the user interactions over time,' he said. 'Thus privacy issues are in the grey area here as legislators are fighting to catch up.'
While human therapists operate under strict confidentiality guidelines, AI systems exist in a murky regulatory landscape.
Certified sexologist and relational tech expert Kaamna Bhojwani adds that AI systems remain fundamentally imperfect, often providing information that can be inaccurate or biased. She acknowledges that while basic guidance may be helpful, AI is simply not equipped to handle critical situations involving mental illness, suicide risk or culturally sensitive issues.
Bhojwani raises another concerning possibility: 'There is a risk of forming addictive and antisocial relationships with these technologies and viewing them as a substitute instead of a complement to human relationships.'
Rather than strengthening real human connections, over-reliance on AI could potentially undermine the very relationships people are trying to improve. So if you and your partner are on thin ice, think about these limitations before approaching AI with blind enthusiasm.
The growing popularity of AI relationship tools raises the question of whether these digital assistants will replace traditional couples therapy. Weighing AI's value against its inherent limitations, experts say not yet.
'AI is augmenting therapy, not replacing it,' said Ho. 'It's serving as a bridge for many who might otherwise avoid traditional counseling.'
She characterizes AI tools as 'first responders' that can effectively handle everyday relationship maintenance and minor issues by providing quick resources. However, when facing deeper wounds and entrenched negative patterns, human therapists remain irreplaceable.
Bhojwani acknowledges that AI models will continue to improve with more data, making their outputs increasingly sophisticated. Nevertheless, she remains skeptical about AI dominating relationship therapy.
'I think it's naive to think that any one intervention or tool can 'fix' or break a relationship,' Bhojwani said. 'Human discernment and agency still play a critical role, especially in how we ask the questions, assess the responses, and implement changes in our lives.'
Kaufmann adds an important perspective on boundaries. 'As with any relationship, the key is setting boundaries, something we all have challenges with. Using AI to build emotional intelligence skills can be effective, but it's the user's responsibility to focus on the person's behavior and realize how to accept or not accept that behavior.'
The consensus among experts points to a future where AI serves as a valuable complementary tool in relationship health, one that requires getting all parties on board, recognizing when it's not the right tool for the problem at hand, and seeking professional help when needed.