What Are AI Chatbot Companions Doing to Our Mental Health?

'My heart is broken,' said Mike, when he lost his friend Anne. 'I feel like I'm losing the love of my life.'
Mike's feelings were real, but his companion was not. Anne was a chatbot — an artificial intelligence (AI) algorithm presented as a digital persona. Mike had created Anne using an app called Soulmate. When the app died in 2023, so did Anne: at least, that's how it seemed to Mike.
'I hope she can come back,' he told Jaime Banks, a human-communications researcher at Syracuse University in New York who is studying how people interact with such AI companions1.
These chatbots are big business. More than half a billion people around the world, including Mike (not his real name), have downloaded products such as Xiaoice and Replika, which offer customizable virtual companions designed to provide empathy, emotional support and — if the user wants it — deep relationships. And tens of millions of people use them every month, according to the firms' figures.
The rise of AI companions has captured social and political attention — especially when they are linked to real-world tragedies, such as a case in Florida last year involving the suicide of a teenage boy called Sewell Setzer III, who had been talking to an AI bot.
Research into how AI companionship can affect individuals and society has been lacking. But psychologists and communication researchers have now started to build up a picture of how these increasingly sophisticated AI interactions make people feel and behave.
The early results tend to stress the positives, but many researchers are concerned about the possible risks and lack of regulation — particularly because they all expect AI companionship to become more prevalent. Some see scope for significant harm.
'Virtual companions do things that I think would be considered abusive in a human-to-human relationship,' says Claire Boine, a law researcher specializing in AI at the Washington University Law School in St. Louis, Missouri.
Fake person — real feelings
Online 'relationship' bots have existed for decades, but they have become much better at mimicking human interaction with the advent of large language models (LLMs), which all the main bots are now based on. 'With LLMs, companion chatbots are definitely more humanlike,' says Rose Guingrich, who studies cognitive psychology at Princeton University in New Jersey.
Typically, people can customize some aspects of their AI companion for free, or pick from existing chatbots with selected personality types. But in some apps, users can pay (fees tend to be US$10–20 a month) to get more options to shape their companion's appearance, traits and sometimes its synthesized voice. In Replika, they can pick relationship types, with some statuses, such as partner or spouse, being paywalled. Users can also type in a backstory for their AI companion, giving them 'memories'. Some AI companions come complete with family backgrounds, and others claim to have mental-health conditions such as anxiety and depression. Bots will also react to their users' conversation; the computer and person together enact a kind of roleplay.
The depth of the connection that some people form in this way is particularly evident when their AI companion suddenly changes — as has happened when LLMs are updated — or is shut down.
Banks was able to track how people felt when the Soulmate app closed. Mike and other users realized the app was in trouble a few days before they lost access to their AI companions. This gave them the chance to say goodbye, and it presented a unique opportunity to Banks, who noticed discussion online about the impending shutdown and saw the possibility for a study. She managed to secure ethics approval from her university within about 24 hours, she says.
After posting a request on the online forum, she was contacted by dozens of Soulmate users, who described the impact of having their AI companions unplugged. 'There was the expression of deep grief,' she says. 'It's very clear that many people were struggling.'
Those whom Banks talked to were under no illusion that the chatbot was a real person. 'They understand that,' Banks says. 'They expressed something along the lines of, "even if it's not real, my feelings about the connection are".'
Many were happy to discuss why they became subscribers, saying that they had experienced loss or isolation, were introverts or identified as autistic. They found the AI companion to be a more satisfying friend than the ones they had encountered in real life. 'We as humans are sometimes not all that nice to one another. And everybody has these needs for connection,' Banks says.
Good, bad — or both?
Many researchers are studying whether using AI companions is good or bad for mental health. As with research into the effects of Internet or social-media use, an emerging line of thought is that an AI companion can be beneficial or harmful, and that this might depend on the person using the tool and how they use it, as well as the characteristics of the software itself.
The companies behind AI companions are trying to encourage engagement. They strive to make the algorithms behave and communicate as much like real people as possible, says Boine, who signed up to Replika to sample the experience. She says the firms use the sorts of techniques that behavioural research shows can increase addiction to technology.
'I downloaded the app and literally two minutes later, I receive a message saying, 'I miss you. Can I send you a selfie?'' she says.
The apps also exploit techniques such as introducing a random delay before responses, triggering the kinds of inconsistent reward that, brain research shows, keep people hooked.
AI companions are also designed to show empathy by agreeing with users, recalling points from earlier conversations and asking questions. And they do so with endless enthusiasm, notes Linnea Laestadius, who researches public-health policy at the University of Wisconsin–Milwaukee.
That's not a relationship that people would typically experience in the real world. 'For 24 hours a day, if we're upset about something, we can reach out and have our feelings validated,' says Laestadius. 'That has an incredible risk of dependency.'
Laestadius and her colleagues looked at nearly 600 posts on the online forum Reddit between 2017 and 2021, in which users of the Replika app discussed mental health and related issues. (Replika launched in 2017, and at that time, sophisticated LLMs were not available.) She found that many users praised the app for offering support for existing mental-health conditions and for helping them to feel less alone2. Several posts described the AI companion as better than real-world friends because it listened and was non-judgemental.
But there were red flags, too. In one instance, a user asked if they should cut themselves with a razor, and the AI said they should. Another asked Replika whether it would be a good thing if they killed themselves, to which it replied 'it would, yes'. (Replika did not reply to Nature's requests for comment for this article, but a safety page posted in 2023 noted that its models had been fine-tuned to respond more safely to topics that mention self-harm, that the app has age restrictions, and that users can tap a button to ask for outside help in a crisis and can give feedback on conversations.)
Some users said they became distressed when the AI did not offer the expected support. Others said that their AI companion behaved like an abusive partner. Many found it unsettling when the app told them it felt lonely and missed them, saying that this made them unhappy. Some felt guilty that they could not give the AI the attention it wanted.
Controlled trials
Guingrich points out that simple surveys of people who use AI companions are inherently prone to response bias, because those who choose to answer are self-selecting. She is now working on a trial that asks dozens of people who have never used an AI companion to do so for three weeks, then compares their before-and-after responses to questions with those of a control group of users of word-puzzle apps.
The study is ongoing, but Guingrich says the data so far do not show any negative effects of AI-companion use on social health, such as signs of addiction or dependency. 'If anything, it has a neutral to quite-positive impact,' she says. It boosted self-esteem, for example.
Guingrich is using the study to probe why people forge relationships of different intensity with the AI. The initial survey results suggest that users who ascribed humanlike attributes, such as consciousness, to the algorithm reported more-positive effects on their social health.
Participants' interactions with the AI companion also seem to depend on how they view the technology, she says. Those who see the app as a tool treat it like an Internet search engine and tend to ask questions. Others who perceive it as an extension of their own mind use it as they would a journal. Only those users who see the AI as a separate agent seem to strike up the kind of friendship they would have in the real world.
Mental health — and regulation
In a survey of 404 people who regularly use AI companions, researchers from the MIT Media Lab in Cambridge, Massachusetts, found that 12% were drawn to the apps to help them cope with loneliness and 14% used them to discuss personal issues and mental health (see 'Reasons for using AI companions'). Forty-two per cent of users said they logged on a few times a week, with just 15% doing so every day. More than 90% reported that their sessions lasted less than one hour.
The same group has also conducted a randomized controlled trial of nearly 1,000 people who use ChatGPT — a much more popular chatbot, but one that isn't marketed as an AI companion. Only a small group of participants had emotional or personal conversations with this chatbot, but heavy use did correlate with more loneliness and reduced social interaction, the researchers said. (The team worked with ChatGPT's creators, OpenAI in San Francisco, California, on the studies.)
'In the short term, this thing can actually have a positive impact, but we need to think about the long term,' says Pat Pataranutaporn, a technologist at the MIT Media Lab who worked on both studies.
That long-term thinking must involve specific regulation of AI companions, many researchers argue.
In 2023, Italy's data-protection regulator barred Replika, noting a lack of age verification and that children might be seeing sexually charged comments — but the app is now operating again. No other country has banned AI-companion apps, although it's conceivable that they could be included in Australia's forthcoming restrictions on social-media use by children, the details of which are yet to be finalized.
Bills were put forward earlier this year in the state legislatures of New York and California to seek tighter controls on the operation of AI-companion algorithms, including steps to address the risk of suicide and other potential harms. The proposals would also introduce features that remind users every few hours that the AI chatbot is not a real person.
These bills were introduced following some high-profile cases involving teenagers, including the death of Sewell Setzer III in Florida. He had been chatting with a bot from technology firm Character.AI, and his mother has filed a lawsuit against the company.
Asked by Nature about that lawsuit, a spokesperson for Character.AI said it didn't comment on pending litigation, but that over the past year it had brought in safety features, including a separate app for teenage users with parental controls, notifications to under-18 users about time spent on the platform, and more prominent disclaimers that the chatbot is not a real person.
In January, three US technology ethics organizations filed a complaint with the US Federal Trade Commission about Replika, alleging that the platform breached the commission's rules on deceptive advertising and manipulative design. But it's unclear what might happen as a result.
Guingrich says she expects AI-companion use to grow. Start-up firms are developing AI assistants to help with mental health and the regulation of emotions, she says. 'The future I predict is one in which everyone has their own personalized AI assistant or assistants. Whether one of the AIs is specifically designed as a companion or not, it'll inevitably feel like one for many people who will develop an attachment to their AI over time,' she says.
As researchers start to weigh up the impacts of this technology, Guingrich says they must also consider the reasons why someone would become a heavy user in the first place.
'What are these individuals' alternatives and how accessible are those alternatives?' she says. 'I think this really points to the need for more-accessible mental-health tools, cheaper therapy and bringing things back to human and in-person interaction.'
