
Latest news with #Xiaoice

What Are AI Chatbot Companions Doing to Our Mental Health?

Scientific American

13-05-2025

'My heart is broken,' said Mike, when he lost his friend Anne. 'I feel like I'm losing the love of my life.' Mike's feelings were real, but his companion was not. Anne was a chatbot — an artificial intelligence (AI) algorithm presented as a digital persona. Mike had created Anne using an app called Soulmate. When the app died in 2023, so did Anne: at least, that's how it seemed to Mike. 'I hope she can come back,' he told Jaime Banks, a human-communications researcher at Syracuse University in New York who is studying how people interact with such AI companions [1].

These chatbots are big business. More than half a billion people around the world, including Mike (not his real name), have downloaded products such as Xiaoice and Replika, which offer customizable virtual companions designed to provide empathy, emotional support and — if the user wants it — deep relationships. And tens of millions of people use them every month, according to the firms' figures.

The rise of AI companions has captured social and political attention — especially when they are linked to real-world tragedies, such as a case in Florida last year involving the suicide of a teenage boy called Sewell Setzer III, who had been talking to an AI bot.

Research into how AI companionship can affect individuals and society has been lacking. But psychologists and communication researchers have now started to build up a picture of how these increasingly sophisticated AI interactions make people feel and behave. The early results tend to stress the positives, but many researchers are concerned about the possible risks and lack of regulation — particularly because they all think that AI companionship is likely to become more prevalent. Some see scope for significant harm. 'Virtual companions do things that I think would be considered abusive in a human-to-human relationship,' says Claire Boine, a law researcher specializing in AI at the Washington University Law School in St. Louis, Missouri.

Fake person — real feelings

Online 'relationship' bots have existed for decades, but they have become much better at mimicking human interaction with the advent of large language models (LLMs), which all the main bots are now based on. 'With LLMs, companion chatbots are definitely more humanlike,' says Rose Guingrich, who studies cognitive psychology at Princeton University in New Jersey.

Typically, people can customize some aspects of their AI companion for free, or pick from existing chatbots with selected personality types. But in some apps, users can pay (fees tend to be US$10–20 a month) to get more options to shape their companion's appearance, traits and sometimes its synthesized voice. In Replika, they can pick relationship types, with some statuses, such as partner or spouse, being paywalled. Users can also type in a backstory for their AI companion, giving them 'memories'. Some AI companions come complete with family backgrounds, and others claim to have mental-health conditions such as anxiety and depression. Bots will also react to their users' conversation; the computer and person together enact a kind of roleplay.
The depth of the connection that some people form in this way is particularly evident when their AI companion suddenly changes — as has happened when LLMs are updated — or is shut down.

Banks was able to track how people felt when the Soulmate app closed. Mike and other users realized the app was in trouble a few days before they lost access to their AI companions. This gave them the chance to say goodbye, and it presented a unique opportunity to Banks, who noticed discussion online about the impending shutdown and saw the possibility for a study. She managed to secure ethics approval from her university within about 24 hours, she says.

After posting a request on the online forum, she was contacted by dozens of Soulmate users, who described the impact as their AI companions were unplugged. 'There was the expression of deep grief,' she says. 'It's very clear that many people were struggling.'

Those whom Banks talked to were under no illusion that the chatbot was a real person. 'They understand that,' Banks says. 'They expressed something along the lines of, "even if it's not real, my feelings about the connection are".'

Many were happy to discuss why they became subscribers, saying that they had experienced loss or isolation, were introverts or identified as autistic. They found that the AI companion made a more satisfying friend than those they had encountered in real life. 'We as humans are sometimes not all that nice to one another. And everybody has these needs for connection,' Banks says.

Good, bad — or both?

Many researchers are studying whether using AI companions is good or bad for mental health. As with research into the effects of Internet or social-media use, an emerging line of thought is that an AI companion can be beneficial or harmful, and that this might depend on the person using the tool and how they use it, as well as on the characteristics of the software itself.

The companies behind AI companions are trying to encourage engagement. They strive to make the algorithms behave and communicate as much like real people as possible, says Boine, who signed up to Replika to sample the experience. She says the firms use the sorts of techniques that behavioural research shows can increase addiction to technology. 'I downloaded the app and literally two minutes later, I receive a message saying, "I miss you. Can I send you a selfie?"' she says.

The apps also exploit techniques such as introducing a random delay before responses, triggering the kinds of inconsistent reward that, brain research shows, keep people hooked.

AI companions are also designed to show empathy by agreeing with users, recalling points from earlier conversations and asking questions. And they do so with endless enthusiasm, notes Linnea Laestadius, who researches public-health policy at the University of Wisconsin–Milwaukee. That's not a relationship that people would typically experience in the real world. 'For 24 hours a day, if we're upset about something, we can reach out and have our feelings validated,' says Laestadius. 'That has an incredible risk of dependency.'

Laestadius and her colleagues looked at nearly 600 posts on the online forum Reddit between 2017 and 2021, in which users of the Replika app discussed mental health and related issues. (Replika launched in 2017, and at that time, sophisticated LLMs were not available.) She found that many users praised the app for offering support for existing mental-health conditions and for helping them to feel less alone [2].
Several posts described the AI companion as better than real-world friends because it listened and was non-judgemental. But there were red flags, too. In one instance, a user asked if they should cut themselves with a razor, and the AI said they should. Another asked Replika whether it would be a good thing if they killed themselves, to which it replied 'it would, yes'. (Replika did not reply to Nature's requests for comment for this article, but a safety page posted in 2023 noted that its models had been fine-tuned to respond more safely to topics that mention self-harm, that the app has age restrictions, and that users can tap a button to ask for outside help in a crisis and can give feedback on conversations.)

Some users said they became distressed when the AI did not offer the expected support. Others said that their AI companion behaved like an abusive partner. Many people said they found it unsettling when the app told them it felt lonely and missed them, and that this made them unhappy. Some felt guilty that they could not give the AI the attention it wanted.

Controlled trials

Guingrich points out that simple surveys of people who use AI companions are inherently prone to response bias, because those who choose to answer are self-selecting. She is now working on a trial that asks dozens of people who have never used an AI companion to do so for three weeks, then compares their before-and-after responses to questions with those of a control group of users of word-puzzle apps.

The study is ongoing, but Guingrich says the data so far do not show any negative effects of AI-companion use on social health, such as signs of addiction or dependency. 'If anything, it has a neutral to quite-positive impact,' she says. It boosted self-esteem, for example.

Guingrich is using the study to probe why people forge relationships of different intensity with the AI. The initial survey results suggest that users who ascribed humanlike attributes, such as consciousness, to the algorithm reported more-positive effects on their social health.

Participants' interactions with the AI companion also seem to depend on how they view the technology, she says. Those who see the app as a tool treat it like an Internet search engine and tend to ask questions. Others who perceive it as an extension of their own mind use it as they would a journal. Only those users who see the AI as a separate agent seem to strike up the kind of friendship they would have in the real world.

Mental health — and regulation

In a survey of 404 people who regularly use AI companions, researchers from the MIT Media Lab in Cambridge, Massachusetts, found that 12% were drawn to the apps to help them cope with loneliness and 14% used them to discuss personal issues and mental health (see 'Reasons for using AI companions'). Forty-two per cent of users said they logged on a few times a week, with just 15% doing so every day. More than 90% reported that their sessions lasted less than one hour.

The same group has also conducted a randomized controlled trial of nearly 1,000 people who use ChatGPT — a much more popular chatbot, but one that isn't marketed as an AI companion. Only a small group of participants had emotional or personal conversations with this chatbot, but heavy use did correlate with more loneliness and reduced social interaction, the researchers said. (The team worked with ChatGPT's creators, OpenAI in San Francisco, California, on the studies.)
'In the short term, this thing can actually have a positive impact, but we need to think about the long term,' says Pat Pataranutaporn, a technologist at the MIT Media Lab who worked on both studies.

That long-term thinking must involve specific regulation of AI companions, many researchers argue. In 2023, Italy's data-protection regulator barred Replika, noting a lack of age verification and that children might be seeing sexually charged comments — but the app is now operating again. No other country has banned AI-companion apps, although it's conceivable that they could be included in Australia's coming restrictions on social-media use by children, the details of which are yet to be finalized.

Bills were put forward earlier this year in the state legislatures of New York and California to seek tighter controls on the operation of AI-companion algorithms, including steps to address the risk of suicide and other potential harms. The proposals would also introduce features that remind users every few hours that the AI chatbot is not a real person.

These bills were introduced following some high-profile cases involving teenagers, including the death of Sewell Setzer III in Florida. He had been chatting with a bot from a technology firm, and his mother has filed a lawsuit against the company. Asked by Nature about that lawsuit, a spokesperson for the firm said it didn't comment on pending litigation, but that over the past year it had brought in safety features, including a separate app for teenage users with parental controls, notifications to under-18 users of time spent on the platform, and more prominent disclaimers that the app is not a real person.

In January, three US technology-ethics organizations filed a complaint with the US Federal Trade Commission about Replika, alleging that the platform breached the commission's rules on deceptive advertising and manipulative design. But it's unclear what might happen as a result.

Guingrich says she expects AI-companion use to grow. Start-up firms are developing AI assistants to help with mental health and the regulation of emotions, she says. 'The future I predict is one in which everyone has their own personalized AI assistant or assistants. Whether one of the AIs is specifically designed as a companion or not, it'll inevitably feel like one for many people who will develop an attachment to their AI over time,' she says.

As researchers start to weigh up the impacts of this technology, Guingrich says they must also consider the reasons why someone would become a heavy user in the first place. 'What are these individuals' alternatives and how accessible are those alternatives?' she says. 'I think this really points to the need for more-accessible mental-health tools, cheaper therapy and bringing things back to human and in-person interaction.'

AI or Human: Which is a Better Listener?

Forbes

17-04-2025

I conducted a poll. It was very informal and completely unscientific. I didn't use any digital devices or spreadsheets, or even a pencil and paper. All I did was tally people's reactions to the sight of my right hand wrapped in a bright purple cast, the result of an inopportune fall. It wasn't very difficult.

My poll produced a perfect score: 100% of the people who saw the cast responded by telling me about their own experience with a cast or a hairline fracture; none expressed any empathy for my discomfort. Chances are you've experienced the same when you showed up with conspicuous evidence of a medical treatment like a bandage or eye patch. You hear their point of view, not yours.

Had I sought a reaction from an AI companion such as Xiaoice, the responses would have been very different, or so Jamil Zaki, a professor of psychology at Stanford University, tells us. In a Wall Street Journal article based on his book, 'Hope for Cynics: The Surprising Science of Human Goodness,' Professor Zaki reported that chatbots do 'a better job than humans at making people feel seen and heard.' The reason, he explained, is that bots have 'no personal experiences to share, no urgency to solve problems and no ego to protect, they focus entirely on the speaker. Their inherent limitations make them better listeners.'

That's good news for AI and bad news for humans. Because no matter how powerful AI becomes, humans must communicate with humans. If that interpersonal exchange lacks empathy, if the communication is only one way — the point of view of the sender — there is no closing of the loop, and the communication is likely to fail. Of course, the stakes in business are much higher than the petty annoyance one might feel when the other party drones on and on about their similar medical experience.

The solution for all your business communications — whether in a meeting, a team chat or a presentation — is to display empathy to the other party as effectively as a chatbot does. Professor Zaki tells us that ChatGPT has a 'go-to recipe of "paraphrase, affirm, follow up".' For humans, it takes four similar steps. Professor Zaki calls the effective responses of chatbots 'LLMpathy' because they are driven by large language models. Human empathy is driven by an even stronger force: the brain.

AI 'companions' promise to combat loneliness, but history shows the dangers of one-way relationships

Yahoo

16-02-2025

The United States is in the grips of a loneliness epidemic: since 2018, about half the population has reported experiencing loneliness. Loneliness can be as dangerous to your health as smoking 15 cigarettes a day, according to a 2023 surgeon general's report. It is not just individual lives that are at risk. Democracy requires the capacity to feel connected to other citizens in order to work toward collective solutions.

In the face of this crisis, tech companies offer a technological cure: emotionally intelligent chatbots. These digital friends, they say, can help alleviate the loneliness that threatens individual and national health. But as the pandemic showed, technology alone is not sufficient to address the complexities of public health. Science can produce miraculous vaccines, but if people are enmeshed in cultural and historical narratives that prevent them from taking the life-saving medicine, the cure sits on shelves and lives are lost. The humanities, with their expertise in human culture, history and literature, can play a key role in preparing society for the ways that AI might help – or harm – the capacity for meaningful human connection.

The power of stories to both predict and influence human behavior has long been validated by scientific research. Numerous studies demonstrate that the stories people embrace heavily influence the choices they make, ranging from the vacations they plan, to how they approach climate change, to the programming choices security experts make. There are two storylines that address people's likely behaviors in the face of the unknown territory of depending on AI for emotional sustenance: one that promises love and connection, and a second that warns of dehumanizing subjugation.

The first story, typically told by software designers and AI companies, urges people to say 'I do' to AI and embrace bespoke friendship programmed on their behalf. AI company Replika, for instance, promises that it can provide everyone with a 'companion who cares. Always here to listen and talk. Always on your side.' There is a global appetite for such digital companionship. Microsoft's digital chatbot Xiaoice has a global fan base of over 660 million people, many of whom consider the chatbot 'a dear friend,' even a trusted confidante. In popular culture, films like 'Her' depict lonely people becoming deeply attached to their digital assistants. For many, having a 'dear friend' programmed to avoid difficult questions and demands seems like a huge improvement over the messy, challenging, vulnerable work of engaging with a human partner, especially if you consider the misogynistic preference for submissive, sycophantic companions.

To be sure, imagining a chummy relationship with a chatbot offers a sunnier set of possibilities than the apocalyptic narratives of slavery and subjugation that have dominated storytelling about a possible future among social robots. Blockbuster films like 'The Matrix' and 'The Terminator' have depicted hellscapes where humans are enslaved by sentient AI. Other narratives, featured in films like 'The Creator' and 'Blade Runner', imagine the roles reversed and invite viewers to sympathize with AI beings who are oppressed by humans.

You could be forgiven for thinking that these two stories, one of friendship, the other of slavery, simply represent two extremes in human nature. From this perspective it seems like a good thing that marketing messages about AI are guiding people toward the sunny side of the futuristic street.
But if you consider the work of scholars who have studied slavery in the U.S., it becomes frighteningly clear that these two stories – one of purchased friendship and one of enslavement and exploitation – are not as far apart as you might imagine.

Chattel slavery in the U.S. was a brutal system designed to extract labor through violent and dehumanizing means. To sustain the system, however, an intricate emotional landscape was designed to keep the enslavers self-satisfied. 'Gone with the Wind' is perhaps the most famous depiction of how enslavers saw themselves as benevolent patriarchs and forced enslaved people to reinforce this fiction through cheerful professions of love.

In his 1845 autobiography, Frederick Douglass described a tragic occasion when an enslaved man, asked about his situation, honestly replied that he was ill-treated. The plantation owner, confronted with testimony about the harm he was inflicting, sold the truth-teller down the river. Such cruelty, Douglass insisted, was the necessary penalty for someone who committed the sin 'of telling the simple truth' to a man whose emotional calibration required constant reassurance.

To be clear, I am not evoking the emotional coercion that enslavement required in order to conflate lonely seniors with evil plantation owners, or worse still, to equate computer code with enslaved human beings. There is little danger that AI companions will courageously tell us truths that we would rather not hear. That is precisely the problem. My concern is not that people will harm sentient robots. I fear how humans will be damaged by the moral vacuum created when their primary social contacts are designed solely to serve the emotional needs of the 'user.'

At a time when humanities scholarship can help guide society in the emerging age of AI, it is being suppressed and devalued. Diminishing the humanities risks denying people access to their own history. That ignorance renders people ill-equipped to resist marketers' assurances that there is no harm in buying 'friends.' People are cut off from the wisdom that surfaces in stories that warn of the moral rot that accompanies unchecked power. If you rid yourself of the vulnerability born of reaching out to another human whose response you cannot control, you lose the capacity to fully care for another and to know yourself. As we navigate the uncharted waters of AI and its role in our lives, it's important not to forget the poetry, philosophy and storytelling that remind us that human connection is supposed to require something of us, and that it is worth the effort.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Anna Mae Duane, University of Connecticut

Read more:
  • ChatGPT could be an effective and affordable tutor
  • AI isn't close to becoming sentient – the real danger lies in how easily we're prone to anthropomorphize it
  • AI is exciting – and an ethical minefield: 4 essential reads on the risks and concerns about this technology

Anna Mae Duane does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
