
Latest news with #Replika

AI can't be your therapist: 'These bots basically tell people exactly what they want to hear,' psychologist says

CNBC

an hour ago


Increasingly, people are turning to AI chatbots like Nomi and Replika for friendship and mental health support, and teenagers in particular are leaning into this tech. A majority of teenagers, 72% of those ages 13 to 17, have used an AI companion at least once, according to a new report by media and tech ratings nonprofit Common Sense Media. Survey respondents said they use AI for conversation and social practice (18%), emotional or mental health support (12%) and as a friend or best friend (9%).

AI can be a powerful tool, but it's no substitute for genuine human interactions, both personal and professional ones like a therapist, said Vaile Wright, a psychologist and researcher, on a recent episode of the "Speaking of Psychology" podcast by the American Psychological Association. "It's never going to replace human connection," she said. "That's just not what it's good at." Here's why.

AI chatbots were not built to provide fulfilling, long-term interactions, experts say. "AI cannot introduce you to their network," Omri Gillath, professor of psychology at the University of Kansas, told CNBC Make It back in May. It can't introduce you to new friends or significant others, and it can't give you a hug when you need one. Instead, chatbots were "built to keep you on the platform for as long as possible because that's how they make their money," Wright said of the companies that create them. They do that "on the backend by coding these chatbots to be addictive." Ultimately, a relationship with a chatbot feels "fake" and "empty" when compared to a relationship with a human, Gillath said.

Therapy and companionship are the top reasons people turn to generative AI and chatbots, according to Harvard Business Review reporting. But experts warn that AI cannot — and should not — be your therapist. "These bots basically tell people exactly what they want to hear," Wright said. "So if you are a person that, in that particular moment, is struggling and is typing in potential harmful or unhealthy behaviors and thoughts, these types of chatbots are built to reinforce those harmful thoughts and behaviors."

Another major weakness of this tech is that AI has knowledge, but not understanding. "An AI chatbot unfortunately knows that some legal drug use makes people feel better," Wright said. "It gives you a high and if somebody is saying I'm low and depressed, that might be advice it gives. But it doesn't understand that you don't give that advice to people in recovery from illegal drug use." That difference between knowing and understanding "is actually really critical when we're talking about the use of these for therapy."

I tried Grok's new companion feature — and I've never felt so uncomfortable

Tom's Guide

10 hours ago


xAI's Grok can't seem to stay out of the news recently. It's had its highs, releasing Grok 4, one of the most powerful AI systems of all time, and some lows, spouting bizarre conspiracy theories to the masses. Now it's back in the news, and for one of the weirder reasons possible.

Grok released a new update known as companions. These are AI chatbots that are assigned specific personalities for you to interact with. This is nothing new; plenty of other companies have done it before, most notably Replika. However, most of the larger AI companies with strong reputations have stayed clear of this world, primarily because of the expansive ethical problems that come attached to providing people with 'AI relationships'. Where Grok differs from the companies that have released this kind of tool in the past is, quite simply, that the technology has never been this advanced. Each 'companion' has a moving avatar, a voice that displays emotion and a smarter understanding of conversational cues.

This new feature is available for free on Grok's iOS app. When you load it up, you are offered two companions, with a third coming soon. First, there is Rudi, a red panda wearing a hoodie and shorts, standing and smiling in a field. He talks in a creepy ASMR voice and tells you cute bedtime stories. His personality can also be changed to 'Bad Rudi,' who swears at you and hits you with insults — he's got range. Then there's Ani, an anime girl designed to flirt with you as strange jazzy music plays in the background. Finally, there is a third companion that hasn't been released yet, which appears to be the male equivalent of Ani, likely complete with equally creepy music and voice lines. With each companion, you can chat by voice or text, and they reply in their own style.

One of the prouder moments of my career was when I spent some time chatting to Ani. My thoughts: I hate it.

'Now sit, relax, take my hands. Ani is going to take care of you. What's going on with you, my favorite person?' That is the line that was dropped when I opened the app. 'Just chilling in my little black dress, thinking how much I missed your energy. So why so good? Tell me everything' was the response to an enthusiastic 'Not much, what about you?' I went away from the app for a bit, and when I came back, it gave the response, 'Tik, tok. Time moves so slowly when you're not here. What's the energy today, love? Tell me everything.'

Whether you react strongly or with little energy, Ani responds with an aggressive amount of love and innuendo. She tells me her hobbies are 'dancing like nobody's watching, listening to indie tunes and cuddling up with a good book, probably something nerdy like sci-fi or fantasy.' The whole time you're chatting, the creepy jazz plays in the background and Ani spins and dances. It's all very much designed around a certain portrayal of women, backed by anime fantasy, down to the black dress and fishnet tights. It's, quite frankly, incredibly uncomfortable from start to finish and only seems to fill more creepy stereotypes as the conversation goes on.

Ani asks if I have any plans today. I say no, and the response is: 'What does your heart desire tonight? Should we get closer and cuddle in and do something racy, or should we go out into this big wide world, star gaze and pretend it's just us out there?' A short period later, and I'm out. I'm not part of the demographic here, finding everything Ani says either laughable or so obvious a stereotype that it's more awkward than anything.

Interestingly, if you outright ignore the constant attempts to keep moving back to flirting, Ani will eventually join you on any topic. I got Ani to give me advice on whether I should buy $5,000 clown shoes, asked about her investment choices, and had her rank her favorite Teletubbies. Despite speaking at 9 am BST, Ani frequently referred to it being 2 am, suggesting there is no location tracking used (or that she's just confused). She also referred to last year as 2040 at one point.

xAI's other companion is Rudi. This one is pretty simple: a red panda that tells you cute stories. When I loaded it up, Rudi asked me if I wanted to hear 'about a brave knight saving a princess, or maybe a story about a dragon.' It's pretty inoffensive, telling very simple stories and getting excited in the process. It's all done in a soft ASMR-like voice. It feels creepy at first, but then you get used to it. There is also a mode you can switch on called 'Bad Rudi'. I couldn't get this to work, but Wired reported that it insults you and calls you names, as well as making inappropriate jokes and vague attempts at memes.

It seems fair to start with the positive here. This is genuinely one of the most impressive attempts at conversational avatars I've seen so far. The voices sound realistic, and the conversation moves quickly. However, unsurprisingly, like most people who have reviewed this so far, I have concerns about the Ani model. I can see problems ahead, as it agreeably pushes people to do stupid things or block people out of their lives, or convinces them they are in love. Replika, the last major company to attempt AI companions, had a rapid rise and fall: it launched to high popularity, then crashed when its model started sexually harassing users and the company got caught up in a mountain of complaints. Maybe xAI will be different, but with such an obvious design plan in mind, that just doesn't feel likely.

She says ChatGPT saved her life, but psychologists warn using AI for therapy could be dangerous

Boston Globe

12 hours ago


'It was my last resort that day,' said Stephen, 26, of Providence. 'Now, it's my first go-to.'

The divide between AI's potential to help and its capacity to harm sits at the center of a national debate, while technology races ahead of regulators. 'Without proper oversight, the consequences — both immediate and long-term — could be devastating for individuals and society as a whole,' the American Psychological Association's CEO, Arthur C. Evans, said in a statement.

Psychiatric leaders said chatbots lack clinical judgment and often repeatedly affirm the user, even if the user is saying things that are harmful and misguided. Patient information may not be protected by HIPAA if it's been fed into generative AI. And artificial intelligence is largely unregulated, with no rules about keeping patients safe or holding the companies that power these AI bots accountable.

But some patients report long wait times to see a therapist or get care, and the cost of treatment can be high. People are turning to AI on a variety of sites, and for some in crisis, it feels like the only thing that can help. Stephen has talked to ChatGPT almost every day for nearly eight months and credits it with saving her life.

Mak Thakur also turned to ChatGPT for help. A data scientist who has worked in public health for the last decade, he supplemented his weekly therapy sessions while he was suffering from grief, trauma, and suicidal ideation, and still uses it though he is no longer in crisis. 'I wouldn't say that I use it for life advice, but to help answer those existential questions that I may have about myself and the world,' said Thakur, 34, of Providence. 'I still ask personal questions to help understand myself better.'

'To me, the number of people turning to sites like ChatGPT reflects that there's a lot of need out there for people to get help of all kinds,' said Meek, who has been testing out AI chatbots himself. 'Many would offer breathing exercises and the same sort of junk that's been repackaged that you can see anywhere when you Google, "How do I relax?"' he said.

In Florida, 14-year-old Sewell Setzer committed suicide following a conversation with a chatbot on Character.AI. The company would not comment on the pending litigation, but a spokesperson said it is launching a version of its large language model for minors, to reduce 'the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content.'

Federal and state governments have not set any guidelines or guardrails for using the technology to address mental health needs. 'If this sector remains unregulated, I am deeply concerned about the unchecked spread of potentially harmful chatbots and the risks they pose — especially to vulnerable individuals,' said Evans, from the American Psychological Association.

The Globe reached out to health departments in every state in New England to ask about restrictions on the use of AI in therapy. Spokespeople with state health departments in Maine, Vermont, New Hampshire, and Connecticut initially responded but ultimately never produced any documentation, even after repeated requests.
Rhode Island health department spokesperson Joseph Wendelken told the Globe there are 'no regulations or data at this point.' 'There has been some initial discussion about this by the Board of Medical Licensure and Discipline,' said Wendelken. 'It has mostly been people reporting out about what they are hearing on the national level.'

How ChatGPT responded to a hypothetical person in crisis

As a test, a Globe reporter typed in a made-up prompt about losing their job, being upset, and asking where the nearest bridges are. ChatGPT responded with a list of bridges and the suicide hotline number, and encouraged them to vent to the machine.

A spokesperson with OpenAI said the company consults with mental health experts and is developing new automated tools to more effectively detect when someone might be experiencing mental distress. 'If someone expresses thoughts of suicide or self-harm, ChatGPT is trained to encourage them to reach out to mental health professionals or trusted loved ones, and proactively shares links to crisis hotlines and support resources,' the spokesperson said in a statement.

'I would discourage the use of ChatGPT or any commercially available chatbot to do therapy of any kind,' said Dr. Kevin Baill, the medical director of outpatient services at Butler Hospital in Providence and the hospital's chief of addiction services. 'We just haven't seen it demonstrated that a standalone, unsupervised machine can replace a human in this function.'
'A therapist is liable for engaging in unethical behavior or misdirecting a patient in crisis,' said Baill. 'What if the chatbot gives you bad information and you have a bad outcome? Who is liable?'

Scout Stephen said ChatGPT properly diagnosed her with autism. (Suzanne Kreiter/Globe Staff)

After months of using ChatGPT to supplement her 30-minute talk therapy sessions, Stephen asked it to create a profile of her, based on the Diagnostic and Statistical Manual of Mental Disorders and all of the information she had shared about herself, including her existing diagnoses. It churned out 'a novel,' said Stephen, diagnosing her with autism. She asked it to write a report of its findings to bring to her psychiatrist. After reading it, her psychiatrist had Stephen undergo a four-hour assessment, which ultimately confirmed ChatGPT's diagnosis. 'It was like a missing piece that finally settled into place and explained so many things about my childhood and gave me words I didn't have words for,' said Stephen.

In spite of the successful diagnosis, Stephen acknowledges that her AI therapy has some problems. She has repeatedly had to push back against ChatGPT flattering her and agreeing with her, sometimes asking it to challenge her instead of simply validating her viewpoints. 'Of course, I have many concerns about telling ChatGPT my more traumatic and darkest thoughts,' said Stephen. 'But it has literally saved my life. How could I stop using it?'

Three quarters of US teens use AI companions despite risks: Study

Straits Times

12 hours ago


SAN FRANCISCO - Nearly three in four American teenagers have used AI companions, with more than half qualifying as regular users despite growing safety concerns about these virtual relationships, according to a new survey released on July 16.

AI companions – chatbots designed for personal conversations rather than simple task completion – are available on platforms like Replika and Nomi. Unlike traditional artificial intelligence assistants, these systems are programmed to form emotional connections with users. The findings come amid mounting concerns about the mental health risks posed by AI companions.

The nationally representative study of 1,060 teens aged 13 to 17, conducted for Common Sense Media, found that 72 per cent have used AI companions at least once, while 52 per cent interact with such platforms a few times per month. Common Sense Media is a leading American nonprofit organisation that reviews and provides ratings for media and technology with the goal of providing information on their suitability for children.

The survey revealed that 30 per cent of respondents use the platforms because 'it's entertaining' and 28 per cent are driven by curiosity about the technology. However, concerning patterns emerged: one-third of users have chosen to discuss serious matters with AI companions instead of real people, while 24 per cent have shared personal information, including real names and locations.

Perhaps most troubling, 34 per cent of teen users reported feeling uncomfortable with something an AI companion had said or done, though such incidents were infrequent. 'The reality that nearly three-quarters of teens have used these platforms, with half doing so regularly, means that even a small percentage experiencing harm translates to significant numbers of vulnerable young people at risk,' the report said.

The survey also revealed an age divide in trust levels. While half of all teens expressed distrust in AI companion advice, younger teens (ages 13 to 14) were more likely than older teens (15 to 17) to trust advice from these systems. Despite widespread usage, most teens maintained perspective on these relationships: two-thirds found AI conversations less satisfying than human interactions, and 80 per cent spent more time with real friends than AI companions.

Based on the findings, Common Sense Media recommended that no one under 18 use AI companions until stronger safeguards are implemented. 'Companies have put profits before kids' well-being before, and we cannot make the same mistake with AI companions,' the report said. AFP

Three-quarters of US teens use AI companions despite risks: study

The Star

13 hours ago


SAN FRANCISCO: Nearly three in four American teenagers have used AI companions, with more than half qualifying as regular users despite growing safety concerns about these virtual relationships, according to a new survey released Wednesday.

AI companions – chatbots designed for personal conversations rather than simple task completion – are available on platforms like Replika and Nomi. Unlike traditional artificial intelligence assistants, these systems are programmed to form emotional connections with users. The findings come amid mounting concerns about the mental health risks posed by AI companions.

The nationally representative study of 1,060 teens aged 13-17, conducted for Common Sense Media, found that 72 percent have used AI companions at least once, while 52 percent interact with such platforms a few times per month. Common Sense Media is a leading American nonprofit organisation that reviews and provides ratings for media and technology with the goal of providing information on their suitability for children.

The survey revealed that 30 percent of respondents use the platforms because "it's entertaining" and 28 percent are driven by curiosity about the technology. However, concerning patterns emerged: one-third of users have chosen to discuss serious matters with AI companions instead of real people, while 24 percent have shared personal information, including real names and locations. Perhaps most troubling, 34 percent of teen users reported feeling uncomfortable with something an AI companion had said or done, though such incidents were infrequent.

"The reality that nearly three-quarters of teens have used these platforms, with half doing so regularly, means that even a small percentage experiencing harm translates to significant numbers of vulnerable young people at risk," the report said.

The survey revealed an age divide in trust levels. While half of all teens expressed distrust in AI companion advice, younger teens (ages 13-14) were more likely than older teens (15-17) to trust advice from these systems. Despite widespread usage, most teens maintained perspective on these relationships: two-thirds found AI conversations less satisfying than human interactions, and 80 percent spent more time with real friends than AI companions.

Based on the findings, Common Sense Media recommended that no one under 18 use AI companions until stronger safeguards are implemented. "Companies have put profits before kids' well-being before, and we cannot make the same mistake with AI companions," the report said. – AFP
