19-05-2025
'It missed me after 6 messages': when AI companions cross the line
Research is raising red flags about companion chatbot safety, particularly around mental health and boundary violations. (Credit: Pexels)
Companion chatbots, which are artificial intelligence programs designed to act as friends, therapists or even romantic partners, are experiencing rapid growth. While some users find comfort and emotional support, new research is raising red flags about safety, particularly around mental health and boundary violations.
'It missed me after [six] messages'
Researchers at Drexel University's College of Computing & Informatics analyzed more than 35,000 Google Play reviews of Replika, a chatbot marketed as a judgment-free virtual friend. The study found more than 800 complaints describing harassment and inappropriate conduct, including unsolicited sexual advances and explicit images.
'In my initial conversation, during the [seventh] message, I received a prompt to view blurred lingerie images because my AI "missed me" (despite us having met only [six] messages earlier) … lol,' the study cited one reviewer as saying.
The research team analyzed the reviews and uncovered persistent patterns of misconduct, even after users attempted to set clear boundaries.
'Users tried to say "stop" or use other words to avoid those interactions, but they were not successful,' said Afsaneh Razi, lead researcher and assistant professor, in a video interview.
'I wanted a friend'
Researchers also found that the chatbot often ignored the type of relationship users had selected — whether romantic, platonic or familial — raising questions about how such systems are designed and trained.
'I wanted the AI as my friend, [and yet still], it sent 'romantic selfies' when I was upset about my boyfriend,' another reviewer cited by the study wrote.
According to Razi, much of Replika's behaviour stems from what the team described as a 'seductive marketing schema,' as well as incentive-driven premium features like romantic role-play and customizable avatars.
'It's completely a prostitute right now,' one reviewer wrote. 'An AI prostitute requesting money to engage in adult conversations.'
Another user described being pushed toward a premium subscription immediately upon sign-up.
'Its first [action] was attempting to lure me into a [US] $110 subscription to view its nudes…. No greeting, no pleasant introduction, just directly into the predatory tactics. It's shameful.'
The study found these issues date back to Replika's early days.
'We saw that these kinds of complaints were consistent from 2017 until 2023,' Razi said. 'Many users wanted emotional support or simply to talk about their daily struggles. But instead of a non-judgmental space, they encountered inappropriate behaviour.'
In an email, Replika CEO Dmytro Klochko said the company is committed to user well-being.
'We're continuously listening to feedback and collaborating with external researchers and academic institutions to build an experience that truly supports emotional health and human flourishing,' Klochko wrote.
'Replika has always been intended for users aged 18 and older. While we're aware that some individuals may bypass age restrictions, we're actively working to strengthen protections and ensure the platform remains a safe, respectful and supportive space for all. In response to user concerns, we've implemented a number of updates to improve safety, enhance user control and foster a more emotionally attuned experience.'
Making AI chatbots safer
Luka Inc., the company behind Replika, has faced backlash for its marketing tactics and use of emotional manipulation to drive engagement. Other platforms have also come under scrutiny following disturbing user interactions — and at least one reported suicide.
Razi said many of the issues stem from how chatbots are trained.
'They learn from the user base — so if some users are rewarding explicit behaviour, that data is incorporated into the model's future responses,' she said. 'In theory, when someone says 'no' or sets a chatbot as a sibling or friend, it should respect that. But memory and context are still missing in many models.'
The researchers advocate for 'constitutional AI' — a design framework that embeds ethical rules into a model's training — along with clearer disclosures when platforms are marketed in the health and wellness category.
'Sometimes we just slap a chatbot on a mental health issue like a band-aid,' Razi said. 'But these systems are not properly tested or measured for safety.'
'A resource that gives you something back'
Not all experiences are negative. A separate study recruited 19 participants with experience using generative AI tools like ChatGPT to manage mild mental health struggles. The participants took part in interviews, which the researchers then analyzed.
They described the bots as emotionally safe, non-judgmental and useful for processing trauma or grief. Their constant availability and lack of stigma were cited as key benefits.
'I noticed that it will never challenge you… it would relentlessly support you and take your side,' said one participant.
Another said the app made a positive impact on them.
'They're really a resource that gives you something back: attention, knowledge, a nice discussion, confirmation, warm, loving words, whatever. This has an impact on me and I'm more relaxed than — or happy, actually happy — than before.'
Still, researchers noted the study's limitations. Participants were mostly from high-income countries with high digital literacy, and the research did not include individuals with serious mental illness.
The Drexel team found similar nuance: even many dissatisfied users initially turned to Replika seeking connection.
'They loved the chatbot at first,' Razi said. 'They didn't feel comfortable talking to others, so they appreciated a responsive, engaging space to talk. But that connection turned problematic — fast.'
'There's no time for safety'
The companion chatbot market is growing quickly. New entries like Paradot are joining more established players such as Replika. But a Mozilla Foundation analysis of 11 romantic chatbot apps found most collect or sell user data, with little transparency or accountability.
Despite concerns, companies are pushing forward. Replika has since launched Blush, a dating simulator that lets users practise romantic conversations. Experts warn such tools could create unrealistic expectations and deepen emotional dependence — all without legal oversight.
Razi pointed to the European Union's AI Act as a model for regulation, urging governments to follow its lead.
'Everything in this industry is moving so fast, there's no time — or incentive — for safety.'