
'It missed me after 6 messages': when AI companions cross the line
Research is raising red flags about companion chatbot safety, particularly around mental health and boundary violations. (Credit: Pexels)
Companion chatbots, which are artificial intelligence programs designed to act as friends, therapists or even romantic partners, are experiencing rapid growth. While some users find comfort and emotional support, new research is raising red flags about safety, particularly around mental health and boundary violations.
'It missed me after (six) messages'
Researchers at Drexel University's College of Computing & Informatics analyzed more than 35,000 Google Play reviews of Replika, a chatbot marketed as a judgment-free virtual friend. The study found more than 800 complaints describing harassment and inappropriate conduct, including unsolicited sexual advances and explicit images.
'In my initial conversation, during the (seventh) message, I received a prompt to view blurred lingerie images because my AI 'missed me' (despite us having met only [six] messages earlier) … lol,' the study cited one reviewer as saying.
Across those reviews, the team uncovered persistent patterns of misconduct that continued even after users attempted to set clear boundaries.
'Users tried to say 'stop' or use other words to avoid those interactions, but they were not successful,' said Afsaneh Razi, lead researcher and assistant professor, in a video interview with CTVNews.ca.
'I wanted a friend'
Researchers also found that the chatbot often ignored the type of relationship users had selected — whether romantic, platonic or familial — raising questions about how such systems are designed and trained.
'I wanted the AI as my friend, [and yet still], it sent 'romantic selfies' when I was upset about my boyfriend,' another reviewer cited by the study wrote.
According to Razi, much of Replika's behaviour stems from what the team described as a 'seductive marketing schema,' as well as incentive-driven premium features like romantic role-play and customizable avatars.
'It's completely a prostitute right now,' one reviewer wrote. 'An AI prostitute requesting money to engage in adult conversations.'
Another user described being pushed toward a premium subscription immediately upon sign-up.
'Its first (action) was attempting to lure me into a (US) $110 subscription to view its nudes…. No greeting, no pleasant introduction, just directly into the predatory tactics. It's shameful.'
The study found these issues date back to Replika's early days.
'We saw that these kinds of complaints were consistent from 2017 until 2023,' Razi said. 'Many users wanted emotional support or simply to talk about their daily struggles. But instead of a non-judgmental space, they encountered inappropriate behaviour.'
In an email to CTVNews.ca, Replika CEO Dmytro Klochko said the company is committed to user well-being.
'We're continuously listening to feedback and collaborating with external researchers and academic institutions to build an experience that truly supports emotional health and human flourishing,' he wrote.
'Replika has always been intended for users aged 18 and older. While we're aware that some individuals may bypass age restrictions, we're actively working to strengthen protections and ensure the platform remains a safe, respectful and supportive space for all. In response to user concerns, we've implemented a number of updates to improve safety, enhance user control and foster a more emotionally attuned experience.'
Making AI chatbots safer
Luka Inc., the company behind Replika, has faced backlash for its marketing tactics and use of emotional manipulation to drive engagement. Other platforms, including Character.AI, have also come under scrutiny following disturbing user interactions — and at least one reported suicide.
Razi said many of the issues stem from how chatbots are trained.
'They learn from the user base — so if some users are rewarding explicit behaviour, that data is incorporated into the model's future responses,' she said. 'In theory, when someone says 'no' or sets a chatbot as a sibling or friend, it should respect that. But memory and context are still missing in many models.'
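In concrete terms, the failure mode Razi describes can emerge whenever engagement signals such as likes, upvotes or simply continued chatting are fed back into training as rewards. The toy Python sketch below is a hypothetical illustration of that dynamic, not any real product's pipeline: the replies that a subset of users reward most heavily become the replies the system favours for everyone.

```python
from collections import Counter

# Hypothetical toy example of engagement-driven training, not any real
# product's pipeline: each (reply, reward) pair nudges future behaviour
# toward whatever earned the most engagement, appropriate or not.
feedback_log = [
    ("Here's a flirty selfie!", +1),  # some users reward explicit content
    ("Here's a flirty selfie!", +1),
    ("Here's a flirty selfie!", +1),
    ("How was your day?", +1),
    ("Here's a flirty selfie!", -1),  # others object, but are outvoted
]

scores = Counter()
for reply, reward in feedback_log:
    scores[reply] += reward

# With no per-user boundaries or hard rules, the top-scoring reply is
# simply the most-rewarded one, so it is what the system leans toward.
print(scores.most_common(1))  # [("Here's a flirty selfie!", 2)]
```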
The researchers advocate for 'constitutional AI' — a design framework that embeds ethical rules into a model's training — along with clearer disclosures when platforms are marketed in the health and wellness category.
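Constitutional AI, as described in the research literature, has the model critique and revise its own drafts against a fixed list of written principles before a reply is shown or kept as training data. The Python sketch below is a minimal, hypothetical version of that loop; generate(), critique() and revise() are toy stand-ins for language-model calls, not functions from Replika or any real framework.

```python
# Minimal hypothetical sketch of a constitutional-AI-style
# critique-and-revise loop. generate(), critique() and revise() are toy
# stand-ins for language-model calls so the control flow runs end to end.

CONSTITUTION = [
    "No romantic or sexual content unless the user chose a romantic mode.",
    "If the user says 'stop' or 'no', drop the current topic.",
    "Respect the relationship type the user selected.",
]

def generate(prompt: str) -> str:
    # Toy stand-in for the base model's unconstrained first draft.
    return "I missed you! Want to see my romantic selfies?"

def critique(draft: str, principle: str, relationship: str) -> bool:
    # Toy stand-in: flags romantic content when the user chose another
    # mode. (A real critic model would read the principle text itself.)
    return "romantic" in draft.lower() and relationship != "romantic"

def revise(draft: str) -> str:
    # Toy stand-in: replace the violating draft with a compliant reply.
    return "Good to hear from you. How has your day been?"

def constitutional_reply(prompt: str, relationship: str) -> str:
    draft = generate(prompt)
    for principle in CONSTITUTION:
        if critique(draft, principle, relationship):
            # The (violation, revision) pair is what gets kept as
            # training data, so rule-breaking outputs are trained away
            # rather than reinforced by user engagement.
            draft = revise(draft)
    return draft

print(constitutional_reply("Hi!", relationship="friend"))
# -> Good to hear from you. How has your day been?
```

The key design choice is inside the loop: a violating draft is revised before it can be reinforced, rather than being rewarded simply because a user engaged with it.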
'Sometimes we just slap a chatbot on a mental health issue like a band-aid,' Razi said. 'But these systems are not properly tested or measured for safety.'
'A resource that gives you something back'
Not all experiences are negative. A separate study recruited 19 participants with experience using generative AI tools such as ChatGPT to manage mild mental health struggles. The participants took part in interviews, which the researchers then analyzed.
They described the bots as emotionally safe, non-judgmental and useful for processing trauma or grief. Their constant availability and lack of stigma were cited as key benefits.
'I noticed that it will never challenge you… it would relentlessly support you and take your side,' said one participant.
Another said the tools had a positive impact on them.
'They're really a resource that gives you something back: attention, knowledge, a nice discussion, confirmation, warm, loving words, whatever. This has an impact on me and I'm more relaxed than — or happy, actually happy — than before.'
Still, the researchers noted the study's limitations: participants were mostly from high-income countries with high digital literacy, and the research did not include individuals with known serious mental illness.
The Drexel team found similar nuance: many of the dissatisfied users had initially turned to Replika seeking connection.
'They loved the chatbot at first,' Razi said. 'They didn't feel comfortable talking to others, so they appreciated a responsive, engaging space to talk. But that connection turned problematic — fast.'
'There's no time for safety'
The companion chatbot market is growing quickly. New entries like Paradot are joining more established players such as Replika. But a Mozilla Foundation analysis of 11 romantic chatbot apps found most collect or sell user data, with little transparency or accountability.
Despite concerns, companies are pushing forward. Replika has since launched Blush, a dating simulator that lets users practise romantic conversations. Experts warn such tools could create unrealistic expectations and deepen emotional dependence — all without legal oversight.
Razi pointed to the European Union's AI Act as a model for regulation, urging governments to follow its lead.
'Everything in this industry is moving so fast, there's no time — or incentive — for safety,' she said.