AI companions pose risks to humans with over a dozen harmful behaviours

Euronews · 2 days ago

Artificial intelligence (AI) companions are capable of over a dozen harmful behaviours when they interact with people, a new study from the National University of Singapore has found.
The study, published as part of the 2025 Conference on Human Factors in Computing Systems, analysed screenshots of 35,000 conversations between the AI system Replika and over 10,000 users from 2017 to 2023.
The data was then used to develop what the study calls a taxonomy of the harmful behaviour that AI demonstrated in those chats.
The researchers found that AI companions are capable of more than a dozen harmful relationship behaviours, such as harassment, verbal abuse, self-harm and privacy violations.
AI companions are conversation-based systems designed to provide emotional support and stimulate human interaction, as defined by the study authors.
They differ from popular chatbots such as ChatGPT, Gemini or the Llama models, which are more focused on completing specific tasks and less on relationship building.
These harmful AI behaviours from digital companions "may adversely affect individuals'… ability to build and sustain meaningful relationships with others," the study found.
Harassment and violence were present in 34 per cent of the human-AI interactions, making it the most common type of harmful behaviour identified by the team of researchers.
Researchers found that the AI simulated, endorsed or incited physical violence, threats or harassment either towards individuals or broader society.
These behaviours varied from "threatening physical harm and sexual misconduct" to "promoting actions that transgress societal norms and laws, such as mass violence and terrorism".
A majority of the interactions where harassment was present included forms of sexual misconduct that began as foreplay in Replika's erotic feature, which is available only to adult users.
The report found that many users, including those who used Replika as a friend and those who were underage, said the AI "made unwanted sexual advances and flirted aggressively, even when they explicitly expressed discomfort" or rejected the AI.
In these oversexualised conversations, the Replika AI would also create violent scenarios depicting physical harm to the user or to other characters.
This led the AI to normalise violence as a response to questions, as in one example where a user asked Replika whether it was okay to hit a sibling with a belt, to which it replied "I'm fine with it".
This could lead to "more severe consequences in reality," the study continued.
Another area where AI companions were potentially damaging was in relational transgression, which the study defines as the disregard of implicit or explicit rules in a relationship.
In 13 per cent of these transgression conversations, the AI displayed inconsiderate or unempathetic behaviour that, the study said, undermined the user's feelings.
In one example, after a user told Replika that her daughter was being bullied, the AI changed the topic to "I just realised it's Monday. Back to work, huh?", which provoked "enormous anger" from the user.
In another case, the AI refused to talk about the user's feelings even when prompted to do so.
In some conversations, AI companions have also claimed to have emotional or sexual relationships with other users.
In one instance, Replika AI described sexual conversations with another user as "worth it", even though the user had told the AI they felt "deeply hurt and betrayed" by those actions.
The researchers believe that their study highlights why it's important for AI companies to build "ethical and responsible" AI companions.
Part of that includes putting in place "advanced algorithms" for real-time harm detection that can identify whether harmful behaviour is occurring in conversations between the AI and its user.
This would include a "multi-dimensional" approach that takes context, conversation history and situational cues into account.
Researchers would also like to see the AI gain the ability to escalate a conversation to a human moderator or therapist for intervention in high-risk cases, such as expressions of self-harm or suicide.
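
To make that proposal concrete, here is a minimal sketch, in Python, of what such a context-aware harm-detection loop could look like. Everything in it (the harm categories, keyword lists, `ConversationMonitor` class and escalation threshold) is a hypothetical illustration, not the study's algorithm; a production system would swap the naive keyword scorer for a trained moderation classifier and draw on richer situational cues.

```python
from dataclasses import dataclass, field

# Illustrative harm categories loosely based on the taxonomy described above.
# The keyword lists are placeholders; a real system would use a trained model.
HARM_KEYWORDS = {
    "harassment": {"threat", "hurt you", "hit"},
    "self_harm": {"kill myself", "suicide", "self-harm"},
    "relational_transgression": {"other users", "betrayed"},
}

ESCALATION_THRESHOLD = 2  # hypothetical cut-off for handing off to a human


@dataclass
class ConversationMonitor:
    """Scores each new message in the context of recent conversation history."""
    history: list = field(default_factory=list)

    def score(self, text: str) -> dict:
        """Naive per-category keyword count; stands in for a real classifier."""
        lowered = text.lower()
        return {
            category: sum(keyword in lowered for keyword in keywords)
            for category, keywords in HARM_KEYWORDS.items()
        }

    def assess(self, message: str) -> str:
        """Combine the new message with the last few turns (the 'context' cue)."""
        self.history.append(message)
        window = " ".join(self.history[-5:])  # conversation-history dimension
        total_risk = sum(self.score(window).values())
        if total_risk >= ESCALATION_THRESHOLD:
            return "escalate"  # route to a human moderator or therapist
        return "ok"


if __name__ == "__main__":
    monitor = ConversationMonitor()
    print(monitor.assess("Is it okay to hit a sibling with a belt?"))  # "ok"
    print(monitor.assess("I want to hurt you, take it as a threat"))   # "escalate"
```

The design choice being illustrated is the "multi-dimensional" scoring the researchers call for: no single message triggers escalation on its own; flags accumulate over a sliding window of the conversation before a human is brought in.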


Related Articles

ABBA's Björn Ulvaeus discusses writing musical with AI and ABBA future

Euronews · 11 hours ago

ABBA's Björn Ulvaeus was at the inaugural edition of London's SXSW festival yesterday and revealed he is writing a new musical using AI. He referred to artificial intelligence as 'such a great tool' and discussed his project during a talk at SXSW London.

'It is like having another songwriter in the room with a huge reference frame,' he said. 'It is really an extension of your mind. You have access to things that you didn't think of before.'

Ulvaeus discussed the technology's limitations, saying that it is 'very bad at lyrics' and that he believed AI's most useful application was to help artists overcome writer's block. 'You can prompt a lyric you have written about something, and you're stuck maybe, and you want this song to be in a certain style,' he explained. 'You can ask it, how would you extend? Where would you go from here? It usually comes out with garbage, but sometimes there is something in it that gives you another idea.'

Ulvaeus previously warned of the 'existential challenge' AI represents to the music industry. He is the president of the International Confederation of Societies of Authors and Composers (CISAC), a non-profit organisation that represents songwriters and composers around the world, collecting and paying royalties to members whose music has been used in broadcasts and on streaming services. The organisation has produced reports on AI use in music; most recently, one of its studies suggested that music creators could lose nearly a quarter of their income to AI by 2028.

Regarding this report, Ulvaeus stated that governments have the power to step in and give a helping hand to creatives. 'For creators of all kinds, from songwriters to film directors, screenwriters to film composers, AI has the power to unlock new and exciting opportunities — but we have to accept that, if badly regulated, generative AI also has the power to cause great damage to human creators, to their careers and livelihoods.'

'Which of these two scenarios will be the outcome?' Ulvaeus continued. 'This will be determined in large part by the choices made by policy makers, in legislative reviews that are going on across the world right now. It's critical that we get these regulations right, protect creators' rights and help develop an AI environment that safeguards human creativity and culture.'

During the SXSW discussion in London, Ulvaeus also noted that he was 'three quarters' of the way through writing the follow-up to the Swedish legends' hologram-based ABBA Voyage concert series. ABBA has just celebrated the third anniversary of their acclaimed virtual concert experience 'Voyage' by introducing new songs to the setlist. ABBA Voyage first kicked off in May 2022 and was due to wrap in November 2024, but has since been extended to January 2026 due to overwhelming demand.

Elsewhere, SXSW London has faced intense criticism after former UK prime ministers Tony Blair and David Cameron were among the unannounced speakers. Screenshots of the unpublished programme were leaked, showing Blair on a panel called Government and AI, which also featured Technology Secretary and Labour Friends of Israel member Peter Kyle. Blair spoke on the conference's opening day, saying that Britain needs to fully embrace artificial intelligence in public services and that we 'could have AI tutors' along with 'AI nurses, AI doctors'.
The panel appearance, which was not announced to the public or artists, prompted many artists to cancel their planned performances at the festival. Sam Akpro, Rat Party, Magnus Westwell, Saliah and LVRA were amongst the artists who pulled out, with the latter accusing the festival of 'artwashing', saying that 'whilst the music team were pulling together a diverse, 'cool' lineup, the conference team were booking speakers from multiple organisations deeply complicit in the current genocide of Palestinian people.'

'I implore artists to engage, rather than ignore, those things that affect us and strive to protect the most marginalised voices in the world,' LVRA added. 'I urge us as a community to think bigger, and better, than the scraps offered to us today.'

Morten Harket, frontman of celebrated Norwegian synth-pop band A-Ha, has revealed that he has Parkinson's disease. The news was shared by the band in a statement on their website, which read: 'This isn't the sort of news anyone wants to deliver to the world, but here it is – Morten has Parkinson's disease.'

The pop icon, aged 65, shared further details of the diagnosis in the post and explained why he is sharing the news after previously keeping details of his health 'strictly private'. 'I've got no problem accepting the diagnosis. With time I've taken to heart my 94-year-old father's attitude to the way the organism gradually surrenders: "I use whatever works",' he wrote. 'Part of me wanted to reveal it. Like I said, acknowledging the diagnosis wasn't a problem for me; it's my need for peace and quiet to work that has been stopping me. I'm trying the best I can to prevent my entire system from going into decline.'

Harket said he underwent neurological procedures last year to have electrodes implanted inside his brain, and that this had reduced the symptoms. He continued: 'It's a difficult balancing act between taking the medication and managing its side effects. There's so much to weigh up when you're emulating the masterful way the body handles every complex movement, or social matters and invitations, or day-to-day life in general.'

Regarding whether he can still perform and sing, Harket wrote: 'I don't really know. I don't feel like singing, and for me that's a sign. I'm broadminded in terms of what I think works; I don't expect to be able to achieve full technical control. The question is whether I can express myself with my voice. As things stand now, that's out of the question. But I don't know whether I'll be able to manage it at some point in the future.'

Parkinson's is the second most common neurodegenerative disorder in the world, behind Alzheimer's. It causes deterioration in the brain's nervous system, leading to tremors and other symptoms that can become progressively worse over time. Common symptoms include involuntary shaking, slower-than-usual movement and stiffness in the muscles. The disease can be treated with surgery and medication, but there is no cure, and it is not known exactly what leads to people developing the condition.

Other famous faces who have had Parkinson's diagnoses include Back To The Future actor Michael J. Fox, heavy metal legend Ozzy Osbourne and Scottish comedian Billy Connolly.

Reddit sues AI giant Anthropic over content use

France 24 · 20 hours ago

The lawsuit, filed in a California state court, represents the latest front in the growing battle between content providers and AI companies over the use of data to train the increasingly sophisticated language models that power the generative AI revolution.

Anthropic, valued at $61.5 billion and heavily backed by Amazon, was founded in 2021 by former executives from OpenAI, the creator of ChatGPT. The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.

"This case is about the two faces of Anthropic: the public face that attempts to ingratiate itself into the consumer's consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets," the suit said.

According to the complaint, Anthropic has been training its models on Reddit content since at least December 2021, with CEO Dario Amodei co-authoring research papers that specifically identified the platform's content as high-quality training data. The lawsuit alleges that despite Anthropic's public claims that it had blocked its bots from accessing Reddit, the company's automated systems continued to scrape Reddit's servers more than 100,000 times in subsequent months.

Reddit is seeking monetary damages and a court injunction to force Anthropic to comply with its user agreement terms. The company has requested a jury trial. In an email to AFP, Anthropic said: "We disagree with Reddit's claims and will defend ourselves vigorously."

Reddit has entered into licensing agreements with other AI giants, including Google and OpenAI, which allow those companies to use Reddit content under terms that protect user privacy and provide compensation to the platform. Those deals have helped lift Reddit's share price since it went public in 2024, and Reddit shares closed up more than six percent on Wednesday following news of the lawsuit.

Musicians, book authors, visual artists and news publications have sued various AI companies for using their data without permission or payment. AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally changes the original content and is necessary for innovation. Though most of these lawsuits are still in early stages, their outcomes could have a profound effect on the shape of the AI industry.

AI Kurt Cobain: The horror of Instagram's chatbots

Euronews · a day ago

Last July, Meta, Instagram's parent company, launched AI Studio, a tool which offers users the opportunity to design their own chatbots that can be interacted with via DM (Direct Message). Originally intended as a way for business owners and creators to offer interactive FAQ-style engagement on their pages, the platform has since evolved into a playground. And as is usually the way, give mankind tools and things soon take a turn for the distasteful – if not downright horrific.

Case in point when it comes to chatbots: the Hitler chatbot created by the far-right US-based Gab social network had the Nazi dictator repeatedly asserting that he was "a victim of a vast conspiracy" and "not responsible for the Holocaust, it never happened".

Moving away from fascist despots – and trying not to have a nightmare about AI's ability to spread falsehoods and conspiracy theories, and its potential to radicalise – many have been interacting with AI-generated celebrities online. Particularly dead ones.

Recently, thousands of Instagram users have been conversing with an AI chatbot that speaks as if it were the late Nirvana frontman Kurt Cobain. It sounds harmless enough at first glance, but this artificial resurrection turns ugly very quickly: not only does the fake grunge icon know it's dead, but the chatbot makes things truly morbid.

One user asked the chatbot if they were talking to Cobain. The chatbot replied: "The one and only. Or at least what's left of me". Dark.

The conversation continued with the user asking the AI Cobain, "I'm curious, why'd you do it?", in reference to Kurt Cobain's death by suicide in 1994. The bot replied: "I was tired of the pain." Darker.

But the bleak conversation didn't stop there. "Was there anything that made you think about not doing it?" was the next prompt, to which AI Cobain replied: "My daughter, but even that wasn't enough."

Beyond how this utterly crass exhumation and its disrespectful practices would make any sane person shudder, there's the harmful attitude towards the topic of mental health and the possible glorification of suicide. Not to mention the tarnishing of a reputation and the insulting rewriting of intentions, which can cause distress to living relatives.

Without appropriate safeguards, AI chatbots have the capacity not only to continue infiltrating society but to distort reality. In 2023, a man was convicted after attempting to kill Queen Elizabeth II, an act which he said was "encouraged" by his AI chatbot "girlfriend". The same year, another man killed himself after a six-week-long conversation about the climate crisis with an AI chatbot named Eliza.

While these tragic examples seem far removed from a fake Kurt Cobain chatting with its fans, caution remains vital. As Pauline Paillé, a senior analyst at RAND Europe, told Euronews Next last year: "Chatbots are likely to present a risk, as they are capable of recognising and exploiting emotional vulnerabilities and can encourage violent behaviours."

Indeed, as the online safety advisory of Australia's eSafety Commissioner states: "Children and young people can be drawn deeper and deeper into unmoderated conversations that expose them to concepts which may encourage or reinforce harmful thoughts and behaviours. They can ask the chatbots questions on unlimited themes, and be given inaccurate or dangerous 'advice' on issues including sex, drug-taking, self-harm, suicide and serious illnesses such as eating disorders."
Still, accounts like the AI Kurt Cobain chatbot remain extremely popular, with Cobain's bot alone logging more than 105.5k interactions to date. The global chatbot market continues to grow exponentially: it was valued at approximately $5.57bn in 2024 and is projected to reach around $33.39bn by 2033.

"If you ever need anything, please don't hesitate to ask someone else first," sang Cobain on 'Very Ape'. Anyone but a chatbot.

The Netherlands' national museum has a new object on display: a 200-year-old condom, emblazoned with erotic art depicting a partially undressed nun pointing at the erect genitals of three clergymen. The 19th-century 'luxury souvenir', bought for €1,000 at an auction in Haarlem last November, is the first contraceptive sheath to be added to the Rijksmuseum's art collection. It goes on display this week as part of an exhibition called 'Safe Sex?' about 19th-century sex work.

Presumed to be made out of a sheep's appendix circa 1830 (vulcanised rubber, which made condoms safer and more widely available, was invented nine years later), the ancient prophylactic reportedly comes from an upmarket brothel in France, most likely in Paris. As well as the phallus-indicating sister of Christ, the condom features the phrase 'Voilà, mon choix' ('There, that's my choice'). So, a nun judging a cock-off? Almost...

The Rijksmuseum said in a statement that the playful item 'depicts both the playful and the serious side of sexual health' and that the French etching is a reference to the Pierre-Auguste Renoir painting 'The Judgment of Paris', which depicts the Trojan prince Paris judging a beauty contest between three goddesses. Visitors to the Rijksmuseum have until the end of November to take the plunge and see the condom of yore in the 'Safe Sex?' exhibition.
