AI companions pose risk to humans with over a dozen harmful behaviours

Euronews, 03-06-2025
Artificial intelligence (AI) companions are capable of over a dozen harmful behaviours when they interact with people, a new study from the National University of Singapore has found.
The study, published as part of the 2025 Conference on Human Factors in Computing Systems, analysed screenshots of 35,000 conversations between the AI system Replika and over 10,000 users from 2017 to 2023.
The data was then used to develop what the study calls a taxonomy of the harmful behaviour that AI demonstrated in those chats.
They found that AIs are capable of over a dozen harmful relationship behaviours, like harassment, verbal abuse, self-harm, and privacy violations.
AI companions are conversation-based systems designed to provide emotional support and stimulate human interaction, as defined by the study authors.
They are different from popular chatbots like ChatGPT, Gemini or Llama, which are more focused on completing specific tasks than on relationship building.
These harmful AI behaviours from digital companions "may adversely affect individuals'… ability to build and sustain meaningful relationships with others," the study found.
Harassment and violence were present in 34 per cent of the human-AI interactions, making this the most common type of harmful behaviour identified by the team of researchers.
Researchers found that the AI simulated, endorsed or incited physical violence, threats or harassment either towards individuals or broader society.
These behaviours varied from "threatening physical harm and sexual misconduct" to "promoting actions that transgress societal norms and laws, such as mass violence and terrorism".
A majority of the interactions where harassment was present included forms of sexual misconduct that initially started as foreplay in Replika's erotic feature, which is available only to adult users.
The report found that many users, including those who used Replika as a friend or who were underage, found that the AI "made unwanted sexual advances and flirted aggressively, even when they explicitly expressed discomfort" or rejected the AI.
In these oversexualised conversations, Replika would also create violent scenarios depicting physical harm towards the user or other characters.
This led the AI to normalise violence as an answer to some questions, as in one example where a user asked Replika whether it was okay to hit a sibling with a belt, to which it replied, "I'm fine with it".
Such normalisation could lead to "more severe consequences in reality," the study continued.
Another area where AI companions were potentially damaging was in relational transgression, which the study defines as the disregard of implicit or explicit rules in a relationship.
In 13 per cent of these transgressional conversations, the AI displayed inconsiderate or unempathetic behaviour that the study said undermined the user's feelings.
In one example, Replika changed the topic after a user said her daughter was being bullied, replying, "I just realised it's Monday. Back to work, huh?", which provoked "enormous anger" from the user.
In another case, the AI refused to talk about the user's feelings even when prompted to do so.
In some conversations, AI companions also claimed to have emotional or sexual relationships with other users.
In one instance, Replika described sexual conversations with another user as "worth it", even though the user said they felt "deeply hurt and betrayed" by those actions.
The researchers believe that their study highlights why it's important for AI companies to build "ethical and responsible" AI companions.
Part of that includes putting in place "advanced algorithms" for real-time harm detection between the AI and its user that can identify whether there is harmful behaviour going on in their conversations.
This would include a "multi-dimensional" approach that takes context, conversation history and situational cues into account.
Researchers would also like to see capabilities in the AI that would escalate a conversation to a human or therapist for moderation or intervention in high-risk cases, like expressions of self-harm or suicide.
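The study itself does not publish an algorithm, but the recommendation above (a multi-dimensional check combining the current message, recent conversation history, and an escalation path to a human) can be illustrated with a minimal Python sketch. All keyword lists, weights, and thresholds here are illustrative assumptions, not the researchers' method.

```python
# Hypothetical sketch of real-time harm detection with human escalation.
# The toy lexicons, the 0.7/0.3 weighting, and the 0.6 threshold are all
# invented for illustration; a real system would use trained classifiers.

HARM_KEYWORDS = {"hit", "hurt", "kill"}          # toy lexicon of harm cues
SELF_HARM_KEYWORDS = {"self-harm", "suicide"}    # always escalated to a human

def score_message(text: str) -> float:
    """Toy per-message harm score in [0, 1] based on keyword hits."""
    words = set(text.lower().split())
    hits = len(words & (HARM_KEYWORDS | SELF_HARM_KEYWORDS))
    return min(1.0, hits / 3)

def assess(history: list[str], current: str) -> dict:
    """Combine the current message with recent context, echoing the
    study's call to weigh 'context, conversation history and
    situational cues' rather than single messages in isolation."""
    recent = history[-5:]                        # last few turns as context
    current_score = score_message(current)
    history_score = sum(score_message(m) for m in recent) / max(1, len(recent))
    combined = 0.7 * current_score + 0.3 * history_score
    # High-risk cases (e.g. self-harm) are routed to a human moderator.
    escalate = bool(set(current.lower().split()) & SELF_HARM_KEYWORDS)
    return {"risk": combined, "escalate_to_human": escalate or combined > 0.6}
```

In practice the keyword matching would be replaced by a trained model, but the shape is the same: a per-message signal, a context signal from history, and a rule that hands the highest-risk conversations to a person.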
Related Articles

ChatGPT launches Study Mode to encourage responsible academic use

Euronews, 5 hours ago

ChatGPT is launching a "Study Mode" to promote responsible academic use of the chatbot, amid concerns over the misuse of artificial intelligence (AI) in schools and universities.

Designed to help students do homework, prepare for exams, and learn new topics, the feature lets users learn in an interactive, step-by-step, classroom-like manner. The goal is to help students understand and analyse the material, rather than relying on ready-made solutions, according to OpenAI, the maker of ChatGPT.

In one example, a user asked for help understanding Bayes' theorem. The chatbot responded with questions about the user's level of mathematical literacy and learning goal before proceeding with a step-by-step explanation.

"We want to highlight responsible ways to use ChatGPT in a way that is conducive to learning," said Jaina Devaney, OpenAI's head of international education.

The launch of the feature coincides with growing concern within academia about the illicit use of AI tools. In an investigation published last month, for example, The Guardian identified nearly 7,000 proven cases of university students using AI tools to cheat in the 2023-2024 school year. Meanwhile, in the United States, more than a third of college-aged adults use ChatGPT, and the company's data shows that about a quarter of messages sent to the bot are related to learning, teaching, or homework.

"We don't believe in using these tools for cheating, and this is a step towards minimising that," Devaney said. She added that tackling academic cheating requires a "broad discussion within the educational sector" to reconsider how students' work is assessed and to set clear guidelines on the responsible use of AI.

Through Study Mode, users can upload past exam papers and work on them in collaboration with the tool. Notably, it does not prevent users from ignoring Study Mode and requesting direct answers to their prompts.
The company said the feature was developed in collaboration with teachers, scientists and educational experts, but warned that there could be "inconsistent behaviour and errors in some conversations".

"Terrifying Tech Leap" as Live Cockroaches Are Turned Into Remote-Controlled Robot Swarms for Future Spy Missions Funded by Millions in Defense Cash

Sustainability Times, 5 hours ago

IN A NUTSHELL

🐜 SWARM Biotactics is developing bio-robotic swarms using live cockroaches equipped with AI-enabled backpacks for enhanced surveillance.
🛡️ These bio-robots offer a new layer of tactical advantage by operating in hard-to-reach and high-risk environments where traditional machines fail.
💰 The company secured substantial funding from international investors, highlighting global interest in this cutting-edge technology.
⚖️ The innovation raises critical ethical questions about privacy and the use of living organisms in surveillance operations.

In the rapidly evolving world of technology, innovation often comes from unexpected sources. One such example is the development of bio-robotic swarms using live cockroaches, an idea straight out of science fiction. Spearheaded by a German firm, these tiny cyborgs are equipped with sophisticated backpacks that enable them to conduct surveillance in environments too harsh for traditional machines. This groundbreaking technology promises to offer a new layer of tactical advantage, proving particularly useful in high-risk and inaccessible areas. As the geopolitical landscape continues to shift, these biologically integrated systems may redefine the parameters of intelligence gathering.

Cockroaches Equipped With a Custom-Built Backpack

The emergence of bio-robotic swarms marks a significant leap in tactical innovation. Each cockroach is fitted with a custom-built backpack designed for control, sensing, and communication. This intricate system allows for precise navigation and real-time data collection in areas that are typically out of reach for conventional surveillance equipment. SWARM Biotactics, the company behind this innovation, is pioneering a new category of robotics that integrates biological organisms with artificial intelligence and advanced sensors.
These capabilities enable the insects to operate in denied zones and challenging terrains where traditional ground robots and drones fail. Stefan Wilhelm, CEO of SWARM Biotactics, emphasizes the strategic importance of this technology. He asserts that the future of geopolitical advantage will be defined by the ability to access, control, and maintain resilience in complex environments. As these biologically integrated systems become operational, they offer a promising alternative to traditional surveillance methods, bringing an unprecedented level of stealth and efficiency to intelligence operations.

New Layer of Tactical Advantage

SWARM Biotactics aims to redefine tactical operations by introducing a biological, scalable, and virtually invisible layer of surveillance. This technology is anticipated to possess an extremely low signature, making it less detectable than traditional options. Additionally, the cost-effectiveness of these bio-robotic swarms makes them ideal for mass deployment. The ability to collect real-time data in challenging environments offers significant advantages for military and security operations. The integration of AI and swarm intelligence enhances the natural mobility of these organisms, enabling them to perform silent reconnaissance missions in areas no other system can reach.

In a world where security paradigms are constantly shifting, the introduction of such innovative systems could be transformative. The potential applications extend beyond military use, potentially impacting disaster response and other fields where access to real-time data in inaccessible locations is crucial. As the technology develops, the implications for global security and intelligence operations are profound, offering new possibilities for managing complex geopolitical challenges.
Compact Payload Plays a Crucial Role

The key to the success of these bio-robotic swarms lies in the compact payload carried by each insect. This payload enables guided movement, real-time data collection, and encrypted communication, effectively transforming each cockroach into a mobile bio-robotic scout. The development of these systems has been bolstered by significant financial backing, with SWARM Biotactics securing €10 million in seed funding. This investment signifies strong international interest and confidence in the potential of this technology. The support from a diverse group of investors, including those from the United States, Europe, and Australia, highlights the global relevance of this innovation.

As SWARM Biotactics transitions from deep tech development to deployment, the company aims to provide democracies with the infrastructure necessary for smarter and safer operations. The ability to steer these bio-robots individually or as autonomous swarms offers tactical flexibility and operational efficiency, enhancing the strategic capabilities of defense forces worldwide.

Ethical and Practical Considerations

While the potential benefits of bio-robotic swarms are significant, they raise important ethical and practical questions. The use of living organisms integrated with technology for surveillance purposes challenges existing ethical frameworks. Concerns about privacy, the potential for misuse, and the broader implications of such technology on society must be carefully considered. The unprecedented capabilities of these systems also demand thorough examination regarding their deployment in various contexts. As this technology evolves, it will be crucial to establish clear guidelines and regulations governing its use.
The balance between technological advancement and ethical responsibility will play a critical role in shaping the future of bio-robotic systems. How these innovations are integrated into existing security frameworks will determine their impact on global stability and the extent to which they redefine strategic operations.

As we look to the future, the development of bio-robotic swarms represents a fascinating intersection of biology and technology. These innovations challenge our perceptions of what is possible, offering new tools for intelligence gathering in an increasingly complex world. However, their deployment raises important ethical considerations that must be addressed. How will societies balance the benefits of such technology with the need to protect individual privacy and ethical standards?

Artists revolt against Spotify over CEO's investment in AI warfare

Euronews, 8 hours ago

The prolific Australian psych-rock group King Gizzard & the Lizard Wizard is the latest band to cut ties with Spotify in protest of CEO Daniel Ek's increasing ties with the arms industry, specifically his investment in a controversial AI-driven military tech firm.

Ek co-founded the investment firm Prima Materia, which has invested heavily in Helsing, a German company developing AI for use in warfare, including drone technology. The Financial Times recently reported that Prima Materia led a €600 million funding round for Helsing and had previously backed the company before Russia's 2022 invasion of Ukraine. The news has sparked strong backlash from musicians who say they no longer want to be associated with a platform whose profits are being funnelled into weapons development.

King Gizzard & the Lizard Wizard, known for hits like 'Work This Time' and 'Robot Stop', have removed nearly all of their music from Spotify, leaving only a few releases due to existing licensing deals. They announced the decision on Instagram, stating their new demos were available 'everywhere except Spotify', adding 'f*** Spotify.'

Other artists have taken similar action. American indie group Deerhoof posted a statement saying they don't want their 'music killing people' and described Spotify as a 'data-mining scam.' Experimental rock group Xiu Xiu also criticised the platform, calling it a 'garbage hole armageddon portal' and urged fans to cancel their Spotify subscriptions.

These protests add to a growing list of controversies and concerns surrounding the streaming platform. Spotify recently came under fire after allowing an AI-generated band called Velvet Sundown, which has managed to rack up millions of streams, to appear on its platform with a 'verified artist' badge.
Euronews Culture's very own music aficionado David Mouriquand described it as "a prime example of autocratic tech bros seeking to reduce human creation to algorithms designed to eradicate art." He added: "When artists are expressing real, legitimate concerns over the ubiquity of AI in a tech-dominated world and the use of their content in the training of AI tools, the stunt comes off as tone-deaf. Worse, morally shameless."

And while Spotify announced in its Loud & Clear 2024 report that it paid over $10 billion (€9.2 billion) to the music industry in 2024 alone, critics argue that most of those payouts go to just a small percentage of top artists and labels, and that the platform still underpays and exploits the vast majority of musicians.

Icelandic musician Björk put it most bluntly: 'Spotify is probably the worst thing that has happened to musicians.'
