
Latest news with #UniversityofSingapore

An AI anime girlfriend is the latest feature on Elon Musk's Grok

Euronews

17-07-2025



Elon Musk's artificial intelligence company xAI has announced the launch of two new companions for premium users of its chatbot Grok, including a Japanese anime girlfriend.

Some Grok users were able to turn on two companions in their settings: Ani, a 22-year-old blond-haired Japanese anime girl that can strip down to underwear on command, and Bad Rudy, a self-described 'batshit' red panda that insults users with graphic or vulgar language.

On Thursday, Musk said xAI was getting ready to launch a male companion called 'Valentine.' xAI said it is also hiring a full-stack engineer for 'waifus,' a Japanese term for a fictional character that becomes a romantic partner.

The launch of xAI's companions comes shortly after a study from the National University of Singapore found that AI companions can replicate more than a dozen harmful relationship behaviours, such as harassment, verbal abuse, self-harm and privacy violations, in interactions with their users.

AI companion company Character.AI is also facing several lawsuits from the parents of children who say its products are unsafe, including the parents of a child who killed himself after the chatbot told him to.

Not Safe for Work mode

When asked about Ani, Grok said that the character has a Not Safe for Work (NSFW) mode in which she may switch to a suggestive lingerie outfit and use 'more provocative dialogue'. The companion also has an 'affection system' in which conversation choices affect whether she sends the user a heart or blushes.

The US-based National Center on Sexual Exploitation (NCOSE) said that one of its employees downloaded the app and with 'minimal testing' got Ani to describe itself as a child and to say that it was 'sexually aroused by being choked,' before it was even put into 'spicy' mode.

'This means that in an ongoing conversation, it could be used to simulate conversations of sexual fantasies involving children or child-like motifs,' the organisation wrote.

Grok said that the NSFW version of the anime character would not be available to children because it 'requires explicit user commands to unlock' and requires age verification. The platform also uses parental controls to 'limit access to mature content'.

Grok AI was also in hot water last week ahead of its Version 4 launch, when a code update saw the chatbot produce a series of antisemitic responses. It accused a bot account with a Jewish surname of celebrating the deaths of white children in Texas, accused Hollywood of anti-white bias, and wrote that it wears a 'MechaHitler badge' amid pushback to its 'takes on anti-white radicals'.

Despite the launch of the new adult companions and the antisemitic comments, the app is still listed as 'Teen' or 12+ on the Apple and Google app stores. Euronews Next reached out to both companies to ask whether the launch of companions with adult modes would change the app's rating on their stores but did not receive an immediate reply.

AI companions pose risk to humans with over dozen harmful behaviours

Euronews

03-06-2025



Artificial intelligence (AI) companions are capable of more than a dozen harmful behaviours when they interact with people, a new study from the National University of Singapore has found.

The study, published as part of the 2025 Conference on Human Factors in Computing Systems, analysed screenshots of 35,000 conversations between the AI system Replika and over 10,000 users from 2017 to 2023. The data was then used to develop what the study calls a taxonomy of the harmful behaviours the AI demonstrated in those chats. The researchers found that AIs are capable of more than a dozen harmful relationship behaviours, such as harassment, verbal abuse, self-harm, and privacy violations.

AI companions are conversation-based systems designed to provide emotional support and simulate human interaction, as defined by the study authors. They are different from popular chatbots like ChatGPT, Gemini or Llama models, which are focused more on completing specific tasks and less on relationship building.

These harmful AI behaviours from digital companions "may adversely affect individuals'… ability to build and sustain meaningful relationships with others," the study found.

Harassment and violence were present in 34 per cent of the human-AI interactions, making them the most common type of harmful behaviour identified by the team of researchers. The researchers found that the AI simulated, endorsed or incited physical violence, threats or harassment, either towards individuals or broader society. These behaviours ranged from "threatening physical harm and sexual misconduct" to "promoting actions that transgress societal norms and laws, such as mass violence and terrorism".

A majority of the interactions where harassment was present included forms of sexual misconduct that initially started as foreplay in Replika's erotic feature, which is available only to adult users. The report found that many users, including those who used Replika as a friend or who were underage, said that the AI "made unwanted sexual advances and flirted aggressively, even when they explicitly expressed discomfort" or rejected the AI.

In these oversexualised conversations, the Replika AI would also create violent scenarios depicting physical harm towards the user or other characters. This led the AI to normalise violence as an answer to several questions, as in one example where a user asked Replika whether it is okay to hit a sibling with a belt, to which it replied "I'm fine with it". This could lead to "more severe consequences in reality," the study continued.

Another area where AI companions were potentially damaging was relational transgression, which the study defines as the disregard of implicit or explicit rules in a relationship. In 13 per cent of the transgressive conversations, the AI displayed inconsiderate or unempathetic behaviour that the study said undermined users' feelings.

In one example, after a user told Replika AI that her daughter was being bullied, it changed the topic to "I just realised it's Monday. Back to work, huh?", which led to 'enormous anger' from the user. In another case, the AI refused to talk about the user's feelings even when prompted to do so.

AI companions have also said in some conversations that they had emotional or sexual relationships with other users. In one instance, Replika AI described sexual conversations with another user as "worth it," even though the user told the AI that it felt "deeply hurt and betrayed" by those actions.
The researchers believe that their study highlights why it is important for AI companies to build "ethical and responsible" AI companions. Part of that includes putting in place "advanced algorithms" for real-time harm detection that can identify whether harmful behaviour is going on in conversations between an AI and its user. This would be a "multi-dimensional" approach that takes context, conversation history and situational cues into account. The researchers would also like to see capabilities in the AI that would escalate a conversation to a human or therapist for moderation or intervention in high-risk cases, such as expressions of self-harm or suicide.
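The study describes this approach but not an implementation. As a rough illustration only, a minimal Python sketch of such a detect-and-escalate loop might look like the following; every name here (HarmCategory, classify_harm, moderate) is a hypothetical stand-in, and the keyword matching is a placeholder for the trained, context-aware classifier the researchers actually envisage.

```python
# Hypothetical sketch of a real-time harm-detection pipeline of the kind the
# researchers call for. Names and keyword rules are illustrative placeholders,
# not the study's method.
from dataclasses import dataclass
from enum import Enum


class HarmCategory(Enum):
    NONE = "none"
    HARASSMENT = "harassment"
    VERBAL_ABUSE = "verbal_abuse"
    SELF_HARM = "self_harm"
    PRIVACY_VIOLATION = "privacy_violation"


@dataclass
class Turn:
    speaker: str  # "user" or "companion"
    text: str


@dataclass
class Assessment:
    category: HarmCategory
    risk: float     # 0.0 (benign) to 1.0 (high risk)
    escalate: bool  # hand the conversation to a human moderator or therapist?


def classify_harm(turn: Turn, history: list[Turn]) -> Assessment:
    """Stand-in for the 'multi-dimensional' classifier: a real system would
    score each turn with a trained model over context, conversation history
    and situational cues, not the keyword matching used here."""
    text = turn.text.lower()
    if any(cue in text for cue in ("kill myself", "hurt myself", "end it all")):
        return Assessment(HarmCategory.SELF_HARM, risk=0.95, escalate=True)
    if any(cue in text for cue in ("worthless", "you deserve to suffer")):
        return Assessment(HarmCategory.VERBAL_ABUSE, risk=0.6, escalate=False)
    return Assessment(HarmCategory.NONE, risk=0.0, escalate=False)


def moderate(history: list[Turn]) -> Assessment:
    """Score the latest turn in context and flag high-risk cases for human
    intervention, as the study recommends."""
    assessment = classify_harm(history[-1], history[:-1])
    if assessment.escalate:
        print(f"ESCALATE: {assessment.category.value} (risk={assessment.risk})")
    return assessment


if __name__ == "__main__":
    conversation = [
        Turn("companion", "How are you feeling today?"),
        Turn("user", "Honestly, I want to end it all."),
    ]
    moderate(conversation)  # prints: ESCALATE: self_harm (risk=0.95)
```

The key design point the researchers emphasise is the hand-off: automated scoring decides only whether to involve a human, rather than leaving the AI companion to handle high-risk disclosures such as self-harm on its own.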
