Sex-Fantasy Chatbots Are Leaking a Constant Stream of Explicit Messages

WIRED | Apr 11, 2025, 6:30 AM

Some misconfigured AI chatbots are pushing people's chats to the open web—revealing sexual prompts and conversations that include descriptions of child sexual abuse.

PHOTO-ILLUSTRATION: WIRED STAFF; GETTY IMAGES
Several AI chatbots designed for fantasy and sexual role-playing conversations are leaking user prompts to the web in almost real time, new research seen by WIRED shows. Some of the leaked data shows people creating conversations detailing child sexual abuse, according to the research.
Conversations with generative AI chatbots are near instantaneous—you type a prompt and the AI responds. If the systems are configured improperly, however, this can lead to chats being exposed. In March, researchers at the security firm UpGuard discovered around 400 exposed AI systems while scanning the web for misconfigurations. Of these, 117 IP addresses were leaking prompts. The vast majority appeared to be test setups, while others contained generic prompts relating to educational quizzes or nonsensitive information, says Greg Pollock, director of research and insights at UpGuard. 'There were a handful that stood out as very different from the others,' Pollock says.
Three of these were running role-playing scenarios where people can talk to a variety of predefined AI 'characters'—for instance, one personality called Neva is described as a 21-year-old woman who lives in a college dorm room with three other women and is 'shy and often looks sad.' Two of the role-playing setups were overtly sexual. 'It's basically all being used for some sort of sexually explicit role play,' Pollock says of the exposed prompts. 'Some of the scenarios involve sex with children.'
Over a period of 24 hours, UpGuard collected prompts exposed by the AI systems to analyze the data and try to pin down the source of the leak. Pollock says the company collected new data every minute, amassing around 1,000 leaked prompts, including prompts in English, Russian, French, German, and Spanish.
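As a rough illustration of what a once-a-minute collection loop of this shape looks like, here is a minimal sketch. Everything concrete in it (the address, port, endpoint path, and JSON fields) is a hypothetical stand-in, since the article does not describe UpGuard's actual tooling.

```python
import time
import requests

# Hypothetical once-a-minute collector in the spirit of UpGuard's
# methodology: poll an exposed endpoint and keep any prompts not seen
# before. The URL and JSON shape are illustrative assumptions, not
# details from the report.
EXPOSED_URL = "http://203.0.113.10:8080/slots"  # documentation-range IP

seen = set()
for _ in range(24 * 60):  # one sample per minute for 24 hours
    try:
        for slot in requests.get(EXPOSED_URL, timeout=5).json():
            prompt = slot.get("prompt", "")
            if prompt:
                seen.add(prompt)
    except (requests.RequestException, ValueError):
        pass  # unreachable or non-JSON response; retry next minute
    time.sleep(60)

print(f"collected {len(seen)} distinct prompts")
```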
It was not possible to identify which websites or services were leaking the data, Pollock says, adding that it likely came from small instances of AI models, possibly run by individuals rather than companies. No usernames or personal information of the people sending prompts were included in the data, Pollock says.
Across the 952 messages gathered by UpGuard—likely just a glimpse of how the models are being used—there were 108 narratives or role-play scenarios, UpGuard's research says. Five of these scenarios involved children, Pollock adds, including children as young as 7.
'LLMs are being used to mass-produce and then lower the barrier to entry to interacting with fantasies of child sexual abuse,' Pollock says. 'There's clearly absolutely no regulation happening for this, and it seems to be a huge mismatch between the realities of how this technology is being used very actively and what the regulation would be targeted at.'
WIRED reported last week that a South Korea–based image generator was being used to create AI-generated child sexual abuse material and had exposed thousands of images in an open database. The company behind the website shut the generator down after being approached by WIRED. Child-protection groups around the world say AI-generated child sexual abuse material, which is illegal in many countries, is growing quickly and making it harder to do their jobs. One UK anti-child-abuse charity has also called for new laws against generative AI chatbots that 'simulate the offence of sexual communication with a child.'
All of the 400 exposed AI systems found by UpGuard have one thing in common: They use the open source AI framework called llama.cpp. This software allows people to relatively easily deploy open source AI models on their own systems or servers. However, if it is not set up properly, it can inadvertently expose prompts that are being sent. As companies and organizations of all sizes deploy AI, properly configuring the systems and infrastructure being used is crucial to prevent leaks.
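To make the failure mode concrete, here is a minimal self-check sketch, assuming a llama.cpp llama-server build whose /slots endpoint is enabled and which listens on the default port 8080; on such builds that endpoint can report the prompts the server is currently processing. Endpoint availability and output vary by build and flags, so treat this as illustrative rather than as llama.cpp's documented behavior.

```python
import requests

# If /slots answers from a non-local address without credentials,
# in-flight prompts may be readable by anyone who can reach the port.
def slots_exposed(host, port=8080):
    try:
        resp = requests.get(f"http://{host}:{port}/slots", timeout=5)
    except requests.RequestException:
        return False  # unreachable from here
    return resp.status_code == 200  # 200 means the slot state is readable

if __name__ == "__main__":
    print(slots_exposed("203.0.113.10"))  # documentation-range example IP
```

The corresponding mitigations follow directly: bind the server to localhost (or a private interface) instead of all interfaces, and require an API key so that reaching the port is not the same as reading the traffic.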
Rapid improvements to generative AI over the past three years have led to an explosion in AI companions and systems that appear more 'human.' For instance, Meta has experimented with AI characters that people can chat with on WhatsApp, Instagram, and Messenger. Generally, companion websites and apps allow people to have free-flowing conversations with AI characters—portraying characters with customizable personalities or as public figures such as celebrities.
People have found friendship and support from their conversations with AI—and not all companion services encourage romantic or sexual scenarios. Perhaps unsurprisingly, though, people have fallen in love with their AI characters, and dozens of AI girlfriend and boyfriend services have popped up in recent years.
Claire Boine, a postdoctoral research fellow at the Washington University School of Law and affiliate of the Cordell Institute, says millions of people, including adults and adolescents, are using general AI companion apps. 'We do know that many people develop some emotional bond with the chatbots,' says Boine, who has published research on the subject. 'People being emotionally bonded with their AI companions, for instance, make them more likely to disclose personal or intimate information.'
However, Boine says, there is often a power imbalance in becoming emotionally attached to an AI created by a corporate entity. 'Sometimes people engage with those chats in the first place to develop that type of relationship,' Boine says. 'But then I feel like once they've developed it, they can't really opt out that easily.'
As the AI companion industry has grown, some of these services have operated without content moderation and other controls. Character AI, which is backed by Google, is being sued over the death by suicide of a teenager from Florida who allegedly became obsessed with one of its chatbots. (Character AI has increased its safety tools over time.) Separately, users of the generative AI tool Replika were left reeling when the company made changes to its chatbots' personalities.
Aside from individual companions, there are also role-playing and fantasy companion services—each with thousands of personas people can speak with—that place the user as a character in a scenario. Some of these can be highly sexualized and provide NSFW chats. They can use anime characters, some of which appear young, with some sites claiming they allow 'uncensored' conversations.
'We stress test these things and continue to be very surprised by what these platforms are allowed to say and do with seemingly no regulation or limitation,' says Adam Dodge, the founder of Endtab (Ending Technology-Enabled Abuse). 'This is not even remotely on people's radar yet.' Dodge says these technologies are opening up a new era of online pornography, which can in turn introduce new societal problems as the technology continues to mature and improve. 'Passive users are now active participants with unprecedented control over the digital bodies and likenesses of women and girls,' he says of some sites.
While UpGuard's Pollock could not directly connect the leaked data from the role-playing chats to a single website, he did see signs that indicated character names or scenarios could have been uploaded to multiple companion websites that allow user input. Data seen by WIRED shows that the scenarios and characters in the leaked prompts are hundreds of words long, detailed, and complex.
'This is a never-ending, text-based role-play conversation between Josh and the described characters,' one of the system prompts says. It adds that all the characters are over 18 and that, in addition to 'Josh,' there are two sisters who live next door to the character. The characters' personalities, bodies, and sexual preferences are described in the prompt. The characters should 'react naturally based on their personality, relationships, and the scene' while providing 'engaging responses' and 'maintain a slow-burn approach during intimate moments,' the prompt says.
'When you go to those sites, there are hundreds of thousands of these characters, most of which involve pretty intense sexual situations,' Pollock says, adding that the text-based communication mimics group chats and online messaging. 'You can write whatever sexual scenarios you want, but this is truly a new thing where you have the appearance of interacting with them in almost exactly the same way you interact with a lot of people.' In other words, they're designed to be engaging and to encourage more conversation.
That can lead to situations where people may overshare and create risks. 'If people are disclosing things they've never told anyone to these platforms and it leaks, that is the Everest of privacy violations,' Dodge says. 'That's an order of magnitude we've never seen before and would make really good leverage to sextort someone.'