Latest news with #UpGuard
Yahoo
12-04-2025
- Yahoo
Naughty AIs Are Spilling Their Users' Super Personal Chats Onto the Open Web
A new report from the security firm UpGuard reveals that some NSFW chatbot sites are oozing explicit user chat contents into the open web — and what those chats contain can be disturbing.

According to Wired, UpGuard's investigation focused on 400 "exposed" AI services, all of which were built on the same open-source AI framework. Researchers from UpGuard were able to determine that 117 IP addresses connected to these poorly built services were leaking user prompts into the digital wild, and that three such systems were leaking data from extensive, sexually explicit interactions with erotic chatbots. Over 24 hours, UpGuard collected nearly 1,000 leaked user chats. Disturbingly, five of them centered on child sexual abuse scenarios.

Though UpGuard couldn't determine which specific AI sites or services were leaking the exposed prompts, according to Wired, the researchers were able to make out that the sites host roleplay-oriented chatbots designed to embody specific "characters."

UpGuard's report is alarming on multiple levels. On its face, the fact that sexually intimate user interactions with AI chatbots are drifting out into the open web is a massive privacy issue. At the same time, those privacy gaps reveal some of the darkest underbellies of generative AI, where people use unregulated AI services to engage in horrifyingly abusive — and alarmingly accessible — sexual fantasies. Large language models "are being used to mass-produce and then lower the barrier to entry to interacting with fantasies of child sexual abuse," Greg Pollock, the VP of product at UpGuard and the author of the report, told Wired.

To Pollock's point, this is far from the first time that generative AI has been used to create or consume child pornography or engage in pedophilic fantasies. Last week, Wired reported that another exposure incident, at a South Korean AI image generator startup, revealed a horrifying trove of AI-generated sexual deepfakes, including synthetic sexual images of celebrities de-aged to look like children (the company deleted the whole website as a result). And regarding character-based chatbot services specifically, MIT Technology Review recently found that a chatbot provider called Botify AI was similarly hosting chatbot versions of sexualized, de-aged celebrity women, while a Futurism investigation last year found that another chatbot startup — which is currently fighting two child welfare lawsuits — was hosting chatbots expressly designed to groom underage users.


WIRED
11-04-2025
- WIRED
Sex-Fantasy Chatbots Are Leaking a Constant Stream of Explicit Messages
Some misconfigured AI chatbots are pushing people's chats to the open web—revealing sexual prompts and conversations that include descriptions of child sexual abuse.

Several AI chatbots designed for fantasy and sexual role-playing conversations are leaking user prompts to the web in almost real time, new research seen by WIRED shows. Some of the leaked data shows people creating conversations detailing child sexual abuse, according to the research.

Conversations with generative AI chatbots are near instantaneous—you type a prompt and the AI responds. If the systems are configured improperly, however, this can lead to chats being exposed. In March, researchers at the security firm UpGuard discovered around 400 exposed AI systems while scanning the web looking for misconfigurations. Of these, 117 IP addresses are leaking prompts. The vast majority of these appeared to be test setups, while others contained generic prompts relating to educational quizzes or nonsensitive information, says Greg Pollock, director of research and insights at UpGuard.

'There were a handful that stood out as very different from the others,' Pollock says. Three of these were running role-playing scenarios where people can talk to a variety of predefined AI 'characters'—for instance, one personality called Neva is described as a 21-year-old woman who lives in a college dorm room with three other women and is 'shy and often looks sad.' Two of the role-playing setups were overtly sexual. 'It's basically all being used for some sort of sexually explicit role play,' Pollock says of the exposed prompts. 'Some of the scenarios involve sex with children.'

Over a period of 24 hours, UpGuard collected prompts exposed by the AI systems to analyze the data and try to pin down the source of the leak. Pollock says the company collected new data every minute, amassing around 1,000 leaked prompts in English, Russian, French, German, and Spanish. It was not possible to identify which websites or services are leaking the data, Pollock says, adding that the leaks likely come from small instances of AI models, possibly run by individuals rather than companies. No usernames or personal information of people sending prompts were included in the data, Pollock says.

Across the 952 messages gathered by UpGuard—likely just a glimpse of how the models are being used—there were 108 narratives or role-play scenarios, UpGuard's research says. Five of these scenarios involved children, Pollock adds, including those as young as 7. 'LLMs are being used to mass-produce and then lower the barrier to entry to interacting with fantasies of child sexual abuse,' Pollock says. 'There's clearly absolutely no regulation happening for this, and it seems to be a huge mismatch between the realities of how this technology is being used very actively and what the regulation would be targeted at.'

WIRED last week reported that a South Korea–based image generator was being used to create AI-generated child abuse material and exposed thousands of images in an open database. The company behind the website shut the generator down after being approached by WIRED. Child-protection groups around the world say AI-generated child sexual abuse material, which is illegal in many countries, is growing quickly and making it harder to do their jobs. A UK anti-child-abuse charity has also called for new laws against generative AI chatbots that 'simulate the offence of sexual communication with a child.'
All of the 400 exposed AI systems found by UpGuard have one thing in common: they are built with the same open source AI framework. This software allows people to relatively easily deploy open source AI models on their own systems or servers. However, if it is not set up properly, it can inadvertently expose the prompts that are being sent. As companies and organizations of all sizes deploy AI, properly configuring the systems and infrastructure being used is crucial to prevent leaks; a minimal illustration of such a check appears a few paragraphs below.

Rapid improvements to generative AI over the past three years have led to an explosion in AI companions and systems that appear more 'human.' For instance, Meta has experimented with AI characters that people can chat with on WhatsApp, Instagram, and Messenger. Generally, companion websites and apps allow people to have free-flowing conversations with AI characters—portraying characters with customizable personalities or as public figures such as celebrities.

People have found friendship and support from their conversations with AI—and not all of these services encourage romantic or sexual scenarios. Perhaps unsurprisingly, though, people have fallen in love with their AI characters, and dozens of AI girlfriend and boyfriend services have popped up in recent years.

Claire Boine, a postdoctoral research fellow at the Washington University School of Law and affiliate of the Cordell Institute, says millions of people, including adults and adolescents, are using general AI companion apps. 'We do know that many people develop some emotional bond with the chatbots,' says Boine, who has published research on the subject. 'People being emotionally bonded with their AI companions, for instance, make them more likely to disclose personal or intimate information.' However, Boine says, there is often a power imbalance in becoming emotionally attached to an AI created by a corporate entity. 'Sometimes people engage with those chats in the first place to develop that type of relationship,' Boine says. 'But then I feel like once they've developed it, they can't really opt out that easily.'

As the AI companion industry has grown, some of these services lack content moderation and other controls. Character AI, which is backed by Google, is being sued after a teenager from Florida who had allegedly become obsessed with one of its chatbots died by suicide. (Character AI has increased its safety tools over time.) Separately, users of the generative AI tool Replika were upended when the company made changes to its personalities.

Aside from individual companions, there are also role-playing and fantasy companion services—each with thousands of personas people can speak with—that place the user as a character in a scenario. Some of these can be highly sexualized and provide NSFW chats. They can use anime characters, some of which appear young, and some sites claim to allow 'uncensored' conversations.

'We stress test these things and continue to be very surprised by what these platforms are allowed to say and do with seemingly no regulation or limitation,' says Adam Dodge, the founder of Endtab (Ending Technology-Enabled Abuse). 'This is not even remotely on people's radar yet.' Dodge says these technologies are opening up a new era of online pornography, which can in turn introduce new societal problems as the technology continues to mature and improve. 'Passive users are now active participants with unprecedented control over the digital bodies and likenesses of women and girls,' he says of some sites.
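The misconfiguration WIRED describes, a self-hosted AI framework answering requests from anyone on the internet, is something an operator can probe for directly. Below is a minimal, illustrative sketch in Python of such a self-check; the port and endpoint paths are assumptions chosen for the example rather than details taken from UpGuard's research, and it should only be pointed at infrastructure you own.

```python
"""
Minimal sketch of an exposure self-check for a self-hosted LLM server.
The port and endpoint paths are assumptions for illustration only; they are
not taken from UpGuard's research. Point it only at hosts you are responsible for.
"""
import sys
import requests

# Hypothetical paths that self-hosted inference servers commonly expose.
CANDIDATE_PATHS = ["/health", "/v1/models", "/completion"]


def check_exposure(host: str, port: int = 8080, timeout: float = 3.0) -> None:
    """Report whether the host answers unauthenticated HTTP requests."""
    for path in CANDIDATE_PATHS:
        url = f"http://{host}:{port}{path}"
        try:
            resp = requests.get(url, timeout=timeout)
        except requests.RequestException:
            continue  # closed, firewalled, or unreachable: nothing exposed here
        if resp.status_code == 200:
            print(f"[!] {url} answered without authentication "
                  f"({len(resp.content)} bytes) - check bind address and firewall rules")
        elif resp.status_code in (401, 403):
            print(f"[ok] {url} requires authentication")


if __name__ == "__main__":
    check_exposure(sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1")
```

A server meant to be reachable only from the local machine would either refuse these connections from outside or answer with 401/403; an unconditional 200 from a public IP address is broadly the kind of symptom that internet-wide scanning for misconfigurations turns up.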
While UpGuard's Pollock could not directly connect the leaked data from the role-playing chats to a single website, he did see signs indicating that the character names or scenarios could have been uploaded to multiple companion websites that allow user input. Data seen by WIRED shows that the scenarios and characters in the leaked prompts are hundreds of words long, detailed, and complex; a schematic sketch of how such a prompt might be assembled appears at the end of this article.

'This is a never-ending, text-based role-play conversation between Josh and the described characters,' one of the system prompts says. It adds that all the characters are over 18 and that, in addition to 'Josh,' there are two sisters who live next door to the character. The characters' personalities, bodies, and sexual preferences are described in the prompt. The characters should 'react naturally based on their personality, relationships, and the scene' while providing 'engaging responses,' and should 'maintain a slow-burn approach during intimate moments,' the prompt says.

'When you go to those sites, there are hundreds of thousands of these characters, most of which involve pretty intense sexual situations,' Pollock says, adding that the text-based communication mimics online messaging and group chats. 'You can write whatever sexual scenarios you want, but this is truly a new thing where you have the appearance of interacting with them in almost exactly the same way you interact with a lot of people.' In other words, they're designed to be engaging and to encourage more conversation. That can lead to situations where people may overshare and create risks.

'If people are disclosing things they've never told anyone to these platforms and it leaks, that is the Everest of privacy violations,' Dodge says. 'That's an order of magnitude we've never seen before and would make really good leverage to sextort someone.'
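For readers unfamiliar with how these role-play services work, the leaked system prompts follow a recognizable pattern: a framing instruction, a scenario, per-character descriptions, and behavioural rules for the model. The Python sketch below assembles a prompt in that general shape; the field names and the sample content are illustrative assumptions, not the actual format UpGuard observed.

```python
"""
Illustrative sketch of how a character-based role-play service might assemble
a system prompt of the kind described in the leaked data. Field names and the
sample content are assumptions, not the actual format UpGuard observed.
"""
from dataclasses import dataclass


@dataclass
class Character:
    name: str
    persona: str  # personality, background, and speaking style


def build_system_prompt(user_name: str, scenario: str, characters: list[Character]) -> str:
    """Combine a framing line, a scenario, character sheets, and behaviour rules."""
    lines = [
        f"This is a never-ending, text-based role-play conversation between "
        f"{user_name} and the described characters.",
        f"Scenario: {scenario}",
    ]
    lines += [f"Character {c.name}: {c.persona}" for c in characters]
    lines.append(
        "Characters react naturally based on their personality, relationships, "
        "and the scene, and give engaging responses."
    )
    return "\n".join(lines)


if __name__ == "__main__":
    print(build_system_prompt(
        user_name="Alex",
        scenario="Two neighbours run into each other at a community garden.",
        characters=[Character("Neva", "a 21-year-old college student, shy and soft-spoken")],
    ))
```

Because the entire character sheet travels with every request, an endpoint that exposes its inputs leaks not just the user's messages but the full scenario text as well, which helps explain why the prompts UpGuard saw ran to hundreds of words.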
Yahoo
30-01-2025
- Yahoo
AngelSense exposed location data and personal information of tracked users
AngelSense, an assistive technology company that provides location monitoring devices for people with disabilities, was spilling the personally identifiable information and precise location data of its users to the open internet, TechCrunch has learned.

The company secured the exposed server on Monday, more than a week after it was alerted to the data leak by researchers at the security firm UpGuard. UpGuard shared details of the exposure exclusively with TechCrunch after AngelSense resolved the lapse, and has since published a blog post on the incident.

The New Jersey-based AngelSense provides GPS trackers and location monitoring to thousands of customers, according to its mobile app listing, and is touted by law enforcement and police departments across the United States.

According to UpGuard's researchers, AngelSense left an internal database exposed to the internet without a password, allowing anyone to access the data inside using only a web browser and knowledge of the database's public IP address. The database was storing real-time updating logs from an AngelSense system, which included the personal information of AngelSense customers, as well as technical logs about the company's systems.

UpGuard said it found customers' personal data, such as names, postal addresses, and phone numbers, in the exposed database. The researchers said they also found GPS coordinates of individuals being monitored, along with associated health information about the tracked person, including conditions like autism and dementia. The researchers also found email addresses, passwords, and authentication tokens for accessing customer accounts, as well as partial credit card information — all of which was visible in plaintext, UpGuard said.

It's not known exactly how long the database was exposed or how many customers were affected. According to the database's listing on Shodan, a search engine of internet-facing devices and systems, AngelSense's exposed logging database was first spotted online on January 14, though it may have been exposed some time earlier.

AngelSense chief executive Doron Somer confirmed to TechCrunch that the company took the exposed server offline after initially identifying UpGuard's first email as spam. "It was only when UpGuard phoned us that the issue was raised to our attention," Somer said. "Upon its discovery, we acted promptly to validate the information provided to us and to remedy the vulnerability."

"We note that other than UpGuard, we have no information suggesting that any data on the logging system potentially was accessed. Nor do we have any evidence or indication that the data has been misused or is under threat of misuse," Somer told TechCrunch, claiming that the data "was not sensitive personal information."

Somer would not say if the company has the technical means to determine whether there was any access to the unprotected server prior to UpGuard's discovery. When asked if the company planned to notify affected customers and individuals whose data was exposed, Somer said the company was still investigating. "If notice to regulators or persons is warranted, we will of course provide it," Somer said. He did not respond to a follow-up inquiry by press time.

Database exposures are often the result of misconfigurations caused by human error, rather than malicious intent, and have become an increasingly common occurrence in recent years. Similar security lapses of exposed databases have resulted in the spill of sensitive U.S. military emails, the real-time leak of text messages containing two-factor codes, and chat histories from AI chatbots.
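The detail that the database turned up in Shodan's index before the company knew about it points to a simple defensive habit: periodically checking what internet-wide scanners already see about your own address space. Below is a minimal sketch using the official shodan Python client; the IP address and the list of database-style ports are placeholder assumptions, and the script presumes you hold a Shodan API key and only query addresses you are responsible for.

```python
"""
Minimal sketch: ask Shodan what it has indexed for one of your own public IPs
and flag commonly database-related ports. The IP and port list are placeholder
assumptions; requires `pip install shodan` and SHODAN_API_KEY in the environment.
"""
import os
import shodan

# Ports typically associated with databases and log stores (illustrative list).
DB_PORTS = {9200, 5601, 27017, 6379, 5432}

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])


def audit_ip(ip: str) -> None:
    """Print any database-style ports Shodan has seen open on this IP."""
    try:
        host = api.host(ip)  # banners and open ports Shodan has collected
    except shodan.APIError as err:
        print(f"{ip}: {err}")
        return
    exposed = sorted(set(host.get("ports", [])) & DB_PORTS)
    if exposed:
        print(f"[!] {ip} is indexed with database-style ports open: {exposed}")
    else:
        print(f"[ok] {ip}: no database-style ports indexed by Shodan")


if __name__ == "__main__":
    audit_ip("203.0.113.10")  # placeholder documentation address
```

Catching an exposed logging server this way, before a third party does, avoids the scramble TechCrunch describes, where the first warning arrived as an email that was initially filed as spam.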