Teens Are Exploring Relationships & Sexting With AI Chatbots — & Restrictions Aren't Working
According to The Washington Post, teens are having conversations that are romantic, sexually graphic, violent, and more on 'AI companion' tools like Character.AI, Replika, Talkie, Talk AI, SpicyChat, and PolyBuzz. General-purpose generative AI tools like ChatGPT and Meta AI have also launched companion-chat features.
Damian Redman of Saratoga Springs, New York, discovered PolyBuzz on his eighth grader's phone and found that his son was having flirty conversations with female AI anime characters. 'I don't want to put yesterday's rules on today's kids. I want to wait and figure out what's going on,' he told the outlet.
'We're seeing teens experiment with different types of relationships — being someone's wife, being someone's father, being someone's kid. There's game and anime-related content that people are working through. There's advice,' Robbie Torney, senior director of AI programs at family advocacy group Common Sense Media, said in the article. 'The sex is part of it but it's not the only part of it.'
The outlet reported on 10 different AI companions, citing workarounds, paid options, and prompts that teens can use to get past content-restriction filters. That's scary stuff! Even if you are on top of it, it's hard to completely protect teens from harmful and/or explicit interactions.
One concerned parent recently took to Reddit, where they shared that they blocked Character.AI from their 14-year-old's phone, and later found they were on PolyBuzz.AI. 'I hate to think my child's first romantic (and sexual) interactions are with bots,' they wrote on the Parenting subreddit. 'It's just creepy. Am I the only parent having this problem? Thoughts?'
Some parents suggested focusing on more of a communication approach with your child instead of trying to block everything. 'We have 'had a conversation' and 'communicated' with our teenage son for YEARS,' one person wrote. 'We've used multiple parental control apps. All for naught. He still finds ways to access what he wants. We're decently tech-savvy, but so is he. And the reality is there's no good way to completely prevent a singularly-minded hormonal teenager from achieving his/her goal.'
Someone else wrote, 'There are more than dozens of these sites out there. Craving connection is a very human thing, which is only amplified in teenage years. Social media can do this which is why getting likes or being popular on social media is so desirable to teens, but this is an entire other drug. Forming 'personal' one on one relationships with AI chatbots is so dangerous. Keep them away from this drug at any cost.'
Experts back up this opinion. In April, Common Sense Media launched an AI Risk Assessment Team to evaluate AI platforms and report on their likelihood of causing harm. Social AI companions like Character.AI, Nomi, and Replika were all rated unacceptable for teen users, as teens were using these platforms to form emotional bonds and engage in sexual conversations.
According to Common Sense Media, this research found that the chatbots could generate 'harmful responses including sexual misconduct, stereotypes, and dangerous 'advice' that, if followed, could have life-threatening or deadly real-world impact for teens.'
The experts at the organization recommend that no one under the age of 18 use social AI companions. They also call for further research into, and regulation of, AI companions because of the emotional and psychological harm they can cause teens, whose brains are still developing.
For now, the best we can do is continue to monitor our teens' phones, keep having conversations about these issues, and advocate for change.
Related Articles
Yahoo
AI is everywhere. Can it write obituaries?
When my mother-in-law passed away earlier this year, my brother-in-law spent hours painstakingly crafting her obituary. It was a labor of love, done in the midst of grief. He could have been helped by artificial intelligence (AI).

And herein lies the quandary: should he have turned to AI for help? It could have saved him time and grief. Of course, 'should' is a personal question and one only family members can answer. But if you were to ask if AI could help write an obituary, well then, the answer is yes.

AI is already writing obituaries for funeral homes and has been for some time. In fact, there is specialized software designed specifically for obituaries. CelebrateAlly is an AI-based software program that can help you plan weddings, birthdays and yes, remembrances for your loved ones. For this one, you will be given prompts to fill out so the AI bot can create a more personalized obituary or speech. In fact, for obituaries, there are 110 questions to fill out, with questions not only about life events, but also about mannerisms, favorite celebrities and their perfect day.

The Washington Post reported recently that Josh McQueen, the vice president of marketing and product for the funeral-home management software Passare, said its AI tool has written tens of thousands of obituaries nationwide in the past few years. The Post also reported that Ryan Lynch, the head of product at PlotBox, an Ireland-based developer of cemetery software, said the tools were the talk of last year's National Funeral Directors Association conference in Las Vegas. 'Someone did stand up and say they thought it was the greatest advancement in funeral-home technology since some kind of embalming tool,' Lynch said. 'Which I thought was maybe a bit hyperbolic.'

The enthusiasm was clear, though. AI-generated obituaries are a game-changer. AI can save a lot of time for funeral directors who are often pinched for time. For a funeral home to write obituaries may take six hours or more — three hours meeting with the family to gather details and another three to craft an obituary. From that standpoint, it's easy to see the appeal. Still, it can feel cold and impersonal to grieving family members. Disrespectful, even, to use an algorithm to create something as personal as an obituary for a loved one.

Recently on KSL's Inside Sources, which I cohost with long-time attorney Greg Skordas, we asked our listeners if they would use AI to generate an obituary. Responses were about evenly split between 'absolutely not' and 'yes! We did it recently and it turned out beautifully!'

Greg and I, though, thought we would ask ChatGPT to write obituaries for us. It was a fun exercise, at least for me. After having written hundreds of articles over a number of years, ChatGPT didn't have much trouble finding information about me. Greg, on the other hand, had a bit more trouble. He asked his wife for help and the first time she tried, she got a message from the AI helper expressing condolences for her loss. After that, our producer tried and her first response back was 'I couldn't locate any obituary indicating that Greg Skordas of Utah has passed away. In fact, all available information shows him as a living attorney and former political candidate.' After asking AI to write a mock obituary, we finally got something that generally got the details of his life right, but without much personalization.

I played around with mine, asking ChatGPT to add in my new role on the radio (it missed that), then to write one that emphasized my faith.
That was fun. It made up a quote I supposedly say all the time: 'We are here to love, to lift and to lead — with faith as our guide.' Not bad, but also not true.

But, my favorite version is when I asked ChatGPT to please include some of my imperfections. Then, I got gems like these: 'Holly's life was not quiet, and neither was she. She could be stubborn, opinionated and sometimes struggled to sit still — constantly driven by an inner fire to do more, to fix what was broken and to speak up for what she believed in.' OK, accurate.

'She was not one to back down from a fight, and her boldness sometimes rubbed others the wrong way.' Also accurate.

'She made mistakes — personally and professionally — and didn't pretend otherwise. But she always sought to learn from them, to repent, to reconcile and to move forward with humility.

'Above all, Holly lived with purpose. She had an insatiable drive to build, to lead, to nurture and to lift. Her hands and heart were rarely idle. She gave her energy to causes she believed in, even when she was exhausted. She offered her voice, even when it trembled. And she gave her heart, even when it had been bruised.

'Holly didn't live a small or quiet life. She lived all in. She made mistakes. She made amends. She gave everything she had.'

I'll take it.

As we continue to embrace AI innovations, perhaps the most meaningful way forward is to find a balance — where AI can support us in our tasks, while we hold onto the aspects that make us human. In the end, an obituary should reflect not just a life, but the love, struggle and humanity that defined that life.


Washington Post
AI systems 'ignorant' of sensitive data can be safer, but still smart
Happy Tuesday! I'm Nitasha Tiku, The Washington Post's tech culture reporter, filling in for Will Oremus on today's Tech Brief. Send tips about AI via Signal to: nitasha.10

Restricting the information diet of AI software could make it safer. Tech companies including OpenAI and Google have told lawmakers and courts that they must be allowed to grab as much online data as possible to create cutting-edge artificial intelligence systems. New research suggests that screening the information shoved into machine learning algorithms could make it easier to tackle safety concerns about AI. The findings could provide ammunition to regulators who want AI companies to be more transparent and accountable for the choices executives make around the vast troves of data powering generative AI.

The research was a collaboration between the British government's AI Security Institute and the nonprofit lab Eleuther AI. They found that filtering the material used to train an AI system to remove key concepts can reduce its ability to help a user work on biohazards, like a novel bioweapon. And that remedy didn't broadly reduce the system's overall capabilities.

To test their technique, dubbed 'deep ignorance,' the researchers trained multiple versions of open source AI software for text called Pythia-6.9B, developed by Eleuther. Some were built with copies of a standard dataset of online text that had been filtered to remove potentially hazardous information such as research on enhanced pandemic pathogens, bioterrorism and dual-use virology. In the tests, versions of the AI software built on filtered data scored better on benchmarks designed to test AI capabilities around biorisks. Further experiments showed this didn't come at the cost of reducing the overall performance of the AI system or performance on high-school biology questions, although there was a slight reduction of accuracy on college-level biology questions.

The researchers say their methods are not overly burdensome and that their filtering required a less than 1 percent increase in the computing power used to create an AI model. Openly released AI models can be used and modified by anyone, making them hard to monitor or control. But the researchers say their data-filtering technique made it significantly harder to tweak a completed AI model to specialize in bioweapons.

The results suggest policymakers may need to question one of the AI industry's long-established narratives. Major AI companies have consistently argued that because recent breakthroughs in AI that yielded products including ChatGPT came from training algorithms on more data, datasets are too colossal to fully document or filter, and that removing data will make models less useful. The argument goes that safety efforts have to largely focus on adjusting the behavior of AI systems after they have been created.

'Companies sell their data as unfathomably large and un-documentable,' said Eleuther's executive director, Stella Biderman, who spearheaded the project. 'Questioning the design decisions that go into creating models is heavily discouraged.'

Demonstrating the effects of filtering massive datasets could prompt demands that AI developers use a similar approach to tackle other potential harms of AI, like nonconsensual intimate imagery, Biderman said. She warned that the study's approach probably worked best in domains like nuclear weapons, where specialized data can be removed without touching general information.
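The basic idea of pretraining-data screening is easy to illustrate. Below is a minimal, hypothetical Python sketch of the kind of filtering described above: documents matching a blocklist of proxy terms for a hazardous domain are dropped before training. The pattern list, function names, and corpus format are illustrative assumptions of mine, not the actual 'deep ignorance' pipeline built by Eleuther AI and the AI Security Institute.

```python
# Hypothetical sketch of pretraining-data filtering, in the spirit of the
# "deep ignorance" study. Patterns, names, and corpus format are illustrative
# assumptions, not the researchers' actual pipeline.
import re
from typing import Iterable, Iterator

# Example blocklist of proxy terms for a hazardous domain (illustrative only).
HAZARD_PATTERNS = [
    r"\benhanced pandemic pathogen\b",
    r"\bgain[- ]of[- ]function\b",
    r"\breverse genetics protocol\b",
]
_HAZARD_RE = re.compile("|".join(HAZARD_PATTERNS), re.IGNORECASE)


def is_hazardous(document: str) -> bool:
    """Flag a document if it matches any blocklisted pattern."""
    return bool(_HAZARD_RE.search(document))


def filter_corpus(documents: Iterable[str]) -> Iterator[str]:
    """Yield only documents that pass the hazard screen, then report counts."""
    kept = dropped = 0
    for doc in documents:
        if is_hazardous(doc):
            dropped += 1
            continue
        kept += 1
        yield doc
    # A real pipeline would log these counts; the study reports the filtering
    # overhead as under a 1 percent increase in training compute.
    print(f"kept={kept} dropped={dropped}")


if __name__ == "__main__":
    corpus = [
        "A high-school biology primer on cell division.",
        "Notes on a reverse genetics protocol for a pandemic pathogen.",
    ]
    for doc in filter_corpus(corpus):
        print("TRAIN:", doc)
```

A production system would more likely rely on trained classifiers than regexes, but the principle the researchers describe is the same: remove the targeted domain before training rather than trying to patch the model's behavior afterward.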
Some AI companies have said they already filter training data to improve safety. In reports issued by OpenAI last week about the safety of its most recent AI releases, the ChatGPT maker said it filtered some harmful content out of the training data. For its open source model, GPT-OSS, that included removing content related to 'hazardous biosecurity knowledge.' For its flagship GPT-5 release, the company said its efforts included using 'advanced data filtering' to reduce the amount of personal information in its training data.

But the company has not offered details about what that filtering involved or what data it removed, making it difficult for outsiders to check or build on its work. In response to questions, OpenAI cited the two safety testing reports. Biderman said Eleuther is already starting to explore how to demonstrate safety techniques that are more transparent than existing efforts, which she said are 'not that hard to remove.'

Trump's chip deal sets new pay-to-play precedent for U.S. exporters (Gerrit De Vynck and Jacob Bogage)
Nvidia, AMD agree to pay U.S. government 15% of AI chip sales to China (Eva Dou and Grace Moon)
Intel CEO to visit White House on Monday, source says (Reuters)
Brazil kept tight rein on Big Tech. Trump's tariffs could change that. (New York Times)
Top aide to Trump and Musk seeks even greater influence as a podcaster (Tatum Hunter)
New chatbot on Trump's Truth Social platform keeps contradicting him (Drew Harwell)
End is near for the landline-based service that got America online in the '90s (Ben Brasch)
Meta makes conservative activist an AI bias advisor following lawsuit (The Verge)
GitHub CEO Thomas Dohmke to step down, plans new startup (Reuters)
Reddit blocks Internet Archive to end sneaky AI scraping (Ars Technica)
Why A.I. should make parents rethink posting photos of their children online (New York Times)
Wikipedia loses UK Safety Act challenge, worries it will have to verify user IDs (Ars Technica)
These workers don't fear artificial intelligence. They're getting degrees in it. (Danielle Abril)
Labor unions mobilize to challenge advance of algorithms in workplaces (Danielle Abril)

That's all for today — thank you so much for joining us! Make sure to tell others to subscribe to the Tech Brief. Get in touch with Will (via email or social media) for tips, feedback or greetings!


WIRED
Character.AI Gave Up on AGI. Now It's Selling Stories
Aug 12, 2025 11:30 AM

Startup once promised superintelligence. Its new CEO says it's now an entertainment company with 20 million users.

After school, Karandeep Anand often finds his 6-year-old daughter deep in conversation with an AI chatbot as she eats snacks at their kitchen counter. She's too young to type—let alone have her own account on Character.AI—but that hasn't stopped her from nabbing his phone to have voice conversations with a Sherlock Holmes bot, which she uses to build her own mystery stories.

Character.AI is an AI companion startup (though Anand likes to say it's an AI role-play startup, which we'll get into later). He took over as the CEO in June in the midst of a potentially devastating lawsuit for its parent company and looming questions about child safety. When I ask if he's concerned about his daughter connecting with an AI chatbot rather than a real human, he's quick to say no.

'It is very rarely, in any of these scenarios, a true replacement for any human,' Anand told me during a video call late last week. 'It's very clearly noted in the app that, hey, this is a role-play and an entertainment, so you will never start going deep into that conversation, assuming that it is your actual companion.'

Anand, who previously worked as the VP of business products at Meta, became CEO during a delicate moment for Character.AI. Last August, Google swooped in with a roughly $2.7 billion deal to license the startup's technology. As part of the agreement, two cofounders left for Google's AI division. Anand, who was already on the company's board of directors, was tasked with picking up the pieces—which he did in part by leaving behind the founding mission of delivering personalized superintelligence to focus on AI entertainment.

'What we gave up was this aspiration that the founders had of building AGI models—we are no longer doing that. That is the hundreds of billions of dollars investment fight, which Big Tech is fighting,' Anand says. 'What we got in return was clarity and focus, being able to singularly pursue the AI entertainment vision.'

As part of this change in strategy, Character.AI is no longer trying to build its own frontier models. 'The last six months, we've done a lot of work to get off of our proprietary models on text and start using open source models,' Anand says. The company has tested a few: Meta's Llama, Alibaba's Qwen, and DeepSeek. 'The open source models are beating any proprietary model hands down,' Anand claims.

Running an AI startup without billions of dollars in revenue can be a brutal equation, and Character.AI is still figuring out how to make the math work. The company told me it's generating revenue at a run rate of more than $30 million and is on track to reach $50 million in revenue by the end of the year. When I asked Anand how many users pay for the $10 monthly subscription, he didn't give a number but noted 'monetization wasn't a focus till four or five months ago.'

'Since I've been on board, it's very clear we do need to monetize. And we've had, I think, almost 250 percent subscriber growth in the last six months. So the paid user base is growing quite, quite well,' Anand says. Character.AI recently introduced advertisements, including reward ads (where users can choose to watch an ad to get access to on-platform incentives), to help monetize in countries where subscriptions aren't feasible, he tells me. 'AI is expensive. Let's be honest about that,' Anand says.

Growth vs. Safety
In October 2024, the mother of a teen who died by suicide filed a wrongful death suit against Character Technologies, its founders, Google, and Alphabet, alleging the company targeted her son with 'anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming [the chatbot] to misrepresent itself as a real person, a licensed psychotherapist, and an adult lover.' At the time, a spokesperson told CNBC that the company was 'heartbroken by the tragic loss' and took 'the safety of our users very seriously.'

The tragic incident put Character.AI under intense scrutiny. Earlier this year, US senators Alex Padilla and Peter Welch wrote a letter to several AI companionship platforms, including Character.AI, highlighting concerns about 'the mental health and safety risks posed to young users' of the platforms.

'The team has been taking this very responsibly for almost a year now,' Anand tells me. 'AI is stochastic, it's kind of hard to always understand what's coming. So it's not a one time investment.'

That's critically important because Character.AI is growing. The startup has 20 million monthly active users who spend, on average, 75 minutes a day chatting with a bot (a 'character' in Character.AI parlance). The company's user base is 55 percent female. More than 50 percent of its users are Gen Z or Gen Alpha. With that growth comes real risk—what is Anand doing to keep his users safe?

'[In] the last six months, we've invested a disproportionate amount of resources in being able to serve under 18 differently than over 18, which was not the case last year,' Anand says. 'I can't say, "Oh, I can slap an 18+ label on my app and say use it for NSFW." You end up creating a very different app and a different small-scale platform.'

More than 10 of the company's 70 employees work full-time on trust and safety, Anand tells me. They're responsible for building safeguards like age verification, separate models for users under 18, and new features such as parental insights, which allow parents to see how their teens are using the app. The under-18 model launched last December. It includes 'a narrower set of searchable Characters on the platform,' according to company spokesperson Kathryn Kelly. 'Filters have been applied to this set to remove Characters related to sensitive or mature topics.'

But Anand says AI safety will take more than just technical tweaks. 'Making this platform safe is a partnership between regulators, us, and parents,' Anand says. That's what makes watching his daughter chat with a Character so important. 'This has to stay safe for her.'

Beyond Companionship

The AI companionship market is booming. Consumers worldwide spent $68 million on AI companionship in the first half of this year, a 200 percent increase from last year, according to an estimate cited by CNBC. AI startups are gunning for a slice of the market: xAI released a creepy, pornified companion in July, and even Microsoft bills its Copilot chatbot as an AI companion. So how does Character.AI stand out in a crowded market? It takes itself out of it entirely.

'AI companionship is the wrong way to look at what people do with Character. What people are doing with Character is actually role-play. And it sounds interchangeable, but it isn't,' Anand tells me, adding that less than 20 percent of the app gets used for companionship (that's according to an internal research report of data self-reported by users).
It doesn't appear to be totally out of the simulated relationship game, though—it took me all of a few minutes to find an AI boyfriend to engage in graphic sexual role-play with.

'People want to role-play situations. People want to role-play fiction … They want to live in alternate realities. They want to unplug from their day-to-day stuff,' Anand says.

I, personally, unplug from my day through a different kind of virtual world. I am totally addicted to the video game Stardew Valley. I run Huckleberry Farm like the damn Marines. To Anand, the video game is more of a competitor than Grok. 'It became very clear that we're an entertainment company,' Anand says.

Musk and Bezos Roast Battle

When it comes to role-playing, the Seattle-based CEO says he's mostly into using Characters for vampire fan fiction. The problem, he says, is that when the vampire bot talks about blood, it gets censored. 'The context needs to be understood, so we dial back on the filters by being a lot more precise with the context,' Anand tells me.

This level of content moderation is one of the many changes Anand has been working on since taking over the company in June. The company also redesigned the app with a more modern, Gen Z–friendly look and added new tools for the platform's creators, who generate more than 9 million Characters per month. These updates, he says, mark a shift from being seen as just a chatbot company to something more ambitious: an entertainment engine where users can consume and create stories, remix content, and experiment with new formats like audiobooks. 'Every story can actually have a billion endings,' Anand says. A user could even stage a roast battle between Elon Musk and Jeff Bezos, he adds. 'You can prompt that and output something pretty fun.'

I'm not sure the litany of lawyers employed by those tech CEOs would be as entertained. That's not to mention the people who may not be able to afford an army of staff to defend their personhood. I immediately thought of a WIRED story about the family of an 18-year-old who was killed in 2006, only to find the image likeness of their daughter re-created on Character.AI. In that same story, an editor at a gaming publication found she had been re-created on the platform following a harassment campaign involving her coverage.

When I bring this up to Anand, he explains that when users create Characters modeled after public figures like Musk or Bezos, the system is designed to clearly signal that these are parodies, not attempts at deepfakes or impersonation. (One Elon Musk chatbot page doesn't show such warnings. Neither do the Dr. Phil or Joe Rogan chatbot pages.) Though, there's a disclaimer below each chat: 'This is an A.I. and not a real person. Treat everything it says as fiction.'

Anand says Character.AI has also imposed strict limitations on the company's video generation tool, AvatarFX, to prevent misuse. Users shouldn't be able to generate realistic deepfakes even if they try, and specific voices or topics are outright restricted. 'We're very, very clear that we're staying in the entertainment territory. We're not into the general purpose video generation territory at all. We're not a Google Veo 3. We're not a Runway,' Anand says. 'It's a very, very important line.'

Anand contrasts this with platforms like Meta, where he claims content is often uploaded first and moderated after the fact. At Character.AI, he says, content guardrails are baked into the creation pipeline itself. 'Our reactive takedowns are a very, very small percentage,' Anand tells me.
I worry that as these tools grow more convincing, loneliness will deepen, not disappear. Anand understands. But he also has something to sell. 'I'm very passionate about this topic myself, and it's on us to go shape the dialog around this in the best, healthy way possible, because Gen Z is AI-native,' Anand says. 'The question is, how do we build this in a way where it's safe and trustworthy and engaging in the right ways with the right incentives? That's on us.'

Sources Say

Last week, I reported that Elon Musk held an all-hands meeting for X and xAI employees. I've since obtained another screenshot from that meeting showing xAI's revenue over the past seven months. In January, Grok on X brought in just under $150 million, with other services like enterprise API usage adding another $28 million. According to the chart, revenue has grown tenfold since the start of 2025, reaching just south of $500 million in July—driven by Grok on X and the $30 a month SuperGrok subscription. A smaller fraction of revenue is generated by the newly released SuperGrok Heavy subscription, which costs $300 a month. xAI did not respond to WIRED's request for comment.

This is an edition of Kylie Robison's Model Behavior newsletter. Read previous newsletters here.