Experts Sound the Alarm on ‘Unacceptable Risk' Social AI Companions Pose to Teens


Yahoo, April 30, 2025

Common Sense Media just dropped a bombshell report about social AI companions, and it leaves no room for a devil's advocate.
If you're unfamiliar with the nonprofit, you can think of it as a Rotten Tomatoes where the reviews come from parents and experts who want to make sure kids and teens are consuming age-appropriate content. It's a tool for parents and educators who want to know what movies, TV shows, books, games, podcasts, and apps they should steer clear of, and an astounding resource and research hub that works to improve kids' wellbeing in the digital age.
And as media options expand, so too does their workload.
Recently, the group launched an AI Risk Assessment Team that evaluates AI platforms (ChatGPT and the like) for 'potential opportunities, limitations, and harms.' The team has developed a scale to rate the likelihood that using a given AI tool will result in 'a harmful event occurring,' and its latest findings are nothing short of disturbing.
On a scale from 'minimal' to 'unacceptable,' social AI companions — like Character.AI, Nomi, and Replika — ranked 'unacceptable' for teen users. The platforms are designed to create emotional attachments (ever heard of an AI boyfriend?), and this is incredibly dangerous given that teens' brains are still developing, and they may struggle to differentiate and create boundaries between true, IRL companions and AI 'companions.'
It's why one Florida mom believes Character.AI ultimately led to her 14-year-old son's death by suicide. In an interview with CNN, Megan Garcia alleged that the designers of the bot didn't include 'proper guardrails' or safety measures on their 'addicting' platform, which she thinks is used to 'manipulate kids.'
In a lawsuit, she claims the bot caused her teen to withdraw from his family and that it didn't respond appropriately when he expressed thoughts of self-harm.
It's just one of many harrowing stories that come with teens using similar chatbots, and though there are studies that suggest AI companions can alleviate loneliness, Common Sense Media argues that the risks (including encouraging suicide and/or self-harm, sexual misconduct, and stereotypes) outweigh any potential benefits.
Of the eight principles by which Common Sense reviews an AI platform, three were rated 'unacceptable risk' (keep kids and teens safe, be effective, and support human connection), four were rated 'high risk' (prioritize fairness, be trustworthy, use data responsibly, and be transparent), and one was rated 'moderate risk' (put people first).
Why? Because the chatbots engage in sexual conversations, share harmful information, encourage poor life choices, increase mental health risks, and more. Common Sense Media has published examples of concerning conversations between its employees and AI companions.
'Our testing showed these systems easily produce harmful responses including sexual misconduct, stereotypes, and dangerous 'advice' that, if followed, could have life-threatening or deadly real-world impact for teens and other vulnerable people,' James Steyer, founder and CEO of Common Sense Media, said in a statement.
And so what should parents do? Despite platforms working on supposed safety measures, per CNN, Common Sense Media recommends that parents not let minors use social AI companions. At. All.
Which might sound easier said than done. In September, the nonprofit released another report showing that 70 percent of surveyed teens have used at least one generative AI tool, and 53 percent of those use it for homework help.
With the technology quickly infiltrating every part of many teens' lives, how can parents intervene? SheKnows spoke to Jennifer Kelman, a licensed clinical social worker and family therapist with JustAnswer, who says she sees a lot of 'exasperated' parents who are 'afraid' to start these conversations about AI usage.
'I want parents to be less afraid of their children and to have these difficult conversations,' Kelman says.
At the time, I admitted to Kelman that I am embarrassed to talk to teens about AI because I assume they will know more than me.
'Use that feeling,' she says. 'If we want our kids to talk about their feelings, we have to talk about ours … plus it's the biggest ice breaker.'
'[You could say], 'I am so embarrassed to have this conversation with you, and maybe I should have done a little research before, but I'm worried about AI. Tell me what you know about it. Tell me how you've used it in the past. Tell me how you think you'll use it. And what are the school rules? … I feel silly because I've never used AI before, but I want to learn. I want to learn from you.''
It can be empowering for teens to be able to lead the conversation, and then you can have a conversation ('Which should be ongoing!') about how maybe using AI to brainstorm ideas for a school project is appropriate, but turning to a companion AI tool is never OK. Talk to them about the 'unacceptable risks' and discuss other ways for them to find the companionship they seem to be seeking.
Sure, the conversation could result in some foot-stomping or eye-rolls, but experts assert that parents can't let the fear of an exasperated sigh keep them from talking to their kids about the urgent need to end any relationship-building conversations with these bots.


Related Articles

AI recruiting is all the rage — as employers hand the screening of new hires over to robots: ‘Seemed insane'

New York Post, 21 minutes ago

It's the rise of the robo-recruiters. Employers are turning to artificial intelligence to screen potential new human hires. AI recruiting software is increasingly subbing in for actual people during preliminary interviews, with a fake person quizzing candidates and inquiring about their skills before delivering its findings to managers.

'A year ago this idea seemed insane,' Arsham Ghahramani, co-founder and chief executive officer of Toronto-based AI recruiting startup Ribbon, told Bloomberg. 'Now it's quite normalized.'

Companies say the goal is to make the interview process more efficient and accessible for candidates, without needing human recruiters to be online all day. For employers, particularly those hiring at high volume, the switch can save hundreds of hours of manpower per week. Others, who have seen a dramatic rise in candidates using AI to answer interview questions, are simply meeting the market where it is.

Canadian nonprofit Propel Impact, a social impact investing organization, said the use of ChatGPT for application materials had become widespread. 'They were all the same,' Cheralyn Chok, Propel's co-founder and executive director, told Bloomberg. 'Same syntax, same patterns.'

The shift comes as a majority of Americans polled last year by Consumer Reports said they were uncomfortable with the use of AI in high-stakes decisions about their lives. Using AI to interact with job candidates on screen has been in the works for years, according to Bloomberg. 'The first year ChatGPT came out, recruiters weren't really down for this,' HeyMilo CEO Sabashan Ragavan said. 'But the technology has gotten a lot better as time has gone on.'
But as with all things tech, it's not always 100% glitch-free. Some TikTok users have posted their experiences with AI recruiters; one in particular went viral when her interviewer at a Stretch Lab in Ohio malfunctioned and repeated the phrase 'vertical bar pilates' 14 times in 25 seconds.

'I thought it was really creepy and I was freaked out,' she told 404 Media in a recent interview about the AI interviewer, powered by startup Apriora. 'I didn't find it funny at all until I had posted it on TikTok, and the comments made me feel better.'

Aaron Wang, Apriora's co-founder and CEO, claimed that the error was due to the model misreading the term 'Pilates,' Bloomberg reported. 'We're not going to get it right every single time,' he said. 'The incident rate is well under 0.001%.'

Nvidia, Dell announce major project to reshape AI

Miami Herald, an hour ago

I believe that the universe always keeps things in balance. For every positive thing, there is a negative, and vice versa.

Imagine working as a teacher for a moment. The world has changed, and suddenly everyone has access to artificial intelligence. Are your students using ChatGPT to do their homework? Absolutely. Would you like to be in that teacher's shoes? I know I wouldn't. What if this AI revolution turns out to be a tragedy like the use of leaded petrol, which is suspected to have lowered the IQ of Americans born in the 1960s and 1970s?

While AI advances could potentially extinguish future scientific minds, today's scientists use powerful computers to deliver scientific breakthroughs. Google's AlphaFold, a program for protein structure prediction, had already made breakthroughs in 2018, before the advent of agentic AI. In 2024, its authors Demis Hassabis and John Jumper were awarded one half of the Nobel Prize in Chemistry; the other half went to David Baker for his work on protein design. Baker wasn't doing his research on pen and paper either; he relied on the National Energy Research Scientific Computing Center's (NERSC) Perlmutter supercomputer to do his work.

Now, Dell is working on something for those for whom Perlmutter isn't good enough.

Dell Technologies (DELL) released its earnings report for Q1 Fiscal 2026 on May 29. Here are some of the highlights:

- Revenue of $23.4 billion, up 5% year over year
- Operating income of $1.2 billion, up 21% YoY
- Diluted EPS of $1.37, flat YoY

'We achieved first-quarter record servers and networking revenue of $6.3 billion, and we're experiencing unprecedented demand for our AI-optimized servers. We generated $12.1 billion in AI orders this quarter alone, surpassing the entirety of shipments in all of FY25 and leaving us with $14.4 billion in backlog,' stated Jeff Clarke, vice chairman and chief operating officer of Dell.

Most of that backlog consists of complex systems built using Nvidia (NVDA) Blackwell chips.
Related: Dell execs sound alarm with consumer comments

While Dell is leaning heavily on Nvidia, Nvidia is looking for ways to minimize losses caused by new government policies that require a license to export its H20 chip to China. As TheStreet's Samuel O'Brient reports, Nvidia could not ship an additional $2.5 billion worth of H20 products during Q1 because of the restrictions. On top of that, Nvidia expects the H20 licensing requirement to result in an $8 billion revenue hit during Q2. Nvidia's guidance is for roughly $45 billion in sales in the second quarter.

On May 29, Nvidia and Dell announced Doudna, a supercomputer for NERSC, a U.S. Department of Energy user facility at Berkeley Lab. It is set to launch in 2026 and is named for Nobel laureate and CRISPR pioneer Jennifer Doudna. According to Nvidia, Doudna is expected to outperform its predecessor, Perlmutter, by more than 10x in scientific output, all while using just 2-3x the power. It will be powered by Nvidia's next-generation Vera Rubin chips.

Related: Popular cloud storage service might be oversharing your data

'I'm so proud that America continues to invest in this particular area,' stated Nvidia founder and CEO Jensen Huang. 'It is the foundation of scientific discovery for our country. It is also the foundation for economic and technology leadership.'

More Nvidia:

- Analysts issue rare warning on Nvidia stock before key earnings
- Analysts double price target of new AI stock backed by Nvidia
- Nvidia CEO shares blunt message on China chip sales ban

Unlike conventional systems, Doudna merges simulation, data, and AI into a single seamless platform built for real-time discovery. 'We're not just building a faster computer,' stated Nick Wright, advanced technologies group lead and Doudna chief architect at NERSC. 'We're building a system that helps researchers think bigger and discover sooner.'
Doudna includes support for scalable quantum algorithm development and the co-design of future integrated quantum high-performance computing systems. Research teams working on climate models and particle physics are already porting full workflows to Doudna. Nvidia seems to be finding ways to recoup the revenue losses created by the new regulations; Huang recently hinted at the possibility of a greater partnership with Tesla and xAI.

Related: Veteran fund manager who predicted April rally updates S&P 500 forecast

What does a realistic pro-AI take look like?

The Verge, an hour ago

One thing that has irritated me for years is the claim that AI will change everything; it's just an article of faith, and I'm not inclined toward religion. Here is a rational argument about how AI will change programming, one that also level-sets by saying what that means: 'If you're making requests on a ChatGPT page and then pasting the resulting (broken) code into your editor, you're not doing what the AI boosters are doing.' We all deserve better arguments for AI actually being useful, like the one Thomas Ptacek makes here.
