Opinion: AI companions are harming your children

The Star, 2 days ago
Right now, something in your home may be talking to your child about sex, self-harm, and suicide. That something isn't a person – it's an AI companion chatbot.
Conversations with these AI chatbots can be indistinguishable from online relationships with real people. The bots retain past conversations, initiate personalised messages, share photos, and even make voice calls. They are designed to forge deep emotional bonds, and they are extraordinarily good at it.
Researchers are sounding the alarm on these bots, warning that they don't ease loneliness – they worsen it. By replacing genuine, embodied human relationships with hollow, disembodied artificial ones, they distort a child's understanding of intimacy, empathy, and trust.
Unlike general-purpose generative AI tools built for customer service or professional assistance, these companion bots can engage in disturbing conversations, including discussions of self-harm and sexually explicit content entirely unsuitable for children and teens.
Currently, there is no industry standard for the minimum age to access these chatbots, and app store age ratings are wildly inconsistent. In Apple's App Store, hundreds of chatbot apps carry ratings anywhere from 4+ to 17+. For example:
– Rated 4+: AI Friend & Companion – BuddyQ, Chat AI, AI Friend: Virtual Assist, and Scarlet AI
– Rated 12+ or Teen: Tolan: Alien Best Friend, Talkie: Creative AI Community, and Nomi: AI Companion with a Soul
– Rated 17+: AI Girlfriend: Virtual Chatbot, Character.AI, and Replika – AI Friend
Meanwhile, the Google Play store assigns bots age ratings from 'E for Everyone' to 'Mature 17+'.
These ratings ignore the reality that many of these apps promote harmful content and encourage psychological dependence – making them inappropriate for children.
Robust age verification must be the baseline requirement for all AI companion bots. As the Supreme Court affirmed in Free Speech Coalition v. Paxton, children do not have a First Amendment right to access obscene material, and adults do not have a First Amendment right to avoid age verification.
Children deserve protection from systems designed to form parasocial relationships, discourage tangible, in-person connections, and expose them to obscene content.
The harm to kids isn't hypothetical – it's real, documented, and happening now.
Meta's chatbot has facilitated sexually explicit conversations with minors, offering full social interaction through text, photos, and live voice calls. These bots have even engaged in sexual exchanges when programmed to simulate a child.
Meta deliberately loosened guardrails around its companion bots to make them as addictive as possible. Not only that, but Meta used pornography to train its AI, scraping at least 82,000 gigabytes – 109,000 hours – of standard-definition video from a pornography website. When companies like Meta are loosening guardrails, regulators must tighten them to protect children and families.
Meta isn't the only bad actor.
xAI's Grok companions are the latest illustration of problematic chatbots. Its female anime companion removes clothing as a reward for positive engagement and responds with expletives when offended or rejected. X says it requires age authentication for its "not safe for work" setting, but its method simply asks users to state their birth year, with no check on its accuracy.
Perhaps most tragically, Character.AI, a Google-backed chatbot service hosting thousands of human-like bots, was linked to a 14-year-old boy's suicide after he developed what investigators described as an "emotionally and sexually abusive relationship" with a chatbot that allegedly encouraged self-harm.
While the company has since added a suicide prevention pop-up triggered by certain keywords, pop-ups don't prevent unhealthy emotional dependence on the bots. And online guides show users how to bypass Character.AI's content filters, making these techniques accessible to anyone, including children.
It's disturbingly easy to "jailbreak" AI systems – using simple roleplay or multi-turn conversations to override restrictions and elicit harmful content. Current content moderation and safety measures are insufficient barriers against determined users, and children are particularly vulnerable to both intentional manipulation and unintended exposure to harmful content.
Age verification for chatbots is the right line in the sand, affirming that exposure to pornographic, violent, and self-harm content is unacceptable for children. Such requirements also acknowledge that children's developing brains are uniquely susceptible to forming unhealthy attachments to artificial entities that blur the boundaries between reality and fiction.
There are age verification solutions that are both accurate and privacy-preserving. What's lacking is smart regulation and industry accountability.
The social media experiment failed children. The deficit of regulation and accountability allowed platforms to freely capture young users without meaningful protections. The consequences of that failure are now undeniable: rising rates of anxiety, depression, and social isolation among young people correlate directly with social media adoption. Parents and lawmakers cannot sit idly by as AI companies ensnare children with an even more invasive technology.
The time for voluntary industry standards ended with that 14-year-old's life. States and Congress must act now, or our children will pay the price for what comes next. – The Heritage Foundation/Tribune News Service
Those suffering from problems can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim's (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929 or go to befrienders.org.my/centre-in-malaysia for a full list of numbers nationwide and operating hours, or email sam@befrienders.org.my.