Here's how experts suggest protecting children from AI companions

Euronews · 3 days ago
More than 70 per cent of American teenagers use artificial intelligence (AI) companions, according to a new study.
US non-profit Common Sense Media asked 1,060 teens from April to May 2025 about how often they use AI companion platforms such as Character.AI, Nomi, and Replika.
AI companion platforms are presented as "virtual friends, confidants, and even therapists" that engage with the user like a person, the report found.
The use of these companions worries experts, who told the Associated Press that the booming AI industry is largely unregulated and that many parents have no idea how their kids are using AI tools or the extent of personal information they are sharing with chatbots.
Here are some suggestions on how to keep children safe when engaging with these profiles online.
Recognise that AI is agreeable
One way to gauge whether a child is using AI companions is to just start a conversation "without judgement," according to Michael Robb, head researcher at Common Sense Media.
To start the conversation, he said parents can approach a child or teenager with questions like "Have you heard of AI companions?" or "Do you use apps that talk to you like a friend?"
"Listen and understand what appeals to your teen before being dismissive or saying you're worried about it," Robb said.
Mitch Prinstein, chief of psychology at the American Psychological Association (APA), said that one of the first things parents should do once they know a child uses AI companions is to teach them that they are programmed to be "agreeable and validating."
Prinstein said it's important for children to know that that's not how real relationships work and that real friends can help them navigate difficult situations in ways that AI can't.
"We need to teach kids that this is a form of entertainment," Prinstein said. "It's not real, and it's really important they distinguish it from reality and [they] should not have it replace relationships in [their] actual life."
Watch for signs of unhealthy relationships
While AI companions may feel supportive, children need to know that these tools are not equipped to handle a real crisis or provide genuine support, the experts said.
Robb said signs of an unhealthy relationship include a child preferring AI interactions over real relationships, spending hours talking to an AI companion, or showing patterns of "emotional distress" when separated from the platforms.
"Those are patterns that suggest AI companions might be replacing rather than complementing human connection," Robb said.
If kids are struggling with depression, anxiety, loneliness, an eating disorder, or other mental health challenges, they need human support — whether it is family, friends or a mental health professional.
Parents can also set rules about AI use, just like they do for screen time and social media, experts said. For example, they can set rules about how long the companion could be used and in what contexts.
Another way to counteract these relationships is to get involved and know as much about AI as possible.
"I don't think people quite get what AI can do, how many teens are using it, and why it's starting to get a little scary," said Prinstein, one of many experts calling for regulations to ensure safety guardrails for children.
"A lot of us throw our hands up and say, 'I don't know what this is! This sounds crazy!' Unfortunately, that tells kids: if you have a problem with this, don't come to me, because I am going to diminish it and belittle it."

Related Articles

ByteDance's AI robot system can fold clothes and do housework
Euronews · 6 hours ago

TikTok parent company ByteDance has built a robotic system that allows bots to perform household tasks such as folding laundry and cleaning tables. The system uses artificial intelligence (AI) that allows robots to follow language commands and carry out tasks.

China, where ByteDance is based, has been developing the technology at lightning speed, as seen with its DeepSeek and Manus models.

According to chip designer Nvidia, robotics is the next phase of AI. That's because while tech companies have been trying to build a general-purpose robot for years, programming robots is difficult. With AI, however, it becomes much easier.

What did ByteDance do?

ByteDance built a large-scale vision-language-action (VLA) model called GR-3, which allows robots to follow natural language commands and perform general tasks. GR-3 can be thought of as the brain of the robot.

ByteDance used a robot called ByteMini for the experiment. With GR-3 installed, the robot could put a shirt on a hanger and place it on a clothing rack. Video released by the company also shows the robot picking up household items and placing them in a designated spot. It could differentiate between sizes, successfully following commands to pick up the "larger plate". It also completed tasks such as cleaning up the dining table.

ByteDance's Seed department, which heads the company's AI research and large language model (LLM) development, said it trained the model on image and text data and then fine-tuned it with data from humans interacting in virtual reality. It was also taught to copy the movements of real robots. ByteDance appears to be increasingly focused on AI, having launched the Seed department in 2023.

The new development comes as TikTok faces another threat of being banned in the US unless the company sells its American assets. US commerce secretary Howard Lutnick reiterated this on Thursday, saying, "China can have a little piece or ByteDance, the current owner, can keep a little piece".
"But basically, Americans will have control. Americans will own the technology, and Americans will control the algorithm," Lutnick told CNBC, adding that if this doesn't happen, "TikTok is going to go dark, and those decisions are coming very soon".

No woke AI: What to know about Trump's AI plan for global dominance
Euronews · 2 days ago

US President Donald Trump has said he will keep "woke AI" models out of the US government, turn the country into an "AI export powerhouse," and weaken environmental regulation of the technology.

The announcements come as he signed three artificial intelligence-focused executive orders on Wednesday, which are part of the country's so-called AI action plan. Here is what he announced and what it means.

1. No woke AI

One order, called "Preventing Woke AI in the Federal Government," bans "woke AI" models and AI that isn't "ideologically neutral" from government contracts. It also says diversity, equity, and inclusion (DEI) is a "pervasive and destructive" ideology that can "distort the quality and accuracy of the output". It refers to information about race, sex, transgenderism, unconscious bias, intersectionality, and systemic racism.

The order aims to protect free speech and "American values," but by removing information on topics such as DEI, climate change, and misinformation, it could wind up doing the opposite, as achieving objectivity is difficult in AI.

David Sacks, a former PayPal executive and now Trump's top AI adviser, has been criticising "woke AI" for more than a year, fuelled by Google's February 2024 rollout of an AI image generator. When asked to show an American Founding Father, it created pictures of Black, Asian, and Native American men. Google quickly fixed its tool, but the "Black George Washington" moment remained a parable for the problem of AI's perceived political bias, taken up by X owner Elon Musk, venture capitalist Marc Andreessen, US Vice President JD Vance, and Republican lawmakers.

2. Global dominance, cutting regulations

The plan prioritises AI innovation and adoption, urging the removal of any barriers that could slow down adoption across industries and government. The nation's policy, Trump said, will be to do "whatever it takes to lead the world in artificial intelligence".

Yet it also seeks to guide the industry's growth to address a longtime rallying point for the tech industry's loudest Trump backers: countering the liberal bias they see in AI chatbots such as OpenAI's ChatGPT or Google's Gemini.

3. Streamlining AI data centre permits, cutting environmental regulation

Chief among the plan's goals is to speed up permitting and loosen environmental regulation to accelerate construction of new data centres and factories. It condemns "radical climate dogma" and recommends lifting environmental restrictions, including clean air and water laws.

Trump has previously paired AI's need for huge amounts of electricity with his own push to tap into US energy sources, including gas, coal, and nuclear. "We will be adding at least as much electric capacity as China," Trump said at the Wednesday event. "Every company will be given the right to build their own power plant".

Many tech giants are already well on their way toward building new data centres in the US and around the world. OpenAI announced this week that it has switched on the first phase of a massive data centre complex in Abilene, Texas, part of an Oracle-backed project known as Stargate that Trump promoted earlier this year. Amazon, Microsoft, Meta, and xAI also have major projects underway.

The tech industry has pushed for easier permitting rules to get its computing facilities connected to power, but the AI building boom has also contributed to spiking demand for fossil fuel production, which contributes to global warming. United Nations Secretary-General Antonio Guterres on Tuesday called on the world's major tech firms to power data centres completely with renewables by 2030.

The plan also includes a strategy to disincentivise states from aggressively regulating AI, calling on federal agencies not to provide funding to states with burdensome regulations. "We need one common sense federal standard that supersedes all states, supersedes everybody," Trump said, "so you don't end up in litigation with 43 states at one time".

Calls for a "People's AI Action Plan"

There are sharp debates over how to regulate AI, even among the influential venture capitalists who have been debating it on their favourite medium: the podcast. While some Trump backers, particularly Andreessen, have advocated an "accelerationist" approach that aims to speed up AI advancement with minimal regulation, Sacks has described himself as taking a middle road of techno-realism. "Technology is going to happen. Trying to stop it is like ordering the tides to stop. If we don't do it, somebody else will," Sacks said on the "All-In" podcast.

On Tuesday, more than 100 groups, including labour unions, parent groups, environmental justice organisations, and privacy advocates, signed a resolution opposing Trump's embrace of industry-driven AI policy and calling for a "People's AI Action Plan" that would "deliver first and foremost for the American people".

Anthony Aguirre, executive director of the non-profit Future of Life Institute, told Euronews Next that Trump's plan acknowledges the "critical risks presented by increasingly powerful AI systems," citing bioweapons, cyberattacks, and the unpredictability of AI. But in a statement, he said the White House should go further to protect citizens and workers. "By continuing to rely on voluntary safety commitments from frontier AI corporations, it leaves the United States at risk of serious accidents, massive job losses, extreme concentrations of power, and the loss of human control," Aguirre said. "We know from experience that Big Tech promises alone are simply not enough".
