Mattel and OpenAI have partnered up – here's why parents should be concerned about AI in toys

Yahoo

Mattel may seem like an unchanging, old-school brand. Most of us are familiar with it – be it through Barbie, Fisher-Price, Thomas & Friends, Uno, Masters of the Universe, Matchbox, MEGA or Polly Pocket.
But toys are changing. In a world where children grow up with algorithm-curated content and voice assistants, toy manufacturers are looking to AI for new opportunities.
Mattel has now partnered with OpenAI, the company behind ChatGPT, to bring generative AI into some of its products. As OpenAI's services are not designed for children under 13, in principle Mattel will focus on products for families and older children.
But this still raises urgent questions about what kind of relationships children will form with toys that can talk back, listen and even claim to 'understand' them. Are we doing right by kids, and do we need to think twice before bringing these toys home?
For as long as there have been toys, children have projected feelings and imagined lives onto them. A doll could be a confidante, a patient or a friend.
But over recent decades, toys have become more responsive. In 1960, Mattel released Chatty Cathy, which chirped 'I love you' and 'Let's play school'. By the mid-1980s, Teddy Ruxpin had introduced animatronic storytelling. Then came Furby and Tamagotchi in the 1990s, creatures requiring care and attention, mimicking emotional needs.
The 2015 release of 'Hello Barbie', which used cloud-based AI to listen to and respond to children's conversations, signalled another important, albeit short-lived, change. Barbie now remembered what children told her, sending data back to Mattel's servers. Security researchers soon showed that the dolls could be hacked, exposing home networks and personal recordings.
Putting generative AI in the mix is a new development. Unlike earlier talking toys, such systems will engage in free-flowing conversation. They may simulate care, express emotion, remember preferences and give seemingly thoughtful advice. The result will be toys that don't just entertain, but interact on a psychological level. Of course, they won't really understand or care, but they may appear to.
Details from Mattel or OpenAI are scarce. One would hope that safety features will be built in, including limits on topics and pre-scripted responses for sensitive themes or for when conversations go off course.
But even this won't be foolproof. AI systems can be 'jailbroken' or tricked into bypassing restrictions through roleplay or hypothetical scenarios. Risks can only be minimised, not eradicated.
The risks are multiple. Let's start with privacy. Children can't be expected to understand how their data is processed. Parents often don't either – and that includes me. Online consent systems nudge us all to click 'accept all', often without fully grasping what's being shared.
Then there's psychological intimacy. These toys are designed to mimic human empathy. If a child comes home sad and tells their doll about it, the AI might console them. The doll could then adapt future conversations accordingly. But it doesn't actually care. It's pretending to, and that illusion can be powerful.
This creates potential for one-sided emotional bonds, with children forming attachments to systems that cannot reciprocate. As AI systems learn about a child's moods, preferences and vulnerabilities, they may also build data profiles to follow children into adulthood.
These aren't just toys, they're psychological actors.
In 2021, colleagues and I conducted a UK national survey on the possibilities of AI toys that profile children's emotions. It found that 80% of parents were concerned about who would have access to their child's data. Other privacy questions that need answering are less obvious, but arguably more important.
When asked whether toy companies should be obliged to flag possible signs of abuse or distress to authorities, 54% of UK citizens agreed – suggesting the need for a social conversation with no easy answer. While vulnerable children should be protected, state surveillance into the family domain has little appeal.
Yet despite concerns, people also see benefits. Our 2021 survey found that many parents want their children to understand emerging technologies. This leads to a mixed response of curiosity and concern. Parents we surveyed also supported having clear consent notices, printed on packaging, as the most important safeguard.
My more recent 2025 research with Vian Bakir on online AI companions and children found stronger concerns. Some 75% of respondents were concerned about children becoming emotionally attached to AI. About 57% thought it inappropriate for children to confide in AI companions about their thoughts, feelings or personal issues (17% thought it appropriate, and 27% were neutral).
Our respondents were also concerned about the impact on child development, seeing scope for harm.
In other research, we have argued that current AI companions are fundamentally flawed. We offer seven suggestions for redesigning them, including remedies for over-attachment and dependency, removal of metrics that extend engagement through personal information disclosure, and promotion of AI literacy among children and parents (which represents a huge marketing opportunity for companies that positively lead the social conversation).
It's hard to know how successful the new venture will be. It might be that Empathic Barbie goes the way of Hello Barbie, into toy history. If it does not, the key question for parents is this: whose interests is this toy really serving, your child's or those of a business model?
Toy companies are moving ahead with empathic AI products, but the UK, like many countries, doesn't yet have a specific AI law. The new Data (Use and Access) Act 2025 updates the UK's data protection and privacy and electronic communications regulations, recognising the need for strong protections for children. The EU's AI Act also makes important provisions.
International governance efforts are vital. One example is IEEE P7014.1, a forthcoming global standard on the ethical design of AI systems that emulate empathy (I chair the working group producing the standard).
The standard, produced under the IEEE, identifies potential harms and offers practical guidance on what responsible use looks like. So while laws should set limits, detailed standards can help define good practice.
The Conversation approached Mattel about the issues raised in this article and it declined to comment publicly.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Andrew McStay is funded by EPSRC Responsible AI UK (EP/Y009800/1) and is affiliated with IEEE.


Related Articles

This $1,299 Robot Dog Plays Soccer, Dances and Can 'Evolve' With AI
CNET

After an initial appearance at CES 2025, Hengbot's $1,299 Sirius robot dog goes on preorder Thursday, highlighting an agile body that's able to play soccer while also integrating with OpenAI to develop its own personality. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

While robot dogs have been around for decades with brands like Sony's Aibo, Hengbot's Sirius robot puts a particular emphasis on how quickly it can move. The company has been showing this off in videos that demonstrate it dancing to piano music or shaking hands next to an actual dog. The Sirius robot has a total of 14 axes across its legs and head, along with proprietary joints named Neurocore that let it move more naturally.

Hengbot says it's also customizable using a "game-like" editor, featuring the ability to customize the robot's personality to be more like specific dogs such as a Corgi, set up different facial reactions to appear on its head, or upload voice packs to customize what the robot sounds like.

However, Hengbot does note that, because this is a robot after all, the Sirius robot dog is meant primarily for flat indoor surfaces and isn't meant to be played with outside. So while dancing and soccer games (which can be played with an included controller) are fine, the Sirius robot is unlikely to be running up and down the stairs of a house. The robot's battery will let it run for 40 to 60 minutes when moving, or one to two hours when standing still, and Hengbot says it takes about an hour to recharge. Other specs include an 8-megapixel camera used for vision and two USB-C ports used for development and accessories. The Sirius robot dog can also be set up with controllers and a VR headset to play a soccer game.

Using a network connection, the Sirius robot develops its personality through an integration with large language models from providers like OpenAI, which it also uses to process voice commands. This can include asking it to dance, sit or act like other animals, such as a cat. Hengbot says the dog itself does not collect data, but also advises that the Sirius is designed more for adults who would like to use some of the more creative features the dog is capable of.

And that distinction is important, as AI is still prone to hallucinations and other mistakes. For instance, last year my colleague Bridget Carey tested a ChatGPT-enabled teddy bear named Poe made by the Los Angeles-based toy maker Skyrocket. The bear created stories using prompts within the app, but at launch its vocabulary was rather verbose for young audiences.

While Hengbot's Sirius robot will primarily be dancing, barking and moving about, Hengbot does say the Sirius robot is eventually intended to integrate with a wider creative platform for hosting and sharing different projects. The Sirius robot is available to preorder on Hengbot's website and is expected to be available this fall.

ChatGPT was down — Here's what happened during the quick OpenAI outage
Tom's Guide

OpenAI seems to be down, as ChatGPT is not working for a lot of people right now. Reports began appearing on the outage tracking site Down Detector at around 9:30am BST / 1:30am PT / 4:30am ET, with a sudden spike happening shortly after. According to OpenAI's status page, this seemed to only be affecting the API level (the implementation of ChatGPT tech into other apps), but it was more widespread than that. Fortunately, a fix was implemented quickly afterwards and operations seem to have returned to normal.

Live updates from June 26, 2025 (newest first):

Update, 09:21 UTC: BREAKING: ChatGPT is operational again. OK, I'm calling it. After receiving reports from other folks and testing ChatGPT myself, it looks as if service has been restored. We'll keep a look out in case there's a dreaded second spike, but for now, I'm calling it. Downdetector reports are down to 102, which is considered normal.

Update, 09:13 UTC: Well, that was fast. No need to call it yet, but that spike has massively fallen off, indicating that a fix has been implemented. We'll keep monitoring!

Update, 09:10 UTC: Who is affected right now? Looking at the UK heatmap on Downdetector, there seems to be a huge concentration in London and Manchester, as well as reports cropping up from Glasgow, Cambridge and Birmingham.

Update, 09:08 UTC: Here come the X posts. "DOWN AGAIN? #chatgpt" (June 26, 2025). Looking like people are starting to share their frustrations!

Update, 09:06 UTC: What components are down? This is an interesting question to answer, as I'm hearing one thing from OpenAI, and another from the people using the service. According to OpenAI, the issue affects the APIs — the tools developers use to implement ChatGPT into their apps. This is causing slowdowns or stopping access at the moment. Meanwhile, if you take a look at what the users are reporting, they're saying ChatGPT access is unavailable more broadly right now. We'll keep investigating.

Update, 09:02 UTC: The U.S. may not be affected. Now, this could just be because everyone in the U.S. is probably asleep, but this outage report is completely focused on the UK and Europe. Looking at Status Gator gets us a little more information, as the heat map is particularly prominent in Europe right now.

Update, 08:58 UTC: That's a big spike. Welcome to the OpenAI outage live blog! So far, 1,269 concurrent reports have been made over the past hour. We're getting an understanding of what's happening, and reaching out to OpenAI for comment.

Exclusive: How Claude became an emotional support bot
Axios

People who talk to Anthropic's Claude chatbot about emotional issues tend to grow more positive as the conversation unfolds, according to new Anthropic research shared exclusively with Axios.

Why it matters: Having a trusted confidant available 24/7 can make people feel less alone, but chatbots weren't designed for emotional support. Bots have displayed troubling tendencies, like reinforcing delusional behavior or encouraging self-harm, that are especially problematic for young people or adults struggling with their mental health.

Driving the news: Anthropic released new research Thursday that explores how users turn to its chatbot for support and connection and what happens when they do. While anecdotes of users prompting AI bots like Claude and ChatGPT for emotional support are widespread, Anthropic's report is the first formal acknowledgment by the AI provider of this use.

What they're saying: "We find that when people come to Claude for interpersonal advice, they're often navigating transitional moments — figuring out their next career move, working through personal growth, or untangling romantic relationships," per the report. The report calls these interactions with chatbots "affective use," defined roughly as personal exchanges with Claude motivated by emotional or psychological needs.

Zoom in: The report found evidence that users don't necessarily turn to chatbots deliberately looking for love or companionship, but some conversations evolve that way. "We also noticed that in longer conversations, counseling or coaching conversations occasionally morph into companionship — despite that not being the original reason someone reached out," per the report. "As these conversations with Claude progress, we found that the person's expressed sentiment often becomes more positive," Anthropic societal impacts researcher Miles McCain told Axios. "And while we can't claim that these shifts represent lasting emotional benefits, the absence of clear negative spirals is reassuring." Researchers behind the report told Axios that the results are preliminary and that measuring "expressed sentiment" can be limited.

By the numbers: Anthropic found that AI companionship isn't fully replacing the real thing anytime soon. Most people still use Claude for work tasks and content creation. A relatively small number (2.9%) of interactions with Claude constituted "affective use," a finding that confirms previous research from OpenAI. Companionship and roleplay combined were 0.5% of conversations. Romantic or sexual roleplay, which Claude's training actively discourages, was less than 0.1%, according to the report.

What they did: Anthropic analyzed user behavior with Clio, a tool it launched last year that works like Google Trends, aggregating chats while stripping out identifying details. Clio anonymizes and aggregates Claude chats to keep specific conversations private while revealing broader trends. This is similar to the way Google tracks what people are searching for without revealing (or allowing humans to have access to) personal search histories. The research excluded conversations that focused on writing stories, fictional dialogues, blog posts or other content. Among conversations that included roleplaying, Anthropic says the researchers only analyzed "meaningful interactive" chats, meaning those that included four or more human messages.

Yes, but: While the internet is full of people claiming that they've cut costs on therapy by turning to a chatbot, there's plenty of evidence that bots make particularly bad therapists because they're so eager to please users. Anthropic says it didn't study extreme usage patterns or how chatbots can reinforce delusions or conspiracy theories, which the company admits is worthy of a separate study. To keep chats private, the researchers only looked at clusters of conversations from multiple users and didn't analyze individual users' conversations over time, which makes it difficult to study emotional dependency, Anthropic notes. "We need large, rigorous trials that are of longer duration, because if you just relieve a person's anxiety, stress or depression on a very short term basis, that's not what we're after," physician-researcher Eric Topol told Axios. "We're after durable benefits. So I'm confident that we're going to get there, but we're not there yet."

Zoom out: Anthropic, founded by former OpenAI staff, pitches Claude as a more responsible alternative to ChatGPT. "Safety is deeply ingrained in everything that we do, and it underpins all of our work," Alexandra Sanderford, Anthropic's head of safeguards, policy and enforcement, told Axios. "We really do try to prioritize human values and welfare." The company has recently shared assessments of potentially alarming behavior by Claude in hypothetical test scenarios, including a willingness to blackmail users. Last week Anthropic also released findings
