Latest news with #TeddyRuxpin
Yahoo
a day ago
Mattel and OpenAI have partnered up – here's why parents should be concerned about AI in toys
Mattel may seem like an unchanging, old-school brand. Most of us are familiar with it – be it through Barbie, Fisher-Price, Thomas & Friends, Uno, Masters of the Universe, Matchbox, MEGA or Polly Pocket. But toys are changing. In a world where children grow up with algorithm-curated content and voice assistants, toy manufacturers are looking to AI for new opportunities. Mattel has now partnered with OpenAI, the company behind ChatGPT, to bring generative AI into some of its products.

As OpenAI's services are not designed for children under 13, in principle Mattel will focus on products for families and older children. But this still raises urgent questions about what kind of relationships children will form with toys that can talk back, listen and even claim to 'understand' them. Are we doing right by kids, and do we need to think twice before bringing these toys home?

For as long as there have been toys, children have projected feelings and imagined lives onto them. A doll could be a confidante, a patient or a friend. But over recent decades, toys have become more responsive. In 1960, Mattel released Chatty Cathy, which chirped 'I love you' and 'Let's play school'. By the mid-1980s, Teddy Ruxpin had introduced animatronic storytelling. Then came Furby and Tamagotchi in the 1990s, creatures requiring care and attention, mimicking emotional needs.

The 2015 release of 'Hello Barbie', which used cloud-based AI to listen and respond to children's conversations, signalled another important, albeit short-lived, change. Barbie now remembered what children told her, sending data back to Mattel's servers. Security researchers soon showed that the dolls could be hacked, exposing home networks and personal recordings.
Putting generative AI in the mix is a new development. Unlike earlier talking toys, such systems will engage in free-flowing conversation. They may simulate care, express emotion, remember preferences and give seemingly thoughtful advice. The result will be toys that don't just entertain, but interact on a psychological level. Of course, they won't really understand or care, but they may appear to.

Details from Mattel or OpenAI are scarce. One would hope that safety features will be built in, including limitations on topics and pre-scripted responses for sensitive themes and when conversations go off course. But even this won't be foolproof. AI systems can be 'jailbroken', or tricked into bypassing restrictions through roleplay or hypothetical scenarios. Risks can only be minimised, not eradicated.

The risks are multiple. Let's start with privacy. Children can't be expected to understand how their data is processed. Parents often don't either – and that includes me. Online consent systems nudge us all to click 'accept all', often without fully grasping what's being shared.

Then there's psychological intimacy. These toys are designed to mimic human empathy. If a child comes home sad and tells their doll about it, the AI might console them. The doll could then adapt future conversations accordingly. But it doesn't actually care. It's pretending to, and that illusion can be powerful. This creates potential for one-sided emotional bonds, with children forming attachments to systems that cannot reciprocate.

As AI systems learn about a child's moods, preferences and vulnerabilities, they may also build data profiles that follow children into adulthood. These aren't just toys, they're psychological actors. A UK national survey I conducted with colleagues in 2021, about the possibilities of AI in toys that profile children's emotions, found that 80% of parents were concerned about who would have access to their child's data.
Other privacy questions that need answering are less obvious, but arguably more important. When asked whether toy companies should be obliged to flag possible signs of abuse or distress to authorities, 54% of UK citizens agreed – suggesting the need for a social conversation with no easy answer. While vulnerable children should be protected, state surveillance of the family domain has little appeal.

Yet despite concerns, people also see benefits. Our 2021 survey found that many parents want their children to understand emerging technologies. This leads to a mixed response of curiosity and concern. Parents we surveyed also supported clear consent notices, printed on packaging, as the most important safeguard.

My more recent 2025 research with Vian Bakir on online AI companions and children found stronger concerns. Some 75% of respondents were concerned about children becoming emotionally attached to AI. About 57% thought it inappropriate for children to confide in AI companions about their thoughts, feelings or personal issues (17% thought it appropriate, and 27% were neutral). Our respondents were also concerned about the impact on child development, seeing scope for harm.

In other research, we have argued that current AI companions are fundamentally flawed. We provide seven suggestions for redesigning them, involving remedies for over-attachment and dependency, removal of metrics based on extending engagement through personal information disclosure, and promotion of AI literacy among children and parents (which represents a huge marketing opportunity for positively leading the social conversation).

It's hard to know how successful the new venture will be. It might be that Empathic Barbie goes the way of Hello Barbie, into toy history. If it does not, the key question for parents is this: whose interests is this toy really serving – your child's, or those of a business model?
Toy companies are moving ahead with empathic AI products, but the UK, like many countries, doesn't yet have a specific AI law. The new Data (Use and Access) Act 2025 updates the UK's data protection rules and its privacy and electronic communications regulations, recognising the need for strong protections for children. The EU's AI Act also makes important provisions.

International governance efforts are vital. One example is IEEE P7014.1, a forthcoming global standard on the ethical design of AI systems that emulate empathy (I chair the working group producing the standard). The standard identifies potential harms and offers practical guidance on what responsible use looks like. So while laws should set limits, detailed standards can help define good practice.

The Conversation approached Mattel about the issues raised in this article and it declined to comment publicly.

This article is republished from The Conversation under a Creative Commons license. Read the original article. Andrew McStay is funded by EPSRC Responsible AI UK (EP/Y009800/1) and is affiliated with IEEE.


Vox
19-06-2025
What we learned the last time we put AI in a Barbie
Adam Clark Estes is a senior technology correspondent at Vox and author of the User Friendly newsletter. He's spent 15 years covering the intersection of technology, culture, and politics at places like The Atlantic, Gizmodo, and Vice.

The first big Christmas gift I remember getting was an animatronic bear named Teddy Ruxpin. Thanks to a cassette tape hidden in his belly, he could talk, his eyes and mouth moving in a famously creepy way. Later that winter, when I was sick with a fever, I hallucinated that the toy came alive and attacked me. I never saw Teddy again after that.

These days, toys can do a lot more than tell pre-recorded stories. So-called smart toys, many of which are internet-connected, are a $20 billion business, and increasingly, they're artificially intelligent. Mattel and OpenAI announced a partnership last week to 'bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy, and safety.' They're planning to announce their first product later this year. It's unclear what this might entail: maybe it's Barbies that can gossip with you, or a self-driving Hot Wheels, or something we haven't even dreamed up yet.

All of this makes me nervous as a young parent. I already knew that generative AI was invading classrooms and filling the internet with slop, but I wasn't expecting it to take over the toy aisle so soon. After all, we're already struggling to figure out how to manage our kids' relationship with the technology in their lives, from screen time to the uncanny videos made to trick YouTube's algorithm. As it seeps further into our society, a growing number of people are using AI without even realizing it. So you can't blame me for being anxious about how children might encounter the technology in unexpected ways.
AI-powered toys are not as new as you might think. They're not even new for Mattel. A decade ago, the toy giant released Hello Barbie, an internet-connected doll that listened to kids and used AI to respond (think Siri, not ChatGPT). It was essentially the same concept as Teddy Ruxpin, except with a lot of digital vulnerabilities. Naturally, security researchers took notice and hacked Hello Barbie, revealing that bad actors could steal personal information or eavesdrop on conversations children were having with the doll. Mattel discontinued the doll in 2017. Hello Barbie later made an appearance in the Barbie movie alongside other poor toy choices like Sugar Daddy Ken and Pregnant Midge.

Despite this cautionary tale, companies keep trying to make talking AI toys a thing. One more recent example comes from the mind of Grimes, of all people. Inspired by the son she shares with Elon Musk, the musician teamed up with a company called Curio to create a stuffed rocket ship named Grok. The embodied chatbot is supposed to learn about whomever is playing with it and become a personalized companion. In real life, Grok is frustratingly dumb, according to Katie Arnold-Ratliff, a mom and writer who chronicled her son's experience with the toy in New York magazine last year.

'When it started remembering things about my kid, and speaking back to him, he was amazed,' Arnold-Ratliff told me this week. 'That awe very quickly dissipated once it was like, why are you talking about this completely unrelated thing.' Grok is still somewhere in their house, she said, but it has been turned off for quite some time.
It turns out Arnold-Ratliff's son is more interested in inanimate objects that he can make come alive with his imagination. Sure, he'll play Mario on his Nintendo Switch for long stretches of time, but afterward, he'll draw his own worlds on paper. He'll even create digital versions of new levels on Super Mario Maker but get frustrated when the software can't keep up with his imagination.

This is a miraculous paradox when it comes to kids and certain tech-powered toys. Although an adult might think that, for instance, AI could prompt kids to think about play in new ways or become an innovative new imaginary friend, kids tend to prefer imagining on their own terms. That's according to Naomi Aguiar, PhD, a researcher at Oregon State University who studies how children form relationships with AI chatbots. 'There's nothing wrong with children's imaginations. They work fine,' Aguiar said. 'What captures the hearts and minds of young children is often what they create for themselves with the inanimate artifacts.'

Aguiar did concede that AI can be a powerful educational tool for kids, especially for those who don't have access to resources or who may be on the spectrum. 'If we focus on solutions to specific problems and train the models to do that, it could open up a lot of opportunities,' she told me. Putting AI in a Barbie, however, is not solving a particular problem.

None of this means that I'm allergic to the concept of tech-centric toys for kids. Quite the opposite, in fact. Ahead of the Mattel-OpenAI announcement, I'd started researching toys my kid might like that incorporated some technology — enough to make them especially interesting and engaging — but stopped short of triggering dystopian nightmares. Much to my surprise, what I found was something of a mashup between completely inanimate objects and that terrifying Teddy Ruxpin.
One of these toys is called a Toniebox, a screen-free audio player with little figurines called Tonies that you put atop the box to unlock content — namely songs, stories, and so forth. Licenses abound, so you can buy a Tonie that corresponds with pretty much any popular kids character, like Disney princesses or Paddington Bear. There are also so-called Creative Tonies that allow you to upload your own audio. For instance, you could ostensibly have a stand-in for a grandparent to enable story time, even if Grandma and Grandpa are not physically there. The whole experience is mediated with an app that the kid never needs to see.

There's also the Yoto Player and the Yoto Mini, which are similar to the Toniebox but use cards instead of figurines and have a very low-resolution display that can show a clock or a pixelated character. Because it has that display, kids can also create custom icons to show up when they record their own content onto a card. Yoto has been beta-testing an AI-powered story generator, which is designed for parents to create custom stories for their kids.

If those audio players are geared toward story time, a company called Nex makes a video game console for playtime. It's called Nex Playground, and kids use their movements to control it. This happens thanks to a camera equipped with machine-learning capabilities to recognize your movements and expressions. So imagine playing Wii Sports, but instead of throwing the Nintendo controller through your TV screen when you're trying to bowl, you make the bowling motion to play the game. Nex makes most of its games in-house, and all of the computation needed for its gameplay happens on the device itself. That means there's no data being collected or sent to the cloud. Once you download a game, you don't even have to be online to play it.
'We envision toys that can just grow in a way where they become a new way to interact with technology for kids and evolve into something that's much deeper, much more meaningful for families,' David Lee, CEO of Nex, said when I asked him about the future of toys. It will be a few more years before I have to worry about my kid's interactions with a video game console, much less an AI-powered Barbie — and certainly not Teddy Ruxpin. But she loves her Toniebox. She talks to the figurines and lines them up alongside each other, like a little posse. I have no idea what she's imagining them saying back. In a way, that's the point.