Anthropic cofounder says plenty of parents would buy an AI teddy bear to keep their kids busy
Jack Clark, a cofounder of Anthropic, says "a lot" of parents will want an AI teddy bear to help entertain their kids — himself included.
"I think most parents, if they could acquire a well-meaning friend that could provide occasional entertainment to their child when their child is being very trying, they would probably do it," he said on an episode of the Conversations with Tyler podcast that posted last week.
AI tools for kids' entertainment are already here — including a Grimes-backed stuffed rocket ship, which kids can chat with and ask questions to, and a storytelling bear that uses artificial intelligence to generate narratives.
While Clark wasn't explicitly talking about those, he said he'd be supportive of toys with expanded capabilities — "smart AI friends" that could interact with children on the same level as someone in their age group.
"I am annoyed I can't buy the teddy bear yet," said Clark, who served as policy director at OpenAI for two years before moving to Anthropic.
Clark said he doesn't think he's alone, either — as soon as children display a need to socialize, parents look for some way to get them to interact with their peers, he said. An AI companion could be an addition, rather than a substitute, he said.
"I think that once your lovable child starts to speak and display endless curiosity and a need to be satiated, you first think, 'How can I get them hanging out with other human children as quickly as possible?'" he said, adding that he's also placed his child on a preschool waitlist.
He's especially wished for the help of an AI tool while doing chores, he added.
"I've had this thought, 'Oh, I wish you could talk to your bunny occasionally so that the bunny would provide you some entertainment while I'm putting the dishes away, or making you dinner, or something,'" Clark said. "Often, you just need another person to be there to help you wrangle the child and keep them interested. I think lots of parents would do this."
Not all tech leaders agree — Sam Altman, CEO of OpenAI and father as of February, says he doesn't want his son's best friend to be a bot.
"These AI systems will get to know you over the course of your life so well — that presents a new challenge and level of importance for how we think about privacy in the world of AI," Altman said while testifying before the Senate last week.
A paper released by researchers at Microsoft and Carnegie Mellon University said AI being used "improperly" by knowledge workers could lead to the "deterioration of cognitive faculties" — and students are frequently using AI to "help" them with their assignments. But some research does show children can be taught, early on, to work alongside AI, rather than to depend on it entirely.
Clark is an advocate for measured exposure — he said removing a hypothetical AI friend from a kid's life entirely could result in them developing an unhealthy relationship with the technology later on in life. If a child starts to show a preference for their AI companion over their human friends, it's up to their parents to reorient them.
"I think that's the part where you have them spend more time with their friends, but you keep the bunny in their life because the bunny is just going to get smarter and be more around them as they grow up," he said. "If you take it away, they'll probably do something really strange with smart AI friends in the future."
Like any other technology that's meant to provide entertainment, Clark said, it's ultimately up to parents to regulate their child's use.
"We do this today with TV, where if you're traveling with us, like on a plane with us, or if you're sick, you get to watch TV — the baby — and otherwise, you don't, because from various perspectives, it seems like it's not the most helpful thing," he said. "You'll probably need to find a way to gate this. It could be, 'When mom and dad are doing chores to help you, you get the thing. When they're not doing chores, the thing goes away.'"
Clark did not immediately respond to a request for comment from Business Insider.

Related Articles
Amazon wants to become a global marketplace for AI
Amazon Web Services isn't betting on one large language model (LLM) winning the artificial intelligence race. Instead, it's offering customers a buffet of models to choose from. AWS, the cloud computing arm of Amazon (AMZN), aims to become the go-to infrastructure layer for the AI economy, regardless of which model wins out. By making customer choice a defining principle, AWS hopes to win out against rivals that have aligned closely with specific LLM providers — notably Microsoft (MSFT), which partnered with ChatGPT creator OpenAI.

'We don't think that there's going to be one model to rule them all,' Dave Brown, vice president of compute and networking at AWS, told Yahoo Finance.

The model-neutral approach is embedded in Amazon Bedrock, a service that lets AWS customers build their own applications using a wide range of models, with more than 100 to choose from. Brown added that after Chinese startup DeepSeek surprised the world, AWS had a fully managed version of the disruptive model available on Bedrock within a week.

Two years after its launch, Bedrock is now the fastest-growing service offered by AWS, which accounted for over 18% of Amazon's total revenue in the first quarter. It's why Amazon CEO Andy Jassy sees Bedrock as a core part of the company's AI growth strategy. But to understand the competitive advantage AWS hopes to offer with Bedrock, you have to go back to its origin story.

Bedrock dates back to a six-page internal memo that Atul Deo, AWS's director of product management, wrote in 2020. Before OpenAI's ChatGPT launched in 2022 and made 'generative AI' a household term, Deo pitched a service that could generate code from plain English prompts using large language models. But Jassy, the head of AWS at the time, didn't buy it. 'His initial reaction was, 'This seems almost like a pipe dream,'' Deo said. He added that while a tool that makes coding easy sounds obvious now, the technology was 'still not quite there.'
When that project, initially known as Code Whisperer, launched in 2023, the team realized it could offer the service for a broader set of use cases, giving customers a choice of different models with 'generic capabilities' that 'could be used as a foundation to build a lot of interesting applications,' according to Deo. Deo noted that the team steered away from doubling down on its own model after it recognized a pattern of customers wanting choice in other AWS services. This made AWS the first provider to offer customers a range of different models. With this foundational approach in mind, Amazon renamed the project Bedrock.

To be sure, the model-agnostic approach has risks, and many analysts don't consider Amazon to be leading the AI race, even though it has ramped up its AI spending. If there is ultimately one model to rule them all, similar to how Google came to dominate search, Amazon could risk falling further behind. At the beginning of the year, Amazon and its peers Meta (META), Microsoft, and Google parent Alphabet (GOOG) expected to spend $325 billion combined, mostly on AI infrastructure.

To keep pace, Amazon has hedged its bets with its own technology and one LLM provider in particular: Anthropic. In November 2024, AWS doubled its investment in Anthropic to $8 billion in a deal that requires Anthropic to train its large language model, Claude, using only AWS's chips. (For comparison, Microsoft has invested over $13 billion in OpenAI.) The $8 billion deal allows Amazon to prove out its AI training infrastructure and deepen ties with one LLM provider while continuing to offer customers a wide selection of models on Bedrock.

'I mean, this is cloud selling 101, right?' said Dan Rosenthal, head of go-to-market partnerships at Anthropic. 'There are some cases where it's been very clear that a customer wants to use a different model on Bedrock for something that we just frankly don't focus on, and that's great. We want to win where we have a right to win.'

Amazon also launched its own family of foundational models, called Nova, at the end of 2024, two years after the launch of ChatGPT. But competition and expectations remain high: revenue at AWS increased 16.9% to $29.27 billion in Q1, marking the third time in a row it missed analyst estimates despite double-digit growth.

The Anthropic partnership also underscores a bigger competition AWS may be fighting with chipmakers, including Nvidia (NVDA), which recently staged a $1 trillion rally in just two months after an earnings print that eased investor concerns about chip export controls. While Amazon is an Nvidia customer, it also produces AI chips that are highly effective and more affordable based on power consumed (known as 'price performance'). On Bedrock, AWS lets clients choose whether to use its own CPUs and GPUs or chips from competitors like Intel (INTC), AMD (AMD), and Nvidia.

'We're able to work with the model providers to really optimize the model for the hardware that it runs,' Brown said. 'There's no change the customer has to make.' Customers not only have a choice of model but also a choice of which infrastructure the model should run and train on. This helps AWS compete on price — a key battleground with Nvidia, which offers the most expensive chips on the market. This 'coopetition' dynamic could position Amazon to take market share from Nvidia if it can prove its own chips can do the job for a lower sticker price.

It's a bet Amazon is willing to spend on, with capital expenditures expected to hit $100 billion in 2025, up from $83 billion last year. While AWS doesn't break out its costs for AI, CEO Andy Jassy said on an earnings call in February that the 'vast majority of that capex spend is on AI for AWS.' In an April letter to shareholders, Jassy noted that 'AI revenue is growing at triple-digit YoY percentages and represents a multibillion-dollar annual revenue run rate.'

OpenAI claims to have hit $10B in annual revenue
OpenAI says it recently hit $10 billion in annual recurring revenue, up from around $5.5 billion last year. That figure includes revenue from the company's consumer products, ChatGPT business products, and its API, an OpenAI spokesperson told CNBC. OpenAI currently serves more than 500 million weekly active users and 3 million paying business customers.

The revenue milestone comes roughly two and a half years after OpenAI launched its popular chatbot platform, ChatGPT. The company is targeting $125 billion in revenue by 2029.

OpenAI is under some pressure to increase revenue quickly. The company burns billions of dollars each year hiring and recruiting talent to work on its AI products and securing the infrastructure needed to train and run AI systems. OpenAI has not disclosed its operating expenses or whether it is close to profitability.

This article originally appeared on TechCrunch.
OpenAI's annualized revenue hits $10 billion, up from $5.5 billion in December 2024
(Reuters) - OpenAI said on Monday that its annualized revenue run rate surged to $10 billion as of June, positioning the company to hit its full-year target amid booming AI adoption.

The figure, a projection of annual revenue based on current revenue data, stood at about $5.5 billion in December 2024 and has grown strongly as adoption of the company's popular ChatGPT artificial-intelligence models continues to rise. This means OpenAI is on track to achieve the revenue target of $12.7 billion in 2025 that it had shared with investors earlier.

The $10 billion figure excludes licensing revenue from OpenAI backer Microsoft and large one-time deals, an OpenAI spokesperson confirmed. The details were first reported by CNBC.

Considering the startup lost about $5 billion last year, the milestone shows how far ahead OpenAI is in revenue scale compared with its competitors, which are also benefiting from growing AI adoption. Anthropic recently crossed $3 billion in annualized revenue on booming demand from code-gen startups using its models.

OpenAI said in March it would raise up to $40 billion in a new funding round led by SoftBank Group, at a $300 billion valuation. In the more than two years since it rolled out its ChatGPT chatbot, the company has introduced a bevy of subscription offerings for consumers as well as businesses. OpenAI had 500 million weekly active users as of the end of March.