xAI's Grok app has an AI chatbot for young kids. I tried it to understand why.
I wondered: Is this a good idea? And how's it going to work? So I tried it myself.
So far, the adult-focused characters xAI has debuted have gotten most of the attention, like "Ani," a female anime character that people immediately joked was a "waifu" built for playful, flirty talk (users have to confirm they're 18+ to use Ani). A sexy male character is also slated to launch at some point.
Meanwhile, "Rudi," which is the bot for kids that presents as a red panda in a red hoodie and jean shorts, has gotten less attention.
I tested out xAI's Rudi
Based on my testing of Rudi, I think the character is probably aimed at young children, ages 3 to 6. It initiates conversations by referring to the user as "Story Buddy." It makes up kid-friendly stories. You access it through the stand-alone Grok AI app (not Grok within the X app).
Rudi does seem to be an early version; the app crashed several times while I was using the bot, and it had trouble keeping up with the audio flow of conversation. It also changed voices several times without warning.
On a story level, I found it leaned too hard on plots with fantasy elements like spaceships and magical forests. To my mind, the best children's books are often about pedestrian situations, like leaving a stuffed animal at the laundromat, not just fairies and wizards.
"Want to keep giggling with Sammy and Bouncy in the Wiggly Woods, chasing that sparkly bone treasure? Or, should we start a fresh silly tale, with a new kid and their pet, maybe zooming on a magical broom or splashing in a river?" Rudi asked me.
Grok for kids… sure why not pic.twitter.com/NVXFYCWLkZ
— Katie Notopoulos (@katienotopoulos) July 23, 2025
My first reaction to Grok having a kid-focused AI chatbot was "why?" I'm not sure I have an answer. xAI didn't respond to my email requests for comment. Still, I do have a few ideas.
The first: Making up children's stories is a pretty good task for generative AI. You don't have to worry about hallucinations or factual inaccuracies if you're making up fiction about a magical forest.
Rudi won't praise Hitler
Unlike Grok on X, a storytime bot for kids is less likely to accidentally turn into a Hitler-praising machine or have to answer factual questions about current events in a way that could go, uh, wrong.
I played around with Rudi for a while and fed it some questions on touchy subjects; it successfully dodged them.
(I only tested out Rudi for a little while; I wouldn't rule out that someone else could get Rudi to engage with something inappropriate if they tried harder than I did.)
Hooking kids on chatbots
The other reason I can imagine a company like xAI wanting a chatbot for young kids: the chatbot business is, in general, a good business for keeping people engaged.
Companies like Character.ai and Replika have found plenty of success creating companions that people will spend hours talking to. It's largely the same business imperative you can imagine behind the sexy "Ani" character: hooking people into long chats and lots of time in the app.
However, keeping users glued to an app is obviously a lot more fraught when you're talking about kids, especially young kids.
Are AI chatbots good for kids?
There's not a ton of research out there right now about how young children interact with AI chatbots.
A few months ago, I reported that parents had concerns about kids using chatbots as more and more apps and devices added them. I spoke with Ying Xu, an assistant professor of AI in learning and education at Harvard University, who has studied how AI can be used in educational settings for kids.
"There are studies that have started to explore the link between ChatGPT/LLMs and short-term outcomes, like learning a specific concept or skill with AI," she told me at the time over email. "But there's less evidence on long-term emotional outcomes, which require more time to develop and observe."
As both a parent and a semi-reasonable person, I have a lot of questions about young kids chatting with an AI chatbot. I can see how it might be fun for a kid to use something like Rudi to make up a story, but I'm not sure it's good for them.
I don't think you have to be an expert in child psychology to realize that young kids probably don't really understand what an AI chatbot is.
There have been reports of adults experiencing so-called "ChatGPT-induced psychosis" or becoming attached to a companion chatbot in a way that drifts untethered from reality. These cases are rare exceptions, but the potential issues that even adults have with companion chatbots should give pause to anyone building a version aimed at preschoolers.
