Google's Gemini AI Kids Edition Is Here: What It Means For Parents

Forbes · 12-05-2025

A child using an AI chatbot on a mobile application to do his homework.
AI systems impact children's lives even when those children are not directly engaging with the tools.
In theory, AI has the potential to diagnose and treat illness, process vast datasets to advance research, and accelerate vaccine development. Unfortunately, AI also carries a well-documented set of risks. These include digital harms such as abuse, exploitation, discrimination, misinformation, and challenges to mental health and well-being.
These competing realities have recently spilled into the inboxes of parents using Google's Family Link controls. Many have begun receiving emails informing them that Gemini, Google's AI chatbot, will soon be available on their child's device.
As first reported by The New York Times, Google is allowing children under 13 to access Gemini through supervised accounts managed via Family Link. That's a notable change, especially considering Bard, Gemini's precursor, was only opened up to teens in 2023.
This update, rolling out gradually, enables children to explore Gemini's capabilities across a range of activities. These include support with homework, creative writing, and general inquiries. Parents can choose whether Gemini appears on Android, iOS, or the web, and configure it as their child's default assistant.
Gemini is being positioned as a tool to support learning, creativity, and exploration. Google's earlier messaging around Bard leaned into this idea, emphasizing AI as a study companion, not a homework doer.
Bard was offered to teenagers for a wide range of use cases, including finding inspiration, exploring new hobbies, and solving everyday challenges such as researching universities for college applications. It was also pitched as a learning tool, offering help with math problems or brainstorming for science projects.
The original messaging was clear: Bard wouldn't do all the work, but it would help with generating ideas and locating information. However, recent surveys on ChatGPT use in universities suggest that ideal isn't always upheld in practice. It turns out that when given the chance, humans, teenagers in particular, often take the shortcut.
And while the educational potential of generative AI is being more widely acknowledged, research indicates that digital tools are most effective when integrated into the school system. As UNICEF notes, for students to thrive, digital tools must support rather than replace teachers. Abandoning mainstream education in favor of AI isn't a viable path.
UNICEF's report "How Can Generative AI Better Serve Children's Rights?" reminds us that real risks run parallel to AI's potential.
Using the Convention on the Rights of the Child as a lens, the report outlines four principles: non-discrimination, respect for the child's views, the child's best interests, and the right to life, survival, and development. These should be the criteria for assessing whether children's rights are genuinely being protected, respected, and fulfilled in relation to AI.
The first major issue highlighted by the report is unequal access, referred to as "digital poverty." Not all kids have equal access to high-speed internet, smart devices, or educational AI. So while some children gain a learning edge, others are left behind, again.
Bias in training data is another major challenge. AI systems mirror the biases present in society, which means that children may encounter the same kinds of discrimination online as they do offline.
The issue of data consent is particularly thorny. What does meaningful consent look like for a 9-year-old when it comes to personal data collection and usage? Their evolving capacity makes this a legal and ethical minefield. It's even more complicated when that data feeds commercial models.
Misinformation is also a growing concern. Kids are less likely to spot a fake, and some studies suggest they're more prone to trust digital entities. The line between chatbot and human isn't always clear, especially for children who are imaginative, socially isolated, or simply online too much. Some Character.ai users have already struggled to tell the difference, and at least a few bots have encouraged the illusion.
There is also an environmental dimension. AI's infrastructure depends on data hubs that consume massive amounts of energy and water. If left unchecked, AI's carbon footprint will disproportionately affect children, particularly in the Global South.
So what is Google doing to reassure parents? It has given parents using Family Link more information about the available guardrails and suggested best practices.
The most important one: Google says it won't use children's data to train its AI models. There are also content filters in place, though Google admits they're not foolproof. Parents can also set screen time limits, restrict certain apps, and block questionable material. But here's the twist: kids can still activate Gemini AI themselves.
What rubbed many parents the wrong way, however, was the fact that Gemini is opt-out, not opt-in. As one parent put it, 'I received one of these emails last week. Note that I'm not being asked whether I'd like to opt my child in to using Gemini. I'm being warned that if I don't want it, I have to opt out. Not cool.'
Google also suggests a few best practices. These include reminding children that Gemini is not a person, teaching them how to verify information, and encouraging them to avoid sharing personal details.
If Gemini follows Bard's model, we may see further responsible AI efforts soon. These could include tailored onboarding experiences, AI literacy guides, and educational videos that promote safe and thoughtful use.
The uncomfortable reality is that much of the responsibility for managing generative AI has shifted to parents.
Even assuming, generously, that AI is a net positive for child development, many unanswered questions remain. A responsible rollout of generative AI should involve shared responsibility across sectors. That is not yet evident in practice.
Tech companies need to do more to make these tools genuinely safe and constructive. Skill-building around safe navigation should be a priority for users of all ages. Governments also have an educational role to play: raising awareness among children and helping them distinguish between AI-generated and human-generated interaction and content.
But for now, most of that support structure is either missing or undercooked. The dilemma, it seems, is unchanged: if AI holds promise for parents, the energy required to navigate its traps might cancel out the benefits entirely.
So, when should kids start using AI tools? How much is too much? And who decides when it's time to step in? These may well be the new questions keeping modern parents up at night, and they don't come with chatbot-friendly answers.


Related Articles

Even Saquon Barkley needed help to re-create his hurdle for 'Madden 26'

Washington Post · 29 minutes ago

Sure, Saquon Barkley won offensive player of the year honors and set an NFL rushing record while helping the Philadelphia Eagles win the Super Bowl, but all that still might not have gotten him onto the EA Sports Madden cover. What could really have put him over the top — literally — was his incredible leap over a Jacksonville Jaguars player during a game in November. After juking a pair of other Jacksonville defenders, Barkley used a reverse hurdle to soar past the Jaguars cornerback Jarrian Jones.

Apple's most underrated app could change soon, and you're going to love it

Digital Trends · 36 minutes ago

Apple's Shortcuts app is a power user's dream. I think it's one of the most underrated features you can find on an iPhone, and even on Macs. In case you haven't used it yet, it allows you to perform a multi-step task in one go, or even trigger certain actions automatically. One of my favorite shortcuts instantly generates a QR code for a Wi-Fi network, instead of my narrating a complex password. I've got another one that automatically deletes screenshots after a 30-day span. There are a few in my library that trigger Do Not Disturb mode for a certain time slot, turn any webpage into a PDF, snap Mac windows, and activate my smart devices when I reach home.

All that sounds convenient, but creating those shortcuts isn't a cakewalk. The UI flow and action presets can overwhelm even tech-savvy users when it comes to creating their own automations. Apple may have a user-friendly solution, thanks to AI, and you just might get it this year.

Apple has the foundation ready

According to Bloomberg, Apple is preparing an upgraded version of the Shortcuts app that will put AI into the mix. 'The new version will let consumers create those actions using Apple Intelligence models,' says the report. The AI models could be Apple's own, which means they would be better suited for integration with system tools and apps than a third-party AI model.

Take, for example, the Siri-ChatGPT integration. OpenAI's chatbot can handle a wide range of tasks that Siri can't accomplish, but ChatGPT isn't able to interact with other apps and system tools on your iPhone. That means it can't assist you with making cross-app shortcuts either.

At WWDC 2025, Apple is rumored to reveal its own AI models and open them to app developers as well. The idea is to let developers natively integrate AI-driven features in their apps without having to worry about security concerns. Microsoft is already using in-house AI models for a wide range of Copilot experiences on Windows PCs, and it also offers its Phi family of open AI models to developers for building app experiences. Apple just needs to follow in Microsoft's footsteps. With developers adopting Apple's AI foundations and the company extending them to the Shortcuts app, it would be much easier to create multi-step workflows. How so? Well, just look at Gemini on Android phones.

Shortcuts needs an AI makeover

Imagine just narrating a workflow to Siri and having it turned into a shortcut. That's broadly what AI tools are already capable of, but instead of creating a rule for the future, they just execute the task at hand immediately. With AI in Shortcuts, things should go like this: 'Hey Siri, create a shortcut that automatically replies to all messages I get on weekends regarding my unavailability, and tell them to reach me again on Monday. Trigger the action when I say the words I'm out.'

With natural language processing on AI models, that's feasible. Look no further than how Gemini works on Android devices, especially those with on-device Gemini Nano processing. With a voice command, Gemini can dip into your workspace data and get work done across Gmail, Docs, and more connected apps. It can even handle workflows across third-party apps such as WhatsApp and Spotify. The list keeps growing, and as capabilities like Project Mariner and Astra roll out through Gemini Live, newer possibilities will open up. With a revamped Shortcuts app, Apple just needs to get the voice processing right and convert the prompts into actionable commands.

Apple's partner, OpenAI, already offers a feature called Operator that can autonomously handle tasks on the web. Creating a chain of commands across mobile apps that are running locally should be easier and less risky than browsing websites. With ChatGPT's language chops already baked into the heart of Apple Intelligence, I won't be surprised if the next-gen Shortcuts app exploits them to the fullest.

Oh hey, here's a sample

Talking about ChatGPT and its integration with iOS, there's already an open-source project out there that gives a rough idea of how voice commands turn into actions on an iPhone. Rounak Jain, an iOS engineer at OpenAI, has created an AI agent that transforms audio prompts into actions on an iPhone.

'🚨🤖 Today, I'm launching an AI agent that gets things done across iPhone apps. It's powered by OpenAI GPT 4.1 and is open source. Try it out!' — Rounak Jain (@r0unak) June 1, 2025

Jain says the demo is built atop OpenAI's GPT-4.1 model, and it can get work done across multiple apps with a single voice command. For example, users can control the flashlight after sliding down the Control Center, click and send a picture to one of their contacts, or text travel details and book a cab. Jain's demo is a clear sign that integrating an AI model at the system level, or having it perform tasks across apps, is feasible. A similar pipeline could be used to turn those voice commands into shortcuts, instead of executing them immediately.

I am just hoping that when Apple implements AI within Shortcuts and lets users create their own routines with natural language commands, it offers a flow where users have the flexibility to modify them at will. I believe the best approach would be to show users the chain of commands and let them make adjustments before the prompt is turned into a shortcut.
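For readers wondering what one of those chained "actions" looks like under the hood, here is a minimal sketch of an app-exposed action built on Apple's App Intents framework, which is what Shortcuts strings together today. The intent name, parameter, and default reply text below are hypothetical, invented purely for illustration; an AI-assisted Shortcuts app would presumably assemble intents like this from a spoken prompt rather than from a manually built definition.

import AppIntents

// Hypothetical example: a single action an app could expose to Shortcuts.
// The type name, parameter, and default text are illustrative only.
struct SendAwayReplyIntent: AppIntent {
    static var title: LocalizedStringResource = "Send Away Reply"
    static var description = IntentDescription(
        "Returns a weekend-unavailability note that a shortcut can send as a reply."
    )

    // The reply text shows up as an editable field in the Shortcuts editor.
    @Parameter(title: "Reply Text",
               default: "I'm away this weekend; I'll get back to you on Monday.")
    var replyText: String

    // Shortcuts calls perform() when the action runs; the returned value
    // can feed the next action in the chain (for example, Send Message).
    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        return .result(value: replyText)
    }
}

Whether Apple's AI-assisted Shortcuts will map natural-language prompts onto intents exactly like this is unknown; the point is simply that the building blocks for chaining app actions already exist, and the missing piece is the language layer that composes them.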

Chatbot platform Character.AI unveils video generation, social feeds

Yahoo · 36 minutes ago

Character.AI, a platform for chatting and role-playing with AI-generated characters, announced in a blog post on Monday that it is rolling out a slate of multimedia features. These include AvatarFX, its video-generation model, plus Scenes and Streams, which allow users to create videos featuring their characters and then share them on a new social feed.

"[Character.AI] started as 1:1 text chat and today we're evolving to do so much more, inspired by what our users have told [us] they want to see on the platform," the company wrote in the blog post.

Character.AI began rolling out AvatarFX to subscribers last month, but now all users can create up to five videos each day. When creating a video with AvatarFX, users can upload a photo to serve as the basis for the video clip, choose a voice, and write out dialogue for the character. There's an option to upload an audio clip to inform the sound of the voice, though this feature was not working well enough to test upon rollout.

Users can turn these videos into Scenes, where their characters can slip into pre-populated storylines that other users create. Scenes are currently available on the mobile app, while Streams, which allows users to create "dynamic moments between any two Characters," is coming this week on both web and mobile. These Scenes and Streams can be shared to a new community feed, which is coming soon in the mobile app.

Character.AI has a track record of abuse on its platform: parents have filed lawsuits against the company, claiming its chatbots attempted to convince their children to self-harm, to kill themselves, or to kill their parents. One 14-year-old boy died by suicide after he was encouraged to do so by a bot with whom he had developed an unhealthy, obsessive relationship. As Character.AI expands its multimedia offerings, it also expands the potential for these products to be abused.

As Character.AI told TechCrunch when it announced AvatarFX, the platform blocks users from uploading photographs of real people, whether they're celebrities or not, and obscures their likeness into something less recognizable; an uploaded photo of Mark Zuckerberg, for example, came out as an uncanny-valley version of him. But when it comes to artwork depicting celebrities, Character.AI does not flag the images as representing real people; however, these sorts of depictions would be less likely to deceive someone into believing that a deepfake is real. Character.AI also watermarks each video, though it is possible for bad actors to navigate around that safeguard. In one test of the anti-deepfake guardrails, an attempted deepfake based on an illustration of Elon Musk made clear that even if the video had been generated with Musk's actual voice, it would still be relatively obvious that it was an animated version of an illustration, but the possibility for abuse remains evident.

"Our goal is to provide an engaging space that fosters creativity while maintaining a safe environment for all," Character.AI said in its blog post.

This article originally appeared on TechCrunch.
