
Latest news with #FamilyLink

Google revamps Family Link with better parental controls

Phone Arena

20-05-2025



Google has just updated its Family Link app with a redesigned interface and new supervision features, giving parents more powerful tools to manage their children's digital activity and screen time. The update is already live and is rolling out to all Android phones and tablets that support the Family Link app.

The new layout is more intuitive and easier to use. | Image credit – Google

The Family Link app now presents three central sections: Screen Time, Controls and Location. The Screen Time section provides all the tools needed to track screen activity while enabling parents to set time restrictions for individual apps and configure Downtime and School Time. Through the Controls section, parents gain straightforward access to Google Play app approvals, privacy settings and web restrictions for Chrome, YouTube and Search. The Location tab allows parents to view their child's device location and battery status in real time. For parents with more than one child, Family Link now allows easy profile switching, making it simpler to manage multiple accounts from one device.

Improved control during school hours. | Image credit – Google

The School Time feature, which used to be limited to the Fitbit ACE LTE smartwatch and the Galaxy Watch for Kids, now extends to Android smartphones and tablets. It lets parents select time periods when their child's device is restricted or muted during school hours. During School Time the system limits app usage and notifications while keeping educational tools accessible to students. Parents can also schedule breaks during school hours, such as at lunch or recess, and determine which apps remain accessible during School Time and other custom modes.

Parents decide who can contact their child. | Image credit – Google

Another upcoming improvement is parent-managed contacts. Parents will be able to manage their child's contact permissions directly through the Family Link app: children can request that a contact be added, but only verified contacts will get access. The feature draws from existing capabilities in Google's child-oriented smartwatches and will become available to all Android phone users. Google is also enabling message protection by default for users under 18, which will filter sensitive content while keeping communication safer. Users can opt out of this feature, though.

Google is also preparing to launch Google Wallet for kids in the coming months. Parents will be able to load cards to their child's account, set permissions, and track spending, giving kids the ability to make NFC purchases or carry digital tickets and gift cards this year. Google also plans to bring educational tools like Gen AI Lab and NotebookLM to teens using Family Link, signaling an ongoing investment in learning and responsible digital experiences. Whether it's managing screen time, contacts, or school schedules, these updates make Family Link a more robust and flexible tool for Android families, with no third-party apps required.

Google's Gemini AI Kids Edition Is Here: What It Means For Parents

Forbes

12-05-2025



A child using an AI chatbot on a mobile application to do his homework.

AI systems impact children's lives even when those children are not directly engaging with the tools. In theory, AI has the potential to diagnose and treat illness, process vast datasets to advance research, and accelerate vaccine development. Unfortunately, AI also carries a well-documented set of risks. These include digital harms such as abuse, exploitation, discrimination, misinformation, and challenges to mental health and well-being.

These competing realities have recently spilled into the inboxes of parents using Google's Family Link controls. Many have begun receiving emails informing them that Gemini, Google's AI chatbot, will soon be available on their child's device. As first reported by The New York Times, Google is allowing children under 13 to access Gemini through supervised accounts managed via Family Link. That's a notable change, especially considering Bard, Gemini's precursor, was only opened up to teens in 2023.

This update, rolling out gradually, enables children to explore Gemini's capabilities across a range of activities, including support with homework, creative writing, and general inquiries. Parents can choose whether Gemini appears on Android, iOS, or the web, and configure it as their child's default assistant.

Gemini is being positioned as a tool to support learning, creativity, and exploration. Google's earlier messaging around Bard leaned into this idea, emphasizing AI as a study companion, not a homework doer. Bard was offered to teenagers for a wide range of use cases, including finding inspiration, exploring new hobbies, and solving everyday challenges such as researching universities for college applications. It was also pitched as a learning tool, offering help with math problems or brainstorming for science projects. The original messaging was clear: Bard wouldn't do all the work, but it would help with generating ideas and locating information.

However, recent surveys on ChatGPT use in universities suggest that ideal isn't always upheld in practice. It turns out that when given the chance, humans, teenagers in particular, often take the shortcut. And while the educational potential of generative AI is being more widely acknowledged, research indicates that digital tools are most effective when integrated into the school system. As UNICEF notes, for students to thrive, digital tools must support rather than replace teachers. Abandoning mainstream education in favor of AI isn't a viable path.

UNICEF's report "How Can Generative AI Better Serve Children's Rights?" reminds us that real risks run parallel to AI's potential. Using the Convention on the Rights of the Child as a lens, the report outlines four principles: non-discrimination, respect for the child's views, the child's best interests, and the right to life, survival, and development. These should be the criteria for assessing whether children's rights are genuinely being protected, respected, and fulfilled in relation to AI.

The first major issue highlighted by the report is unequal access, referred to as "digital poverty." Not all kids have equal access to high-speed internet, smart devices, or educational AI. So while some children gain a learning edge, others are left behind, again.

Bias in training data is another major challenge. AI systems mirror the biases present in society, which means that children may encounter the same kinds of discrimination online as they do offline.

The issue of data consent is particularly thorny. What does meaningful consent look like for a 9-year-old when it comes to personal data collection and usage? Their evolving capacity makes this a legal and ethical minefield. It's even more complicated when that data feeds commercial models.

Misinformation is also a growing concern. Kids are less likely to spot a fake, and some studies suggest they're more prone to trust digital entities. The line between chatbot and human isn't always clear, especially for children who are imaginative, socially isolated, or simply online too much. Some users have already struggled to tell the difference, and at least a few bots have encouraged the illusion.

There is also an environmental dimension. AI's infrastructure depends on data centers that consume massive amounts of energy and water. If left unchecked, AI's carbon footprint will disproportionately affect children, particularly in the Global South.

So what is Google doing to offer reassurances to parents? Google has given parents using Family Link more information about available guardrails and suggested best practices. The most important one: Google says it won't use children's data to train its AI models. There are also content filters in place, though Google admits they're not foolproof. Parents can also set screen time limits, restrict certain apps, and block questionable material. But here's the twist: kids can still activate Gemini themselves.

What rubbed many parents the wrong way, however, was the fact that Gemini is opt-out, not opt-in. As one parent put it: "I received one of these emails last week. Note that I'm not being asked whether I'd like to opt my child in to using Gemini. I'm being warned that if I don't want it, I have to opt out. Not cool."

Google also suggests a few best practices. These include reminding children that Gemini is not a person, teaching them how to verify information, and encouraging them to avoid sharing personal details. If Gemini follows Bard's model, we may see further responsible AI efforts soon. These could include tailored onboarding experiences, AI literacy guides, and educational videos that promote safe and thoughtful use.

The uncomfortable reality is that much of the responsibility for managing generative AI has shifted to parents. Even assuming, generously, that AI is a net positive for child development, many unanswered questions remain. A responsible rollout of generative AI should involve shared responsibility across sectors. That is not yet evident in practice. Tech companies need to do more to make these tools genuinely safe and constructive. Skill-building around safe navigation should be a priority for users of all ages. Governments also have an educational role to play: raising awareness among children and helping them distinguish between AI-generated and human-generated interaction and content. But for now, most of that support structure is either missing or undercooked.

The dilemma, it seems, is unchanged: if AI holds promise for parents, the energy required to navigate its traps might cancel out the benefits entirely. So, when should kids start using AI tools? How much is too much? And who decides when it's time to step in? These may well be the new questions keeping modern parents up at night, and they don't come with chatbot-friendly answers.

Google is rolling out its Gemini AI chatbot to kids under 13. It's a risky move

The Advertiser

09-05-2025



Google has announced it will roll out its Gemini artificial intelligence (AI) chatbot to children under the age of 13. The launch starts within the next week in the United States and Canada, with Australia to follow later this year. The chatbot will only be available via Google's Family Link accounts.

But this development comes with major risks. It also highlights how, even if children are banned from social media, parents will still have to play a game of whack-a-mole with new technologies as they try to keep their children safe. A good way to address this would be to urgently implement a digital duty of care for big tech companies such as Google.

Google's Family Link accounts allow parents to control access to content and apps, such as YouTube. To create a child's account, parents provide personal details, including the child's name and date of birth. This may raise privacy concerns for parents worried about data breaches, but Google says children's data from the system will not be used to train the AI.

Chatbot access will be "on" by default, so parents need to actively turn the feature off to restrict access. Young children will be able to prompt the chatbot for text responses, or to create images generated by the system.

Google acknowledges the system may "make mistakes", so assessment of the quality and trustworthiness of content is needed. Chatbots can make up information (known as "hallucinating"), so if children use the chatbot for homework help, they need to check facts with reliable sources.

Google and other search engines retrieve original materials for people to review. A student can read news articles, magazines and other sources when writing up an assignment. Generative AI tools are not the same as search engines. AI tools look for patterns in source material and create new text responses (or images) based on the query, or "prompt", a person provides. A child could ask the system to "draw a cat" and the system will scan for patterns in the data of what a cat looks like (such as whiskers, pointy ears and a long tail) and generate an image that includes those cat-like details.

Understanding the differences between materials retrieved in a Google search and content generated by an AI tool will be challenging for young children. Studies show even adults can be deceived by AI tools. And even highly skilled professionals, such as lawyers, have reportedly been fooled into using fake content generated by ChatGPT and other chatbots.

Google says the system will include "built-in safeguards designed to prevent the generation of inappropriate or unsafe content". However, these safeguards could create new problems. For example, if particular words (such as "breasts") are restricted to protect children from accessing inappropriate sexual content, this could mistakenly also exclude children from accessing age-appropriate content about bodily changes during puberty.

Many children are also very tech-savvy, often with well-developed skills for navigating apps and getting around system controls. Parents cannot rely exclusively on inbuilt safeguards. They need to review generated content, help their children understand how the system works, and assess whether content is accurate.

The eSafety Commissioner has issued an online safety advisory on the potential risks of AI chatbots, including those designed to simulate personal relationships, particularly for young children. The advisory explains AI companions can "share harmful content, distort reality and give advice that is dangerous". It highlights the risks for young children in particular, who "are still developing the critical thinking and life skills needed to understand how they can be misguided or manipulated by computer programs, and what to do about it".

My research team has recently examined a range of AI chatbots, such as ChatGPT, Replika and Tessa. We found these systems mirror people's interactions based on the many unwritten rules that govern social behaviour, or what are known as "feeling rules". These rules are what lead us to say "thank you" when someone holds the door open for us, or "I'm sorry!" when we bump into someone on the street. By mimicking these and other social niceties, these systems are designed to gain our trust.

These human-like interactions will be confusing, and potentially risky, for young children. They may believe content can be trusted, even when the chatbot is responding with fake information. And they may believe they are engaging with a real person, rather than a machine.

This rollout is happening at a crucial time in Australia, as children under 16 will be banned from holding social media accounts in December this year. While some parents may believe this ban will keep their children safe from harm, generative AI chatbots show the risks of online engagement extend far beyond social media. Children, and parents, must be educated in how all types of digital tools can be used appropriately and safely.

As Gemini is not a social media tool, it will fall outside Australia's ban. This leaves Australian parents playing a game of whack-a-mole with new technologies as they try to keep their children safe. Parents must keep up with new tools as they emerge and understand the potential risks their children face. They must also understand the limitations of the social media ban in protecting children from harm.

This highlights the urgent need to revisit Australia's proposed digital duty of care legislation. While the European Union and United Kingdom launched digital duty of care legislation in 2023, Australia's has been on hold since November 2024. This legislation would hold technology companies to account by requiring them to deal with harmful content at the source, to protect everyone.

Lisa M. Given, Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Google's Gemini AI will soon be accessible to kids under 13 — here's how that could look

Tom's Guide

06-05-2025



Google is looking to gain some younger users for its AI tools, with the company confirming Gemini will soon roll out to children under the age of 13, as reported by The New York Times. This comes at a time when AI companies are all looking to seize extra traffic in a crowded marketplace. However, there will, thankfully, be rules in place for kids planning to start using Gemini to help them with their homework.

Most importantly, Gemini will only be available to children whose parents use Family Link, a parental control system made by Google. Through the platform, parents can manage how long children spend on certain apps and control what they can access.

What will this look like?

While it is not immediately clear what rules have been put in place for children using Gemini, Google has already said that it won't use their activity to train its models.

Google has previously outlined its position on child safety and AI, publishing a blog post in late 2023. At that time, Google was using its earlier AI model, Bard. While things have changed since then, the focus was on identifying topics that were inappropriate for children and adding safety guardrails around them. The model also used a double-check feature, where factual questions were reanalysed before an answer was given. With an even younger crowd, these types of safety measures will be even more important.

'Gemini Apps will soon be available for your child,' the company said in an email this week to the parent of an 8-year-old, as reported in The New York Times. 'That means your child will be able to use Gemini to ask questions, get homework help, and make up stories.'

Google acknowledged some risks in its email to families, alerting parents that 'Gemini can make mistakes' and suggesting they 'help [their] child think critically' about the chatbot. The company also encouraged parents to teach their children how to fact-check Gemini's answers, to remind them that Gemini isn't human, and to tell them not to give sensitive or personal information to the chatbot.

While Gemini will attempt to filter inappropriate material, this remains the biggest concern with this kind of update. AI can still accidentally offer content that is deemed inappropriate, or as Google puts it, children 'may encounter content you don't want them to see'.

While Gemini will be automatically available to these children under 13, parents will be notified when they start using it. From there, they can decide how much access is granted, including turning it off completely.

Google plans to roll out its AI chatbot to children under 13

Indian Express

05-05-2025



Written by Natasha Singer

Google plans to roll out its Gemini artificial intelligence chatbot next week for children younger than 13 who have parent-managed Google accounts, as tech companies vie to attract young users with AI products.

'Gemini Apps will soon be available for your child,' the company said in an email this week to the parent of an 8-year-old. 'That means your child will be able to use Gemini' to ask questions, get homework help and make up stories.

The chatbot will be available to children whose parents use Family Link, a Google service that enables families to set up Gmail and opt into services such as YouTube for their child. To sign up for a child account, parents provide the tech company with personal data such as their child's name and birth date.

Gemini has specific guardrails for younger users to hinder the chatbot from producing certain unsafe content, said Karl Ryan, a Google spokesperson. When a child with a Family Link account uses Gemini, he added, the company will not use that data to train its AI.

Introducing Gemini for children could accelerate the use of chatbots among a vulnerable population as schools, colleges, companies and others grapple with the effects of popular generative AI technologies. Trained on huge amounts of data, these systems can produce humanlike text and realistic-looking images and videos.

Google and other AI chatbot developers are locked in a fierce competition to capture young users. President Donald Trump recently urged schools to adopt the tools for teaching and learning. Millions of teenagers are already using chatbots as study aids, writing coaches and virtual companions.

Children's groups warn that the chatbots could pose serious risks to child safety. The bots also sometimes make stuff up. UNICEF, the United Nations children's agency, and other children's groups have noted that AI systems could confuse, misinform and manipulate young children who may have difficulty understanding that the chatbots are not human.

'Generative AI has produced dangerous content,' UNICEF's global research office said in a post about AI risks and opportunities for children.

Google acknowledged some risks in its email to families this week, alerting parents that 'Gemini can make mistakes' and suggesting they 'help your child think critically' about the chatbot. The email also recommended that parents teach their child how to fact-check Gemini's answers. And the company suggested parents remind their child that 'Gemini isn't human' and 'not to enter sensitive or personal info in Gemini.' Despite the company's efforts to filter inappropriate material, the email added, children 'may encounter content you don't want them to see.'

Over the years, tech giants have developed a variety of products, features and safeguards for teens and children. In 2015, Google introduced YouTube Kids, a stand-alone video app for children that is popular among families with toddlers.

Other efforts to attract children online have prompted concerns from government officials and children's advocates. In 2021, Meta halted plans to introduce an Instagram Kids service, a version of its Instagram app intended for those under the age of 13, after the attorneys general of several dozen states sent a letter to the company saying the firm had 'historically failed to protect the welfare of children on its platforms.'

Some prominent tech companies, including Google, Amazon and Microsoft, have also paid multimillion-dollar fines to settle government complaints that they violated the Children's Online Privacy Protection Act. That federal law requires online services aimed at children to obtain a parent's permission before collecting personal information, like a home address or a selfie, from a child younger than 13.

Under the Gemini rollout, children with family-managed Google accounts would initially be able to access the chatbot on their own. But the company said it would alert parents, who could then manage their child's chatbot settings, 'including turning access off.'

'Your child will be able to access Gemini Apps soon,' the company's email to parents said. 'We'll also let you know when your child accesses Gemini for the first time.'

Ryan, the Google spokesperson, said the approach to providing Gemini for young users complied with the federal children's online privacy law.
