Perspective: Apple's new protections for kids don't go far enough
After years of applied pressure and even begging from parents, advocates and lawmakers, Apple has suddenly decided to fix failures in its child safety features. Why now?
Simple. Utah and other states are moving to enact legislation that requires age verification and parental consent for all app downloads and purchases. What we call the App Store Accountability Act appears poised to become law in the Beehive State, with other states likely to follow, and Apple is paying close attention. The company has released a set of reforms under the headline 'Helping Protect Kids Online,' and its strategy of 'shock and awe' seems to be working.
As word of these updates tears through state capitals across the country, lawmakers are rightly wondering: do these updates answer the issues that our legislation identifies and seeks to fix?
After getting the cold shoulder from Apple for years, we admit that we are pleased by some of these updates. But given what's at stake, they are not enough. Here's why.
Apple's eight-page announcement outlines a number of updates to be rolled out by year's end. Promised features include an easier child-account setup process; a new 'age range' application programming interface (API) that allows app stores and app developers to share age-category data to better ensure age-appropriate experiences; more granular app age ratings and clearer app descriptions; and the removal of apps that exceed a child's age range from the app store.
The age range API seems particularly well-designed. According to TechCrunch:
'Instead of asking kids to input their birthdays, as many social apps do today, developers will have the option to use a new Declared Age Range API that allows them to access the age range information the parent input during the child account setup. (Parents can also correct this information at any time if it was originally entered incorrectly).'
These updates are needed. But why in the world has it taken so long?
For years, shareholders and child-safety advocates have been asking the company to better protect kids online. In 2018, almost 11 years after the iPhone's release, Apple shareholders wrote a letter to the board of directors demanding that the company give parents more resources and tools to protect children. A 2022 Canadian Centre for Child Protection report detailed Apple's failure to enforce app age ratings for younger users and labeled its parental controls as 'inadequate' and 'unusable.' A Screen Time parental control bug causing protective settings to disengage on child accounts has plagued iPhones and iPads for more than two years, unsolved. And a scathing Wall Street Journal investigation published in December exposed Apple for misrepresenting up to a quarter of its apps as 'safe for kids' when many were rife with sexual exploitation and bullying.
Between December and now, Apple's technological capabilities did not change. To put it positively: Apple already possessed the technological and financial capacity to institute these changes. Modern APIs have been around for years. And Apple has every financial resource it needs to develop and effectively implement these safeguards, raking in $26 billion in revenue from its app store alone during FY 2023. These are good changes, but from a moral, financial and technical standpoint, there is no reason why it should have taken this long.
But are Apple's new features enough? No, deep issues remain unaddressed.
A few years ago, the young son of one of this column's authors was served ads for sexual role-play apps — including a graphic strip show — along with ads for gambling and marijuana-cultivation apps, all while playing a cartoon game the app store had rated safe for children. His mother had stepped away for only a few minutes to fold laundry, assuming that the age rating accurately represented what her son would experience while using the app.
Such anecdotes are not the exception; they are the rule. There is a systemic failure that leaves parents and children who access the internet through the app store totally vulnerable, because Apple allows developers to operate on an honor system, without any meaningful enforcement. Apple's recent safety update doesn't change this; its app store will still rely on the honor system, allowing developers to self-report content with little oversight and few consequences for misrepresenting age ratings or content warnings.
As mentioned, Apple's announcement comes just days before Utah is expected to become the first state to pass the App Store Accountability Act. The bill would require Apple, and other app store providers like Google, to perform age verification and get parental consent before minors can download apps, purchase apps or make in-app purchases. Using the account holders' age-category status and a parental-consent status collected by the app store, app developers would be required to enforce any developer-created age-related restrictions. Additionally, the legislation would require all minors to link to a parent's account before using the app store in the first place.
Apple's proposed changes will not solve the core issue, which is that minors need parental consent when it comes to entering binding terms of service agreements. Currently, the app store routinely allows known minors to download apps, accept complex terms of service, and make in-app purchases without any parental consent. This loophole exposes children to privacy risks, financial harm and dangerous digital environments.
Only in app stores do we allow minors to enter into terms of service agreements with trillion-dollar companies that determine how their personal data can be used, often giving such companies full access to extremely sensitive information like photographs and videos of children, their exact location and their contact lists. Our legislative model ends that practice.
Apple's new updates, by contrast, will not stop any of this. Apple will still treat teens as digital adults, allowing minors to agree to complex terms of service contracts without parental consent, despite the fact that in the real world, one has to be 18 to enter into a binding contract.
Furthermore, Apple will only enforce the proposed app store protections for parents who figure out how to enable content controls, leaving parents without any meaningful backstop when their kids try to circumvent the controls. (And they will try.)
The stakes are high. If Apple can convince lawmakers that its updates are adequate to the task, then it will continue to prioritize its profit over protecting kids online without real consequences. We welcome better labels and controls, but parents and their kids need much more than that. They need a bill that provides app store accountability. And they need it now.
Melissa McKay is the chairman of the Digital Childhood Alliance. Chris McKenna is the founder and CEO of Protect Young Eyes. Michael Toscano is the executive director of the Institute for Family Studies and director of the Family First Technology Initiative. Jared Hayden is a policy analyst with the Family First Technology Initiative at the Institute for Family Studies.