A look at how Apple’s new child protection features work, and what their limitations are


At the beginning of this year, Apple unveiled a range of new initiatives aimed at making its devices safer for young children and teenagers. In addition to streamlining the setup process for kids’ accounts, the company plans to let parents share their children’s age with app developers, so that developers can better manage the content shown to young users.

Apple says these updates will become available to parents and developers later this year. In this article, we analyze the pros and cons of the new measures, look at where Instagram, Facebook (and the rest of Meta) fit in, and discuss how tech giants are trying to shift responsibility for young users’ mental health onto one another.

Before discussing the updates, let’s quickly review how Apple currently safeguards children. The company first introduced parental controls in June 2009 with the iPhone OS 3.0 release, and has gradually improved these features ever since.

Currently, users under the age of 13 must create a special Child Account, which enables parents to use the built-in parental controls in Apple’s operating systems. At their parents’ discretion, teenagers can keep using a Child Account until they turn 18.

Now, on to the new developments…

The company has announced a series of changes to its Child Account system, including how parental status is verified. Among other things, parents will soon be able to correct a child’s age if it was entered incorrectly. Previously, no such option existed for accounts belonging to users under 13; Apple’s advice was simply to wait for the account to age naturally. In borderline cases (accounts of kids just shy of 13), there was a workaround for changing the birth date, but it will soon be unnecessary.

Perhaps the most notable enhancement is a simplified setup process for these Child Accounts. From now on, if parents haven’t set up the device before their under-13 child starts using it, the child will be able to complete the setup independently. In this situation, Apple will automatically apply age-appropriate web content filters and limit app access to a select set of pre-installed apps such as Notes, Pages, and Keynote.

When a child first visits the App Store to download an app, they will be prompted to ask a parent to finalize the setup. Until parental consent is granted, app developers and Apple itself are barred from collecting data on the child.

At this point, even the most technologically challenged parent might reasonably wonder: what happens if my child enters an incorrect age during the setup process? For instance, if they enter 18 instead of 10, won’t they gain access to the most unsavory parts of the internet?

How Apple plans to address the age verification concern

Apple’s most significant initiative announced in early 2025 takes aim at the online age verification problem. The company’s proposed solution: parents choose an age category for their child and approve sharing it with app developers when an app is installed or an account is registered.

This way, rather than depending on young users to enter their date of birth accurately, developers will be able to rely on the new Declared Age Range API. In theory, app creators will also use this age information to steer their recommendation algorithms away from unsuitable content.

Through the API, developers will only gain access to the child’s age category, not their specific date of birth. Apple has also confirmed that parents will have the ability to withdraw permission to share age information at any time.
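To make this more concrete, here’s a minimal sketch of what such a request might look like in Swift. The framework, service, and method names below follow Apple’s announcement of the Declared Age Range API, but the exact signatures are an assumption until the final SDK ships, so treat this as illustrative rather than definitive.

```swift
import DeclaredAgeRange  // framework name per Apple's announcement; exact API may differ

// Illustrative sketch: ask the system for the user's declared age category.
// The app declares the age thresholds ("age gates") it cares about, and the
// response carries only the matching coarse bracket, never a date of birth.
func checkDeclaredAgeRange() async {
    do {
        let service = AgeRangeService()  // assumed service type
        let response = try await service.requestAgeRange(ageGates: 13, 16, 18)
        switch response {
        case .sharing(let range):
            // For a 14-year-old, this would be roughly the 13-to-15 bracket.
            print("Declared range: \(String(describing: range.lowerBound)) to \(String(describing: range.upperBound))")
        case .declinedSharing:
            // The user (or parent) declined, or previously granted permission was revoked.
            print("Age category not shared")
        }
    } catch {
        print("Age range request failed: \(error)")
    }
}
```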

In practice, access to the age category will become another permission that young users can grant (or, more often, deny) to apps, similar to permissions for camera and microphone access, or for tracking user activity across applications.

This is where the main flaw of the proposed solution becomes apparent. So far, Apple has given no assurance that a user who denies age-category access will be barred from using the downloaded app. That decision is left to app developers, who face no legal repercussions for letting children reach inappropriate content. And many companies actively court a younger audience, since kids and teens spend a substantial amount of time online (more on this below).
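Nothing technically stops a developer from ignoring a declined request and serving unrestricted content anyway. A safety-first implementation would do the opposite: treat a missing age category as the most restrictive one. Here’s an illustrative Swift policy with hypothetical tier names; this is a design choice on the developer’s part, not something Apple mandates:

```swift
// Illustrative fallback policy: when no age category is shared (the request
// was declined, or permission was later revoked), default to the most
// restrictive content tier instead of assuming the user is an adult.
enum ContentTier {
    case child  // strictest filters, no social features
    case teen   // limited recommendations and messaging
    case adult  // unrestricted
}

func contentTier(forDeclaredLowerBound lowerBound: Int?) -> ContentTier {
    guard let age = lowerBound else {
        return .child  // no age information: safest tier wins
    }
    switch age {
    case ..<13:    return .child
    case 13..<18:  return .teen
    default:       return .adult
    }
}

// Usage: contentTier(forDeclaredLowerBound: nil) returns .child,
// while contentTier(forDeclaredLowerBound: 16) returns .teen.
```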

Lastly, let’s highlight one more innovation: a revision of Apple’s age-rating system, which expands from the previous four categories (4+, 9+, 12+, and 17+) to five: 4+, 9+, 13+, 16+, and 18+. According to the company, “This will provide users with a more detailed understanding of an app’s appropriateness, and allow developers to rate their apps more accurately.”

Apple and Meta disagree on who should be responsible for children’s online safety

The challenge of confirming a young person’s age online has been a point of contention for quite some time. Naturally, the notion of having to present ID every time you want to use an app is unlikely to be well-received. Conversely, relying solely on self-reported ages carries significant risks: even an 11-year-old can easily fudge their age to sign up for platforms like TikTok, Instagram, or Facebook.

App creators and app stores alike are keen to shift the age-verification responsibility onto someone else. Among app developers, Meta is especially outspoken in claiming that age verification is the job of app stores. App stores, Apple’s in particular, counter that the responsibility lies with app developers. Many see Apple’s recent efforts in this area as a middle ground.

Meta has commented:

“Parents express a desire to have the final decision regarding the apps their teens utilize, which is why we back legislation mandating app stores to confirm a child’s age and secure parental approval prior to app downloads.”

That sounds good in theory, but can it actually be trusted?

Relying on tech giants to keep children safe doesn’t seem like the most prudent strategy, especially when these companies profit from how engaging their products are. Leaks from Meta, since the company has already come up in connection with Apple’s initiative, have repeatedly indicated that it deliberately targets younger audiences.

For instance, in her book “Careless People”, Sarah Wynn-Williams, former global public policy director at Facebook (now Meta), recounts how, in 2017, she discovered that the company was inviting advertisers to target teens aged 13 to 17 across all its platforms, including Instagram. At the time, Facebook was offering advertisers the chance to show ads to teens at their most vulnerable moments: when they were feeling “worthless,” “insecure,” “stressed,” “defeated,” “anxious,” “stupid,” “useless,” or “like a failure.” In practice, this meant the company could track when teenage girls deleted selfies in order to show them ads for beauty products.

Another leak indicated that Facebook was actively recruiting staff to design products aimed at children as young as six in order to broaden its consumer base. This approach seems reminiscent of tobacco companies’ best practices from the 1960s.

Apple hasn’t made child online safety a top priority either. For a long time, its parental controls were severely limited, and children quickly found ways to exploit their shortcomings. It wasn’t until 2024 that Apple finally fixed a flaw that allowed kids to bypass restrictions simply by entering a particular nonsensical phrase in the Safari address bar. That was all it took to disable Screen Time settings for Safari, giving kids unrestricted access to any website. The vulnerability was first identified in 2021, but it took the company three years to respond.

Experts in child psychology agree that uncontrolled consumption of digital content can harm children’s mental and physical well-being. In his 2024 book “The Anxious Generation”, US psychologist Jonathan Haidt describes how smartphone and social media use among teenage girls can lead to depression, anxiety, and even self-harm. As for boys, Haidt highlights the risks of excessive exposure to video games and pornography during their formative years.

Apple may be making progress, but that progress will count for little if third-party app developers choose not to play along. And the case of Meta shows that relying on their honesty and integrity is premature at best. So, despite Apple’s advances, when it comes to your children’s online safety, it’s best to take matters into your own hands.
