Apple has introduced age verification requirements in iOS 26.4 for UK users, requiring them to prove they are 18 or older before accessing certain services and features on their accounts. Users can verify their age in Account Settings, either by linking a credit card or by scanning an ID. The rollout is notable not because the UK government mandates age verification for app stores, but because Apple chose to implement it anyway, across the entire operating system.
For those with established accounts, Apple will check whether they already have a payment method on file that can prove their age. A valid credit card on file confirms a user is at least 18, the reasoning goes, because you must be an adult to open a credit card account. For newer users without a payment history, the process requires scanning a credit card or photo ID.
What happens if you don't verify? Apple will automatically switch on the Web Content Filter and Communication Safety features for everyone under 18 and for anyone who hasn't verified their age. These tools restrict access to adult websites in Safari and third-party browsers, and warn users when they receive or send images containing nudity.
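The flow described above reduces to two decisions: which verification path a user is offered, and what happens to unverified accounts. The sketch below models that logic; every name in it (User, verification_path, resolve_restrictions) is a hypothetical illustration of the article's description, not Apple's actual API.

```python
from dataclasses import dataclass

@dataclass
class User:
    age_verified: bool = False
    # Per the article's reasoning, a credit card on file implies 18+.
    has_credit_card_on_file: bool = False
    declared_under_18: bool = False

def verification_path(user: User) -> str:
    """Existing accounts can be verified from payment history;
    newer accounts must scan a card or photo ID."""
    if user.has_credit_card_on_file:
        return "inferred_from_payment_method"
    return "scan_card_or_photo_id"

def resolve_restrictions(user: User) -> dict:
    """Filters switch on by default for under-18s and unverified users."""
    restricted = user.declared_under_18 or not (
        user.age_verified or user.has_credit_card_on_file
    )
    return {
        "web_content_filter": restricted,
        "communication_safety": restricted,
    }

# An adult with a card on file keeps an unrestricted experience:
adult = User(has_credit_card_on_file=True)
assert resolve_restrictions(adult) == {
    "web_content_filter": False,
    "communication_safety": False,
}

# An unverified user gets both protections enabled automatically:
unverified = User()
assert resolve_restrictions(unverified)["web_content_filter"] is True
```

The key design point the sketch captures is that restriction is the default state: protections apply unless the user affirmatively proves adulthood, rather than being opted into.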
The Regulatory Context
Since 25 July 2025, all sites and apps that allow pornography have needed strong age checks in place under the Online Safety Act, marking a significant change to how adults access such content and a key step in protecting children from harmful online material. But Apple's rollout goes beyond what the law strictly requires. The Act mandates that adult sites verify users' ages; it does not mandate anything of app stores. Apple's implementation instead extends the parental controls that already let parents limit their children to age-appropriate apps on the iPhone.
Ofcom, the UK regulator, approved the move. In a statement, the regulator said Apple's decision was "a real win for children and families," praising the company for moving ahead of legal requirements to implement child safety protections.
User Response: Privacy Concerns and Workarounds
The reception has been mixed. There are reports of UK users circumventing age verification by showing photos of older people, and VPN use has demonstrably surged as a way to bypass the checks: NordVPN reported a 1,000% increase in purchases from the UK, while Proton VPN saw 1,400% more signups within minutes of the Act coming into effect. Some users have simply chosen not to update.
The privacy angle cuts deeper. The Online Safety Act requires online services to implement "highly effective" age verification to prevent children from accessing harmful content, but many adults do not want to share their personal information to access websites, concerned that doing so may compromise their online privacy. Apple has said any credit card or ID information shared will not be saved unless users specifically choose to keep it for something else like setting up payments, but the transparency of that process remains a point of contention.
The Broader Problem
The Online Safety Act applies to any online service that enables users to post content or interact, and many had underestimated how widely it would reach. Spotify, for example, now requires UK users to verify their age for music videos and song lyrics tagged 18 and older; Reddit added age verification for discussion boards about hard cider and cigars; and some services have shut down entirely rather than risk non-compliance.
The real question is whether the framework is working. Age verification sounds straightforward in theory; in practice, it either works too well (locking out people who shouldn't be locked out) or doesn't work at all (anyone determined enough finds a way around it). Apple's approach, at least, tries to make the friction minimal for existing users—a Face ID scan for those with long-standing accounts took most testers less than 30 seconds.
But the broader pattern suggests regulators are pushing solutions that shift responsibility onto platforms while actual harm prevention remains uncertain. The law has succeeded in one thing: making tech companies take child safety seriously. Whether it has actually prevented children from seeing harmful content remains, at best, unclear.