
Archived Article — The Daily Perspective is no longer active. This article was published on 25 March 2026 and is preserved as part of the archive.

Technology

Apple's Age Verification in the UK Signals a Regulatory Shift—with Real Trade-offs

With iOS 26.4, Apple becomes one of the first platforms to require that adults prove their age at the OS level, raising questions about privacy, implementation, and whether regulation is working as intended.

Image: Engadget
Key Points
  • Apple now requires UK users to verify they are 18+ via credit card, ID scan, or account age signals in iOS 26.4
  • The Online Safety Act compliance deadline passed July 25, 2025; Apple went further by implementing OS-level controls not mandated by law
  • Web Content Filters and Communication Safety features automatically activate for under-18s and unverified users
  • Early user feedback is mixed; some users worry about privacy while others circumvent checks with VPNs
  • The approach highlights a wider regulatory problem: is verification actually protecting children or just inconveniencing adults?

Apple has introduced age verification in iOS 26.4 for UK users, requiring them to prove they are 18 or older before accessing certain services and features on their accounts. Users can verify their age by linking a credit card, scanning an ID, or through existing signals in Account Settings. The rollout is notable not because the UK government mandates age verification for app stores, but because Apple chose to implement it anyway, at the operating-system level.

For those with established accounts, Apple will check whether a payment method already on file can prove their age: a valid credit card confirms a user is at least 18, since opening a credit card account requires being an adult. For newer users without a payment history, the process requires scanning a card or photo ID.

What happens if you don't verify? Apple automatically switches on its Web Content Filters and Communication Safety features for everyone under 18 and for anyone who has not verified their age. These tools restrict access to specific websites in Safari and third-party browsers, and warn users when they receive or send images containing nudity.

The Regulatory Context

Since 25 July 2025, all sites and apps that allow pornography have needed strong age checks under the Online Safety Act, a significant change to how adults access such content and a key step in protecting children from harmful online material. But Apple's rollout goes beyond what the law strictly requires: the Act obliges adult sites themselves to verify users' ages, not device makers. Apple's implementation instead extends the parental controls that already let parents limit their children to age-appropriate apps on the iPhone.

Ofcom, the UK regulator, approved the move. In a statement, the regulator said Apple's decision was "a real win for children and families," praising the company for moving ahead of legal requirements to implement child safety protections.

User Response: Privacy Concerns and Workarounds

The reception has been mixed. There are reports of UK users fooling facial age-estimation checks with photos of older people, and VPN use has surged as a workaround: NordVPN reported a 1,000% increase in purchases from the UK, while Proton VPN saw 1,400% more signups within minutes of the Act coming into effect. Some users have simply chosen not to update.

The privacy angle cuts deeper. The Online Safety Act requires online services to implement "highly effective" age verification to prevent children from accessing harmful content, but many adults are reluctant to hand over personal information to access websites, fearing it could compromise their online privacy. Apple says any credit card or ID information shared will not be saved unless users specifically choose to keep it for another purpose, such as setting up payments, but the transparency of that process remains a point of contention.

The Broader Problem

The Online Safety Act applies to any online service that enables users to post content or interact, and many had underestimated how widely it would apply. Spotify, for example, now requires UK users to verify their age for music videos and song lyrics tagged for 18 and older, Reddit added age verification for discussion boards about hard cider and cigars, and other services have shut down completely out of concern about compliance.

The real question is whether the framework is working. Age verification sounds straightforward in theory; in practice, it either works too well (locking out people who shouldn't be locked out) or doesn't work at all (anyone determined enough finds a way around it). Apple's approach, at least, tries to make the friction minimal for existing users—a Face ID scan for those with long-standing accounts took most testers less than 30 seconds.

But the broader pattern suggests regulators are pushing solutions that shift responsibility onto platforms while actual harm prevention remains uncertain. The law has succeeded in one thing: making tech companies take child safety seriously. Whether it has actually prevented children from seeing harmful content remains, at best, unclear.

Tom Whitfield

Tom Whitfield is an AI editorial persona created by The Daily Perspective. Covering AI, cybersecurity, startups, and digital policy with a sharp voice and dry wit that cuts through tech hype. As an AI persona, articles are generated using artificial intelligence with editorial quality controls.