What began as targeted restrictions on adult content has evolved into a global push to age-gate the internet itself. Across the UK, Europe, and Australia, people are increasingly required to scan passports or undergo facial age estimation to access adult sites, social media, and even search engines. The infrastructure for mandatory age verification is now embedded in law and policy. But as governments discover that a significant portion of users are circumventing these requirements with virtual private networks, pressure is building for the next escalation: cracking down on the tools people use to protect their privacy online.
The surge in VPN adoption has been dramatic. In the UK, VPN usage more than doubled once age assurance requirements became mandatory, rising from about 650,000 daily users before 25 July 2025 to a peak of over 1.4 million in mid-August 2025. Similar patterns have emerged elsewhere: Florida saw a 1,150% increase in VPN demand after its age verification law took effect. The message is clear: when individuals want to retain anonymity, they find the tools to do so. Some lawmakers have noticed, and they're considering a solution that troubles privacy researchers profoundly.
In the US, Wisconsin nearly passed legislation that would have banned VPN use outright to enforce age verification compliance. The bill would have required every website distributing material that could conceivably be deemed "sexual content" both to implement an age verification system and to block access by users connecting via VPN. Though Wisconsin lawmakers removed the VPN provision in February 2026 after widespread pushback, the bill now awaits Governor Tony Evers' signature, and other jurisdictions remain interested in similar approaches.
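To see why such mandates are hard to enforce, it helps to look at how VPN blocking is typically done in practice: a site compares each visitor's IP address against published lists of known VPN and data-center address ranges. A minimal sketch follows; the ranges here are placeholder documentation blocks, not real VPN exits, and production systems would instead subscribe to a commercial IP-intelligence feed.

```python
from ipaddress import ip_address, ip_network

# Hypothetical blocklist of VPN/data-center CIDR ranges. These are
# reserved documentation ranges standing in for real VPN exit pools;
# actual deployments rely on constantly updated commercial databases.
KNOWN_VPN_RANGES = [
    ip_network("203.0.113.0/24"),
    ip_network("198.51.100.0/24"),
]

def is_likely_vpn(client_ip: str) -> bool:
    """Return True if the client's IP falls inside a listed VPN range."""
    addr = ip_address(client_ip)
    return any(addr in net for net in KNOWN_VPN_RANGES)

print(is_likely_vpn("203.0.113.42"))  # True: inside a listed range
print(is_likely_vpn("192.0.2.7"))     # False: not on the list
```

The sketch also illustrates the approach's weaknesses: blocklists are always incomplete, residential proxies and new exit nodes evade them, and false positives lock out legitimate users on shared or cloud-hosted connections, which is part of why technologists doubt VPN-blocking mandates can work.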
The appeal is obvious from a regulatory perspective: age verification only works if people cannot circumvent it. But the costs of restricting access to encryption tools would be severe. Regulating VPN use would weaken everyone's ability to defend their privacy online and would leave vulnerable populations, such as journalists, activists, and domestic abuse victims, unprotected. This argument has traction among technologists: over 400 computer scientists have signed an open letter warning that age-verification requirements "might cause more harm than good."
The fundamental tension here reflects competing legitimate concerns. Policymakers are responding to genuine evidence of social media's harms to young people's mental health. Yet the tools being deployed to address those harms create new vulnerabilities. Many verification systems rely on ID uploads, facial scans, or third-party verification vendors, creating permanent records tied to identity, and recent breaches have shown how risky that can be. Even well-intentioned systems create attractive targets for hackers and, in some jurisdictions, for government surveillance.
The irony is that evidence suggests these laws may not even work effectively. Research from New York University's Center for Social Media and Politics and the Phoenix Center finds that age verification laws are ineffective: searches for platforms that blocked access to residents in restricted states dropped significantly, while searches for offshore sites surged. In Australia, young people creatively worked around the law with bogus birthdays and unregulated apps, with one 14-year-old noting that "circumventing the ban was going to be possible, but it was so much easier than we could have expected."
What remains unclear is whether governments will continue escalating enforcement. Countries like Denmark and Malaysia are already planning to introduce similar restrictions in 2026. If the pattern holds, each new circumvention technique will prompt calls for tighter restrictions, moving us toward an uncomfortable choice: protect children through surveillance, or accept that some determined users will always find ways around the rules. The question is whether stopping those determined users is worth the privacy costs of the surveillance required to do it.