Online Age Verification Makes Sense — but Thoughtful Implementation Is Essential

July 23, 2025

Starting this week, under the UK’s Online Safety Act, websites and apps that host adult content — which includes not only pornography sites but also mainstream social media and gaming platforms — will be required to implement “highly effective” age checks to prevent users under 18 from gaining access. In anticipation of the rule’s enforcement, platforms like Bluesky, Reddit, and Roblox have begun rolling out age verification features.

On some level, age verification makes common sense. After all, we expect people to show IDs to buy alcohol or enter a nightclub — why should digital spaces be any different? Research shows that early and repeated exposure to certain types of content — particularly sexually explicit or self-harm-promoting material — can have serious psychological and developmental consequences for young users. It’s reasonable to want to keep children out of certain online environments, and age verification seems like the most straightforward and enforceable way to do so.

But the digital world is not a perfect analog to the physical one. Depending on the requirements imposed and methods chosen, age verification online can introduce serious privacy, security, and access risks — not only for children but for all users. In some cases, the systems employed are so flawed that they fail to protect minors while also excluding adults who should have lawful access. Policymakers must understand and carefully weigh these tradeoffs before mandating age verification at scale.

Ofcom, the UK’s online safety regulator, has approved several methods of age verification. Yet each comes with significant drawbacks. Photo ID checks, among the more accurate methods, carry risks of data breaches, surveillance, and exclusion — particularly for those without formal documentation. Government-issued IDs contain sensitive data beyond just age, and requiring users to share that data routinely with numerous apps and websites opens the door to invasive tracking, especially if verification data is linked to browsing behavior and shared across platforms.

Alternative methods, such as verification via credit cards or mobile network operators, are easier for minors to circumvent, and they exclude people who lack access to those services. Storing such data also creates a tempting target for hackers, increasing the risk of identity theft. Facial scanning and behavioral profiling raise ethical and privacy concerns of their own while remaining far from reliable.

These shortcomings shouldn’t lead us to abandon the idea of age verification altogether. Promising, privacy-preserving approaches are emerging, such as “double-blind” systems that verify age without any single entity acquiring too much personal information. Under such systems, a trusted third party — which could be a government agency or private company — first verifies a user’s age based on an identification document. Then, when a website or app requests permission to give a user access, the trusted party responds with a simple yes or no, without sharing any additional personal information. Still in the pilot phase, this method deserves regulatory support and further development.
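To make the idea concrete, here is a minimal sketch of how such a double-blind exchange might work. All names are hypothetical, and a shared-key HMAC stands in for the public-key signatures a real deployment would use; the point is only to show the information flow: the verifier sees an opaque nonce rather than the requesting platform, and the platform receives only a signed yes/no.

```python
import hmac
import hashlib
import secrets
from datetime import date

# Key held by the trusted third party. In practice the platform would
# check a public-key signature instead of sharing this secret.
VERIFIER_KEY = secrets.token_bytes(32)


def verifier_attest(date_of_birth: date, nonce: bytes, today: date):
    """Verifier side: answer 'over 18?' and bind the answer to the nonce.

    The verifier sees only an opaque nonce, not the platform's identity,
    so it cannot link the check to a specific site.
    """
    years = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    answer = "yes" if years >= 18 else "no"
    tag = hmac.new(VERIFIER_KEY, nonce + answer.encode(), hashlib.sha256).digest()
    return answer, tag


def platform_check(nonce: bytes, answer: str, tag: bytes) -> bool:
    """Platform side: accept the answer only if the signature is valid.

    The platform learns a single bit -- over 18 or not -- and nothing
    else about the user's identity or documents.
    """
    expected = hmac.new(VERIFIER_KEY, nonce + answer.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)


# Illustrative exchange: the platform issues a fresh nonce, the verifier
# attests, and the platform validates the response.
nonce = secrets.token_bytes(16)
answer, tag = verifier_attest(date(2010, 1, 1), nonce, date(2025, 7, 23))
```

A tampered response (say, flipping "no" to "yes") fails the signature check, which is what lets the platform trust the single bit it receives without ever seeing the underlying ID document.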

In the meantime, regulators should avoid setting vague standards like “commercially reasonable” verification and instead push for solutions that meet rigorous privacy and security benchmarks.

But even the most advanced verification system won’t eliminate all risks. Children may still access adult content through shared family accounts or devices. Moreover, age verification relies on a binary view of online spaces — as either safe for children or not — that doesn’t always reflect reality. Many digital platforms, especially games and social apps, host a mix of content that varies widely in appropriateness depending on a child’s age or maturity. These platforms should be encouraged to continue investing in parental controls and age-appropriate experiences, regardless of whether verification protocols are in place.

Few would argue that the internet should be a free-for-all when it comes to kids and harmful content. And given how long design-based reforms and procedural safeguards can take to implement, it’s easy to see why age restrictions appeal to policymakers and parents looking for quick solutions. But effective regulation requires more than speed. The real challenge — and the real opportunity — is to develop systems that are not just enforceable, but also inclusive, accurate, and protective of users’ rights. That’s where our attention should be.
