AI Chatbots’ Coming Age-Verification Headache

Chat Quick Take
October 27, 2025

OpenAI has been vacillating on user safety over the past two months.

In August, the company had to change its much-hyped GPT-5 model after users complained that its conversational warmth had turned “muted” and “emotionless.” Later that month, the parents of 16-year-old Adam Raine filed a lawsuit in California claiming that same “conversational warmth” had encouraged their son to commit suicide.

On September 16, OpenAI published an update on user safety, saying the company would “prioritize safety ahead of privacy and freedom for teens,” adding new parental control options and announcing an “age-prediction system” designed to limit teens’ exposure to sexual content and other risky suggestions from the chatbot.

OpenAI has not specified how this system would work or when it will be implemented. But it has admitted that “in some cases or countries we may also ask for ID,” suggesting a hybrid of traditional verification (which OpenAI currently uses) and new predictive techniques.

But barely a month later, OpenAI CEO Sam Altman posted that the company was satisfied with ChatGPT’s current ability to mitigate “serious mental health issues” and would be rolling out a version of the chatbot that would “respond in a very human-like way.” He added that the company would allow ChatGPT to create erotica for verified adult users starting in December.

Much has been made of the contrast between ChatGPT’s initial lofty goals and its planned move into erotica, and of the surge in user engagement such a move would likely generate. The shift underscores the urgent need for scalable age verification systems for AI chatbots, whether built on traditional verification methods or on OpenAI’s forthcoming age-prediction technology.

Whatever age assurance system AI chatbot companies adopt, it needs to be effective, scalable, and in keeping with regional regulations, such as the UK’s Online Safety Act, whose age-check requirements took effect in July, or California’s SB 243, signed into law earlier in October. Simply ticking a box confirming that a user is over 18 won’t do.

OpenAI currently uses a third-party service, Yoti, which can ask users for a picture or government identification card. While the use of government-approved IDs and/or biometrics makes age verification more reliable, it also hands the platform a significant amount of sensitive personal information. The company says it deletes verification data immediately after the process is complete, and the spotlight placed on it by media, regulators, and even foreign powers gives it multiple incentives to keep this data as secure as possible.

But these incentives do not necessarily apply to smaller AI firms. The wildly popular Character.AI, for instance, states that it “might share your data with vendors, affiliates, advertising partners,” but doesn’t specify which third parties have access to that personal data. It also notes that the company “may in the future disclose or make available some of your information… to serve advertisements on our behalf across the internet.”

If these smaller firms are required to comply with new age assurance laws, the resulting data could become another valuable trove for ad targeting, one they might be tempted to monetize given the financial gulf between them and major firms like OpenAI and Anthropic.

Some regulators seem to recognize the potential pitfalls surrounding age verification. At a February 2025 plenary, the European Data Protection Board noted that age assurance “must be the least intrusive possible” and that “the personal data of children must be protected.”

US laws on age verification, however, remain a patchwork of state efforts with contrasting approaches to data protection safeguards. California’s Age Appropriate Design Code, for instance, treats infringement on minors’ privacy as a form of harm to be prevented, whereas Texas’s multiple pieces of age verification legislation (which range from app stores to pornographic websites) have been criticized for unnecessary data collection.

This patchwork approach is now insufficient given the stakes at play with AI chatbots. Their access to troves of highly sensitive user data (including but not limited to data needed for age verification), their potentially dangerous ability to mimic intimacy to drive engagement, and the relentless pace of development mean that new, more proactive legislation is required. OpenAI’s contradictory messaging, prioritizing teen safety one month and promising adult content the next, demonstrates why self-regulation is insufficient.

Big Tech firms and regulators should work to create a multi-layered strategy that combines proactive safety design, governmental oversight, and civil society education. Firstly, AI companies should be required to demonstrate that their products err on the side of caution when it comes to underage users. For instance, OpenAI’s new age-prediction system will need to be rigorously tested and audited by both the company itself and external auditors.

Secondly, states should start cooperating on a unified approach to age verification for chatbots, ideally at the federal level, although multi-state experiments could also prove useful. This could be a bipartisan issue. The current patchwork approach to verification forces tech firms to navigate a maze of compliance requirements. Unifying the approach would not only make it easier to build compliant products but also provide an additional layer of oversight of Big Tech.

Finally, the Federal Trade Commission’s recently launched inquiry into AI chatbots could provide useful starting points for parents, educators, and teenagers seeking to inoculate themselves against the dangers posed by AI companions. The inquiry, launched on September 11, 2025, seeks information “on the potentially negative impacts of this technology on children and teens.”

Until policymakers recognize that age verification is a starting point rather than a catch-all solution, we will continue seeing companies like OpenAI cycle between safety promises and feature rollbacks—while young users remain caught in the middle.
