Efforts to enforce age restrictions online are now reshaping how major tech platforms monitor their users. OpenAI’s latest addition to ChatGPT, a system that predicts whether someone is under 18 by studying how they use the app, shows how child-safety rules and surveillance-based data collection are becoming closely linked.
The company says its new “age prediction model” analyzes a combination of behavioral and account-level data, including login times, how long the account has existed, usage frequency, and the user’s stated age. From those signals, the system estimates whether an account likely belongs to a minor.
If the model flags an account as likely belonging to a minor, ChatGPT automatically applies content restrictions designed to limit exposure to material such as self-harm discussions.
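To make the mechanism concrete, here is a minimal sketch of how a signal-based classifier of this kind might combine such inputs. Everything in it, including the feature names, weights, and threshold, is a hypothetical illustration; OpenAI has not published how its model actually works.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Illustrative features only; OpenAI's real inputs and weights are not public.
    stated_age: int            # age the user entered at signup
    account_age_days: int      # how long the account has existed
    sessions_per_week: float   # usage frequency
    late_night_logins: float   # share of logins between midnight and 5 a.m.

def likely_minor(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Toy linear score estimating whether an account belongs to a minor.

    The weights and threshold are placeholders chosen for illustration;
    a production system would use a trained model, not hand-set rules.
    """
    score = 0.0
    if s.stated_age < 18:
        score += 0.6                               # stated age dominates
    if s.account_age_days < 90:
        score += 0.1                               # newer accounts carry less history
    score += min(s.sessions_per_week / 50, 0.15)   # heavy use nudges the score up
    score += 0.15 * s.late_night_logins            # off-hours patterns as a weak signal
    return score >= threshold

# A flagged account would be routed into restricted content mode, reversible
# only through identity verification (per the article, via Persona).
user = AccountSignals(stated_age=16, account_age_days=30,
                      sessions_per_week=20, late_night_logins=0.4)
print(likely_minor(user))  # True -> content restrictions applied
```

Whatever the real model looks like, the structure is the same: routine usage telemetry is turned into a classification decision that gates what a user can access.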
To regain unrestricted access, flagged users must verify their identity through Persona, an external ID verification company.
Persona’s privacy policy allows it to collect not only information provided directly by users but also data from outside sources, including data brokers, marketing partners, and “publicly available sources…such as open government databases.” The company may also gather device identifiers and geolocation details.
This arrangement effectively extends surveillance from OpenAI’s internal monitoring to a larger commercial network that links people’s AI activity with personal and location data.
In the process of proving age, companies are building detailed behavioral profiles that make constant observation an ordinary part of digital life.
OpenAI describes this approach as a step toward safer experiences for younger users. Yet classifying individuals through behavioral analysis, then requiring identification to override errors, establishes a structure that can easily deepen ongoing monitoring. Once collected, these data points can be combined and retained in ways that go beyond the stated goal of protecting minors.
This trend is unfolding across the wider tech industry.
The Federal Trade Commission is investigating how AI chatbots may affect children and teens, and OpenAI has been named in lawsuits, including one related to a teenager’s death.
Lawmakers have also pressured other platforms, such as Roblox, which uses Persona, to demonstrate stronger safeguards for minors.
Over the past year, OpenAI has introduced parental controls and set up a mental health advisory group to study how AI influences users’ emotions and motivation.
The company says its age prediction system will expand to the European Union “to account for regional requirements” and that it plans to refine its accuracy over time.
The push for age verification is evolving into a new model of behavioral tracking, where AI companies quietly build internal profiles of how people interact online.
These systems are presented as safety features, yet they depend on the same continuous observation and data aggregation that define modern digital surveillance.