The British government launched a consultation this week that could require age verification for anyone using social media, gaming sites, or AI chatbots.
The consultation, titled “Growing up in the online world,” opened on March 2 and closes May 26, 2026. It asks the public whether the government should ban under-16s from social media entirely, impose mandatory overnight curfews on platform access, restrict AI chatbot features for minors, and require platforms to disable “addictive design features” like infinite scrolling and autoplay.
The government says it will respond in summer 2026, and Parliament has already handed ministers new legal powers to act on the findings without waiting for fresh primary legislation.
The Prime Minister announced those powers on February 16, weeks before the consultation even opened. The government can now move faster once it decides what it wants. What the public thinks determines the packaging, not the destination.
Technology Secretary Liz Kendall framed it this way: “The path to a good life is a great childhood, one full of love, learning, and play. That applies just as much to the online world as it does to the real one.”
The actual policy tools being considered are a different matter.
Age verification, as a mechanism, works by proving identity: to establish that anyone is over 16, every user must prove who they are.
A social media platform that must exclude under-16s must verify the age of its over-16s. That means collecting identity documents, linking browsing activity to real identities, or building infrastructure that a government can later compel to serve other purposes.
The surveillance architecture required to enforce a children’s safety law is the same architecture required to surveil adults. It gets built for one reason. It gets used for others.
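The point can be made concrete with a minimal sketch. The names and data model below are hypothetical, not drawn from any real platform or proposal; the sketch only illustrates why an identity-based over-16 gate necessarily creates durable records linking real identities to later activity.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VerificationRecord:
    """What an identity-based age check must retain to be auditable."""
    user_id: str
    document_type: str  # e.g. passport, driving licence
    birth_year: int

@dataclass
class AgeGate:
    """Hypothetical identity-based age gate enforcing an over-16 rule."""
    records: dict = field(default_factory=dict)
    activity_log: list = field(default_factory=list)

    def verify(self, user_id: str, document_type: str, birth_year: int) -> bool:
        age = datetime.now(timezone.utc).year - birth_year
        if age >= 16:
            # Passing the check leaves behind a durable identity record...
            self.records[user_id] = VerificationRecord(user_id, document_type, birth_year)
            return True
        return False

    def log_access(self, user_id: str, resource: str) -> None:
        # ...and every subsequent access ties that real identity to activity.
        if user_id in self.records:
            self.activity_log.append((user_id, resource))

gate = AgeGate()
gate.verify("alice", "passport", 2001)
gate.log_access("alice", "/forum/politics")
```

Nothing in the age-check logic requires the `records` and `activity_log` stores to be used only for age enforcement; once they exist, repurposing them is a policy decision, not an engineering one.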
Then there’s the “Help your child stay safe online” campaign site, which the government launched alongside the consultation. The site includes a page directing parents to report “bullying, threats, harassment, hate speech, and content promoting self-harm or suicide” directly to platforms, with links to the reporting tools of Instagram, Snapchat, Facebook, WhatsApp, TikTok, Discord, YouTube, and Twitch.
The government, through a campaign website, is now actively encouraging parents to funnel reports of “hate speech” to the same private companies that define what hate speech is. There’s no independent standard, no legal definition that applies consistently, and no oversight of what platforms do with those reports. Just a government directing citizen complaints into Big Tech’s moderation queues and presenting that as a safety feature.
“Hate speech” is one of those categories that sounds precise until you ask who decides. Platforms decide. They always have. What the government has done here is lend its authority to that process, making Big Tech’s internal moderation systems look like public infrastructure. They are not public infrastructure. They are corporate policies, applied inconsistently, without appeal, and with no democratic accountability.
The broader consultation asks whether the “digital age of consent” should be raised, whether mobile phone guidance for schools should become statutory, and how parental controls should be simplified.
Education Secretary Bridget Phillipson said: “Technology is fundamentally changing childhood. Used well, it can open up new opportunities for learning, creativity, and connection, but only if we get the balance right.”
The balance the government is currently striking tilts heavily toward control. Mandatory curfews would let the government decide when young people can be online. Age verification would require platforms to know who everyone is. A reporting infrastructure has already been built to direct public complaints toward private censorship tools. The consultation is running in parallel with powers and infrastructure that no longer depend on its outcome.
The chilling effect starts well before any of this becomes law. Teenagers already know these restrictions are coming. Parents are already being encouraged to report their children’s online interactions to platforms. Publishers and platforms, watching the legal powers that now allow ministers to act without fresh legislation, are starting to think about what they’ll need to do before they’re told to.
That’s how it works. The threat is often enough.

