It started as a plan to keep kids off Snapchat. Then it crept into TikTok. Now it’s knocking on YouTube’s front door with a clipboard, a demand for identification, and a wet government stamp of approval. In Australia, the age-old solution of “parent your child” is being swapped out for a bold new model: “scan your face to watch cat videos.”
The eSafety Commissioner, Julie Inman Grant, has had her sights set on sanitizing the internet. And like any modern bureaucrat on a mission, she’s discovered that nothing fuels a safety crusade quite like a child protection panic wrapped in a policy initiative. Her latest move is to push YouTube into the same regulatory sandbox as Snapchat and Instagram by revoking its exemption from an under-16 social media ban.
This shift wouldn’t just affect kids. It could mean that everyone who wants to watch YouTube (yes, even the 45-year-old watching how-to videos on fixing a leaking faucet) might soon have to verify, and reveal, their age or identity just to press play.
Grant unveiled her rationale with a round of internal research dropped late on a Thursday, like an off-brand Netflix show hoping to dodge bad reviews.
The study showed that 76% of kids aged 10 to 15 use YouTube and that 37% of those who encountered “harmful” content said it happened there. For the younger subset, 10 to 12 years old, the number jumps to 46%.
But inconveniently for the crackdown crowd, the more severe online offenses (grooming, harassment, and image-based abuse) were overwhelmingly reported on Snapchat. This didn’t stop the Commissioner from dragging YouTube onto the regulatory altar anyway.
YouTube’s not exactly letting the house burn down, either. In messages to creators last week, the platform made it clear that this move is more than policy tweaking; it’s a wrecking ball. It “could impact you, your channel, your audience, and the broader creator community,” and “send a message that YouTube isn’t safe for younger Australians.”
Of course, if the point is to make the internet so safe that it no longer functions, that’s not a bug, it’s a feature.
YouTube, for its part, has long insisted it isn’t “social media” in the sense politicians pretend to understand. According to Rachel Lord, YouTube’s Public Policy and Government Relations lead for Australia and New Zealand, it’s “a video streaming platform with a library of free, high-quality content.”
She went further: “The eSafety Commissioner’s advice for younger people to use YouTube in a ‘logged out’ state deprives them of the age-appropriate experiences and additional safety guardrails we specifically designed for younger people.”
None of this is stopping pro-censorship figurehead Grant, who has hinted that real-time age verification, meaning digital ID systems, could soon become standard protocol.
And it means that accessing YouTube, a site most Australians think of as a library of how-to videos and talking heads, could soon require the same security clearance as boarding a domestic flight.
The eSafety office wants teens “logged out” by default, but in practice, that just means no algorithmic safeguards, no parental controls, and no user-specific filters. Because, of course, the way to protect children is to remove the very tools that allow their parents to supervise them. Genius.
It’s not lost on anyone that this crusade against YouTube has deeper roots. Julie Inman Grant isn’t exactly a newcomer to the “content moderation at scale” game.
She’s made waves before, demanding the removal of videos, pressuring platforms over posts, and being the focus of international ridicule for her lack of tech prowess.
The line between “safety regulation” and state-sanctioned content policing is getting blurrier by the week.
The digital ID proposal is not being rolled out in a vacuum. It follows a familiar script: start with a noble cause (protecting children), find a problem with broad emotional appeal (bad content online), and push for sweeping regulatory infrastructure (age checks, identity gates, access logs). Suddenly, the same government that can’t figure out how to digitize healthcare records is determining which videos a 15-year-old can watch about sea turtles.
It doesn’t matter that children under 13 aren’t even allowed to have YouTube accounts. Most kids watch using family logins, with curated settings, under adult supervision. But the government’s plan punishes even that model, treating every user as a potential threat unless they pass a digital ID check.
Google’s next big Canberra event on July 30 will feature creators making the case to lawmakers directly. Expect mentions of “choice,” “parental oversight,” and “not treating every viewer like a criminal.”
The reality, though, is darker. Privacy advocates see a scenario where the digital ID measures become infrastructure: permanent, embedded, and normalized. Today it’s about under-16s. Tomorrow, it’s about misinformation. Then it’s terrorism. Then it’s “public health.”
What this proposal really asks is: should the internet remain a place where users browse freely, or become a gated network of licensed viewers, segmented by age and identity, supervised by bureaucrats who can’t tell Twitch from Reddit?
The idea that every Australian might soon need to verify their identity to watch DIY, music, or kids’ programming isn’t a slippery slope. It’s a staircase, and a concrete one. With Grant’s proposals, a country that used to pride itself on rugged independence is pioneering a future where surfing the web requires a permission slip.
And not the kind your mom signs.