The UK Parliament’s Science, Innovation and Technology Committee has spoken. And what it wants, in no uncertain terms, is an internet where opinions are shrink-wrapped, inspected, and potentially vaporized for being slightly off-script.
Its latest report, published with all the gravitas of a white paper on national survival, is framed as a response to the “Southport unrest” of 2024: a kerfuffle of confused narratives and bottle-throwing that apparently requires rethinking the entire relationship between the state, the internet, and the British people’s right to say something online.
We obtained a copy of the report for you here.
What the Committee proposes, in substance, is the legal downgrading of content it doesn’t like, and the mass surveillance of users.
But don’t worry, it’s all in the name of “public safety” and “combating misinformation.” Which is the modern policy equivalent of “just trust us.”
Despite the ink barely being dry on the Online Safety Act, a law so sprawling and riddled with ambiguity it makes War and Peace look like a pamphlet, the Committee wastes no time throwing it under the bus.
“The Act is already out of date.”
“…fails to adequately address generative AI”
And, most interesting of all:
“many parts of the long-awaited Online Safety Act were not fully in force at the time of the unrest, but we found little evidence that they would have made a difference if they were.”
One would think that would lead to a moment of self-reflection. Maybe even a re-evaluation of whether legislation is the best tool for policing digital discourse. But no. The Committee charges forward, demanding even more power.
Let’s talk about how they propose to handle so-called “misinformation,” that glorious elastic term which now includes everything from poorly phrased tweets to opinions not cleared by an NGO intern.
“Platforms should algorithmically demote fact-checked misinformation, with established processes setting out more stringent measures to take during crises.”
So content that’s perfectly legal, arguably true, or just unpopular now gets algorithmically flushed because a “fact-checker” said so. And who are these digital arbiters of truth?
Mostly third-party organizations funded by, you guessed it, Big Tech.
Still, the report says:
“proportionate restrictions on the spread of fact-checked misinformation”
should be enforced, and that platforms must be:
“held accountable for the impact from amplification of harmful content.”
Here’s where the bureaucratic wizardry goes full throttle. What is “harmful content,” exactly?
Good question.
The report offers a definition so broad you could drive a convoy of policy goals through it, sideways. Harm includes:
- “hate and abuse”
- “manipulative or misleading content”
…even when that content is not illegal.
Yes, they’re openly backing censorship of what they call “legal but harmful” speech. Which is like saying, “You’re allowed to speak, but not in a way that we find inconvenient.”
Rather than proposing openness, debate, or public education, the Committee wants to:
“compel platforms to put in place minimum standards for addressing the spread of misleading content online”
and also force them to:
“undertake risk assessments and report on content that is legal but harmful.”
What happens if they don’t? Oh, just a polite little financial death sentence for some platforms.
“Ofcom should be given the power to serve penalty notices to services that fail to comply, either 10% of the company’s worldwide revenue, or £18 million, whichever is higher.”
Try running a platform with that particular sword of Damocles hovering above your quarterly earnings call.
Perhaps the most jaw-dropping section is where the Committee enthusiastically promotes turning the internet into a sort of digital DMV.
“The government should mandate ‘Know Your Customer’ checks for participants in the programmatic advertising supply chain”
and to top it off:
“prevent children from accessing inappropriate or harmful outputs” from generative AI.
All of this likely means biometric ID checks for using basic AI tools, and enough surveillance infrastructure to make East Germany look underfunded.
They also want mandatory watermarking of AI content that:
“should be ‘visible’ and ‘cannot be removed’”
Nothing says innovation like tagging every piece of machine-generated creativity with a permanent warning label.
The report positively salivates at the idea of expanding the government’s surveillance muscle.
Here comes NSOIT, or the “National Security Online Information Team,” a rebranding of the Counter Disinformation Unit that previously earned fame for monitoring citizens during the pandemic like a nosy neighbor with binoculars and a clipboard.
Now the Committee recommends:
“The government should consider consolidating responsibility for tracking foreign disinformation campaigns within the National Security Online Information Team (NSOIT).”
Ah yes. Consolidation. The perfect word to make the expansion of state power sound like administrative efficiency.