Join the pushback against online censorship, cancel culture, and surveillance.

UK Regulator and “Kick It Out” Report Calls for Policing and Censorship of Legal Online Speech Under the Online Safety Act

What counts as harm is no longer measured by law, but by how much it offends the ear of a regulator.

[Image: Keir Starmer in a suit and tie in front of a British flag, flanked by two empty speech bubbles.]

If you’re tired of censorship and surveillance, subscribe to Reclaim The Net.

A new report by Ofcom and anti-discrimination group Kick It Out has thrown its weight behind a growing campaign to restrict online speech in the UK, even when that speech breaks no laws.

Backed by the government and tied to powers granted under the Online Safety Act, the document marks a significant moment in the institutional push to transform what is currently legal expression into material subject to moderation, suppression, and even criminal investigation.

It’s a move that would have been politically unthinkable a decade ago. Today, it’s being packaged as harm reduction.

The most striking aspect of the report is how explicitly it acknowledges the legality of much of the content it targets. It doesn’t claim that current laws are inadequate for tackling actual hate speech or criminal abuse. Instead, it argues that lawful content (opinions, statements, or commentary that some users might find offensive) ought to be suppressed anyway.

The authors present this as a moral imperative. Content that might “normalize” harmful behavior or “offend” certain communities, even when fully within legal bounds, is framed as dangerous.

[Image: Cover page of the report “Online hate and abuse in sport,” published by Ofcom in partnership with Kick It Out, 16 May 2025.]

The report calls for stronger moderation tools, more aggressive enforcement, and deeper intervention from tech platforms. The implication is that legality is no longer the standard; perceived harm is.

This shift is more than semantic. It reflects an ideological transformation in how speech is treated online: less as a right, and more as a privilege to be conditioned on social acceptability.

Throughout the report, there’s a persistent conflation between criminal activity and mere controversy.

The distinction between unlawful hate speech and legally protected, albeit unpleasant, commentary is muddied. The end result is a framework that sees all negative speech, particularly speech involving race, religion, or sexuality, as inherently harmful and in need of control.

“Some participants hoped that if the police started taking action against hate and abuse online, this would send a message,” the report states, in what reads more like a policy recommendation than a casual observation.

That this so-called “hate and abuse” is often legal does not appear to be a problem for the authors. Instead, it’s a justification for expanding law enforcement’s role in the digital realm.

What the report ultimately calls for is a system in which speech that is legally protected is nevertheless treated as if it were criminal, not by the courts, but by private companies under regulatory pressure.

Among the recommendations are platform-level interventions: turning off comment sections, using third-party moderation services, and automating the identification and removal of “harmful” content.

These interventions are framed not as options, but as obligations for companies that wish to comply with emerging standards under the Online Safety Act.

Crucially, Ofcom, now the lead regulator under that Act, signals its willingness to develop codes of practice that would pressure platforms into enforcing rules beyond what the law requires. In effect, this turns Ofcom from a regulatory body into a speech governance authority, with no clear limits.

The broader implications of this report cannot be ignored. What began as an effort to address clear-cut online harms (harassment, abuse, and criminal threats) is now morphing into a campaign to govern tone, attitude, and political expression.

Kick It Out, an organization once focused on combating racial abuse in football, has become a key advocate for expanding these new powers. It has lobbied for censorship mechanisms to be enshrined in law, and this report forms part of that larger campaign.

Far from being a neutral study of online behavior, it reads as a policy document meant to entrench a new norm: one in which platforms are expected to police public discourse according to criteria that go far beyond the law.

This is not a slippery slope argument. It’s already happening.
