Fight censorship and surveillance. Reclaim your digital freedom.

Get news updates, features, and alternative tech explorations to defend your digital rights.

UK Expands Online Safety Act to Enforce Preemptive Censorship for “Priority” Offenses

Britain’s push for online “safety” drifts into a realm of digital pre-crime, where algorithms decide guilt before anything gets seen.



The UK government is preparing to expand the reach of its already controversial censorship law, the Online Safety Act (OSA), with a new set of rules that push platforms toward preemptive censorship.

The changes would compel tech companies to block material before users can even see it, under the claim of stopping “cyberflashing” and content “encouraging or assisting serious self-harm.”

On October 21, the government laid before Parliament a Statutory Instrument titled The Online Safety Act 2023 (Priority Offences) (Amendment) Regulations 2025.

This legal mechanism, used to amend existing legislation without requiring a full new Act, adds two additional “priority offences” to Schedule 7 of the OSA: cyberflashing and encouraging or assisting serious self-harm.

By classifying these as “priority illegal content” under Section 59 of the OSA, the government triggers the law’s strictest obligations for online platforms.

Section 10 of the Act lays out the steps companies must take to remain compliant, steps that go far beyond traditional moderation.

Platforms will be required to employ preemptive censorship systems designed to “prevent individuals from encountering priority illegal content” and to “mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence.”

In reality, this means social networks, forums, and messaging services will need to automatically block or filter posts that algorithms believe might fall under these categories before they are even visible to the public. This would require increased surveillance of people’s online communication.

They will also have to implement rapid takedown procedures for any content reported by users as potentially illegal.

Failure to comply can result in massive penalties: fines of up to 10% of a company’s global revenue or £18 million (about $23 million), whichever is greater, as well as potential service blocking by internet providers.

Such broad obligations virtually guarantee that companies will err on the side of over-censorship.

To avoid multimillion-pound fines, many will likely suppress borderline or even entirely lawful speech.

Automated moderation systems, in particular, are prone to misidentifying context, making it easy to imagine cases where posts offering support or suicide prevention advice could be flagged as “encouraging self-harm.”

What’s emerging is a model of online governance where private platforms are deputized as preemptive censors under threat of severe financial punishment.

While the stated intent may be to protect users from harm, the result is a legal framework that risks silencing legitimate discussion and turning the UK’s digital public square into a heavily filtered environment dictated by government-defined categories of acceptable speech.

If you’re tired of censorship and surveillance, join Reclaim The Net.

