There is something magnificently British about building a bureaucracy so brazen that it thinks it can police speech across the internet and then calmly posting the invoice to the very companies being policed.
This is the latest chapter of the UK’s Online Safety Act, where Ofcom has flicked the switch on its fee machine and told the world’s biggest tech firms to cough up. The deadline to register has come and gone. The meter is running from April 1, 2026 to March 31, 2027. Bills land in September.
The law requires that “Ofcom’s operating costs for the online safety regime are recovered through fees imposed on certain providers of regulated services.” Which is to say: the referee is paid by the players, except the referee also writes the rules, rewrites them when bored, and can send you off the pitch permanently if you argue.
Pay Up, Then Shut Up
At first glance, the fee sounds almost polite. Somewhere between 0.02% and 0.03% of qualifying worldwide revenue. Pocket change, right? The sort of rounding error a Silicon Valley accountant might miss while reaching for another oat latte.
But then you notice the threshold. Any company pulling in at least £250 million globally from regulated services gets tapped, unless its UK slice is under £10 million. Social networks, search engines, and file-sharing platforms are all included.
And then comes the part where the polite rounding error quietly grows teeth. Ofcom’s online safety budget has already climbed from £71 million to £92 million in a single year. That is a 30% jump. The system is designed so that every pound Ofcom spends is recovered from the industry. If the regulator expands, hires more staff, launches more investigations, and makes more censorship demands, the bill follows along like a loyal Labrador.
Now, what exactly are these companies paying for? A few leaflets about kindness on the internet? A helpline staffed by polite people offering tea and sympathy?
Not quite.
Ofcom can investigate, fine companies up to 10% of their global revenue, and in extreme cases ask courts to block services entirely.
Then there are the so-called “technology notices,” which could require platforms to scan private, encrypted messages.
Messaging service Signal has already made it clear it would rather leave the UK than comply with that sort of demand. The government says it will only use this power when it becomes “technically feasible.”
If you were hoping Ofcom might treat these powers like a decorative sword hanging on the wall, think again. By late 2025, it had already opened 21 investigations and launched five enforcement programs.
A Belize-based operator of adult websites was fined £1 million, plus another £50,000 for not replying to information requests. Then in March 2026, the famously unruly image board 4chan was hit with a £520,000 penalty.
This is a regulator that has been caught reaching across borders, stretching beyond its jurisdiction, planting flags, and telling companies thousands of miles away that British speech rules apply to them.
US-based platforms have already challenged this reach in court. The outcome will determine whether Ofcom is merely ambitious or something closer to a global hall monitor with legal muscle.
The Elastic Meaning of “Harm”
Here’s where things get properly slippery. The Act allows intervention against content that “risks significant harm.”
What counts as harm? Political speech? Satire? Journalism that annoys the wrong people on a Tuesday afternoon?
Ofcom decides what to regulate, how aggressively to enforce it, and how much it needs to spend doing so. The industry then pays exactly that amount. Not more, not less. A perfect loop.
Hire more staff? The bill rises. Open more investigations? The bill rises. Expand the scope of what counts as harmful? You guessed it.