
UK Crime Agency Backs “Upload Prevention” Plan to Scan Encrypted Messages

The campaign recasts surveillance as safety, turning encryption from a shield of freedom into a checkpoint of trust.

Jess Phillips speaking in a parliamentary chamber.


Britain’s Internet Watch Foundation (IWF) has decided that privacy needs a chaperone.

The group has launched a campaign urging tech companies to install client-side scanning in encrypted apps, a proposal that would make every private message pass through a local checkpoint before being sent.

The IWF calls it an “upload prevention” system. Critics might call it the end of private communication disguised as a safety feature.

Under the plan, every file or image shared on a messaging app would be checked against a database of known child sexual abuse material (CSAM).

The database would be maintained by what the IWF describes as a “trusted body.” If a match is found, the upload is blocked before encryption can hide it. The pitch is that nothing leaves the device unless it’s cleared, but that is like claiming a home search is fine as long as the police do not take anything.
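
To make the mechanics concrete, here is a minimal sketch of what such an “upload prevention” check could look like, assuming a simple exact-hash blocklist. The function names and the blocklist entry are invented for illustration; deployed scanners are generally reported to use perceptual rather than cryptographic hashes.

```python
import hashlib

# Illustrative blocklist: under the proposal, a database of known-CSAM
# hashes would be compiled and distributed by a "trusted body". The digest
# below is just the SHA-256 of the empty string, used as a placeholder.
BLOCKLIST = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def upload_allowed(file_bytes: bytes) -> bool:
    """Check a file against the local blocklist before it is encrypted.

    The device only ever sees opaque digests, so the user has no way
    to audit what categories of content the list actually contains.
    """
    return hashlib.sha256(file_bytes).hexdigest() not in BLOCKLIST

def send_attachment(file_bytes: bytes, encrypt_and_send) -> None:
    # The defining property of client-side scanning: the plaintext is
    # inspected locally, before encryption is ever applied.
    if not upload_allowed(file_bytes):
        raise PermissionError("upload blocked by local scanner")
    encrypt_and_send(file_bytes)
```

An exact hash like SHA-256 only matches byte-identical files, so real systems lean on perceptual hashing, which tolerates resizing and re-encoding; that tolerance is exactly where the error problem below comes from.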

As cases in Germany have shown, this technology would catch more than criminals. Hashing errors and false positives happen, which means lawful material could be blocked before it ever leaves a phone.
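
The root of those false positives is that perceptual matchers deliberately accept near-misses so that altered copies of known images are still caught. Here is a sketch of threshold matching, with the 64-bit hash size and the distance cutoff chosen purely for illustration:

```python
def hamming_distance(h1: int, h2: int) -> int:
    """Count the differing bits between two 64-bit perceptual hashes."""
    return bin(h1 ^ h2).count("1")

def is_match(candidate: int, known: int, threshold: int = 10) -> bool:
    # Accepting anything within `threshold` bits catches resized or
    # re-encoded copies of a known image, but an unrelated, lawful image
    # whose hash happens to land inside the same radius is flagged too:
    # a false positive.
    return hamming_distance(candidate, known) <= threshold
```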

And once the scanning infrastructure is built, there is nothing stopping it from being redirected toward new categories of “harmful” or “illegal” content. The precedent would be set: your phone would no longer be a private space.

Although the IWF is running this show, it has plenty of political muscle cheering it on.

Safeguarding Minister Jess Phillips praised the IWF campaign, saying: “It is clear that the British public want greater protections for children online and we are working with technology companies so more can be done to keep children safer. The design choices of platforms cannot be an excuse for failing to respond to the most horrific crimes… If companies don’t comply with the Online Safety Act they will face enforcement from the regulator. Through our action we now have an opportunity to make the online world safer for children, and I urge all technology companies to invest in safeguards so that children’s safety comes first.”

That endorsement matters. It signals that the government is ready to use the already-controversial Online Safety Act to pressure companies into surveillance compliance.

Ofcom, armed with new regulatory powers under that Act, can make “voluntary” ideas mandatory with little more than a memo.

The UK’s approach to online regulation is becoming increasingly invasive. The government recently tried to compel Apple to install a back door into its encrypted iCloud backups under the Investigatory Powers Act. Apple refused and instead pulled its most secure backup option from British users, leaving the country with weaker privacy than nearly anywhere else in the developed world.

At the same time, police are arresting roughly 30 people every day for what are described as “offensive” communications. These cases often involve online comments or social media posts, not threats or crimes of violence. The same mindset runs through the Online Safety Act, which hands Ofcom broad powers to pressure tech companies into monitoring what people say and share in private channels.

The direction is unmistakable. Britain is building a system where communication is permitted only on official terms. Encryption, once celebrated as a safeguard against overreach, is being reframed as a tool of defiance. Each new measure arrives with the language of protection and ends with another layer of control. Civil liberties are not vanishing all at once, but they are being trimmed into compliance under the banner of safety.

Helen Rance of the National Crime Agency added her approval, saying: “While encryption offers significant benefits for user security, the rapid and widespread adoption of end-to-end encryption by major tech companies is occurring without adequate consideration for public safety… Tech giants must not use end-to-end encryption as a shield to avoid responsibility for preventing illegal activity on their platforms, particularly in spaces accessed by children. The broad implementation of privacy-enhancing technologies, without a balanced approach to user safety, undermines platforms’ ability to detect and prevent harm and hampers law enforcement efforts to investigate child sexual abuse.”

“Balanced approach” has become the polite term for lowering the walls of privacy. The IWF insists this is not surveillance, only a safety measure. But a scanner that checks your private messages for forbidden material before you send them does not need to report to the government to erode trust; it only needs to exist.

Client-side scanning is a dream for anyone who dislikes encryption’s stubborn independence. It lets platforms pretend they still “respect privacy” while quietly inserting a compliance filter into every device.

Once installed, it will not go away. The IWF calls the plan a “technical fix.” In reality, it is a cultural shift, a step toward normalizing the idea that private communication must first pass inspection.

The British proposal from the IWF mirrors the logic that nearly tore the European Union apart.

The EU tried it first, and it did not end well.

Brussels spent two years pushing “Chat Control,” a major attempt to force private message scanning across the continent. Marketed as a child-protection measure, it quickly became one of the most divisive digital policy fights in the bloc’s history.

When the European Commission unveiled its plan in May 2022, it called the proposal a landmark effort “to prevent and combat child sexual abuse.”

The regulation would have required online service providers, including encrypted messaging apps, to scan communications for known abuse imagery and either block or report it.

Where scanning on the provider’s servers proved impossible, as with end-to-end encrypted services, governments could order companies to install client-side scanning, software that inspects messages on the user’s own device before encryption.

The backlash was instant. Legal experts warned that the plan violated Articles 7 and 8 of the EU Charter of Fundamental Rights, which guarantee privacy and data protection.

Engineers pointed out that any system capable of scanning encrypted messages could just as easily be used for mass surveillance.

Digital rights groups called it reckless, saying it would make every citizen a potential suspect.

As opposition spread, governments began backing away. Germany was the first major country to declare it would not support the regulation, citing constitutional protections for private correspondence.

Other member states followed. The Council of the European Union quietly postponed a vote, admitting there was no majority to push the measure forward.

The Commission’s defenders claimed that privacy and child protection could be balanced with the right technology. But technologists know that was wishful thinking. A system that scans users’ devices cannot remain limited to one purpose. Once it exists, the temptation to expand it is constant.

The proposal also underestimated how often scanning systems make mistakes. Innocent photos are flagged. Mislabeling is common. When the results can trigger police investigations, even a tiny error rate becomes unacceptable.
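
The scale problem is plain arithmetic. Taking, purely as illustrative assumptions, a false-positive rate of 0.1 percent and ten billion images shared per day across major platforms:

```python
false_positive_rate = 0.001       # 0.1%, an illustrative assumption
images_per_day = 10_000_000_000   # ~10 billion, an illustrative assumption

wrongly_flagged = false_positive_rate * images_per_day
print(f"{wrongly_flagged:,.0f} lawful images flagged per day")
# prints: 10,000,000 lawful images flagged per day
```

Even a rate a hundred times lower would still flag a hundred thousand lawful images every day, each one a potential police referral.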

No one disputes the need to stop child exploitation. The question is how far governments will go to claim success. Encryption has never been the obstacle they describe; it is the reason journalists, dissidents, and citizens can speak without fear of reprisal.

Client-side scanning turns that protection into a conditional right.

Once devices are built to inspect their users, the potential for abuse is permanent. Europe has paused its experiment with message scanning, but the same logic is being revived in Britain under a new label.

The IWF calls it “upload prevention.” Brussels called it “Chat Control.” Both rely on the same illusion: that freedom and surveillance can coexist inside the same app.

If you’re tired of censorship and surveillance, join Reclaim The Net.
