Plans to implement sweeping content moderation powers for tech companies have been put on hold by the UK government, as concerns grow that reintroducing speech controls could disrupt sensitive trade discussions with President Donald Trump’s allies.
The British Government had been exploring a return to the abandoned “legal but harmful” proposal, a measure that would have forced online platforms to purge content deemed “harmful” yet not unlawful. But after internal pushback, and with a wary eye on Washington’s stance, the idea has been quietly dropped.
The original measure, introduced under Conservative leadership in 2022, triggered significant dissent, including from within the party itself. Conservative leader Kemi Badenoch, then serving as business secretary, dismissed the idea, warning it could mean “legislating for hurt feelings.” The proposal was ultimately replaced with tools that give individuals more choice over the material they encounter online rather than imposing top-down restrictions.
According to reports, the recent move to distance the government from any revival of the censorship clause comes amid Labour’s review of the Online Safety Act, launched after riots last summer linked to false claims about a Southport attacker. While that review sparked fresh debate over “misinformation,” officials have opted not to revisit the “legal but harmful” language, choosing instead to emphasize online protections for children.
Labour appears focused on building upon new safety measures coming into force this summer, including mandatory age checks for adult content. Technology Secretary Peter Kyle is working on a package aimed at strengthening youth safeguards, though these proposals stop well short of any return to compelled content takedowns.
“We are really committed to keeping children safe,” a government insider said. “Finally, the Online Safety Act is starting to have an impact, and we will see some enforcement action shortly. Age assurance will also be a massive step forward when it comes in the summer, but we’re actively exploring other ways of protecting children.”
While the UK government’s removal of the “legal but harmful” provision from the Online Safety Act was intended to address concerns over free speech and censorship, significant issues remain. The Act still imposes broad duties on online platforms to assess and mitigate risks associated with user-generated content.
This approach necessitates stringent content moderation practices, which may lead platforms to over-censor in an effort to avoid substantial fines or legal repercussions. Consequently, lawful content could be unjustly restricted, posing a continued threat to freedom of expression.
Despite the government’s apparent retreat from the “legal but harmful” censorship idea, its ongoing commitment to “age assurance” raises serious red flags. While framed as a tool to protect children from harmful content online, age assurance often translates to mandatory digital identification systems. These systems would require users to verify their age through personal data, potentially including facial recognition, government-issued IDs, or other biometric markers. In practice, this means building a digital infrastructure that links individuals’ online activity to their real-world identity, drastically undermining privacy and anonymity on the internet.
The introduction of digital ID systems for age verification creates a centralized repository of highly sensitive personal data; data that will become a prime target for hackers, corporations, and state surveillance. Once users are forced to verify their identity to access certain platforms or content, the web loses its foundational openness and neutrality. “Age assurance” risks creating a surveillance-by-default internet, where every click and view is traceable. It may start with certain content, but once the infrastructure is in place, the temptation to expand its scope becomes all too real, whether to combat “misinformation,” enforce speech codes, or track dissidents.
Officials are intent on pursuing only narrow legal adjustments, wary of reigniting a national debate about online speech rights. Pushing tech firms to police ambiguous categories of “harmful” material would risk backlash not only at home but also in the US, where prominent figures in Trump’s circle have already voiced strong objections.
While ministers maintain that keeping children safe is a priority, they appear unwilling to jeopardize international negotiations by reigniting contentious speech restrictions. For now, the government is steering clear of legislation that could be seen as curbing legal expression, focusing instead on technical safeguards and user empowerment.
Anonymous sources close to the negotiations are quoted as saying that Keir Starmer’s government intends to carry out an enforcement review of this law, as well as of the Digital Markets, Competition and Consumers Act.
But one of the sources downplayed this, describing it as “just a regulatory review of the implementation (…) not a do-over.”
While the Online Safety Act is being criticized – including by the White House – both for its negative effect on online free speech and for its possible impact on US tech companies, there are also those who vigorously defend it.
The law is promoted by the UK government, which has thus far ignored all criticism, asserting instead that the intent is to protect children on the internet and improve “digital health.”
This is why the law has been able to garner support from groups like the Molly Rose Foundation (MRF), which is now “dismayed and appalled” by the news that the Online Safety Act is on the table for the trade talks.
To express their dissatisfaction, MRF sent a letter to Secretary of State for Business and Trade Jonathan Reynolds, referring to the inclusion of the law as “an appalling sellout of children’s safety.”
Others who share this stance and have criticized the government for putting the Online Safety Act up for any kind of review include House of Lords member Beeban Kidron and the children’s charity NSPCC.
In early March, the law’s champion, Kyle, was still telling the law’s supporters what they wanted to hear.
“None of our protections for children and vulnerable people are up for negotiation,” Kyle told LBC after US tech firms said they might be forced to leave the UK because of the law.
Then, at the start of April, reports emerged that a visiting US State Department delegation that met with Ofcom, the UK regulator that enforces the Online Safety Act, had expressed concerns about the legislation’s potential to suppress free speech.
Just three days ago, the Guardian reported that Secretary Reynolds “denied that concerns over free speech had featured in tariff negotiations with the US.”