UK wants to ban speech that is “knowingly false,” could cause “non-trivial” “emotional harm”

A vaguely worded, dangerous proposal to kill what's left of free speech in the country.

The UK government has confirmed that the Law Commission's recommendations regarding the controversial proposed Online Safety Bill, which, among other things, would criminalize sending communications containing “knowingly false information,” have been added to the updated draft of the upcoming legislation.

The bill is being presented as an effort to improve anti-harassment regulation online, but critics consistently warn that many of its provisions, often because of overly broad language open to interpretation, could end up harming freedom of speech in the country.

The latest recommendation is no different: groups monitoring the bill, which is still being drafted, point out that “knowingly false” is a vague term that could lead to unwarranted removal of online content. Another new definition being introduced is “genuinely threatening.”

Ironically, when the commission made the recommendation – which in full reads that a new offense should be introduced when a person sends a communication they know to be false with the intention of causing “non-trivial emotional, psychological or physical harm” – the goal was purportedly to clarify previously ill-defined terms such as “grossly offensive,” “obscene” and “indecent.”

“(…) This offence will make it easier to prosecute online abusers by abandoning the requirement under the old offenses for content to fit within proscribed yet ambiguous categories such as ‘grossly offensive,’ ‘obscene’ or ‘indecent’,” a government department said in a statement, adding:

“Instead it is based on the intended psychological harm, amounting to at least serious distress, to the person who receives the communication, rather than requiring proof that harm was caused. The new offenses will address the technical limitations of the old offenses and ensure that harmful communications posted to a likely audience are captured.”

The bill’s massive scope and continued ambiguous wording have free speech advocates repeatedly voicing concern about how this proposed new law would be implemented, how that implementation would affect free speech, and how it could ultimately change the face of online communication as we have known it since the inception of the internet.

But the authorities march on. The latest set of recommendations, incorporated into the bill on February 4 and announced by the Department for Digital, Culture, Media and Sport, expands the already hugely broad legislation to cover content designated as illegal relating to fraud, revenge porn, racial hatred, weapons and people trafficking, prostitution and the promotion of suicide, while also criminalizing communications interpreted as domestic violence or as rape and murder threats.

This adds to the offenses already covered by the bill and used to strongly promote it, such as child sexual abuse and terrorism.

The government touts the bill and its new offenses as a way to protect people and to force social media companies to censor content on their platforms more vigorously and quickly. It also promises, just as vaguely, that free speech will be “enshrined” in the future law, should it pass. A major concern is that tech companies would choose “over-moderation,” i.e., even more rampant online censorship, over their users’ rights, just to avoid any hint of legal liability.

The key change is that instead of waiting for users to flag content they believe fits any of these grounds for removal and then acting on those reports, big tech social media companies would be expected to act “proactively” – and, according to the department’s press release, be the ones preventing people from being exposed to such harms “in the first place.”

Failure to do so would cost the likes of Facebook, Twitter, YouTube and TikTok fines of up to ten percent of their annual global turnover. Another measure the UK wants is the power to block non-compliant social media platforms – a provision critics say is reminiscent of how the internet is controlled in autocratic regimes. In addition, company executives would be held accountable.

Judging by reports, the “proactive” element mostly concerns protecting celebrities such as footballers from racist abuse, as well as dealing with revenge porn and similar offenses; Covid “disinformation” could not possibly have been left out, and it is presented as one of the “key safety concerns” that platforms must continue to tackle through content policing and removal.

Culture Secretary Nadine Dorries recently addressed the “proactive” component in a statement to the BBC, saying that online platforms should not wait for the bill to pass and become a legal obligation, but should start aligning their policies with it immediately.

It may sound strange that, in a democracy, a government official appears certain a draft will become law even before parliamentary debate and a vote; nevertheless, Dorries went on to advise tech companies to remove what she called harmful algorithms (presumably referring to recommendation algorithms).
