The first phase of the UK’s Online Safety Act, a sweeping censorship law, has come into force.
The requirements imposed on online platforms include not only identifying and removing illegal content but also taking steps to reduce the risk of such content being posted.
The law includes truthful and non-violent speech in the “illegal” category that platforms must remove. The first phase rules cover a huge list of “priority offenses” – 130 types of content in total, which can be grouped into 17 categories.
The stated goals of the legislation are one thing, but its many critics have consistently warned that its interpretation and implementation pose an unacceptable risk of stifling lawful speech.
The “Foreign Interference” category is where truthful speech is also targeted for removal. This stems from the definition of “misrepresentation” in the National Security Act 2023, on which the Online Safety Act bases its rules.
That definition covers, among other things, “presenting information in a way which amounts to a misrepresentation, even if some or all of the information is true.”
Other priority offenses center on “fear” of violence, such as “fear or provocation of violence” and “putting people in fear of violence.”
Given how prone the UK’s top former and current officials and lawmakers are to conflating non-violent speech with violence, this is another cause for concern.
In the wake of the Southport riots, high-ranking officials repeatedly equated “misinformation” with incitement to violence, while PM Keir Starmer accused supporters of activist and journalist Tommy Robinson of seeking a “vicarious thrill from street violence.”
The category of racial hatred is also among the law’s priority offenses. It relies on the Public Order Act 1986 and the way that act deals with stirring up racial hatred, covering not only cases where hatred is actually stirred up but also those where it is merely “likely” to be stirred up.
And this stirring up can be done not only through behavior but also through words, including those that are threatening, abusive, or “insulting.”
This type of definition leaves plenty of room for interpretation, and even before this first phase of the Online Safety Act came into force, many UK citizens were arrested or interrogated for allegedly “stirring up racial hatred” with their social media posts.
Presented with the choice between paying huge fines and erring on the side of over-removal, tech companies are expected to choose the latter.