Influential regulators from the United Kingdom (UK) and the European Commission (EC), the executive branch of the European Union (EU), along with the CEO of TikTok, will discuss how to “tackle online harms” via “evolving regulation and industry technological innovations” at the World Economic Forum’s (WEF) 2023 annual meeting.
The WEF’s annual meetings are often attended by powerful business leaders and government officials who have the power to push the ideas discussed at these meetings onto the general population, and this panel is no exception.
The panelists include:
- TikTok CEO Shou Zi Chew (the head of an app that has a billion monthly active users and often censors content)
- The Chief Executive of the UK’s communications regulator, the Office of Communications (Ofcom), Melanie Dawes (the head of a regulator that’s on the verge of being granted new powers if the Online Safety Bill, a sweeping censorship law, passes)
- The EC’s Vice-President for Values and Transparency, Věra Jourová (who is in charge of a far-reaching EU-wide “anti-disinformation” code under which signatories agree to curb online speech)
The panel is titled “Tackling Harm in the Digital Era” and is scheduled for January 18, 2023, at 11:30 am Eastern Standard Time (EST).
According to the panel description, panelists will discuss how to “build safer digital spaces and tackle online harms” with “evolving regulation and industry technological innovations.” Additionally, the panelists will talk about how to tackle “online abuse” and “other harms” on social media, in the cloud, and in gaming.
The description doesn’t define or provide any examples of what the WEF or the panelists deem to be abuse or harm. However, it does state that the session is “directly linked to the ongoing Global Coalition for Digital Safety Initiative of the World Economic Forum.”
This initiative targets a wide range of legal content that’s branded as harmful, including “health misinformation” and “anti-vaccine content.” Members of the initiative include officials from governments or government regulators in Australia, the UK, Indonesia, Ukraine, Bangladesh, and Singapore; an executive from the tech giant Microsoft; and the founder of the artificial intelligence (AI)-powered content moderation and profanity filter platform Two Hat Security.
Some of the panelists have also indicated in previous statements that they hold an expansive view of the terms “abuse” and “harm” and believe in censoring content that they deem abusive or harmful.
When tech platforms started blocking Russian state media after an EU order in March 2022, Jourová supported the censorship and claimed that freedom of speech was being “abused” to “spread war propaganda.” The EU’s “anti-disinformation” code, which Jourová presides over, also uses the term “harm” to justify the censorship of content related to Covid-19 and Russia’s invasion of Ukraine.
Dawes has previously threatened to use the new regulatory powers Ofcom could receive under the Online Safety Bill to pressure companies into making changes that suppress harmful content that goes viral. Ofcom has also thrown its support behind the idea of tech platforms censoring “potential harms,” a type of content that Ofcom admits may not cause any actual harm.
TikTok has also used the term “harm” to justify its censorship.