
TikTok CEO, UK, and EU to discuss regulating “harmful content” at World Economic Forum 2023 annual meeting

The discussion will focus on tackling "online harms" on social media, in the cloud, and in gaming.


Influential regulators from the UK and from the European Commission (EC), the executive branch of the European Union (EU), will join the CEO of TikTok to discuss how to “tackle online harms” via “evolving regulation and industry technological innovations” at the World Economic Forum’s (WEF) 2023 annual meeting.

The WEF’s annual meetings are often attended by powerful business leaders and government officials who are in a position to push the ideas discussed at these meetings onto the wider population, and this panel is no exception.

The panelists include:

- Shou Zi Chew, CEO of TikTok
- Věra Jourová, Vice-President for Values and Transparency at the European Commission
- Dame Melanie Dawes, Chief Executive of the UK communications regulator Ofcom

The panel is titled “Tackling Harm in the Digital Era” and is scheduled for January 18, 2023, at 11:30 am Eastern Standard Time (EST).

According to the panel description, panelists will discuss how to “build safer digital spaces and tackle online harms” with “evolving regulation and industry technological innovations.” Additionally, the panelists will talk about how to tackle “online abuse” and “other harms” on social media, in the cloud, and in gaming.

The description doesn’t define or provide any examples of what the WEF or the panelists deem to be abuse or harm. However, it does state that the session is “directly linked to the ongoing Global Coalition for Digital Safety Initiative of the World Economic Forum.”

This initiative targets a wide range of legal content that’s branded as harmful, including “health misinformation” and “anti-vaccine content.” Members of the initiative include officials from governments or government regulators in Australia, the UK, Indonesia, Ukraine, Bangladesh, and Singapore, an executive from the tech giant Microsoft, and the founder of Two Hat Security, an artificial intelligence (AI)-powered content moderation and profanity filter platform.

Some of the panelists have also indicated in previous statements that they have an expansive view of the terms abuse and harm and believe in censoring content that they deem to be abusive or harmful.

When tech platforms started blocking Russian state media after an EU order in March 2022, Jourová supported the censorship and claimed that freedom of speech was being “abused” to “spread war propaganda.” The EU’s “anti-disinformation” code, which Jourová presides over, also uses the term harm to justify the censorship of content related to Covid-19 and Russia’s invasion of Ukraine.

Dawes has previously threatened to use the new regulatory powers Ofcom could receive under the Online Safety Bill to pressure companies into making changes that suppress harmful content that goes viral. And Ofcom has previously thrown its support behind the idea of tech platforms censoring “potential harms” — a type of content that Ofcom admits may not cause any actual harm.

TikTok has also used the term harm to justify its censorship.

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

