Facebook admits its moderation tools are a “blunt instrument”, can’t understand context

The way the giant is now arguing its case against the ECJ decision sheds more light on the way it rates the quality of its own technology.

With some insight into how artificial intelligence-powered algorithmic solutions work today, and what their current limitations are, it quickly becomes clear that Facebook’s “moderation” and content policing based on these automated filtering tools is imperfect, to say the least.

And if that was ever a secret, Facebook has now made it public in an attempt to challenge a European Court of Justice (ECJ) ruling. Experience shows that Facebook’s content moderation system can make mistakes that harm the social media giant’s users by getting them wrongly banned or suspended. And yet, the company has routinely defended the system.

But when the top EU court decided earlier this month that Facebook must use automated filters to detect “defamatory content”, the company responded by saying its tech was simply not good enough. More specifically, Facebook this time called it a “blunt instrument” unable to properly understand context and, therefore, to make correct decisions.

“Determining a post’s message is often complicated, requiring complex assessments around intent and an understanding of how certain words are being used,” said Monika Bickert, Facebook’s VP of global policy management.

The ECJ ruling came in a case brought against Facebook Ireland by Eva Glawischnig-Piesczek, a former leader of Austria’s Green Party, who sued the company before domestic courts for not removing a comment she considered defamatory from the platform – and for not disclosing the real-world identity of the user who posted it.

Eventually, the case reached the ECJ, which ordered Facebook to delete the post and all verbatim reposts everywhere – not just in the EU. That’s controversial in its own right, but the ruling also requires posts “identical” or “equivalent in content” to be identified going forward – something that may completely ignore the context in which a post is shared.

And given the sheer volume of content it’s trying to police, the only practical way for Facebook to identify such duplicates is algorithmic. The way the giant is now arguing its case against the ECJ decision sheds more light on how it rates the quality of its own technology. Apparently, not very highly.
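To illustrate why exact-match filtering is such a blunt instrument, here is a minimal sketch of verbatim duplicate detection – not Facebook’s actual system; the banned phrase, function names and example posts are all hypothetical:

```python
import re

def normalize(text: str) -> str:
    """Lower-case, drop punctuation and collapse whitespace so that
    trivially re-formatted copies of a phrase compare as equal."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def matches_banned_phrase(post: str, banned_phrase: str) -> bool:
    """Flag a post if it contains the banned wording verbatim after
    normalization. Pure string matching: no model of intent, speaker,
    or surrounding discussion."""
    return normalize(banned_phrase) in normalize(post)

# Hypothetical wording a court has found defamatory.
BANNED = "Politician X is a corrupt traitor"

# A verbatim repost is caught, as the ruling demands:
print(matches_banned_phrase("politician x is a CORRUPT traitor!!!", BANNED))  # True

# False positive: a post condemning the claim contains the same words,
# so it gets flagged even though its intent is the opposite.
print(matches_banned_phrase(
    "It is outrageous to say that Politician X is a corrupt traitor", BANNED))  # True

# False negative: an "equivalent in content" paraphrase shares no exact
# wording and sails straight past the filter.
print(matches_banned_phrase("Politician X is a dishonest crook", BANNED))  # False
```

Deciding which of those three posts actually repeats the defamation requires exactly the kind of contextual judgment that Bickert says the company’s automated tools lack.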

Free-speech group Article 19 agrees, with its Executive Director, Thomas Hughes, saying earlier that the judgment would harm online freedoms because it “does not take into account the limitations of technology when it comes to automated filters.”
