Defend free speech and individual liberty online. 

Push back against big tech and media gatekeepers.

Facebook is finding it impossible to police content across so many global languages

The company points out that machine learning requires very large data sets, which are unavailable in many languages.
If you're tired of censorship, cancel culture, and the erosion of privacy and civil liberties, subscribe to Reclaim The Net.

With 2.3 billion users around the world and its service officially localized into 111 languages, Facebook may have created a monster – one it is finding increasingly difficult to control.

And since attempts to exert more control over social networks – and debates about how to go about this – have been all the rage lately, this trend has perhaps inevitably led to the discovery that English is not the only language in which content needs to be moderated – or indeed, policed.

Reuters has now investigated this point, finding that the staggering number of human languages throws a giant wrench into the process – and the sentiment is that Facebook is not "doing enough."

The report makes a connection between all manner of tragedies and large-scale crimes – such as ethnic cleansing in Myanmar and violence in Somalia – and Facebook’s lack of full language support, i.e., a “pro-active” approach to content policing in those parts of the world.

In addition, "social media has the ability to completely derail an election," the agency quoted Mohammed Saneem, Fiji's supervisor of elections, as saying.

Reuters further found that Facebook's nearly 9,500-word community standards document has been translated into only 41 of the 111 officially supported languages. It also makes the case that many users are unaware that any rules prohibiting hate speech and its promotion exist in the first place – which may be one of the reasons the platform's role in crisis hotspots is seen as problematic.

In trying to deal with the complexities of human language – and of content moderation – Facebook is turning to machines. But at least for now, that seems to be a losing battle. The company points out that machine learning requires very large data sets, which are unavailable in many languages.

As of now, AI is being used to single out “hate speech in about 30 languages and ‘terrorist propaganda’ in 19,” the report said, citing company representatives.

Currently, the social media giant employs 15,000 people tasked with hunting down unwanted content, and between them they cover 50 languages.

But Facebook appears to be reacting to some of the criticism by adding more human moderators into the mix – with about 100 to be hired in sub-Saharan Africa.
