

Moderation AI is doing a bad job of considering language, culture, and context


“Speech” and “language” refer to two quite separate phenomena, though the terms are often used interchangeably.

“Speech” is the ability of humans to physically articulate sounds. “Speech” is also often used as code for “free speech” – the right and ability to express one’s thoughts, through language, without repressive consequences.

Language, on the other hand, is one of the most complex and mysterious phenomena in the known universe, and that’s an understatement. The vast diversity of our languages is one of the most distinctive, if not the most distinctive, traits of human beings, and of human beings alone.

Just think about it: there are today 193 recognized countries in the world, but more than 6,000 languages spoken across them.

Language is our strength as human beings, and sometimes our weakness when we fail to communicate directly. But mostly it is our strength, as is evidenced by the latest struggles with restricting “speech” faced by social media giants.

TechCrunch reports that Google is now being hoisted by its own petard of promoting and implementing “politically correct speech.” And that concerns only one of those thousands of languages: English.

The report cites a University of Washington study, which references “African-American English” and “white-aligned English” – and looks into what’s “offensive and hurtful” in the context of who said it, and how they said it.

Of course, artificial intelligence algorithms can’t do it; human intelligence can hardly do it. The task is near impossible.

And so the paper found that human language is too complex to be effectively and fairly policed by machines – or even humans. Well, color us surprised.

According to the article: “This isn’t to say necessarily that annotators are all racist or anything like that. But the job of determining what is and isn’t offensive is a complex one socially and linguistically, and obviously being aware of the speaker’s identity is important in some cases, especially in cases where terms once used derisively to refer to that identity have been reclaimed.”

In this context, to the surprise of no one, Google Jigsaw’s Perspective API, designed to produce “a toxicity score,” is found to be lacking.

The findings suggest that Perspective was “way more likely to label black speech as toxic, and white speech otherwise,” with TechCrunch noting: “remember, this isn’t a model thrown together on the back of a few thousand tweets – it’s an attempt at a commercial moderation product.”
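For context, Perspective is queried over a simple REST call that returns a probability-style toxicity score for a piece of text. Below is a minimal sketch of such a request in Python; the endpoint and payload shape follow the publicly documented Perspective API, while the API key and example comments are placeholders, not material from the study.

```python
# Minimal sketch: asking the Perspective API for a toxicity score.
# Endpoint and request shape per Google's published Perspective API docs;
# the API key and sample comments below are placeholders.
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
    f"?key={API_KEY}"
)

def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY probability (0.0 to 1.0) for a comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    # The study's point: the same sentiment can score very differently
    # depending on dialect and reclaimed terms, and a single number hides that.
    for comment in ["What a lovely day", "You are all idiots"]:
        print(comment, "->", toxicity_score(comment))
```

The criticism in the paper is precisely that this single number carries no sense of who is speaking or how, which is where the dialect-dependent errors appear.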

At this point in our understanding of language, and given the maturity of artificial intelligence-powered algorithms, not to mention of “commercial moderation products,” projects such as Jigsaw might as well be described as snake oil.

Or, as Maarten Sap, the lead author of the cited paper, put it more mildly in his email response to TechCrunch:

“We have a very limited understanding of offendedness mechanisms, and how that relates to the speaker’s, listener’s, or annotator’s demographic/group identity, and yet we’re pushing full steam ahead with computational modeling as if we knew how to create a gold standard dataset. I think right now is the right time to start teaming up with political scientists, social psychologists, and other social scientists that will help us make sense of existing hate speech behavior.”

