
Headlines suggesting smart assistants are racially biased aren’t entirely accurate

In the rush to claim racial bias, the headlines glossed over the details of the study.


A recent Stanford University study, published in the journal Proceedings of the National Academy of Sciences, suggests that speech recognition systems from tech giants such as Apple and Google may exhibit “racial disparities”.

What’s more, the study suggests that Apple’s system is the worst offender, showing the greatest racial disparity compared with Alexa and Google Assistant.

Based on this finding, and in the usual rush to declare everything racist, many outlets and individuals alike concluded that Siri and other voice assistants are racially biased.

The study says that Apple’s voice recognition engine misidentified 23% of the words spoken by white individuals, compared with 45% of the words spoken by black individuals.
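For context, these percentages are word error rates, the standard accuracy measure in speech recognition research: the share of words in a human reference transcript that the system gets wrong through substitutions, insertions, and deletions. Below is a minimal sketch of how the metric is typically computed, not the study’s own code:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word in a four-word reference -> WER of 0.25 (25%).
print(word_error_rate("turn on the lights", "turn on the light"))
```

A 45% word error rate means nearly half the words in a transcript come out wrong, which is why the headline numbers look so stark.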

However, digging deeper and taking a closer look at the study reveals that Siri, or any other speech recognition system for that matter, may not be “racist” after all.

There are two concrete reasons why.

Firstly, the results published by Stanford University weren’t taken from the actual voice assistant systems that are available to users.

The New York Times article on the topic buries the more important information deep within its report:

“The study tested five publicly available tools from Apple, Amazon, Google, IBM and Microsoft that anyone can use to build speech recognition services. These tools are not necessarily what Apple uses to build Siri or Amazon uses to build Alexa. But they may share underlying technology and practices with services like Siri and Alexa.”

Instead, the researchers relied on general-purpose speech recognition tools that the tech giants offer to any developer.

Apple’s publicly available speech-to-text tooling, for example, is not the same technology that powers Siri. These public tools don’t represent voice assistants such as Siri and Alexa, and they don’t fully share the same underlying software implementations.
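To make the distinction concrete, here is a minimal sketch of the kind of general-purpose developer tool the study actually tested, using Google’s publicly available Cloud Speech-to-Text Python client; the file name and audio settings are placeholder assumptions. A developer calling this API is not talking to Google Assistant, only to a building block that may or may not share internals with it.

```python
# Minimal sketch: transcribing a short audio clip with Google's public
# Cloud Speech-to-Text API -- a developer building block, not Google Assistant.
# Assumes: pip install google-cloud-speech, plus configured API credentials.
from google.cloud import speech

client = speech.SpeechClient()

# "sample.wav" is a placeholder file: 16 kHz, 16-bit mono PCM assumed.
with open("sample.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    # Each result carries one or more ranked candidate transcripts.
    print(result.alternatives[0].transcript)
```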

Moreover, the tools were tested in May and June of last year, meaning the results are a year old.

But speech recognition systems and voice assistant technology are updated all the time, and Amazon, Google, and Apple have all pushed several software updates since the study’s testing period.

Secondly, here’s the other important consideration: the African-American speech used to test the systems contained heavy slang. Naturally, identifying slang can be difficult for any publicly available voice recognition system, regardless of the speaker’s race.

The voice recognition systems failed even when white speakers used the same slang, making it clear that the bias was not against accents or voices, but against the slang itself.

If the same test were conducted with commonly spoken English, free of tricky slang or dialect, a more accurate estimate of any racial bias could be obtained.
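As a sketch of what such a controlled test could look like, reusing the word_error_rate helper from earlier: have speakers from every group read the same script, so that vocabulary (including slang) is held constant and only pronunciation varies, then compare average error rates per group. The groups and transcripts below are hypothetical.

```python
from statistics import mean

# Hypothetical controlled test: every speaker reads the SAME script, so
# vocabulary (including slang) is held constant and only pronunciation
# varies between groups. The data below is made up for illustration.
samples = [
    # (speaker_group, reference_script, system_transcript)
    ("group_a", "turn on the kitchen lights", "turn on the kitchen lights"),
    ("group_a", "turn on the kitchen lights", "turn on the kitchen light"),
    ("group_b", "turn on the kitchen lights", "turn on the kitchen lights"),
    ("group_b", "turn on the kitchen lights", "turn of the kitchen lights"),
]

per_group = {}
for group, ref, hyp in samples:
    # word_error_rate is the helper sketched earlier in this article.
    per_group.setdefault(group, []).append(word_error_rate(ref, hyp))

for group, rates in sorted(per_group.items()):
    print(f"{group}: mean WER = {mean(rates):.3f}")
```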


