Facial recognition software and its deployment in the wild has long been brewing as a major controversy – at least among digital rights advocates, who say it can be (and is being) abused across the world.
But the way it is being deployed in the US at this particular moment seems to keep corporate media from looking very hard into the nature of this tech and why it may or may not be harmful to people anywhere – instead limiting the focus of their reporting to a single race or minority. Still, at least some attention is better than none.
To be clear: mass, invasive, and pervasive surveillance of an entire population is not limited to any race or minority. It harms everyone in its way and wake. And the fact that this tech relies on a (for now) sub-par, highly limited subfield of Artificial Intelligence (AI) known as Machine Learning (ML) is not encouraging, either.
Yet instead of digging deep – a human interest story was pretty much how the New York Times went about exploring this subject.
The story centers around Robert Julian-Borchak Williams, a black man in Michigan, US, who – last January – went through an ordeal with the local police for being misidentified as a thief in an “upscale” store.
According to the NYT, the facial recognition tech “works relatively well on white men” while the results are “less accurate for other demographics, in part because of a lack of diversity in the images used to develop the underlying databases.”
What does “a lack of diversity in the images” even imply, though? Laziness, or some kind of racist intent on the part of the companies developing it?
We don’t get a clear answer to that question from the NYT piece.
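The mechanism itself, though, is well understood: a recognition system trained on few examples of one group will simply be worse at telling members of that group apart. Here is a toy sketch of that effect – not any vendor's actual system, and every parameter and name in it is made up for illustration. A simple nearest-centroid matcher is given many training photos per person in "group A" and only one per person in "group B", then tested on fresh photos:

```python
import random

DIM = 32            # length of a fake "face embedding"
NOISE = 1.5         # per-photo noise: lighting, pose, camera, etc.
IDS_PER_GROUP = 20  # people on the watchlist per group

rng = random.Random(42)

def photo(identity):
    # A "photo" is the person's true embedding plus random noise.
    return [v + rng.gauss(0, NOISE) for v in identity]

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(photos):
    # Average the available photos into one stored template.
    return [sum(col) / len(photos) for col in zip(*photos)]

def build_group(photos_per_id):
    identities = [[rng.gauss(0, 1) for _ in range(DIM)]
                  for _ in range(IDS_PER_GROUP)]
    templates = [centroid([photo(ident) for _ in range(photos_per_id)])
                 for ident in identities]
    return identities, templates

# Group A: 10 photos per person; group B: just 1 (the "lack of
# diversity in the images" the NYT quote refers to).
ids_a, tmpl_a = build_group(10)
ids_b, tmpl_b = build_group(1)
gallery = tmpl_a + tmpl_b  # one combined watchlist

def identify(probe):
    # Return the index of the nearest stored template.
    return min(range(len(gallery)), key=lambda i: sqdist(probe, gallery[i]))

def accuracy(identities, offset, trials=10):
    hits = 0
    for idx, ident in enumerate(identities):
        for _ in range(trials):
            hits += identify(photo(ident)) == offset + idx
    return hits / (len(identities) * trials)

acc_a = accuracy(ids_a, 0)
acc_b = accuracy(ids_b, IDS_PER_GROUP)
print(f"group A accuracy: {acc_a:.2f}")
print(f"group B accuracy: {acc_b:.2f}")
```

With the noisy single-photo templates, the under-represented group is misidentified more often – the same statistical shortfall, whatever its cause, that turns into wrongful arrests when police treat a match as evidence.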
But we do learn that “this month, Amazon, Microsoft, and IBM announced they would stop or pause their facial recognition offerings for law enforcement.”
Microsoft has faced heavy accusations in the past over its face ID technology helping China run digital concentration camps. So you might be forgiven if you told yourself immediately – no way is Microsoft giving up on this facial recognition goldmine, however tainted it may appear.
Turns out you would be right – Microsoft, and others mentioned here, like to talk that opportunistic talk, but are highly unlikely to actually walk that walk.
These giants’ facial recognition programs are merely suspended, in other words – until everyone calms down. Then Microsoft will be back, brow-beating developers on GitHub and its other properties into largely inconsequential, not to mention nonsensical, PC language, while its cash cows – like cloud storage and facial recognition software – continue to rake in money.
As for Mr. Williams, the case has thankfully been dismissed.