The concept of ensuring fairness as Google sees it, through the use of machine learning, seems to have followed hot on the heels of the company's decision to combat "online hate" after the 2016 US election.
Google's struggles with "ethical AI" are nothing new, though. Last spring, the company scrapped its newly set up artificial intelligence ethics council just one week after creating it. The decision followed an outcry over the appointment of a conservative figure, Kay Coles James, whose think tank has close ties to President Donald Trump.
An anonymous whistleblower interviewed for the report said he came to Project Veritas to let others know about the goings-on inside Google, and to warn them that the tech giant is not an objective source of information.
Instead, he described the company as a highly biased political machine "bent on never letting somebody like Trump in office again."
With this investigation, the Project Veritas report seems to provide proof of what conservatives have long been saying about Google's ideological and political bias: namely, that it exists, and to their detriment.
In one of the videos, a hidden-camera recording features Jen Gennai, Head of Responsible Innovation – who determines policy and ethics of machine learning – talking about the best-known secret in tech: that Google Search is powered by AI.
And not only that, but that AI will now also power "fairness."
And the reason why? People themselves fell short in deciding what is fair and what is not, so Google, as a big company, wants to take it upon itself to speak for them.
And the way it speaks is through its artificial intelligence algorithms. That would also mean these algorithms would have to take on a role they are even less well equipped to perform at this stage of their development than recognizing extremist content or true hate speech.
“My definition of fairness and bias specifically talks about historically marginalized communities,” Gennai is heard saying in the undercover video.
She added she had little interest in ensuring fairness for those who have either known power or are now in power.
Gennai further revealed that the company thought its own definition of fairness would be "obvious" to everyone, and that everyone would agree with it – "and it wasn't." Gennai then explained that's because Trump voters don't agree with Google's definition of "fairness."
Meanwhile, the anonymous insider is heard saying in the Project Veritas video that "doublethink" has to be used in order to understand what Google really means when it uses the term "fairness."
According to him, the true meaning is the manipulation of search results to suit and reflect Google’s political bias.
And considering the reliance on machine learning algorithms, what needs to happen is to retrain, or "rebias," as he put it, these algorithms; he described "fairness" as merely a dog whistle used by Google.
He said he discovered that Google has a policy known as "ML Fairness" (machine learning fairness, or algorithmic fairness) which in a way turns the idea of "algorithmic unfairness" on its head. The latter signifies prejudice based on race, income, sexual orientation or gender.
This type of unfairness, though, he continued, could also be considered fair in itself.
That's because "it's taking as input the clicks people are making, then figuring out which signals are being generated from these clicks, and which signals it wants to amplify and then also dampen."
Google software engineer Gaurav Gite confirmed the content of one of the confidential documents Project Veritas said it had obtained, giving as an example of ML fairness the "balancing out" of data for women CEOs, even if the real-world figures happen to be low.
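The "balancing out" Gite describes resembles a standard class-reweighting technique in machine learning, where under-represented groups in training data are given larger weights. The sketch below is purely illustrative – the function name and toy data are invented here, and this is not Google's actual ML Fairness code:

```python
from collections import Counter

def rebalance_weights(labels):
    """Assign each item a weight inversely proportional to how often its
    group appears, so under-represented groups are boosted.
    Generic class-reweighting illustration, not any company's real system."""
    counts = Counter(labels)
    total = len(labels)
    num_groups = len(counts)
    # weight = total / (num_groups * group_count): rare groups get weight > 1
    return [total / (num_groups * counts[g]) for g in labels]

# Toy data: 8 results labeled "male", 2 labeled "female"
labels = ["male"] * 8 + ["female"] * 2
weights = rebalance_weights(labels)
print(weights[0])   # majority group: 10 / (2 * 8) = 0.625
print(weights[-1])  # minority group: 10 / (2 * 2) = 2.5
```

With weights like these, a model trained on (or a ranker scoring) the data would treat each item from the minority group as worth several items from the majority group, which is one common way such "balancing" is implemented in practice.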
As for the document, anyone who mentioned it publicly would be branded as a conspiracy theorist, the anonymous insider said during his interview.
The document also deals with the way machine learning acts in an editorial filtering role within the news media ecosystem, allegedly arranging content that agrees with Google's bias closer to the top – where it is much more monetizable – while the other kind sits near the bottom.
Another ambition of the global tech behemoth is to have a "single point of truth" across all of its many products. The report cited a document called "Fake News Letter" as containing this intent.
Yet more documents lay out this policy, concluding that end users are eventually “programmed” – apparently by bringing “unconscious bias” to work with algorithms.
Project Veritas said this was more akin to "social engineering" than web searching.
Google had not officially responded to the report at press time.