Biologist and evolutionary theorist Bret Weinstein has issued a chilling warning about the harmful effects of Google’s artificial intelligence (AI), saying that the people wielding these algorithms have set us on a very dangerous course.
He made the comments in a discussion with Evergreen State College graduate Benjamin Boyce about the recent demonetization of Boyce’s YouTube channel and the subsequent suppression of his content in Google Search. Weinstein suggested that these actions may have been triggered by Google’s machine learning (ML) “fairness” algorithm, which could have decided that Boyce’s content should disappear.
During the discussion, Weinstein focused on how troubling it is for AI to decide that it knows best which information people should see:
“We missed the boat with respect to the fears about AI. That we were expecting robots and that we are actually now living the very early stages of the AI apocalypse and we don’t even know it because the robots aren’t an important factor. That what it is is the algorithms and in some sense, the algorithms have started to think for us.”
He also suggested that the way Google is using this AI will end poorly:
“Google is behaving like a little totalitarian state except it happens to be one that is now in some amorphous way sitting right in the center of our ability to collectively think. That’s a very dangerous process and it’s not going to end well.”
Weinstein added that “the algorithms are inevitably going to confuse people at Google who are programming the algorithms” and said that he believes there’s now no way to convince the people leading Google’s Responsible Innovation team to change course, saying: “These are maniacs who do not realize that that’s what they have become.”
Despite this bleak analysis, Weinstein does not believe that all hope is lost, and he proposed a two-step solution for overcoming the harmful effects of Google’s AI.
First, he suggested that we retool the internet so that we can easily leave behind technologies, such as Google’s AI, that attempt to think on our behalf. Second, he recommended finding an effective way to reject the stigma often attached to the ideas suppressed by this AI, noting that responding to that stigma without becoming enraged is often the best way to do so.