During a House Judiciary Committee hearing on antitrust laws, Facebook CEO Mark Zuckerberg said he wants 99% of the content that’s flagged for “hate speech” on his platform to be taken down before anyone sees it.
Zuckerberg made the comments when he was asked about Stop Hate For Profit – an ad boycott campaign that’s pressuring Facebook to censor more “hate speech” and “misinformation” on its platform.
Congressman Jamie Raskin remarked that Zuckerberg didn’t seem “that moved by their campaign” and asked for his thoughts on Stop Hate For Profit’s demands.
“We’re very focused on fighting against election interference and we’re also very focused on fighting against hate speech,” Zuckerberg said in response.
“In terms of fighting hate, we’ve built really sophisticated systems. Our goal is to identify it before anyone even sees it on the platform,” Zuckerberg said. “And we’ve built AI [artificial intelligence] systems and as I’ve mentioned have tens of thousands of people working on safety and security with the goal of getting this stuff down, so that way, before people even see it. And right now, we’re able to proactively identify 89% of the hate speech that we take down before I think it’s even, even seen by other people. So, you know, I want to do better than 89%, I’d like to get that to, to 99% but we have a massive investment here.”
Facebook has faced mounting pressure from activists, the mainstream media, and politicians to regulate content based on the vague and subjective term “hate speech.”
In response to this pressure, Facebook has increased its reliance on the AI systems that Zuckerberg described. These systems, which are prone to algorithmic errors, are now responsible for removing most of the content that’s flagged as hate speech on Facebook’s platforms.
In the first quarter of 2020, Facebook took down 9.6 million pieces of content for hate speech and, as Zuckerberg noted in his testimony, 89% of this content was removed proactively by Facebook’s AI.
When Facebook released these figures in May, the company also announced that it was training its AI to censor “hateful memes” and launched a “Hateful Memes Challenge” where participants were challenged to develop an algorithm that identified “multimodal hate speech in internet memes.”
The hate speech and “harassment” policies deployed by Facebook and other Big Tech companies have been criticized for the way they’re often used as a vehicle to censor jokes, criticism, and other types of innocuous content.