Facebook has revealed that its automated systems are now advanced enough to correlate spikes in what it calls “hate speech” in near real time across all 50 US states.
Facebook wrote that it monitors these spikes through its Crisis Assessment Dashboard (CAD). Once the CAD detects a spike, it alerts Facebook’s operational teams, which review the content for “risk trends or potential violations” and remove it if it’s deemed to violate the rules.
The CAD was one of several content-tracking and censorship tools that Facebook discussed as part of its preparations for election day.
Another tool that Facebook discussed was its “viral content review system” which flags posts that “may be going viral – no matter what type of content it is” and then takes them down if they’re deemed to be rule-breaking. Facebook added that this system “helps us catch content that our traditional systems may not pick up” and confirmed that it had already been used during this election.
This viral content review system appears to be an iteration of a tool that Facebook mentioned in August which would automatically halt viral dissenting coronavirus content.
These updates from Facebook are reflective of its increased embrace of automated censorship tools throughout 2020.
Related: Big Tech’s shift to preemptive censorship
In July, Facebook CEO Mark Zuckerberg said he wants artificial intelligence (AI) to preemptively censor 99% of hate speech on the platform.
One month later, Facebook revealed that it was on track to achieve Zuckerberg’s vision: 95% of the 22.5 million posts it removed for hate speech in Q2 2020 were taken down automatically, without anyone having to report them.
Beyond hate speech, Facebook is also working on a deep learning system that automatically detects what it deems to be misinformation. The company said this system can contribute to “enhancing the integrity” of its platform.