Facebook announces new deep learning system that automatically detects “misinformation”

The system could lead to even more automated censorship on the platform.

Facebook has published information on a new deep-learning system that analyzes interactions on its platforms and then uses this data to automatically detect what it deems to be “misinformation.”

According to Facebook’s research, “misinformation posts elicit different types of reactions from other accounts than do normal/benign accounts and posts.”

This new system, which is called “Temporal Interaction Embeddings” (TIES), automatically monitors and analyzes post interactions along with additional information about the sources and targets of these interactions. TIES can then use this information to learn and detect the types of reactions that are most commonly associated with what Facebook deems to be misinformation.
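
To illustrate the general idea, here is a minimal, hypothetical sketch in Python (using PyTorch) of a model that scores a post from the temporal sequence of reactions it receives, combined with simple features of the post (the "target") and of the reacting accounts (the "sources"). This is not Facebook's implementation; the class name, feature dimensions, and layer choices are assumptions made purely for illustration.

    # Hypothetical sketch of a TIES-like model; not Facebook's actual code.
    import torch
    import torch.nn as nn

    class InteractionSequenceClassifier(nn.Module):
        def __init__(self, num_interaction_types=8, embed_dim=16,
                     source_feat_dim=8, target_feat_dim=8, hidden_dim=32):
            super().__init__()
            # Embed each interaction type (like, angry react, comment, share, report, ...)
            self.interaction_embed = nn.Embedding(num_interaction_types, embed_dim)
            # Project per-interaction features of the reacting account into the same space
            self.source_proj = nn.Linear(source_feat_dim, embed_dim)
            # A recurrent layer summarizes the temporal sequence of interactions
            self.encoder = nn.GRU(2 * embed_dim, hidden_dim, batch_first=True)
            # Combine the sequence summary with static features of the target post
            self.classifier = nn.Linear(hidden_dim + target_feat_dim, 1)

        def forward(self, interaction_types, source_feats, target_feats):
            # interaction_types: (batch, seq_len) integer code for each reaction
            # source_feats:      (batch, seq_len, source_feat_dim) reacting-account features
            # target_feats:      (batch, target_feat_dim) features of the post itself
            seq = torch.cat([self.interaction_embed(interaction_types),
                             self.source_proj(source_feats)], dim=-1)
            _, final_state = self.encoder(seq)   # final_state: (1, batch, hidden_dim)
            summary = final_state.squeeze(0)
            combined = torch.cat([summary, target_feats], dim=-1)
            return self.classifier(combined)     # raw score; apply sigmoid for a probability

    # Illustrative usage on random data: 4 posts, each with 20 observed reactions.
    model = InteractionSequenceClassifier()
    scores = model(torch.randint(0, 8, (4, 20)), torch.randn(4, 20, 8), torch.randn(4, 8))
    print(torch.sigmoid(scores).shape)  # torch.Size([4, 1])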

As part of its research, Facebook trained its TIES system on 2.5 million accounts (80% real, 20% fake) and 130,000 posts (10% of which were labeled as misinformation), then tested its performance in various scenarios, including misinformation detection.
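
Only around 10% of the posts in that labeled set were tagged as misinformation, so any training procedure has to cope with a heavy class imbalance. The hypothetical snippet below shows one common way to handle that for the sketch above, using a class-weighted loss; the tensors are random placeholders, not Facebook's data, and the procedure is an assumption rather than a description of Facebook's actual training setup.

    # Hypothetical training step for the sketch above; not Facebook's procedure.
    model = InteractionSequenceClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Positives (misinformation) are roughly 9x rarer than negatives, so weight them up
    loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([9.0]))

    # One batch of 64 posts: reaction-type codes, reacting-account features,
    # post features, and labels that are 1 for roughly 10% of examples.
    types = torch.randint(0, 8, (64, 20))
    sources = torch.randn(64, 20, 8)
    targets = torch.randn(64, 8)
    labels = (torch.rand(64, 1) < 0.10).float()

    optimizer.zero_grad()
    loss = loss_fn(model(types, sources, targets), labels)
    loss.backward()
    optimizer.step()
    print(float(loss))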

Facebook wrote that it observed “statistically significant gains” in its ability to detect misinformation and that this TIES system can contribute to “enhancing the integrity” of its platform.

The announcement of TIES follows Facebook's statement last week that it will use its automated systems to stop dissenting coronavirus content from going viral by looking for “misinformation markers.”

Facebook has drastically increased the amount of content it censors for violating its misinformation rules this year and recently revealed that more than 100 million posts were censored for coronavirus misinformation in Q2 2020.

It also automatically sends users articles from the World Health Organization (WHO) if they interact with what the platform deems to be “harmful” misinformation.

But despite this mass censorship and the funneling of its users to the WHO, news media outlets have complained that Facebook doesn't censor popular videos that oppose mainstream coronavirus talking points quickly enough.

As a result, Facebook's recent efforts have focused on automated tools such as this TIES system, which detect content the company deems to be misinformation early on and prevent any opposition to the tech overlords' preferred narrative on the coronavirus or other topics from ever gaining traction.

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.
