Google-sponsored report recommends subverting online “conspiracy” groups from within

A new way for tech giants to suppress content they deem to be a conspiracy.

Google-owned YouTube already purges large amounts of content it deems to contain “harmful conspiracy theories” and boosts what it calls “authoritative” mainstream media outlets to push what it considers the “right information.”

Now, a report sponsored by Google’s Jigsaw unit (a unit that “explores threats to open societies, and builds technology that inspires scalable solutions”) recommends a new approach: infiltrating and subverting online conspiracy groups from within by targeting “moderate members” of these groups in the hope that they “exert influence on the broader community.”

The report was published by the research organization RAND, which “develops solutions to public policy challenges to help make communities throughout the world safer and more secure.”

RAND receives funding from Google and several US government departments including the Department of Defense, the Department of Homeland Security, the Department of Justice, the Department of State, and the Office of the Director of National Intelligence.

This specific report, “Detecting Conspiracy Theories on Social Media: Improving Machine Learning to Detect and Understand Online Conspiracy Theories,” was sponsored by Google’s Jigsaw unit and conducted within the International Security and Defense Policy (ISDP) Center of the RAND National Security Research Division (NSRD), which conducts research and analysis for the Office of the Secretary of Defense, the US Intelligence Community, the US State Department, allied foreign governments, and foundations.

Google Jigsaw asked RAND’s researchers to answer the question: “How can we better detect the spread of conspiracy theories at scale?”

This led to RAND conducting a two-part research process:

  1. A review of existing scholarly literature on conspiracy theories, followed by a text-mining analysis to try to understand “how various conspiracies function rhetorically”
  2. Building improved machine learning (ML) models to detect conspiracy theories at scale

RAND’s researchers pulled data from Twitter that “characterized four separate conspiracy theories” about “the existence of alien visitation,” “the danger of vaccinations,” “the origin of coronavirus disease 2019 (COVID-19),” and “the possibility of white genocide (WG).”
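The report itself does not publish code, but the general approach it describes, training supervised text classifiers on labeled social media posts and then scoring new posts at scale, can be illustrated with a minimal sketch. The snippet below is a hypothetical example using scikit-learn with invented toy data; the posts, labels, and model choice are assumptions for illustration, not RAND’s actual pipeline.

```python
# A minimal, hypothetical sketch of detecting conspiracy-related text at scale:
# TF-IDF features plus logistic regression, the simplest supervised setup for
# this kind of task. This is NOT RAND's actual model; the posts and labels
# below are invented toy data for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: 1 = conspiracy-related, 0 = not.
posts = [
    "the vaccine is a secret plot to implant tracking chips",
    "new peer-reviewed study finds the vaccine safe and effective",
    "the government is hiding proof of alien visitation",
    "local weather forecast predicts rain through the weekend",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# In a deployed system this scoring step would run over a large stream of
# tweets rather than a single string.
new_posts = ["they started the fires to cover up the truth"]
print(model.predict_proba(new_posts)[:, 1])  # probability of class 1
```

Real systems of this kind differ mainly in scale and model family (for example, transformer-based classifiers rather than TF-IDF), not in this basic train-then-score structure.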

While it wasn’t part of the research, the researchers also referred in the introduction to their report to the “proliferation” of new conspiracy theories and cited the claim that “anti-fascist activists in the Antifa movement started fires in Oregon in summer 2020” as an example.

In addition to developing an ML model to detect conspiracy theories at scale, the RAND researchers provided four recommendations for “mitigating the spread and harm from online conspiracy theories”:

  1. Engage transparently and empathetically with conspiracists
  2. Correct conspiracy-related false news
  3. Engage with moderate members of conspiracy groups
  4. Address fears and existential threats

“Conspiracists have their own experts on whom they lean to support and strengthen their views, and their reliance on these experts might limit the impact of formal outreach by public health professionals,” the researchers wrote when explaining how to implement their third recommendation. “Our review of the literature shows that one alternative approach could be to direct outreach toward moderate members of those groups who could, in turn, exert influence on the broader community.”

RAND’s researchers likened this recommendation to commercial marketing programs that “engage social media influencers (or brand ambassadors), who can then credibly communicate advantages of a commercial brand to their own audiences on social media.”

The researchers also specifically suggested that it might be possible to “convey key messages to those who are only ‘vaccine hesitant,’ and these individuals might, in turn, relay such messages to those on antivaccination social media channels.”
