YouTube, the world’s dominant video sharing platform, has already removed over one million videos for violating its strict and controversial “misinformation” rules. But in a new announcement, the tech giant has revealed that it plans to get even stricter and suppress “new misinformation” preemptively, before it has a chance to gain traction.
In a blog post, YouTube’s Chief Product Officer Neal Mohan described how the video-sharing platform will start “catching new misinformation before it goes viral.” The process will involve continuously training YouTube’s machine learning systems with “an even more targeted mix of classifiers, keywords in additional languages, and information from regional analysts” to identify “narratives” that YouTube’s main classifier doesn’t catch.
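YouTube has not published how these systems work, but the mix Mohan describes (seed keywords in multiple languages layered on top of a learned classifier) can be illustrated with a toy sketch. Everything here is hypothetical: the keyword lists, token weights, function names, and threshold are invented for illustration and do not reflect YouTube's actual, proprietary systems.

```python
# Illustrative sketch only: YouTube's real classifiers are proprietary.
# Combines multilingual keyword flags with a toy bag-of-words score to
# decide whether a video's metadata should be queued for review.

# Hypothetical seed phrases per language, of the kind "regional analysts"
# might supply.
KEYWORDS = {
    "en": {"miracle cure", "they don't want you to know"},
    "es": {"cura milagrosa"},
}

# Hypothetical per-token weights a trained classifier might have learned.
TOKEN_WEIGHTS = {"cure": 0.8, "miracle": 0.7, "vaccine": 0.3, "recipe": -0.5}

def score(text: str, lang: str = "en") -> float:
    """Return a rough risk score in [0, 1] for a piece of metadata."""
    lowered = text.lower()
    s = 0.0
    for phrase in KEYWORDS.get(lang, set()):
        if phrase in lowered:
            s += 1.0  # hard keyword hit
    for token in lowered.split():
        s += TOKEN_WEIGHTS.get(token, 0.0)  # learned-weight contribution
    return max(0.0, min(s, 1.0))  # clamp to [0, 1]

def needs_review(text: str, lang: str = "en", threshold: float = 0.5) -> bool:
    """Flag metadata whose risk score crosses a review threshold."""
    return score(text, lang) >= threshold
```

In a real deployment the hand-set weights would be replaced by a continuously retrained model, and flagged items would feed back into training, which is presumably what "continuously training" refers to in the announcement.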
Mohan added: “Over time, this will make us faster and more accurate at catching these viral misinfo narratives.”
When YouTube does catch what it calls “viral misinfo narratives,” it will reduce the reach of some videos and push viewers towards “authoritative” videos (videos from brands, mainstream media outlets, and health authorities that YouTube has deemed to be authoritative) in search and recommendations.
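The two levers described above, reducing the reach of flagged videos and boosting “authoritative” sources in search and recommendations, amount to a re-ranking step. The following is a minimal sketch of that idea; the field names, multipliers, and `rerank` function are assumptions made for illustration, not YouTube’s actual ranking system.

```python
# Illustrative sketch only: multipliers and field names are hypothetical.
# Re-ranks search results by demoting flagged videos and boosting
# authoritative sources, as the article describes.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    relevance: float           # base search-relevance score
    authoritative: bool = False  # deemed authoritative by the platform
    flagged: bool = False        # matched a "viral misinfo narrative"

def rerank(results: list[Video]) -> list[Video]:
    """Sort results by relevance adjusted for flags and authority."""
    def adjusted(v: Video) -> float:
        s = v.relevance
        if v.flagged:
            s *= 0.2   # reduce reach of flagged videos
        if v.authoritative:
            s *= 1.5   # surface authoritative sources first
        return s
    return sorted(results, key=adjusted, reverse=True)
```

With this scheme, an authoritative video with middling relevance can outrank a highly relevant but flagged upload, which matches the behavior the announcement describes.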
For topics where there’s no authoritative content, YouTube is considering several options: developing news panels (which direct viewers to text articles for major news events), “fact check” boxes (which direct viewers to content from fact-checkers), and new types of labels that add “a disclaimer warning viewers there’s a lack of high quality information.”
However, YouTube has yet to finalize how these labels will work because “surfacing a label could unintentionally put a spotlight on a topic that might not otherwise gain traction.”
Mohan justified these new censorship measures by claiming that “the fresher the misinfo, the fewer examples we have to train our systems” and noted that new narratives often “quickly crop up and gain views.” He added: “Narratives can slide from one topic to another—for example, some general wellness content can lead to vaccine hesitancy.”
YouTube has been proactively targeting “emerging” misinformation since at least 2020 via its “Intelligence Desk.” The Intelligence Desk initiative launched in 2018 to proactively police “inappropriate or offensive content,” and in a 2020 interview, Mohan revealed that it was also being used to look “over the horizon” and “stay ahead of” emerging “conspiracy” and misinformation content before it “becomes a challenge” on YouTube.