A new study from the Oxford Internet Institute and the Reuters Institute for the Study of Journalism at the University of Oxford has blamed a “failure of fact-checking” on Facebook for amplifying what it deems to be “coronavirus misinformation” videos from YouTube, and is calling for more aggressive fact-checking to ensure that users receive “accurate and trustworthy” information.
The study looked at over one million coronavirus YouTube videos that had been shared on Facebook, Reddit, and Twitter between October 2019 and June 2020 and found that 8,105 of those videos had been removed for violating YouTube’s community guidelines. It then found that Facebook placed warning labels on 55 of these videos (0.7%) before they were removed by YouTube.
Additionally, the researchers found that engagement with coronavirus “misinformation” videos is much higher on Facebook than on Twitter, with these videos generating an average of 11,000 reactions on Facebook compared with an average of 63 retweets on Twitter before being removed by YouTube.
Based on these findings, the study suggests that “Facebook’s network of independent fact-checkers do not focus on YouTube videos in their work” and that “the reach of Facebook’s network of third-party fact checking organizations is insufficient.”
The study’s press release adds that: “Covid-related misinformation videos do not find their audience through YouTube itself, but largely by being shared on Facebook.”
While most of the study focuses on Facebook’s amplification of these videos, the press release also laments that YouTube took an average of 41 days to remove what it deems to be coronavirus misinformation videos.
Interestingly, the first issue raised in the press release for the study is that coronavirus misinformation videos are outperforming mainstream news broadcasters, getting more social media shares than the videos of ABC News, Al Jazeera, BBC, CNN, and Fox News combined.
And Dr. Aleksi Knuutila, a postdoctoral researcher who co-authored the study, praises YouTube for filling the search results for coronavirus-related searches with videos from “credible sources” but frames Facebook’s amplification of these videos as a “problem”:
“People searching for Covid-related information on YouTube will see credible sources, because the company has improved its algorithm. The problem, however, is that misinformation videos will spread by going viral on other platforms, above all Facebook.”
Knuutila’s proposed solution to this problem is for social media platforms to ensure that they push users to “accurate and trustworthy” information.
Facebook and other platforms have already made major changes that direct their users to what they deem to be trustworthy content amid the coronavirus outbreak and, in the process, have made alternative and independent perspectives on the coronavirus increasingly difficult to find.
On Facebook, these changes include applying warning labels to 98 million “misleading” coronavirus posts (a change that cuts their viewership by 95%), sending users articles from the World Health Organization (WHO) if they interact with misinformation, and blocking all health-related groups from recommendations.
As Facebook and other platforms ramp up their censorship of coronavirus-related content, they’ve faced strong criticism for censoring doctors, for biased fact-checking, and for some of the fact-checkers being funded by those that they fact-check.
But despite these criticisms, Big Tech platforms face continued pressure to censor under the guise of misinformation. And more often than not, they yield to this pressure.