Social media companies should not censor and remove misinformation, according to the UK's senior scientific academy.
The Royal Society investigated the sources and consequences of online "misinformation." It concluded that removing misinformation and banning the accounts of offenders does little good and may even make the problem worse, because bans could push the content into "harder-to-address corners of the internet and exacerbate feelings of distrust in authorities," the report states, as reported by the Financial Times.
The report agrees that illegal content, such as child sexual abuse material, racist material, and content that incites violence, should be removed. But legal content that is misleading or contradicts mainstream scientific consensus should not be.
“We need new strategies to ensure high quality information can compete in the online attention economy,” said Gina Neff, professor of technology and society at the University of Oxford, and a co-author of the report. “This means investing in life-long information literacy programs, provenance-enhancing technologies and mechanisms for data sharing between platforms and researchers.”
Across the political spectrum in the UK and around the world, there have been calls for social media platforms to remove vaccine-sceptical content. According to Frank Kelly, chair of the Royal Society inquiry, while censoring such posts may seem like a good solution, "it can hamper the scientific process and force genuinely malicious content underground."
He reiterated that censoring misinformation and banning accounts pushes the content away from mainstream platforms, where scientists can no longer engage with and debunk it. "A more nuanced, sustainable and focused approach is needed," Kelly said.
The investigation also found that fears of the internet amplifying misinformation by creating filter bubbles and echo chambers are exaggerated.