Nobody said wholesale censorship on internet platforms used by billions of people would be easy, and that is now becoming apparent, almost six months into giant social media networks' attempts to tightly control information and the narrative around the coronavirus pandemic.
But censorship of this magnitude is not seen as a problem in itself; the major headache now emerging for Twitter, Facebook, and others is that it doesn't actually work. Instead, banning content that has already gained wide exposure can make its reach grow almost exponentially, as the ban itself becomes a news story.
Reports are now recognizing this, treating it as a novel phenomenon (though it's unclear why; censorship is nothing new, and it's well documented that in the pre-internet era, print books banned by authoritarian regimes quickly became a hot commodity).
Be that as it may, the researchers and analysts quoted are not merely acknowledging the difficulty of effectively suppressing "misinformation," such as a recently banned video showing the America's Frontline Doctors group promoting the use of the drug hydroxychloroquine.
They are also looking at “what went wrong” and why centralized social media platforms aren’t doing a better job of blocking information they don’t want their users to see.
The answer appears to be nothing more than the nature of these networks themselves and the way they propel any message to visibility: content is posted at a small scale, gains momentum, and travels from one platform to another, such as from Facebook to Twitter, where users with large followings further accelerate its dissemination.
And if a ban comes at this stage, the media pick it up, while social media websites are left playing the role of facilitators of the flow of information and online communication (which is what they should be doing anyway, instead of struggling to editorialize the internet).
“They’re trying to do the right thing, but addressing something that is already viral is a really hard problem,” says Annie Klomhaus of social media research company Yonder, referring to (mis)information suppression and how it tends to fail.
Twitter, Facebook, and others are advised to act more quickly and not let “several hours” pass before taking content down, and also to “improve their technical and human content moderation methods.”