
Social media censorship is hindering investigations, researchers say

Data is being deleted, distorting the record of truth that researchers rely on.


According to YouTube's own figures, as many as 6.1 million videos have been deleted from the platform since the start of this year alone, most of that coinciding with the unprecedented wave of online censorship launched at the start of the coronavirus pandemic.

YouTube is not alone in this: other major social networks have started strictly policing users' speech in order to allow only information about the disease favored by governments and the World Health Organization (WHO), from which most governments take their cues.

Things deteriorated even further when racial and social unrest hit the US later in the year, fueling a surge in online cancel culture and grandstanding by big brands, which demanded even more censorship, this time of "hate speech."

As things went from bad to worse, traditional corporate media didn't seem to mind very much at all. Now, however, some of them are looking at the price of this ramped-up censorship.

But still, the likes of the Washington Post don't seem to care much about "ordinary" users entrusting their speech and data to social media giants: the concern is focused on what the removal of such massive quantities of content, and the process by which this is done, could end up doing to some activists and NGOs (at least, the ones they like). And they argue that "deleted" data should remain publicly unavailable but be retained – for study.

There's the example of the Syrian Archive, a group said to be dedicated to collecting evidence of human rights abuses in Syria and other countries. The group says that it and others are getting caught in the censorship dragnet that is supposed to be removing misinformation, as well as opposition voiced by users who disagree with the way these crises are being handled.

It's not exactly news, but tech giants' overreliance on automated, machine learning-powered algorithms to get the job done is not working well. These algorithms are still rudimentary and notably bad at understanding context, resulting in "unintended" censorship.

The author of the report seems to think that more involvement from human moderators would fix the problem (and also that they "had to be sent home" during the epidemic and were for that reason working less than usual?) However, it's amply clear that moderators come with problems of their own, unique to humans, such as bias. So maybe the answer is to pump the brakes on rampant censorship and not rely so heavily on either machines or moderators?

That, of course, is not an idea the Washington Post is willing to entertain. Instead, censorship in the era of Covid and civil upheaval is viewed as fully justified – if only it could somehow bypass the Syrian Archive and other activists and journalists in war-torn regions.

But it isn't bypassing them, since Facebook's moderation is apparently poor at distinguishing documentation of war crimes and atrocities from content posted to promote them. However, Facebook says that accounts deleted for this type of "offense" do get restored.

Syria-focused activists are not having a great time on YouTube this year either, saying that the number of deleted uploads has doubled. For its part, YouTube cited its infamous policy of allowing users to file counter-notices against takedowns, and claimed that human reviewers actually handle these (not a statement that many creators burned in the process – without ever receiving so much as an explanation of what they had done wrong – would necessarily agree with).

The Syrian Archive and more than 40 other groups have pleaded with social media giants not to permanently delete content related to human rights activism, arguing that "data on content removed during the pandemic will be invaluable to those working in public health, human rights, science and academia."

Others are worried about YouTube's lack of transparency, which leaves only guesswork as to "the overall extent of automated moderation's effect on legitimate content."

The argument in favor of preserving data instead of deleting it was heard in April and boils down to keeping this data for future research into “how online information can affect health outcomes and to evaluate the consequences of specific moderation practices like using heavy automation.”

Another request in the letter sent to the giant social networks was for transparency about the way content is removed, how successful appeals are, and the like. The signatories acknowledge the privacy implications of long-term data retention and of making the data available only to a select group of researchers – but add that "the need for immediate preservation is urgent."
