In its latest response to the mass shootings in New Zealand, Facebook has promised to place stronger restrictions on who can use Facebook Live, invest in better technology for removing content, and take a stronger stance against hate on the platform. While these efforts may sound good on paper, Facebook’s history suggests that they’ll lead to even more censorship on the social media platform.
The announcements were made in an open letter from Facebook’s COO (Chief Operating Officer) Sheryl Sandberg, which was first published on Instagram’s Info Center and later sent to The New Zealand Herald.
Here are the main points from the open letter:
- Facebook is exploring restrictions on who can go live based on factors which include previous Community Standards violations.
- Facebook is investing in better technology which will help it identify edited versions of violent videos and images more quickly and then prevent this content from being shared on the platform.
- Facebook will be taking stronger steps to remove hate groups and hate speech from the platform.
These points all sound perfectly reasonable when viewed in the context of a violent terrorist attack like the New Zealand mass shootings. However, Facebook often applies policies like these in an overly aggressive manner which leads to many innocent users being censored on its platforms.
Facebook has previously accused users of violating its community guidelines for posting memes or reporting the news. Under these proposed rules, users who post the wrong meme or share news that Facebook doesn’t agree with could have their live streaming privileges removed.
Using technology to automatically take down content also rarely works as intended. YouTube’s AI (artificial intelligence) has previously taken down news and documentary coverage of war crimes and other violent events while attempting to prevent the spread of violent content. The EFF also warned in the wake of the New Zealand shootings that stricter content moderation often leads to the suppression of content that exposes police brutality, war crimes, and other human rights violations.
Finally, Facebook and other social media platforms cannot be trusted to correctly judge hateful content. Hate is a subjective term, and when you consider that Facebook employees are paid bonuses to tackle so-called “hate speech,” the definition of hate is likely to become increasingly broad, since those employees have a financial incentive to flag as much “hateful content” as possible.
When you take a step back and look at these policies from a wider perspective, it’s almost inevitable that they’ll lead to more censorship on Facebook.
Unfortunately, mass censorship, imposed without considering the wider consequences it will have on innocent users, has become the default response to tragic events.