Twitter has announced that its link sharing rules will become stricter and that, starting July 30, it will block some links that violate its “hateful conduct” rules.
Twitter’s hateful conduct policy prohibits a wide range of content, including “content that degrades someone,” “dehumanizing speech” against groups of people based on four “protected” categories (age, disability, religion, or serious disease), “inciting fear” against these protected categories, “asserting that protected categories are more likely to take part in dangerous or illegal activities,” content reinforcing “negative or harmful stereotypes about a protected category,” and “targeted misgendering or deadnaming of transgender individuals.”
The company tweeted that its goal with this updated policy is to “block links in a way that’s consistent with how we remove Tweets that violate our rules.”
Policies that prohibit content based on “hate” are often criticized because their vague, subjective nature can be used as a vehicle for censorship and because they often open the door to enforcement errors.
Before this announcement, Twitter had already begun blocking some coronavirus-related links.
Twitter’s announcement follows similar moves by several other Big Tech companies, which have introduced controversial hate speech rules that use vague, subjective terms to further restrict what users are allowed to post on their platforms.
Earlier this month, Reddit announced new rules whose hate speech protections excluded “majority” groups, then quickly walked them back after facing mass backlash. However, the updated rules were then criticized as vague.
And last month, Facebook announced that it would be censoring more hate speech after bowing to pressure from “Stop Hate for Profit,” an ad boycott campaign that called on Facebook to censor more hate speech and “misinformation” on its platforms.