Twitter has revealed that its controversial “hateful conduct” rules account for 50.5% of all tweets removed for rule violations, and that more than 1.4 million tweets were removed under these rules between July 2019 and December 2019.
That figure represents a 39% increase over the number of tweets removed for hateful conduct between January 2019 and June 2019.
170,994 accounts were also suspended for hateful conduct during the second half of 2019, and the total number of accounts that were suspended or had content removed during this period was 2.3 million – a 47% increase compared with the first half of the year.
Twitter’s hateful conduct policy prohibits “content that degrades someone,” “dehumanizing speech” against groups of people based on four “protected” categories (age, disability, religion, or serious disease), “inciting fear” against these protected categories, “asserting that protected categories are more likely to take part in dangerous or illegal activities,” reinforcing “negative or harmful stereotypes about a protected category,” and “targeted misgendering or deadnaming of transgender individuals.”
Twitter cited its “increased focus on proactively surfacing violative content for human review, more granular policies, better reporting tools, and also the introduction of more data across twelve distinct policy areas” as the reason for the 47% increase in accounts that were suspended or had content removed.
It also noted that its hateful conduct rules were expanded in July 2019 to ban “dehumanizing” speech against “protected” categories.
Additionally, Twitter wrote that it would be stepping up its “level of proactive enforcement” and investing in “technological solutions to respond to the changing characteristics of bad-faith activity on our service.”
Twitter’s announcement follows Facebook reporting a substantial increase in the amount of content it deleted for “hate speech” in the second quarter of 2020, with 22.5 million posts taken down – more than double the amount it removed under its hate speech rules in Q1 2020.
Undercover video from inside Cognizant, a contractor used by Facebook and Twitter for content moderation, has revealed that bias often seeps into the enforcement of hate speech rules, with some content being granted hate speech policy exceptions on Facebook while other, similar content is censored.
These hate speech rules are also prone to enforcement errors, especially as reliance on proactive enforcement technologies increases. Twitter’s recent removal of the Star of David, which it marked as “hateful imagery,” is one example of such an error.
While rules and laws based on the vague, subjective term “hate” proliferate, people are pushing back, with authors, comedians, and actors warning of the dangers such rules and laws pose to freedom of expression.