Twitter announced in its Q3 2019 earnings report that, for the first time, more than half of the tweets it deems “abusive” were taken down by automated tools rather than in response to user reports.
“We improved our ability to proactively identify and remove abusive content, with more than 50% of the Tweets removed for abusive content in Q3 taken down without a bystander or first-person report.”
In the earnings report, Twitter highlighted that this Q3 figure is up from 43% in Q2 (an increase of 7 percentage points) and 38% in Q1 (an increase of 12 percentage points).
Twitter says it is removing a growing share of the tweets it deems abusive via automated tools in order to “improve the health of the public conversation on Twitter” – a goal the company describes as one of its highest priorities and that CEO Jack Dorsey has championed in meetings with world leaders.
However, policies targeting what tech platforms classify as “abusive” or “hateful” content, especially those enforced with automated tools, have been criticized for sweeping up innocent users and posts.
For example, Twitter recently suspended the popular parody account Titania McGrath for seven days after the account tweeted a satirical insult – a decision that suggests Twitter’s algorithms don’t understand satire.
On YouTube, the platform’s controversial “hate speech” policies have resulted in model makers, independent journalists, and other innocuous content creators having their content removed or demonetized. In its latest report on removals for hate speech, YouTube said that 87% of the videos it removed were first flagged by automated systems.
Despite these kinds of rules often impacting innocent users, and despite the increased use of automated tools coinciding with a drop of almost 20% in Twitter’s market cap, Twitter said in its earnings report that it plans to continue using automated tools to remove content it identifies as abusive:
“Going forward, we will continue our work to proactively reduce abuse on Twitter, with the goal of reducing the burden on victims of abuse and, increasingly, taking action before abuse is reported.”
The use of automated tools to remove or demonetize content is a growing trend among tech giants. Facebook recently used “ranking signals” to automatically remove negative reactions to CEO Mark Zuckerberg’s livestream on the dangers of censorship. YouTube is also using blacklists to automatically demonetize creators when they cover certain topics in their videos. As with many other applications of automated tools, this YouTube blacklist is causing creators who use innocuous keywords such as “Brazil,” “female,” or “restaurants” to be demonetized.