In a bid to “do better” in terms of content moderation and censorship, Twitter has confirmed that it is working on a new crowdsourcing tool.
That’s right – Twitter wants its users to mass-flag each other’s tweets as misinformation and make these complaints publicly visible – in theory allowing every tweet to be editorialized with other users’ opinions – while at the same time reporting these posts to moderators for possible removal.
It’s a process that on the face of it looks like a recipe for chaos and disaster, especially on a divisive platform like Twitter that lends itself so well to trolling and flamewars.
For now, though, not much more than “the face of it” is really known about this tool: its development is reportedly in the initial phases, and it may never even see the light of day. What does seem certain is that even if the social media heavyweight goes ahead and makes the feature, codenamed “Birdwatch,” available to users, it will not be out before the US election.
But Twitter’s message is certainly out, and it is simple: the company is aware that more effort is still expected of it, if not required, in suppressing anything labeled as misinformation on the platform.
This message might be particularly useful to stress now, as the company has been under pressure in recent days. Twitter’s decision to suspend users wishing death on US President Trump has enraged those who are said to often be on the receiving end of similar messages themselves.
However, it looks like they would be happy to see more, not less, of the same – so long as the target is right. Twitter at first said that the decision simply meant it was enforcing its own rules equally for everyone – but that didn’t last long: on October 3, a tweet from Twitter Safety acknowledged “inconsistencies” in the way the company’s policies are implemented and, of course, “agreed” that it “must do better.”
In this frenzied atmosphere, switching gears to talk about “Birdwatch” might sound like a good idea, and so Twitter’s product lead Kayvon Beykpour posted, also on October 3, in response to a researcher who first spotted the code back in August: “(…) On Birdwatch, excited to share more about our plans here soon.”
If it ever makes it to market, Birdwatch will allow users to add “notes” to tweets, visible to anyone clicking a new “binoculars” icon added to the interface. The purpose of these notes will be to flag posts for moderation, but they would also remain publicly visible to everyone.
To make matters more confusing – but no doubt make users feel more involved in the process – the latest batch of features includes “a survey” that allows people to vote on whether a flagged post is in fact misinformation (effectively, to engage in heated debates with others within the confines of “Birdwatch”), as well as to go into the weeds of rating how bad the (dis)information in question really is.
It sounds like a dedicated Twitter user could end up spending hours of their life every day just attempting to win “Birdwatch notes” arguments with others. Along with many other things, however, it remains unclear whether this will be a case of true crowdsourcing or something limited to an “approved” class of users. (Early reports seemed to suggest the tool would be available to moderators.)
It’s also unknown whether human moderators or algorithms will make the final decision once the “notes” have been attached to tweets, the questionnaires filled out, and the detailed opinions about how much harm a tweet is believed to be causing submitted to Twitter.
As of October 6, the code for “Birdwatch” shows that Twitter’s user interface would add a new tab to navigation, where “Birdwatch Notes” would join Lists, Bookmarks, etc.
Mentions of “contributions” (and “community”) clearly suggest Twitter has a crowdsourcing scheme in mind, one it must hope will further cement its standing as a company that polices content while providing “context” and fighting “misinformation.” Not to mention that, if the experiment were rolled out and worked the way Twitter would hope, the company would be both saving and making a lot of money: acquiring legions of unpaid “moderators” on the one hand, and ensuring more engagement on the other.