One way the Global Internet Forum to Counter Terrorism (GIFCT) has made its presence felt across the online industry is by stopping the spread of videos showing a Nazi attack on a synagogue – and that's how Slate chooses to go about “introducing” its power.
But the synagogue attack in Germany that killed two people last fall, and the GIFCT's role in preventing dissemination of content showing the act, seems to be about as positive as the story gets. “The GIFCT is making some of the most consequential decisions in online speech governance without the scrutiny of the public,” the report asserts.
And this “standard” was produced and put to use well before the 2019 Halle, Germany attack. It came about in 2017, after a spate of Islamic State attacks in Europe that produced a combined 162 deaths in Paris and Brussels. The GIFCT brought together “the usual suspects” you might expect to put the viability of their business – under the regulatory scrutiny and political panic of the day – above everything else: YouTube (Google), Twitter, Facebook, Microsoft. And “at least 11” other, lesser-known entities.
It was basically one way for Big Tech to keep government (and aligned legacy media) pressure off their backs by coordinating content censorship and removal across their different platforms. The problem here is, as ever, not the stated intent, but what all of this does, or could mean, in reality.
“Even as the coalition (GIFCT) transitions to an independent organization, we still don't know how individual platforms use the database. Are uploads blocked immediately? Do the platforms check each piece of content? It's unclear,” states the report.
And it may never become clear – because transparency isn't high on the list of priorities here, and researchers have no access to the hash database – the one GIFCT members use to decide what qualifies as violent terrorist imagery and propaganda to be blocked.
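To make the questions above concrete, here is a minimal sketch of how a shared hash database can drive upload blocking. Everything here is an assumption for illustration: the real GIFCT database is not public, it is reported to use perceptual hashes (which also match near-duplicates) rather than the exact cryptographic hash used below, and the names `blocked_hashes` and `is_blocked` are hypothetical.

```python
import hashlib

# Hypothetical shared database of hashes of flagged content.
# (The real GIFCT database is not public; this is illustrative only.)
blocked_hashes = {
    hashlib.sha256(b"flagged propaganda video bytes").hexdigest(),
}

def is_blocked(upload: bytes) -> bool:
    """Return True if the upload's hash matches an entry in the shared database."""
    return hashlib.sha256(upload).hexdigest() in blocked_hashes

print(is_blocked(b"flagged propaganda video bytes"))  # True
print(is_blocked(b"unrelated home video"))            # False
```

An exact-hash scheme like this only catches byte-identical copies; a re-encoded or cropped video would slip through, which is presumably why perceptual hashing is reported to be used instead – and why, without access to the database, researchers cannot audit what it actually matches.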
The GIFCT is said to be “far more impactful” – even if “not so different from what platforms do every day, free from constitutional or democratic legal restraints.”
So – this couldn't get any worse than it is? Wrong – the danger of something called “extralegal censorship” is also raised here. (One has to wonder, though – have Google, Facebook, Twitter, and Microsoft grown tired of implementing far-reaching censorship, or is this just a stage in their standoff with EU regulators, who are cited as the original driving force behind the GIFCT?)
Either way, the message seems to be not to get rid of this “coalition” but to “redeem” it with more transparency. And that's little more than a “wishlist” at this point, articulated by activists: “access to copies of the actual content that the GIFCT blocks, which would then allow researchers to assess bias and mistakes, and to release information about whether a user is notified (and given the chance to appeal) when the hash database blocks their content.”