Facebook, Microsoft, Twitter, and YouTube have announced today that their Global Internet Forum to Counter Terrorism (GIFCT), a consortium that’s dedicated to preventing “extremist” and “terrorist” content online, will be expanding.
According to Facebook, as part of the expansion, GIFCT will become an independent organization with dedicated staff, dedicated technology, and an executive director. Amazon, LinkedIn, and WhatsApp will also be joining the forum.
Going forward, GIFCT’s efforts will be focused on preventing, responding to, and learning from “extremist” and “terrorist” content on digital platforms.
According to the announcement, GIFCT has already reached its 2019 goal of “collectively contributing more than 200,000 hashes, or unique digital fingerprints, of known terrorist content into our shared database, enabling each of us to quickly identify and take action on potential terrorist content on our respective platforms.”
Last week, digital rights group the Electronic Frontier Foundation (EFF) warned that the frantic push to crush “extremist” speech online will hurt innocent users the most. The EFF argued that because tech giants have consistently failed to explain how they define “extremist” or “terrorist” content, their approach creates the unintended consequence of censoring innocent users such as those documenting human rights abuses.
The EFF’s warning about this lack of clarity is one of many recent concerns raised about how the companies involved in GIFCT moderate content on their platforms.
Many of these concerns revolve around Facebook and the way bias and subjectivity often seep into its content moderation decisions. For example, Facebook recently admitted in court filings that it subjectively labels users as “dangerous” and then uses this opinion to ban users and their content from its platforms. Last week, Facebook CEO Mark Zuckerberg also admitted that there “clearly was bias” in a recent high-profile “fact-check” on the platform.
YouTube creators have also raised concerns over the way YouTube’s “hate speech” policies often impact innocent creators such as model makers, history channels, and independent journalists. Despite the collateral damage caused by these rules, YouTube removed five times more content for “hate speech” last quarter.
Digital rights groups are concerned that the expansion of GIFCT will further infringe on free speech.