Deepfake technology, which uses artificial intelligence (AI) to replicate a human’s voice or appearance, has been developing at a rapid pace in 2019. Earlier this week, the Chinese deepfake app Zao, which allows users to insert a deepfake representation of themselves into popular movie scenes, went viral and sparked privacy concerns. And last month, a deepfake that simulated the voice of clinical psychologist Jordan Peterson was taken offline after Peterson suggested he might take legal action against it.
Now Facebook, Microsoft, and others have announced that they’re working together on a Deepfake Detection Challenge (DFDC), which they say will “spur the industry to create new ways of detecting and preventing media manipulated via AI from being used to mislead others.”
The challenge involves hiring paid actors to create a deepfake data set, which will then be made available to participants who want to use it to develop deepfake detection software.
Other companies and institutions participating in the DFDC include the Partnership on AI and academics from Cornell Tech, the Massachusetts Institute of Technology (MIT), the University at Albany-SUNY, the University of California, Berkeley, the University of Maryland, College Park, and the University of Oxford.
The DFDC will include grants and awards, and Facebook will invest more than $10 million in the challenge. It will launch in late 2019 and run through the end of March 2020.
Given this timeframe, it’s possible that any deepfake detection tools developed through the challenge will be used by Facebook, Microsoft, and others to identify, suppress, and remove deepfakes on their platforms in the run-up to the 2020 US presidential election. Lawmakers are already pressuring big tech companies to come up with strategies for handling deepfakes as the election approaches.
As with many developments in the deepfake space, the DFDC raises questions about how any resulting detection technology will be used and whether it will be used well.
The involvement of Facebook and Microsoft in particular raises the possibility that detection tools developed through the challenge could become an extension of their existing “fake news” and “fact-checking” tools, particularly given that much of the concern around deepfakes stems from their potential to mislead.
Facebook uses third-party fact-checkers and AI to rate content on its platform. Content that’s labeled “false” gets suppressed. Microsoft also pre-packages NewsGuard, an extension that gives trust ratings to news websites, in the mobile versions of its Microsoft Edge browser.
If Facebook and Microsoft do integrate deepfake detection software into these tools, further questions arise about how the tools will be applied to different types of content. Facebook’s fact-checkers have come under fire for fact-checking jokes, while NewsGuard has been criticised for giving green “trustworthy” ratings to sources that have spread fake news. Applied in similar ways, deepfake detection technology could let fake news continue to spread while censoring comedic or satirical deepfakes.
One final concern is the definition of a deepfake itself. The groups involved in the DFDC define a deepfake as media manipulated via AI that is used to mislead others. While this definition keeps the focus on AI-altered content, it could still be applied to memes and satirical content.
The Electronic Frontier Foundation (EFF) has already warned that solutions for regulating deepfakes could lead to the unwanted censorship of comedy or satire and noted that none of the current proposed measures effectively distinguish between malicious deepfakes and satire, parody, or entertainment.