The House Intelligence Committee is examining the problem of ‘deepfakes’: false images or videos that have been manipulated with machine learning to appear real. In her hearing testimony, law professor Danielle Citron pointed to the case of journalist Rana Ayyub, who had to go into hiding for her safety after an online mob spread a fake pornographic video of her.
EFF has recognized the harms of cyberbullying and online harassment, but the problem must be tackled carefully: measures that sweep in other lawful and culturally valuable content, such as satire, risk serious harm to free expression.
‘Deepfake’ is a general term that can refer to videos manipulated using a generative adversarial network: a machine learning technique that extracts portions of images from existing videos and combines them with a source image or video. The technique has a range of applications, from special effects to parody and satire, and, of course, the creation of malicious content used to harass and defame.
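For readers curious about the underlying technique, the sketch below shows the core adversarial idea in PyTorch: a generator learns to produce fakes while a discriminator learns to tell them apart, and each improves against the other. The network sizes, variable names, and random stand-in data are our own illustrative choices; real deepfake pipelines add face detection, encoder/decoder models, and heavy post-processing on video frames.

```python
# Minimal sketch of a generative adversarial network (GAN).
# Illustrative only: toy dimensions and random stand-in data,
# not a real face-swap or video-manipulation pipeline.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # hypothetical sizes for a toy example

# Generator: maps random noise to a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores whether an input looks real or generated.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(32, IMG_DIM) * 2 - 1  # stand-in for real training data

for step in range(100):
    # 1. Train the discriminator to separate real images from generated ones.
    noise = torch.randn(32, LATENT_DIM)
    fake_images = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1))
              + loss_fn(discriminator(fake_images), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator into scoring fakes as real.
    noise = torch.randn(32, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The two-player setup is what makes the outputs convincing: the generator is optimized specifically against a model whose only job is to catch fakes.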
As Motherboard’s Samantha Cole reported, a common use of deepfake technology is to create a fake but convincing pornographic video featuring a celebrity by pasting the celebrity’s face onto a performer’s body, which is what happened to Ayyub. Deepfakes can also depict politicians and other public figures appearing to say things they never said.
A new bill in Congress, “The Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act,” or the DEEPFAKES Accountability Act, would require mandatory labeling, watermarking, or audio disclosures for all “advanced technological false personation records.”
It is not clear how these measures would contain the harms that malicious deepfakes can cause, especially since they do not apply to bad actors outside of the US.
Furthermore, the bill’s breadth creates many First Amendment problems. It fails to specify who bears the burden of proof, which could sow fear and uncertainty among creators, and it imposes civil penalties of up to $150,000 for failing to include a watermark or disclosure. The bill also imposes criminal penalties where a deepfake is used to facilitate harassment, violence, election interference, or fraud, or to humiliate the person depicted.
The bill exempts officers and employees of the US government acting in furtherance of public or national safety.
So far, lawmakers have struggled to explain how malicious deepfakes can be distinguished from satire, parody, or entertainment. The concern grows when we consider how many lawmakers and experts are proposing to modify one of the most important laws protecting internet speech.
During the hearing, policymakers proposed that limiting the protections provided by Section 230 (47 U.S.C. § 230) could solve the deepfakes problem.
Section 230 protects online services and users who republish other people’s content from being held liable for that third-party speech. Social media platforms, for example, are shielded from lawsuits over their decisions to moderate third-party content, or to transmit it without moderation. Individuals who share content created by others enjoy the same protections.
Section 230 protects companies and users from being held liable for things their users say online, but it does not shield them from liability for speech they themselves create.
Section 230 is especially important for small companies, which would otherwise be unable to defend themselves against costly lawsuits based on their users’ speech. Its legal protection laid the groundwork for the rich variety of open platforms that support all kinds of speech, and new platforms should benefit from the same protections that allowed their bigger peers to grow and make the web what it is today.