
Join the pushback against online censorship, cancel culture, and surveillance.

Senators Champion NO FAKES and TAKE IT DOWN Acts to Combat “Deepfakes,” Despite First Amendment Alarms

Congress is building a censorship machine disguised as child protection and artist rights.


If you’re tired of censorship and surveillance, subscribe to Reclaim The Net.

During a Senate Judiciary Committee hearing held on May 21, 2025, titled “The Good, the Bad, and the Ugly: AI-Generated Deepfakes in 2025,” lawmakers and invited speakers rallied behind two highly controversial measures: the newly enacted TAKE IT DOWN Act and the pending NO FAKES Act.

Both proposals, advanced under the banner of combating AI misuse, would significantly expand government and corporate power to unilaterally censor digital content, posing serious risks to free expression.

Senator Marsha Blackburn (R-TN) praised both bills, warning of a “deeply troubling spike” in explicit AI-generated content.

“We’ve got to do something about that,” she said. “And both the No Fakes Act and the Take It Down Act, which President Trump just signed into law this week, go a long way to providing greater protections for our children from these deepfakes.” Citing the misuse of celebrity images and voices in scams, she declared, “Congress has to act,” adding that she and her colleagues aim “to work on the No Fakes Act and get it to President Trump’s desk this year.”

Senator Amy Klobuchar (D-MN) similarly praised the new law as “a first step,” referring to the TAKE IT DOWN Act she co-sponsored with Senator Cruz. “It’s had huge harmful effects, about 20 some suicides a year of young kids,” she said, referencing the impact of non-consensual explicit imagery. Klobuchar emphasized, “We also need rules of the road to ensure that AI technologies empower artists and creators and not undermine them.” She noted that Grammy-nominated artist Cory Wong had warned her that “unauthorized digital replicas threaten artists’ livelihoods and undermine their ability to create art.”

Christen Price, Senior Legal Counsel at the National Center on Sexual Exploitation, claimed that “deepfake technology allows any man to turn any woman into his pornography.”

Quoting Andrea Dworkin, she stated, “One lives inside a nightmare of sexual abuse that is both actual and potential, and you have the great joy of knowing that your nightmare is someone else’s freedom, someone else’s fun.” Price supported the NO FAKES Act, along with other bills, claiming, “These bills help protect individuals from the harmful effects of image-based sexual abuse and increase pressure on tech companies to manage websites more responsibly.”

Mitch Glazier, CEO of the Recording Industry Association of America, described the TAKE IT DOWN Act as “an incredible model” but insisted that it “only goes so far.” He warned of a “very small window and an unusual window for Congress to get ahead of what is happening before it becomes irreparable.” Pushing for swift passage of the NO FAKES Act, Glazier said platforms must act before content “goes viral very, very quickly,” arguing that these laws will allow content removal “as soon as technically and practically feasible.”

Justin Brookman of Consumer Reports emphasized the misuse of voice and video AI tools in scams and misinformation. He shared that “realistic cloning tools are easily available to the public and very cheap and easy to use.”

After testing six voice-cloning platforms, he reported that “four of the six companies we looked at didn’t employ any technical mechanism or reasonable technical mechanisms to reasonably ensure they had the consent of the person whose voice was being cloned.” Brookman argued that “developers of these tools need to have heightened obligations to try to forestall harmful uses,” adding, “Platforms, too, need to be doing more to proactively get harmful material off their platforms.”

The most expansive testimony on enforcement mechanisms came from Susanna Carlos, Head of Music Policy at YouTube. She highlighted YouTube’s Content ID system, explaining that it helps copyright holders by “creat[ing] digital fingerprints for those works in question and scan[ning] the platform.” She praised the NO FAKES Act, calling it a “smart and thoughtful approach” and said, “We are especially grateful to Chairwoman Blackburn, Senator Coons, Ranking Member Klobuchar, and all the bill sponsors.”

Carlos confirmed YouTube is building a new system dubbed “Likeness ID” that will scan users’ “face and voice” and match them across the platform. According to her, this system “allows individuals to notify us when digital replica content of them is online” and “is smartly mirrored in the No Fakes Act.” In a discussion with Senator Blackburn, Carlos acknowledged that platforms should act on takedown notices “as soon as possible,” but declined to specify an exact timeframe.

Senator Chris Coons asked Carlos why YouTube supported the bill. She replied, “So YouTube sits in a very unique kind of universe… And that is one area where this idea of digital replicas can cause real-world harm.”

Despite the sweeping praise from participants, the NO FAKES Act could easily stifle legal expression. The bill permits lawsuits over any “unauthorized digital replica” and gives platforms powerful incentives to err on the side of takedown, without requiring a counter-notice process. While the bill claims to exempt parody, satire, and documentaries, the Electronic Frontier Foundation has cautioned that “these exemptions are unlikely to work in real life.”

By encouraging rapid, opaque content takedowns, much like the DMCA system the bill seeks to emulate, the NO FAKES Act risks turning platforms into gatekeepers of permissible expression. The TAKE IT DOWN Act, though billed as narrowly tailored to non-consensual imagery, contains vague language and mandates fast removal timelines that could sweep in legitimate speech.

As Washington continues to frame AI as a threat requiring aggressive intervention, the implications for free speech are becoming increasingly dire. What was once the domain of manual moderation and individual judgment is being handed over to automated systems backed by vague laws, political pressure, and corporate lobbying.

The legislative momentum behind the NO FAKES Act and the TAKE IT DOWN Act raises pressing First Amendment concerns. Though marketed as tools to combat digital impersonation and image-based abuse, these bills introduce expansive mechanisms that risk stifling a broad range of protected expression, including satire, parody, documentary work, and political commentary. The vague definitions surrounding “unauthorized digital replicas” create a chilling effect, as artists, journalists, and ordinary users may self-censor out of fear that their content could be swept up in rapid takedown systems.

The lack of a robust counter-notice process, coupled with the threat of hefty fines for platforms, encourages over-removal rather than careful moderation, making lawful expression the collateral damage of legislative overreach.
