President Donald Trump has now signed into law the Take It Down Act, a measure designed to address the spread of non-consensual intimate imagery (NCII), including increasingly prevalent AI-generated deepfakes.
While both major parties are celebrating the legislation as a victory for online safety, particularly for children and victims of abuse, its broad wording has also raised concerns about potential overreach, selective enforcement, and the erosion of free speech under the guise of digital protection.
The law’s most prominent advocate within the administration has been First Lady Melania Trump, who campaigned heavily for its passage and made rare public appearances to promote it. During the Rose Garden signing ceremony, President Trump invited her to add her signature beneath his, an unusual but symbolic gesture that underscored her role in pushing the legislation forward.
“This legislation is a powerful step forward in our efforts to ensure that every American, especially young people, can feel better protected from their image or identity being abused,” Mrs. Trump said. In her remarks, she repeated her criticism of AI and social media, calling them “the digital candy for the next generation,” and warned that these technologies “can be weaponized, shape beliefs, and sadly affect emotions and even be deadly.”
President Trump, for his part, appeared to dismiss constitutional concerns. “People talked about all sorts of First Amendment, Second Amendment. They talked about any amendment they could make up, and we got it through because of some very brave people,” he said.
Earlier in the year, during his March 4 address to Congress, Trump had signaled his intent to sign the bill. “The Senate passed the Take It Down Act… Once it passes the House, I look forward to signing that bill into law. And I’m going to use that bill for myself too if you don’t mind, because nobody gets treated worse than I do online, nobody.”
While made in jest, the remark pointed to an unresolved issue: how this law will be enforced, and who will benefit most from it.
There is no denying the harm caused by NCII. Victims often struggle to remove intimate images, whether real or AI-generated, while the content continues to spread. The Take It Down Act requires websites to remove flagged content within 48 hours of a complaint. But, as with the Digital Millennium Copyright Act (DMCA), platforms have little way of determining whether a complaint is legitimate or is being used as a censorship mechanism.
That timeline is designed to offer swift recourse to victims. However, the law’s broad wording leaves its application open to interpretation.
The law defines a violation as involving an “identifiable individual” engaged in “sexually explicit conduct,” without offering a clear or narrow definition of what that conduct entails. That vagueness creates a gray area that could easily be used to suppress satire, parody, or even critical political speech.
As we previously reported, a deepfake video that circulated recently depicted Trump kissing Elon Musk’s feet. It went viral across platforms and contained no nudity or explicit content. Under the language of the new law, that kind of content could potentially be classified as NCII. Similarly, a meme reimagining former Vice President Kamala Harris and her then-running mate Tim Walz as characters from Dumb and Dumber, engaged in exaggerated physical gestures, was removed by Meta for allegedly being sexual in nature.
These examples raise alarms over how the law might be used to erase content not because it is harmful or exploitative, but because it is politically inconvenient or controversial.
The law does not require proof before content is taken down. That means a platform that receives a complaint must act quickly, even if the complaint is baseless. Content that is clearly satire or investigative reporting could be swept up in takedown requests, and the law offers no mechanism to protect those forms of speech. The complainant is not obligated to demonstrate actual harm, and there is no defined appeals process. This framework creates an internet environment where accusations alone can silence speech.
The parallels to the DMCA are troubling. That law, meant to protect copyright holders, has been exploited by individuals and corporations to suppress criticism. The Take It Down Act adopts a similar structure, obligating platforms to remove content without delay or independent verification.
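To make the asymmetry concrete, here is a minimal sketch in Python of how a platform’s compliance logic might look under this structure. Everything in it, the `Complaint` record, the `Platform` stub, the function names, is hypothetical illustration rather than anything the statute prescribes; the point is that a hard deadline with penalties for under-removal, combined with no verification step, no harm requirement, and no appeal, makes removal-on-receipt the rational default.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # the Act's statutory deadline

@dataclass
class Complaint:
    """Hypothetical complaint record. Note what is absent: no evidence
    field, no identity check, no showing of harm is required."""
    content_id: str
    received_at: datetime

class Platform:
    """Toy content host used to illustrate the compliance incentive."""
    def __init__(self) -> None:
        self.content: dict[str, str] = {"post-123": "a satirical deepfake"}
        self.removed: list[str] = []

    def remove(self, content_id: str) -> None:
        self.content.pop(content_id, None)
        self.removed.append(content_id)

def handle_complaint(platform: Platform, c: Complaint) -> str:
    deadline = c.received_at + REMOVAL_WINDOW
    if datetime.now(timezone.utc) > deadline:
        return "deadline missed"  # liability exposure already incurred
    # Missing the deadline risks FTC enforcement; over-removal carries no
    # penalty and there is no appeals process, so satire, reporting, and
    # baseless claims all follow the same path as genuine NCII.
    platform.remove(c.content_id)
    return "removed"

platform = Platform()
complaint = Complaint("post-123", datetime.now(timezone.utc))
print(handle_complaint(platform, complaint))  # -> "removed", no questions asked
```

Under these incentives, the cheapest policy for any platform is the one the code shows: take everything down first and never look back.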
The law places enforcement authority with the Federal Trade Commission (FTC). Giving the FTC the power to decide which takedowns are valid raises new concerns. Content moderation will not be shaped by courts or public standards, but by shifting political winds.
The law’s implications for encrypted messaging have received little attention. If platforms are responsible for preventing the spread of NCII, they may be compelled to scan private messages or weaken encryption protocols to comply. This would threaten the security of private communications, including those of journalists, activists, and everyday users.
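A rough sketch of why this worries security researchers: any duty to detect NCII in private messages has to run against plaintext, and in an end-to-end encrypted system the only plaintext available is on the user’s device, before encryption. The function names below (`scan_for_ncii`, `e2e_encrypt`, `report`) are invented placeholders; the law mandates no particular design, but the structural problem holds for any client-side scanning scheme.

```python
def scan_for_ncii(plaintext: bytes) -> bool:
    """Placeholder classifier; a real deployment might hash-match against
    a database or run an on-device model. Accuracy is a separate problem."""
    return b"flagged-example" in plaintext

def e2e_encrypt(plaintext: bytes) -> bytes:
    """Placeholder for a real end-to-end encryption step (e.g. a
    Signal-style ratchet). Stands in for ciphertext only the
    recipient can decrypt."""
    return plaintext[::-1]  # NOT real encryption; illustration only

def report(plaintext: bytes) -> None:
    """Placeholder for reporting to the platform or a clearinghouse."""
    print("plaintext left the device:", plaintext)

def send_message(plaintext: bytes) -> bytes | None:
    # The scanner necessarily sees every message from every user in the
    # clear, complaint or no complaint. Whatever it flags is exfiltrated
    # before encryption, so the end-to-end guarantee no longer holds for
    # journalists, activists, or anyone else.
    if scan_for_ncii(plaintext):
        report(plaintext)
        return None
    return e2e_encrypt(plaintext)

send_message(b"an ordinary private message")   # encrypted and sent
send_message(b"flagged-example content")       # intercepted pre-encryption
```

The alternative, weakening or abandoning end-to-end encryption on the server side, reaches the same result by another route.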
The Take It Down Act fits a pattern seen in recent internet legislation: bills introduced under the banner of safety, written in expansive language, and enforced by regulatory agencies with little accountability. Proposals like the Kids Online Safety Act followed the same model, claiming to protect children while creating new threats to privacy and speech because their wording offers no safeguards against misuse.