The Federal Trade Commission sent letters to 17 major tech companies this week, warning them to comply with the Take It Down Act by May 19 or face fines of $53,088 per violation.
Recipients included Amazon, Alphabet, Apple, Meta, Microsoft, TikTok, X, Reddit, Discord, Snapchat, Pinterest, Bumble, Match Group, Automattic, and SmugMug, all of whom got the same message from Chairman Andrew Ferguson.
We obtained a copy of the letter for you here.
“We stand ready to monitor compliance, investigate violations, and enforce the Take It Down Act,” Ferguson wrote.
“Protecting the vulnerable, especially children, from this harmful abuse is a top priority for this agency and this administration.”
The law, signed by President Trump in May 2025 with strong backing from First Lady Melania Trump, requires platforms to delete non-consensual intimate imagery (NCII), including AI-generated deepfakes, within 48 hours of receiving a removal request.
Platforms must also find and remove identical copies, provide clear notice about the removal process, and let people track their requests. The FTC published a business guidance page alongside the letter spelling all of this out. The definition of “covered platform” is broad enough to capture social media, messaging apps, video sharing, gaming platforms, and essentially any site hosting user-generated content.
Nobody wants revenge porn circulating online. But the law Congress passed is far broader than the problem it claims to solve.
The TAKE IT DOWN Act borrows its structure from the DMCA’s already-controversial notice-and-takedown system, then strips out the safeguards.
Under the DMCA, a takedown request must include a statement under penalty of perjury. False claims can result in liability. There’s a counter-notice process so the person whose content was deleted can push back. TIDA has none of this. There’s no penalty for false claims, no counter-notice, and no requirement that the filer prove anything before content disappears. A platform gets a complaint, has 48 hours, and deletes. That’s the entire process, and it’s exactly why the Take It Down Act introduces a new censorship mechanism.
The law defines a violation as involving an “identifiable individual” engaged in “sexually explicit conduct,” without defining that conduct narrowly.
More: The Take It Down Act: A Censorship Weapon Disguised as Protection
Political speech is vulnerable too. A deepfake of then-candidate Trump kissing Elon Musk’s feet went viral before TIDA took effect. There was no nudity or explicit content, but under TIDA’s language that satire could be classified as NCII and deleted.
A meme recasting Vice President Kamala Harris and Governor Tim Walz as characters from Dumb and Dumber was already pulled by Meta for being sexual in nature. Anyone with a form and a grievance can file a request, and platforms facing five-figure fines per violation will delete first.
The law also applies to messaging platforms, some of which offer end-to-end encryption. If a platform can’t see message contents, it can’t scan for NCII or find “known identical copies.” Complying with the law as written means breaking encryption or scanning content before it gets encrypted. The FTC’s letter doesn’t address this and the law doesn’t carve out encrypted communications.
Enforcement sits entirely with the FTC.
The law passed the House 409 to 2 and the Senate unanimously. Nobody voted against protecting victims of revenge porn because that’s how the bill was sold.
What Congress built is a takedown system with no safeguards against abuse, enforced by a politicized agency, applicable to encrypted communications, and designed to make platforms censor first and think later.