Senator Marsha Blackburn has introduced a 291-page legislative discussion draft that would reshape how information is allowed to exist online.
The TRUMP AMERICA AI Act, officially titled “The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry” Act, bundles together a full repeal of Section 230, expanded AI liability, age verification mandates, and a stack of additional bills that have been circulating separately for years.
All of it is wrapped in a national AI framework that the draft presents as implementing President Trump’s December Executive Order. The bill is framed as pro-innovation and pro-safety, designed to “protect children, creators, conservatives, and communities” while positioning the US to win the global AI race.
What the actual 291 pages describe is a system that centralizes regulatory authority, removes the legal protections platforms currently rely on, and hands new enforcement tools to federal agencies, state attorneys general, and private litigants simultaneously.
We obtained a copy of the bill for you here.
The legal foundation of the modern internet is Section 230 of the Communications Decency Act, which shields platforms from being sued over the content their users post. Without it, platforms become legally responsible for that content, and anything controversial, contested, or legally ambiguous turns into a liability they’ll quietly remove rather than defend.
Blackburn’s bill repeals it entirely, after a two-year transition period.
Platforms and AI developers could face lawsuits for “defective design,” “failure to warn,” or deploying systems deemed “unreasonably dangerous.”
AI platforms would be incentivized to heavily monitor users.
Enforcement doesn’t sit only with federal regulators; state attorneys general and private actors both get standing to sue. The downstream effect on publishing is direct. Once liability protections go, platforms can no longer host content neutrally.
Reporting on contentious subjects doesn’t need to be factually wrong to become a liability problem. It just needs to be frameable as “harmful.” The predictable result: platforms tighten policies, reduce reach, or quietly stop hosting the material that exposes them most.
The bill requires AI developers to prevent “reasonably foreseeable harms” from their systems. “Harm,” “foreseeable,” and “contributing factor” are not defined in fixed terms. They get decided after the fact, by regulators and courts working from evolving interpretations.
An AI output can be judged unlawful under standards that didn’t exist when the system produced it. For developers, the rational response is aggressive preemptive restriction: building systems that refuse more, flag more, and generate less of anything that might one day attract a lawsuit.
Blackburn frames the bill as clearing up a “patchwork of state laws” through a single national standard. The agencies empowered to define and enforce that standard are the FTC, the DOJ, NIST, and the Department of Energy. Rather than competing state-level experiments, this creates a centralized governance structure where a handful of federal bodies set the rules for AI development across the entire country.
Blackburn’s framework absorbs several existing proposals wholesale. Each one carries its own surveillance and censorship architecture. The Kids Online Safety Act (KOSA) brings algorithmic systems under federal oversight. Platforms would be required to modify personalized recommendation engines, disable infinite scrolling and autoplay, and restrict notification systems to prevent “compulsive usage.”
This goes beyond content moderation. It regulates how information gets ranked, delivered, and amplified at the system level.
The NO FAKES Act creates new liability for AI-generated replicas of individuals’ voices or likenesses, and extends that liability to platforms that knowingly host unauthorized material. Anyone can sue. Platforms that fail to comply with takedown requirements face substantial fines.
The GUARD Act mandates age verification for AI chatbot makers, bans minors from access, and requires additional child safety measures. Age verification at this scale means identity verification. The data collected to confirm someone isn’t a minor doesn’t disappear after the check.
The AI LEAD Act introduces federal liability standards covering defective design, failure to warn, and strict liability for AI products deemed “unreasonably dangerous,” the same framework being imported into the broader bill.
The bill explicitly declares that training AI models on copyrighted works is not fair use. That single provision opens the door to litigation against virtually every major AI developer. It also establishes liability for unauthorized use of a person’s voice or likeness in AI-generated content, covering both training and deployment.
NIST gets directed to develop national standards for content provenance and watermarking of AI-generated media, with requirements that AI providers allow content owners to attach provenance data to their work and prohibitions on its removal.
The infrastructure this builds tracks the origin and authenticity of digital content across platforms at a technical level. Surveillance is the word for it, even when it’s being sold as authentication.
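To make concrete what “attaching provenance data” and “prohibiting its removal” mean in practice, here is a toy sketch of the general technique: a provenance record is bound to the media bytes by a content hash, then signed so that stripping or altering either the content or the record is detectable. Everything here is hypothetical illustration, not the NIST standard the bill directs (which does not yet exist); real schemes such as C2PA use public-key certificates rather than the shared demo key below.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; a real provenance
# standard would use per-issuer certificates, not a shared secret.
SIGNING_KEY = b"demo-key"


def attach_provenance(media: bytes, origin: str) -> dict:
    """Build a signed provenance record bound to a blob of media bytes."""
    record = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "origin": origin,  # e.g. the generating model or rights holder
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(media: bytes, record: dict) -> bool:
    """Check the signature, and that the record matches these exact bytes."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record.get("signature", ""))
            and record["content_sha256"] == hashlib.sha256(media).hexdigest())


media = b"example AI-generated image bytes"
rec = attach_provenance(media, origin="example-model-v1")
print(verify_provenance(media, rec))            # intact content and record
print(verify_provenance(b"edited bytes", rec))  # content no longer matches
```

The design point is the binding: because the record names the content hash and is itself signed, an intermediary can’t quietly swap the media, edit the origin field, or strip the label without verification failing, which is exactly the property that makes such infrastructure double as a cross-platform tracking layer.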
Removing Section 230 and introducing broad legal exposure creates a system where platforms and AI developers live under constant litigation risk tied to content, outputs, and system behavior. That converts platform self-censorship from a choice into a survival strategy.
The bill doesn’t need government agents flagging articles. It just needs to make the legal cost of hosting difficult reporting high enough that platforms do the math themselves.

