The US Department of Justice has moved to intervene in Elon Musk’s xAI lawsuit against Colorado, escalating a federal challenge to the state’s first-in-the-nation artificial intelligence “antidiscrimination” law just over two months before it is set to take effect.
The intervention, filed in federal court in Denver, marks the first time the DOJ has joined a constitutional challenge to a state AI regulation. It pairs the federal government with xAI, the company behind the Grok chatbot, in arguing that Senate Bill 24-205 violates the US Constitution and threatens American leadership in artificial intelligence.
The Colorado law, signed by Democratic Gov. Jared Polis in 2024, requires developers and deployers of “high-risk” AI systems used in decisions about hiring, housing, education, lending and other consequential matters to take reasonable care to prevent algorithmic discrimination, disclose how their systems operate and notify consumers when AI plays a role in decisions affecting them. It is scheduled to take effect on June 30 after an earlier delay.
xAI’s case, filed April 9, centers on a First Amendment claim. The company argues the law amounts to compelled speech because its definition of algorithmic discrimination prohibits disparate-impact discrimination generally while exempting algorithms designed to “increase diversity or redress historical discrimination.”
That carveout, xAI contends, forces developers to align their models’ outputs with state-preferred viewpoints.
“By requiring ‘developers’ and ‘deployers’ to differentiate between discrimination that Colorado disfavors and discrimination that Colorado favors, SB24-205 compels Plaintiff xAI — a ‘developer’ under the law — to alter Grok, forcing Grok’s output on certain State-selected subjects to conform to a controversial, highly politicized viewpoint,” the company’s attorneys wrote in their complaint.
The lawsuit also contends that the law would cause Grok to “abandon its disinterested pursuit of truth and instead promote the State’s ideological views on various matters, racial justice in particular,” and argues the statute is “unconstitutionally vague” and “invites arbitrary enforcement” because key terms are not defined.
The DOJ’s complaint adds a Fourteenth Amendment Equal Protection theory. Federal attorneys argue that by relying on demographic data and statistical disparities to identify discrimination, the law would compel developers to make race-, sex- and religion-conscious adjustments to AI outputs.
“SB24-205 constrains the information that AI systems convey, obligates AI developers and deployers to discriminate, and then enforces the state-mandated discrimination with onerous policy, assessment, and disclosure requirements that will disproportionately burden small businesses and start-ups,” DOJ lawyers wrote in the 19-page filing.
In a public statement, Assistant Attorney General Harmeet K. Dhillon, who leads the department’s Civil Rights Division, framed the case in sharper political terms. “Laws that require AI companies to infect their products with woke DEI ideology are illegal,” she said.
“The Justice Department will not stand on the sidelines while states such as Colorado coerce our nation’s technological innovators into producing harmful products that advance a radical, far-left worldview at odds with the Constitution.”
Assistant Attorney General Brett Shumate of the Civil Division emphasized the economic stakes. “America’s success in the AI race will depend on removing barriers to innovation and adoption across sectors,” he said. “Laws like Colorado’s that force AI models to produce false results or promote ideological bias threaten national and economic security and must be stopped.”
The Colorado law was the only state AI statute singled out by name in President Trump’s AI executive order last year.
The office of Colorado Attorney General Phil Weiser, who is named as the defendant and would be responsible for enforcing the law, declined to comment on the active litigation.
State lawmakers who backed the bill pushed back against the federal challenge. Rep. Brianna Titone, D-Arvada, a lead sponsor, called the DOJ’s claims a distraction from ongoing reform efforts in the legislature. Senate Bill 205 “is, and has always been, promoted as a policy to prevent and curtail discrimination for consequential decisions,” she told the Colorado Sun in an email.
Rep. Manny Rutinel, D-Commerce City, another sponsor, accused the administration of acting on behalf of the president and Musk personally. “Coloradans deserve technology that works for everyone,” he said, “not just billionaires.”
The law has had a contentious path even within Colorado. Polis signed it reluctantly in 2024, citing concerns about its impact on the state’s tech sector, and the original February effective date was pushed back to June 30 to allow further negotiation amid industry objections. The legislature is preparing a third round of amendments before the law takes effect.
If xAI and the DOJ prevail, the case could shape how other states approach AI regulation. Several have considered similar measures, and Colorado’s status as the first to enact one had positioned it as a national test case for AI consumer protection.
A ruling on whether the law can take effect as scheduled is expected before June 30.

