The FSU Shooting Lawsuit That Could Turn ChatGPT Into a Surveillance Tool

Every AI company's nightmare scenario just became a plaintiff's attorney's blueprint for court-ordered mass surveillance.


Stand against censorship and surveillance: join Reclaim The Net.

A new lawsuit against OpenAI over the Florida State University mass shooting makes a clear demand beneath its wrongful death claims: AI companies should be scanning users’ private conversations, building behavioral threat profiles, and reporting them to police.

The complaint filed Sunday in federal court, Joshi v. OpenAI Foundation, frames OpenAI’s failure to do exactly that as a product defect. It’s the latest in a series of cases constructing the legal foundation for mandatory surveillance of AI conversations.

We obtained a copy of the complaint for you here.

The suit was brought by Vandana Joshi, widow of Tiru Chabba, killed alongside campus dining director Robert Morales when FSU student Phoenix Ikner opened fire at the university’s student union in April 2025.


Ikner spent months talking to ChatGPT about Nazi ideology, school shootings, ammunition for maximum bodily harm, and firearm operation. He shared photos of guns. ChatGPT identified them and told him the Glock had no external safety and was designed to be “quick to use under stress.”

The complaint alleges ChatGPT advised on peak foot traffic at the student union and told Ikner that a shooting is more likely to gain national attention “if children are involved, even 2-3 victims can draw more attention.”

Those facts are disturbing. But the legal theory built on top of them has implications far beyond this case. The complaint states that ChatGPT “either defectively failed to connect the dots or else was never properly designed to recognize the threat.” It demands guardrails that would “prevent ChatGPT from engaging in conversations that, either alone or cumulatively, support or encourage user interest in harm to self or others” and insists that “high-risk topics” be “flagged for human review.”

It asks a court to rule that an AI company is legally obligated to perform cumulative analysis of every user’s conversation history, make ongoing assessments of psychological state and intent, and escalate flagged users to human reviewers or law enforcement. That obligation wouldn’t apply only to people planning mass shootings. It would apply to every person who uses ChatGPT, because the entire point is catching threats before they’re obvious.

This case joins a legal ratchet that tightens with every filing. The Raine family sued OpenAI last August after their 16-year-old son’s suicide, arguing ChatGPT should have refused to engage in self-harm conversations. Last month, seven families sued after a school shooting in Tumbler Ridge, British Columbia, where OpenAI’s own internal safety team flagged the shooter’s account for “gun violence activity and planning,” recommended alerting Canadian police, and was overruled by leadership.

The Tumbler Ridge case exposed that OpenAI already routes certain accounts to a team reviewing users “planning to harm others.” The surveillance pipeline exists. These lawsuits argue that it should be bigger, faster, and legally required.

Florida Attorney General James Uthmeier is building the same case from the criminal side. His office subpoenaed OpenAI’s internal policies on user threats, crime reporting, and law enforcement cooperation dating back to March 2024. “If ChatGPT were a person,” Uthmeier said, “it would be facing charges for murder.”

The people killed in Tallahassee deserve accountability. These lawsuits are using that legitimate grief to establish that private conversations with AI should be treated as potential evidence by default, subject to ongoing automated analysis, and routed to authorities whenever an algorithm decides the risk is high enough.

