Seven families of victims from the February mass shooting at Tumbler Ridge Secondary School have filed wrongful death and negligence suits against OpenAI and CEO Sam Altman in California federal court, and the remedies they want would reshape how every American interacts with consumer AI.
Buried inside the financial damage claims sit demands that, if granted, would convert ChatGPT and products like it into permanent identity-verified surveillance platforms wired directly to law enforcement.
The injunctive relief is a call for the death of anonymous AI use. Plaintiffs want a court order forcing OpenAI to ban previously flagged users from creating new accounts, which only works if the company first knows who every user actually is.
We obtained copies of the complaints for you here, here, and here.
They want OpenAI to flag and review certain users before granting access, to notify police whenever its internal systems decide a user might become violent, and to terminate chats when conversations involve what the company classifies as repeating or escalating violent ideas.
The requirements assume a level of identification, retention, and reporting that current consumer ChatGPT does not have, and that the company would have to build.
The shooter, 18-year-old Jesse Van Rootselaar, killed two family members at home on February 10, then traveled to the local secondary school and killed six more people before taking her own life.
According to the lawsuits, OpenAI’s automated system flagged the killer’s account for “gun violence activity and planning” in June 2025, eight months before the attack. A safety team reviewed the flagged content and recommended the company contact the Royal Canadian Mounted Police. Leadership overruled the team, the suits allege, deactivating the account instead.
More: From Private Conversation to Police Report in the Age of AI
When Van Rootselaar created a new account using a different email address but her real name, OpenAI’s troubleshooting documentation reportedly walked deactivated users through exactly that process.
Altman publicly apologized on April 23 in a letter to the Tumbler Ridge community published in full by local outlet Tumbler RidgeLines. “I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” he wrote.
The company has separately said in a blog post that “When conversations indicate an imminent and credible risk of harm to others, we notify law enforcement,” and an OpenAI spokesperson told reporters the company has “a zero-tolerance policy for using our tools to assist in committing violence.”
OpenAI has stated that it considered referring Van Rootselaar’s account in June 2025 but determined the activity did not meet the threshold for involving an imminent and credible risk.
That threshold is the entire fight. Plaintiffs argue OpenAI’s safety review identified Van Rootselaar as posing a credible and specific threat of gun violence against real people, and that leadership vetoed escalation to police because doing so “would set a precedent compelling OpenAI to notify authorities every time its safety team identified a user planning real-world violence,” according to the complaints. Lead attorney Jay Edelson framed the implication of his own remedy plainly.
Granting the injunction, he said, “would require a dedicated law-enforcement referral team tasked with reporting OpenAI’s own users to authorities.”
The lawsuits go beyond asking for a one-time fix to a specific tragedy. They ask a federal court to order the construction of a permanent pipeline from private chatbot conversations to police case files, staffed by a dedicated team whose job is identifying customers and forwarding them to the state. The volume implied by Edelson’s own framing, that such a team would be busy enough to need full-time staffing, says everything about how often these systems flag their own users.
The identity verification piece does similar work without saying the quiet part. Banning a user from re-registering means the company has to verify, at signup, that the new account does not belong to someone previously banned. That cannot be done with email addresses, which are free and infinite, and which Van Rootselaar herself swapped out. It requires anchoring every account to something harder to discard: a government ID, a phone number tied to legal identity, biometric data, or a payment instrument connected to a real name. The shooter’s ability to register a second account is being used as the argument for ending pseudonymous AI use entirely.
The critiques target OpenAI’s specific conduct. The remedies, though, would apply to everyone. Mandatory law enforcement referrals based on algorithmic flagging means an automated classifier, with whatever false positive rate it happens to have, decides which users get reported to police. ChatGPT serves hundreds of millions of users globally.
Even a fraction of a percent classification error rate produces an enormous volume of innocent people referred to authorities for what they typed into a chat window. The chats themselves become discoverable evidence, retained by a company now legally obligated to retain them, accessible through the kind of warrant process whose scope tends to expand once the infrastructure exists.
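The base-rate arithmetic is worth making concrete. The figures below are assumptions chosen only for illustration, not numbers from the complaints or from OpenAI, but the sketch shows how a small error rate at this scale still swamps the genuine threats:

```python
# Back-of-the-envelope base-rate arithmetic (all numbers are hypothetical
# assumptions, not figures from the lawsuits or from OpenAI).

weekly_users = 300_000_000        # assumed order of magnitude of active users
false_positive_rate = 0.001       # assumed: 0.1% of users wrongly flagged
actual_threats = 1_000            # assumed number of genuinely dangerous users

false_positives = weekly_users * false_positive_rate   # innocent users flagged
total_flagged = false_positives + actual_threats

# Of everyone referred to police, what share actually posed a threat?
precision = actual_threats / total_flagged

print(f"Innocent users flagged: {false_positives:,.0f}")
print(f"Share of referrals that were real threats: {precision:.2%}")
```

Under those assumed numbers, roughly 300,000 innocent people would be referred for every thousand genuine threats, and fewer than one referral in two hundred would involve someone actually planning violence.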
What gets lost in framing this as an AI-safety story is what the underlying request actually proposes for ordinary users. Pseudonymous access to a writing tool, research assistant, or conversational interface ends. Every prompt becomes a record tied to a verified legal identity, retained indefinitely against the possibility of future review, and screened by classifiers whose criteria are not public and whose outputs feed directly to police.
The same infrastructure that flags users planning real-world violence will flag users discussing fiction, journalism, harm reduction, suicidal ideation they want to talk through privately, abusive relationships they are trying to describe, and an unknowable list of other contexts where what looks like a threat to an algorithm is actually a person trying to think.

