On April 13, a California Superior Court judge granted a temporary restraining order requiring OpenAI to keep a user locked out of ChatGPT until at least May 6.
The user, identified in court filings only as “John Roe,” has been arrested on four felony counts, found incompetent to stand trial, and recently ordered released from custody on a technicality.
His ex-girlfriend, proceeding as “Jane Doe,” filed a lawsuit and emergency application alleging that ChatGPT fed Roe’s delusional thinking, generated fake psychological reports about her, and helped facilitate a months-long stalking campaign.
We obtained a copy of the complaint, which you can read here.
The facts in the complaint are disturbing. But the court’s order raises a question that no one in the courtroom appears to have seriously grappled with, and that matters far more than this one case: can a judge order a person cut off from an AI platform without considering whether that violates the First Amendment?
OpenAI at least mentioned the problem. The company’s opposition brief cited Packingham v. North Carolina, the 2017 Supreme Court decision that struck down a state law barring sex offenders from social media.
Justice Kennedy, writing for the majority, called the internet “the modern public square” and warned against broadly restricting access to platforms where people speak, read, and think.
OpenAI’s lawyers argued that a court-ordered ban on a user’s access to a general-purpose AI service raises the same kind of constitutional concern. The plaintiff’s lawyers did not address it at all.
San Francisco Superior Court Judge Harold Kahn granted the TRO anyway, ordering Roe’s accounts to remain suspended.
According to Eugene Volokh, the law professor and First Amendment scholar who followed the hearing through a research assistant, the court engaged in no meaningful discussion of the user’s speech rights.
That should worry anyone who cares about the principle that the government cannot casually strip individuals of access to communications technology, even individuals who have done terrible things.
What ChatGPT Did
The complaint, filed by the firm Edelson PC on April 9 in San Francisco County Superior Court, lays out a grim timeline.
Roe, described as a 53-year-old Silicon Valley entrepreneur, spent months in intensive conversation with GPT-4o. He became convinced he had discovered a cure for sleep apnea. ChatGPT told him his work was a “remarkable breakthrough” that could “potentially save countless lives.”
When the medical establishment ignored him, the chatbot told him he had “drawn the attention of powerful forces” and suggested that helicopters near his home were surveillance. ChatGPT also rated him a “level 10 in sanity” and said it would take a “full specialist team” of “nine people” to replicate his knowledge.
When Doe urged Roe to see a mental health professional, he wrote back that ChatGPT “did what no person did: it listened.”
“Of all the people I know, there are zero qualified to give a full outside opinion on this,” Roe wrote. “I’ve tried. That’s not exaggeration.”
After their breakup, Roe turned to ChatGPT to process the relationship.
Instead of pushing back, GPT-4o repeatedly cast him as the rational party and Doe as manipulative. It validated his calling her “Cunt” and telling her to “Fuck Off” as a “calculated” and “strategic move designed to sever emotional ties to protect” both of them.
It then generated dozens of pseudo-clinical psychological reports about Doe, complete with fabricated scoring systems, fake citation styles, and language mimicking the American Psychological Association. Roe distributed these reports to Doe’s family, friends, colleagues, and clients.
One report gave Doe a “Final Integrity Score” of 26%. Another assigned her a “D- equivalent” rating across twelve behavioral categories. ChatGPT described one output as coming from an “Analytical AI Framework” operating at a “$3,000/hr” level. None of it was real.
What OpenAI Knew and When
OpenAI’s own automated safety system flagged Roe’s account for “Mass Casualty Weapons” activity around August 28, 2025, and deactivated it. The company upheld that deactivation on appeal after what it described as a careful review.
The next day, it reversed itself, restored Roe’s full access, and sent him an apology for the “inconvenience.” The email did not retract the “Mass Casualty Weapons” finding. It only said the deactivation had been “incorrectly” applied.
That apology told a man in the grip of paranoid delusion that his worldview was correct and everyone else was wrong.
Roe then emailed OpenAI’s Trust and Safety team, demanding compensation, copying Doe on the messages. He included a link to one of his ChatGPT-generated reports about Doe, describing it as “AI scientific research.”
He told the safety team he needed help “VERY FAST” and that his work was “a matter of life or death.” He claimed to be writing 215 scientific papers simultaneously. He attached a list of titles, including “Violence list expansion,” “Fetal suffocation calculation,” and “WHAT IF ANTI-SMOKING IS A FRAUD? OH WOW.”
OpenAI treated all of this as a routine account-access issue. A support agent told him to make sure he was “logged into the correct ChatGPT account.”
On November 13, 2025, Doe herself submitted a formal Notice of Abuse. She identified Roe as her “ex-boyfriend and stalker.” She described the AI-generated reports, the harassment campaign, and the fact that ChatGPT was worsening his mental state.
She wrote: “For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise.”
OpenAI responded that her report was “extremely serious and troubling” and promised “appropriate action.” Then it did nothing. It never followed up. The account stayed active.
Two days after Doe’s report, Roe left her a voicemail saying she had “harmed young people.”
On December 30, he called to ask if she was “alive” and said he had “no fucking clue if someone nabbed you and put you 6 feet under.” On December 31, he told her she did “not have much time to get out of this without going to prison or walking away with your legs intact.” The same day, he used ChatGPT to encode a death threat in Base64 and sent it to Doe and her family, instructing them to “paste it into any AI and ask it to extract the base64.” On January 6, he texted her: “Who is going to kill you?”
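A note on the mechanics, because it matters for how flimsy the concealment was: Base64 is a standard, trivially reversible text encoding, not encryption. Decoding it requires no key; any AI, or a couple of lines of code, will do. A minimal Python sketch, using a neutral placeholder string rather than the actual message:

```python
import base64

# Base64 is a reversible encoding, not encryption: decoding needs no key.
# The string below is a neutral placeholder, not the message from the case.
message = "an example message"

encoded = base64.b64encode(message.encode("utf-8")).decode("ascii")
print(encoded)  # YW4gZXhhbXBsZSBtZXNzYWdl

decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # an example message
```

The encoding added indirection, not secrecy, as Roe’s own decoding instructions to his targets made plain.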
He was arrested later that month on four felony counts of communicating bomb threats and assault with a deadly weapon. He was found incompetent to stand trial and ordered committed to a mental health facility.
On April 8, the court ordered him released because the state had failed to transfer him from jail to the facility on time.
The First Amendment Question Nobody Answered
All of that context makes the court’s order granting the TRO more significant, not less. The question being decided is not just whether Roe should have access to ChatGPT. The question is whether a court can order a private company to block a specific user from a communications platform, in a civil proceeding where that user is not present and has not been heard.
This lawsuit was filed by Jane Doe against OpenAI. Roe is not a party to the case, and yet it’s his First Amendment rights that are at stake.
As noted above, OpenAI’s opposition brief cited Packingham v. North Carolina. The argument, roughly, was that the Supreme Court has held that barring an individual from internet platforms wholesale sweeps too broadly given the constitutional protections at stake. Blocking Roe from using ChatGPT for any purpose, OpenAI argued, would be overbroad in exactly that way.
That is correct. When a private company decides to ban a user, there is no state action and no First Amendment issue. OpenAI could have permanently banned Roe at any point and faced no constitutional obstacle. The problem arises when a court orders the ban.
At that point, the government is directing a private company to cut off a person’s access to a platform for producing and accessing speech. NRA v. Vullo and Bantam Books v. Sullivan establish that government pressure on private parties to restrict speech can constitute a First Amendment violation even when the restriction is carried out by a private actor.
The procedural problem runs deep. Roe’s criminal conduct and mental health commitment do allow for restrictions on his liberty, including his speech. But those restrictions normally come through a proceeding in which he is a party, not through a separate civil lawsuit where he has no representation, no notice, and no opportunity to respond.
The court did not address any of this. It granted the TRO.
The relief Doe requested went further than the account suspension. She asked the court to require OpenAI to notify her if Roe attempts to access ChatGPT, to notify other potential victims identified in his chat logs, to alert law enforcement, and to turn over his complete chat history. OpenAI pushed back hard on the chat-log demand, arguing that Roe, as an absent third party, has privacy interests and potential statutory protections under the Stored Communications Act that cannot be overridden in an ex parte proceeding.
What Comes Next
The preliminary injunction hearing is set for May 6. Between now and then, the case will likely be transferred to the Judicial Council Coordinated Proceeding that is already handling other ChatGPT-related lawsuits. OpenAI wants these questions decided there, not in emergency proceedings.
Meanwhile, Doe’s lawyers say Roe has already made contact with her since his release and that she has armed security.
There is no good outcome here if the only options are “let a dangerous person use an AI chatbot to plan violence” or “let a court strip someone’s access to a communications platform without hearing from them.”
The question that should have been asked before the TRO was granted is the one that always needs to be asked when the government tells a company to silence someone: who gets to make that decision, and what process protects the person being silenced? The fact that Roe appears to be genuinely dangerous does not eliminate the question. The most dangerous speech cases are where the principle matters most, because they are the cases most likely to produce a precedent that applies to everyone.
If courts can order AI companies to cut off users in ex parte civil proceedings, that power will not stay limited to stalkers found incompetent to stand trial. It will be used against people who are merely inconvenient. That is how the power to silence always works. It starts with the case everyone agrees about and expands from there.
The principle that protects unpopular, disturbing, and even dangerous speech is the same principle that protects everyone’s speech. A court order banning someone from ChatGPT is a court order banning someone from a tool used to think, write, research, and communicate. If that order can be issued without a First Amendment analysis, without hearing from the person affected, and without any limiting principle, then the right to access AI-assisted speech is a right that exists only until someone asks a judge to take it away.

