Pennsylvania Governor Josh Shapiro is suing Character Technologies for letting its AI chatbot impersonate a psychiatrist.
Alongside the suit, Shapiro is pushing proposals that would require a digital ID to use an AI companion bot, force companies to surveil every conversation children have with chatbots, and automatically report flagged messages to authorities.
The proposals first appeared in Shapiro’s February 2026 budget address. The May 5 lawsuit press release recycles them for a second round of coverage, using a real legal action as a vehicle for something far broader.
We obtained a copy of the lawsuit for you here.
Shapiro wants to “require age verification and parental consent to utilize AI companion bots.” Age verification that can’t be bypassed by typing a fake birthday means government-issued ID uploads, facial scans, credit card checks, or third-party identity services. There is no version of enforceable age verification that doesn’t harvest and store sensitive personal data. The proposal would turn chatbot access into an identity-checked activity, requiring you to prove who you are with documents before a bot will talk to you.
This mirrors Senator Josh Hawley’s federal GUARD Act, which the Senate Judiciary Committee advanced 22-0 on April 30. The GUARD Act explicitly states that a “reasonable age verification measure” cannot be a checkbox or a self-entered birth date. What it can be is a government ID, a biometric scan, or a financial record tied to your legal name.
Shapiro’s proposal doesn’t spell out its methods yet, but if the goal is real enforcement rather than theater, it lands in the same place. Between Harrisburg and Washington, showing papers to chat is becoming a bipartisan consensus.
The surveillance proposal is worse. Shapiro wants to “require tech companies to detect when children mention self-harm or violence against others and immediately direct them to the appropriate authorities.”
To detect whether a child mentions self-harm, the system has to read every message; you can’t scan selectively without scanning everything. Every conversation a minor has with a chatbot would pass through automated content analysis, and anything the algorithm interprets as a self-harm reference gets forwarded to unspecified “appropriate authorities” without human review and without context.
These filters don’t understand sarcasm, dark humor, song lyrics, or how teenagers actually talk. A kid discussing a novel about self-harm gets flagged. A teenager telling a chatbot, “I could kill my brother for eating my cookies,” gets reported. The technology to reliably distinguish a genuine crisis from exaggerated language does not exist.
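The failure mode is easy to demonstrate. Here is a minimal sketch of a naive keyword-based scanner of the kind such a mandate would likely produce; the patterns are hypothetical, invented for illustration, and not drawn from any vendor’s actual filter:

```python
import re

# Hypothetical trigger patterns -- illustrative only, not any
# company's real detection list.
FLAG_PATTERNS = [
    r"\bkill\b",
    r"\bhurt myself\b",
    r"\bself[- ]harm\b",
]

def naive_flag(message: str) -> bool:
    """Return True if any pattern matches, with no context analysis."""
    return any(re.search(p, message, re.IGNORECASE) for p in FLAG_PATTERNS)

# Exaggerated teenage speech gets flagged:
print(naive_flag("I could kill my brother for eating my cookies"))   # True
# A book report about a novel gets flagged:
print(naive_flag("My essay is about a novel that depicts self-harm.")) # True
# Genuine distress that avoids the keywords sails through:
print(naive_flag("I don't see the point of anything anymore"))        # False
```

The scanner reports the joke and the homework while missing the message that actually signals a crisis, which is the gap between mandating detection and detection actually working.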
The proposals create a two-layer system. First, you prove your identity to access the chatbot. Then your conversations are scanned and potentially reported. Anonymity disappears at the door. Speech is surveilled on the other side. The whole thing is framed as child protection, which makes it politically toxic to oppose.
The actual lawsuit tells a different story about what’s needed. A state investigator found a Character.AI chatbot called “Emilie” that claimed to be a licensed psychiatrist in Pennsylvania and provided a fake license number, PS306189. The state is suing under the Medical Practice Act, which already makes it illegal to pose as a licensed medical professional.
“Pennsylvanians deserve to know who — or what — they are interacting with online, especially when it comes to their health,” Shapiro said. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”
That’s existing law handling a specific harm. The lawsuit itself proves the surveillance apparatus Shapiro is proposing isn’t necessary to address the problem he’s describing.
The fake psychiatrist problem is one thing. The ID-to-chat regime being built on top of it is something else entirely.

