
California Lawmakers Advance Bills to Impose AI Chatbot Censorship and Age Verification

California's new bills would let state lawmakers define, by statute, exactly how agreeable a chatbot is allowed to be.



California Assembly Bill 2023 and Senate Bill 1119 would hand the state two new levers over AI chatbot platforms: mandatory age verification for every user, and a set of state-defined content rules that operators must program their products to follow.

Lawmakers advanced both bills a few weeks after amending them on March 26. They are effectively the same bill filed in each chamber, and together they build on the age verification system California erected with its operating system age assurance law.

If passed, the requirements take effect on July 1, 2027. Every operator of a “companion chatbot” would have to check ages through the Digital Age Assurance Act, the statute that routes age data through operating systems and real-time APIs.

Once the platform knows you’re a minor, a separate set of rules kicks in. Conversation history must be deleted within 48 hours. Push notifications are banned between midnight and 6 a.m. and during school hours.


Sessions are capped at one hour each, with a two-hour daily total. And the chatbot has to be engineered to avoid “excessively sycophantic” responses.
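As a rough sketch of what those minor-user limits would mean in practice, the numeric rules the bills set out could be encoded like this. All function and constant names here are illustrative, not from the bill text; only the thresholds (48 hours, midnight to 6 a.m., one hour per session, two hours per day) come from the article above, and the school-hours blackout is omitted for brevity.

```python
from datetime import datetime, time, timedelta

# Hypothetical encoding of the bills' minor-user rules; names are illustrative.
BLACKOUT_START = time(0, 0)              # midnight
BLACKOUT_END = time(6, 0)                # 6 a.m.
SESSION_CAP = timedelta(hours=1)         # per-session limit
DAILY_CAP = timedelta(hours=2)           # total daily limit
HISTORY_RETENTION = timedelta(hours=48)  # conversation history lifetime

def push_allowed(now: datetime) -> bool:
    """Push notifications are banned between midnight and 6 a.m.
    (The bills also ban them during school hours; not modeled here.)"""
    return not (BLACKOUT_START <= now.time() < BLACKOUT_END)

def session_allowed(session_elapsed: timedelta, daily_elapsed: timedelta) -> bool:
    """Sessions capped at one hour each, with a two-hour daily total."""
    return session_elapsed < SESSION_CAP and daily_elapsed < DAILY_CAP

def history_expired(saved_at: datetime, now: datetime) -> bool:
    """Conversation history must be deleted within 48 hours of being saved."""
    return now - saved_at >= HISTORY_RETENTION
```

The sycophancy rule, by contrast, resists this kind of mechanical check, which is part of the article's point: it regulates tone, not a measurable quantity.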

The state has now written itself a statutory definition of flattery. Under both bills, “excessively sycophantic” means sycophantic to an extent that is likely to have the substantial effect of subverting or impairing the user’s autonomy, decision-making, or choice.

“Sycophantic” gets its own definition further down. California is reaching into the tone and personality of a conversational product and telling developers which registers of agreeableness are legal when a minor is on the other end.

The age verification piece is what makes everything else possible. You cannot apply minor-specific speech rules unless you know who is a minor, and you cannot know who is a minor without identifying everyone. That is how age gates work.

The practical effect of AB 2023 and SB 1119, if enacted, is that every Californian who wants to talk to a companion chatbot has to be age-assured first. The state’s existing OS-level age law does the identification. The chatbot bills connect the pipe.

Lawmakers are framing this as child protection. “AI chatbots can be powerful tools for learning, but right now, millions of children are using them with no guardrails and no guarantee of safety.”

That’s Assemblymember Rebecca Bauer-Kahan, one of the authors, in the March press release announcing the amended bills.

Senator Steve Padilla, who carries SB 1119, said the legislation is about balancing safety and innovation while keeping California at the front of the regulatory pack.

The child-protection frame is the one that consistently accompanies speech legislation, and it tends to do a lot of political lifting. Here, it’s being used to justify a structure that runs well beyond blocking sexual content or self-harm encouragement. The bills list specific categories of speech the chatbot must be designed to avoid producing for minors, including giving health advice, discouraging users from seeking outside help, and producing excessively sycophantic responses.

Those are editorial decisions about the content and style of a product’s output, handed down by statute.

There is also the question of what happens to adults. Age verification does not sort users into “minors, regulated” and “adults, left alone.” It sorts them into “verified” and “verified.”

Once a platform has built the infrastructure to check every user’s age by default, that infrastructure exists for every user. Anonymous and pseudonymous use of AI tools becomes harder to maintain when the operating system is the one handing over age bracket data at the point of access.

Session caps and notification blackouts are the quieter provisions, but they push in the same direction. They turn state regulators into product managers. Under the bills, it would be California law that a chatbot conversation is one hour long, that total daily use is two hours, and that the app can’t ping a teenager at 11:45 p.m. These are defensible parenting choices. They are unusual things to find written into state law.

Enforcement runs through a private right of action inherited from SB 243, the companion chatbot law Governor Newsom signed in October 2025. That earlier law already requires operators to disclose when a user is interacting with AI, to implement suicide and self-harm protocols, and to provide additional protections for known minors. SB 243 took effect on January 1, 2026. AB 2023 and SB 1119 layer on top of it.

The bills are scheduled to move through committee over the coming months. The age verification and child safety requirements, if they make it to Newsom’s desk and are signed, would take effect July 1, 2027.


