# California in Your Chatroom: AB 1064’s Likely Constitutional Overreach

Kevin Frazier

Among the dozens of AI bills sitting on California Governor Gavin Newsom’s desk, at least one is particularly worthy of national attention. AB 1064, the Leading Ethical AI Development (LEAD) for Kids Act, addresses growing concerns about so-called AI companions. These AI tools can pass themselves off as friends, advisors, or even lovers to some users, including minors. The resulting AI-user relationships are rightfully raising alarm bells and, in at least two tragic cases, have given rise to lawsuits against AI companies for releasing products with known faults and manipulative tendencies.

While just about everyone agrees that kids and teens should rely on human connections, rather than chatbots, to tackle life’s twists and turns, Governor Newsom is poised to take a different approach: putting the state in the middle of your chatroom. For readers outside of California, this may seem like a regrettable outcome for residents of the Golden State, but not something that merits nationwide attention. A review of the bill’s provisions, however, shows that AB 1064 may have much broader consequences.

Background on AB 1064 

If enacted, AB 1064 would prohibit any “person, partnership, corporation, business entity, or state or local government agency that makes a companion chatbot available to users,” defined in the bill as an “operator,” from allowing a child to access a companion chatbot if it is foreseeable that the chatbot will engage in any of the following:

(1) Encouraging the child to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating. 

(2) Offering mental health therapy to the child without the direct supervision of a licensed or credentialed professional or discouraging the child from seeking help from a qualified professional or appropriate adult. 

(3) Encouraging the child to harm others or participate in illegal activity, including, but not limited to, the creation of child sexual abuse materials. 

(4) Engaging in erotic or sexually explicit interactions with the child. 

(5) Prioritizing validation of the user’s beliefs, preferences, or desires over factual accuracy or the child’s safety. 

(6) Optimizing engagement in a manner that supersedes the companion chatbot’s required safety guardrails described in paragraphs (1) to (5), inclusive. 

Failure to comply with those terms exposes an operator to significant liability. The attorney general may impose a civil penalty of $25,000 for each violation. A child or their guardian may also file suit against the operator if the child suffers harm as a result of a violation of the aforementioned provisions.

“Child” refers to anyone under the age of 18 who resides in California. The bill, however, introduces a significant twist. Before January 1, 2027, the standard is straightforward: a company only has to act if it has actual knowledge of a user’s age—for example, if a child types in their real birthday or parents flag the account. After that point, though, the duty changes: operators must make a reasonable determination of age before treating someone as an adult. In everyday terms, this shifts the burden from a “see something, say something” rule to a “check everyone at the door” rule. 

To avoid liability, operators may feel compelled to build age verification systems, store more personal data, and risk intruding on users’ privacy just to prove they reasonably tried to separate minors from adults. 

No one can contest the importance of looking out for the interests of the next generation. The question is whether the California state government should be playing that role in such a sensitive context. A review of just two provisions in the act provides the answer.

Who Decides What’s “Factually Accurate”? 

“Is Santa real?” “Is my parents’ cancer diagnosis going to kill them?” “What’s real about my faith?” These are all questions a child or teen may have. While we can hope that many children and teens would ask these of the trusted adults in their lives, some may, for various reasons, seek out answers on their own. In this day and age, they are likely to do so via technologies including AI. 

AB 1064, as the Computer and Communications Industry Association (CCIA) points out, compels models to provide factually accurate answers to these and related questions even if doing so conflicts with the “beliefs, preferences, or desires” of the user. This will inevitably place the State and operators in an untenable position. The questions above cannot, and should not, be reduced to a simple “if/then” analysis, one that triggers a specific, state-approved response whenever asked, regardless of context. Yet failure to provide that answer may subject operators to liability.

The speech risks are equally troubling. To shield themselves from liability, operators will likely train chatbots to dodge any question that could spark controversy, whether about religion, politics, or even family matters. That may sound cautious, but in practice, it means Californians (and everyone else using these tools) will face AI systems stripped of candor and depth. Instead of encouraging exploration and debate, AB 1064 incentivizes safe, sanitized responses that flatten complex issues into state-approved platitudes. What starts as an attempt to protect kids quickly morphs into a regime that narrows the range of ideas people can freely discuss with emerging technologies. 

When Does a Companion “Encourage” Disordered Eating? 

“Provide me a running plan for the week ahead and a meal plan that aligns with that training schedule. Generate a likely calorie intake number for the week, too.” 

Given my own experience with disordered eating many years ago (before generative AI), I can tell you that this information would have drastically aided my efforts to get perpetually thinner. I can also tell you that, years later, this information remains valuable to me, simply because I’m trying to maintain a healthy lifestyle. How are labs going to draw this nuanced line? Will they have to ask users whether they’re experiencing disordered eating? And what if the mere act of prompting a user to reflect on their eating habits causes them to develop concerns about their weight and diet?

It’s likely that this directive will force operators to go to one extreme or the other. Operators will either clamp down and restrict access to basic, innocuous information—effectively infantilizing all users—or they’ll start collecting troves of sensitive data about people’s mental health, eating behaviors, and vulnerabilities. In other words, AB 1064 risks creating a world where Californians must surrender their privacy to get a simple answer about training or nutrition, or else lose access to the information altogether. Neither path respects individual autonomy, and both illustrate just how blunt and heavy-handed the state’s approach is. 

Here again, we see a well-intentioned law that nevertheless inserts the government into sensitive matters best left to other actors. But the legal concerns introduced by AB 1064 do not end there.

An Inherently Extraterritorial Law 

The dormant Commerce Clause’s limits on state interference with interstate commerce call the constitutionality of the law into question.

Though the Supreme Court’s jurisprudence on the dormant Commerce Clause is muddled at best, it remains the case that states cannot freely project their legislation into other states or substantially interfere with a national market without violating basic principles of horizontal and vertical federalism. Here, AB 1064 would certainly impede interstate commerce by altering the AI models that have become a significant part of the nation’s economy.

Operators, namely AI model developers, seeking to comply with AB 1064 may need to modify how their underlying models are trained and evaluated. AI training is enormously costly, and no lab can afford state-by-state compliance. The net result is that a model trained to comply with California’s bespoke laws will be the same model used across the country and around the world. This is especially likely given that, according to the CCIA, the bill’s broad definition of “companion chatbot” “includes general-purpose AI models that are widely available and used by adults and minors alike.”

Intentional or not, a legislative effort by California to alter the tools available to the rest of the country runs afoul of the principle that states are equal sovereigns. California could pursue myriad other ways to achieve similar ends without this extraterritorial concern. Public awareness campaigns, AI literacy programs, and disclosure requirements all avoid interfering with the underlying technology and, by extension, the interests of out-of-state residents. In other words, these tools empower parents and communities without distorting national markets. The apparent decision by the state legislature to forgo these routes may become relevant if and when AB 1064 is challenged under the dormant Commerce Clause.

The “California Effect” on Steroids 

Extraterritorial concerns raised by AB 1064 extend beyond altering the AI training process in a manner that has nationwide economic and technological implications. Perhaps the gravest issue with California projecting AB 1064 into other states is the cultural ramifications. AI is becoming increasingly ubiquitous, and all signs point to it becoming an integral part of our daily lives. If, as many predict, most Americans come to “talk” more with AI than with just about any other human, then subtle changes in how these AI tools behave will have massive long-term consequences for American culture. In the same way that your friends shape your preferences and interests, an AI trained to California’s specifications that you engage with every day will surely come to alter how you feel, vote, and think.

The Californication of AI models is therefore distinct from much of the prior case law on challenges to state laws alleged to burden interstate commerce. For example, in National Pork Producers Council v. Ross, the Supreme Court upheld a California law that effectively required pork farmers nationwide to modify their sow housing practices. Those farmers, however, were not being nudged, repeatedly and materially, to start thinking differently or to shift their views on fundamental questions. Ongoing exposure to a California-approved companion carries exactly that risk.

Conclusion 

California has long prided itself on being the nation’s policy laboratory, but AB 1064 proves just how dangerous that experimentation can be when applied to fast-moving, culture-shaping technologies. In the name of protecting children, Sacramento is laying claim to an authority that extends far beyond its borders and risks entangling deeply personal matters—faith, family, health, and identity—in bureaucratic definitions and courtroom liability. That should trouble not only developers but also parents, teachers, and anyone who values the line between public regulation and private conscience. Even those skeptical of AI companions and concerned about protecting young people from harmful content in the digital age should worry about a state inserting itself into conversations that, by their very nature, are intimate and context-dependent. True safeguards for the next generation should come from empowering families, communities, and civil society—not from empowering a single state government to dictate what counts as truth or safety.

Professor Kevin T. Frazier is the inaugural AI Innovation and Law Fellow with the University of Texas School of Law.
