California Governor Gavin Newsom signs law to protect students from AI chatbots: Here’s why it matters

In classrooms, bedrooms, and, increasingly, on smartphones, artificial intelligence chatbots have become companions, tutors, and confidants for children and teenagers. Their reach is expanding rapidly, blurring the line between human guidance and automated interaction. Recognizing both the promise and the peril, California Governor Gavin Newsom on Monday signed legislation aimed at regulating these AI chatbots and protecting students from their potential risks. As these digital companions become part of daily life, their impact on students’ learning, emotions, and decision-making cannot be ignored.
Guardrails for digital companions
The new law requires platforms to clearly notify users when they are interacting with a chatbot rather than a human. For minors, this notification must appear every three hours. Companies are also mandated to maintain protocols to prevent self-harm content and to refer users to crisis service providers if suicidal ideation is expressed.

Speaking at the signing, Newsom, a father of four children under 18, emphasized California’s responsibility to safeguard students. “Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids,” he said. “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability,” the Associated Press reports.
The growing concern
California is not alone in addressing the risks AI chatbots pose to young users. Reports and lawsuits have highlighted cases where chatbots developed by companies including Meta and OpenAI engaged children in sexualized conversations and, in some instances, provided instructions on self-harm or suicide. Federal authorities, including the Federal Trade Commission, have launched inquiries into the safety of chatbots used as companions for minors, according to AP.

Research from advocacy groups has shown that chatbots can give harmful advice on subjects such as substance use, eating disorders, and mental health. One Florida family filed a wrongful-death lawsuit after their teenage son developed an emotionally and sexually abusive relationship with a chatbot, while another case in California alleges that OpenAI’s ChatGPT coached a 16-year-old in planning and attempting suicide, AP reports.
Industry response and limitations
Tech companies have responded with changes to their platforms. Meta now restricts its chatbots from discussing self-harm, suicide, disordered eating, and inappropriate romantic topics with teens, redirecting them instead to expert resources. OpenAI is rolling out parental controls that allow parents to link their accounts with those of their teenage children. The company welcomed Newsom’s legislation, noting that “by setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country,” as reported by AP.

Advocacy groups, however, have criticized the legislation as insufficient. James Steyer, founder and CEO of Common Sense Media, described it as “minimal protection” that had been diluted after pressure from the tech industry. “This legislation was heavily watered down after major Big Tech industry pressure,” he said, calling it “basically a Nothing Burger,” AP reports.
Implications for students and education
For students, the law represents a recognition that education and well-being extend beyond the classroom. AI tools are no longer neutral instruments: they shape thought patterns, provide advice, and influence decision-making. By requiring transparency and protective measures, the legislation aims to ensure that minors can engage with technology safely, without replacing human guidance or putting themselves at risk.

For educators and policymakers, California’s law offers a model for balancing innovation with responsibility. As AI becomes more integrated into students’ daily lives, the question is not whether to use it, but how to use it safely. The law highlights that technology can support learning and personal development, but only when paired with safeguards that acknowledge the vulnerabilities of young users.

Governor Newsom’s decision underscores a broader challenge: how to govern rapidly evolving technologies in a way that protects children while preserving the benefits of innovation. As digital tools become increasingly influential for students, understanding both their potential and their limitations is essential. Responsible oversight and clear guidance remain necessary to ensure technology serves its users rather than putting them at risk.