Today, the average person likely imagines a chatbot when hearing the term “AI.” Modern AI chatbots first reached the open market in 2016, when Facebook opened its Messenger platform so developers could build chatbots to carry out everyday tasks. Fast forward to 2025, and AI is part of almost every product on the market. Tech companies, governments, and essentially every industry are racing to implement AI in their internal processes and external products.
Although AI has been widely known and discussed over the last decade, the United States still lacks comprehensive standards and regulations for its use. Critics of standardization argue that regulating AI could hinder innovation. However, rapid AI advancements demand careful oversight, given AI’s growing impact on daily life. The key question is what protections exist for consumers. According to a Gallup Poll, 80% of US adults prioritize AI safety and data security, even if it slows development. Current consumer protection laws are insufficient for the risks AI poses; comprehensive measures are needed to prevent harm rather than respond to it after the fact.
Last month, however, California lawmakers passed some of the most comprehensive AI bills in the nation, directly addressing the risks associated with AI through detailed new regulation. California’s new bills aim to show that consumer protection and tech innovation can coexist, offering a model of balanced, forward-looking regulation.
Other states have followed California’s lead before. California’s data protection laws, modeled after the EU’s strong GDPR regulations, have since influenced legislation in states like Colorado and Virginia. California’s passage of several first-in-the-nation AI bills could therefore pave the way for broader legislation across state and federal governments.
So what exactly is new? Some of the more prominent measures include SB 942, dubbed the “California Artificial Intelligence Transparency Act,” along with SB 53, AB 2602, and AB 1836.
Name, Image, and Likeness
Statutes like AB 2602 and AB 1836 focus on the use of name, image, and likeness (NIL) in digital replicas, ensuring that the subjects of such replicas have transparency into how their personas are used, especially posthumously. Despite strong intellectual property and NIL protections in the US, it is hard to wrap your mind around how easy tools like OpenAI’s Sora are to access and use. Sora not only offers a social media platform for AI-generated videos (a model similar to TikTok), but also gives users the ability to replicate pretty much anyone doing pretty much anything. These statutory protections came from a real need; AI programs like Sora have already done damage. For example, one user used Sora to generate and disseminate videos of Martin Luther King, Jr. spewing hate speech that looked real enough to believe. Sora is just one example of the wide legal gaps that remain when new AI models are released with little discretion and few standards. These statutes seek to increase accountability by, for example, legally requiring consent from a deceased person’s estate before their voice or image can be used in AI models. Businesses must also take proactive steps to prevent misuse of digital replicas, on top of requirements to publish transparency reports, conduct risk assessments, report incidents, and even protect whistleblowers who directly work on and create AI models.
Creation and Dissemination
SB 53 and SB 942 aim to better regulate the creation and dissemination of AI technology. Prior to September 2025, there were few accountability mechanisms requiring private companies to address how their tools affect human users. California’s SB 53, the “Transparency in Frontier Artificial Intelligence Act” (TFAIA), outlines protocols to hold the creators of this software accountable. The law focuses on developers of the most sophisticated “frontier” AI models (think: Google, Anthropic, OpenAI, Meta), models trained on massive quantities of data and interactions. These standards primarily apply to companies making $500 million or more in annual revenue, and require them to conduct risk assessments to identify and avoid potentially catastrophic risks, instill safety standards within their operations, and remain transparent to the end user. Furthermore, companies are obligated to publicize their safety policies and practices.
Structural Disparities
The TFAIA acknowledges that the large-scale compute infrastructure necessary to train and run these models is concentrated in the hands of a few organizations. This creates a high barrier to entry that limits who can participate in the industry and, in turn, makes it harder to address some of the social constraints of AI. Enter CalCompute. Created by the TFAIA, CalCompute will be a publicly owned and operated cloud computing cluster that reserves space and resources for research, innovation, and development of AI technology for the public good.
Finally, one of the hallmarks of the TFAIA is its strong protections for whistleblowers working on these products. Again, the AI industry has operated in a vacuum, with its smartest and most innovative tools built without regulation; the social implications are only now becoming understood as AI reaches the mass market. California’s legislature recognizes that the people on the front lines developing these tools are key stakeholders in ensuring safe AI use.
Oh, how far we’ve come?
It is much too early to tell the real impact of these laws, and whether their provisions will promote safer AI creation and use. At this point, Anthropic is the only major developer to fully endorse the TFAIA, while others, like Meta and Google, heavily lobbied against the bill.
The promising part of this year’s California legislative session is that there are now actually some AI regulations on the books. However, we still do not know whether this will change how companies approach their AI tooling. Observers should watch to see whether California’s laws provide enough incentive to ensure that human-centric development of AI tools actually protects consumers.
