California Just Raised the Bar on AI Transparency
By Courtney King | September 29, 2025
On September 29, 2025, Governor Newsom signed SB 53 into law. Officially titled the Transparency in Frontier Artificial Intelligence Act (TFAIA), the law marks a significant shift in how the state approaches AI governance.
Rather than targeting narrow applications like hiring algorithms or facial recognition, this law addresses the very foundation of modern AI: large-scale general-purpose models that power everything from chatbots to autonomous systems. If your work intersects with AI governance, compliance, or policy, this bill matters.
Who Is Covered
The law applies to what it calls "large frontier developers": companies with more than $500 million in annual revenue that train foundation models using more than 10²⁶ FLOPs of training compute.
In plain terms, we are talking about major players training extremely powerful models. OpenAI, Anthropic, Google DeepMind, Meta, and Amazon are obvious candidates. Smaller companies are not subject to the law right now, but the structure of the bill allows for future expansion.
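For teams tracking where they stand against these thresholds, the coverage test is easy to sketch. The snippet below is a hypothetical illustration in Python, not legal advice; only the $500 million and 10²⁶ figures come from the bill, and the function and variable names are our own.

```python
# Hypothetical screen for SB 53's "large frontier developer" definition.
# Only the $500M and 10^26 thresholds come from the bill; the rest is illustrative.

REVENUE_THRESHOLD_USD = 500_000_000     # more than $500 million in annual revenue
COMPUTE_THRESHOLD_FLOPS = 1e26          # more than 10^26 FLOPs of training compute

def is_large_frontier_developer(annual_revenue_usd: float,
                                training_compute_flops: float) -> bool:
    """Rough check of whether SB 53's core obligations could apply."""
    return (annual_revenue_usd > REVENUE_THRESHOLD_USD
            and training_compute_flops > COMPUTE_THRESHOLD_FLOPS)

# Example: $750 million in revenue and a 3 x 10^26 FLOP training run
print(is_large_frontier_developer(750_000_000, 3e26))  # True
```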
Key Requirements
Here is a breakdown of what large developers are now required to do.
1. Publish a Frontier AI Framework
Each developer must write and publicly share a comprehensive governance framework that explains how they manage catastrophic risk. This framework must cover topics like:
Use of national and international standards
Risk thresholds and assessment methodologies
Mitigation strategies before deployment
Third-party evaluations
Internal governance structures
Cybersecurity practices
Internal model use and post-deployment monitoring
Frameworks must be reviewed and updated at least annually, and republished within 30 days of any material change.
2. Report on Catastrophic Risk
Companies must submit quarterly summaries to the California Governor's Office of Emergency Services (Cal OES) detailing their assessments of catastrophic risk. These reports cover internal use of frontier models, not just public-facing products.
This represents a new level of visibility into behind-the-scenes AI operations.
3. Disclose Critical Safety Incidents
If a model causes significant harm—such as loss of control, serious injury, or enabling a large-scale cyberattack—developers must report the incident to state authorities. Most reports are due within 15 days. In cases of imminent risk to life, developers have just 24 hours to report to the appropriate authorities.
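To make the two reporting clocks concrete, here is a minimal sketch of a deadline calculator. Only the 15-day and 24-hour windows come from the law; the function and field names are our own, hypothetical choices.

```python
from datetime import datetime, timedelta

# Hypothetical deadline calculator for critical safety incident reports.
# The 15-day and 24-hour windows reflect SB 53's timelines; names are illustrative.

def report_deadline(discovered_at: datetime, imminent_risk_to_life: bool) -> datetime:
    """Latest time a report should reach the appropriate state authority."""
    if imminent_risk_to_life:
        return discovered_at + timedelta(hours=24)
    return discovered_at + timedelta(days=15)

# Example: an incident discovered on October 1 with no imminent risk to life
print(report_deadline(datetime(2025, 10, 1, 9, 0), imminent_risk_to_life=False))
```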
4. Protect Whistleblowers
The law includes strong protections for employees who raise concerns about safety violations. Large developers must provide an anonymous reporting mechanism and update the whistleblower on the company’s response. Any attempt to retaliate against whistleblowers is prohibited, and violations are subject to legal action.
5. Avoid False Claims
Companies are prohibited from making misleading statements about their risk mitigation efforts or the safety of their models. Good faith mistakes are not penalized, but reckless or deceptive statements may result in civil penalties.
What This Signals
The passage of SB 53 confirms what many in the AI governance world already sensed. Voluntary safety commitments are no longer considered sufficient. California is now requiring real disclosures, internal controls, and government reporting from companies at the frontier.
This law also introduces the concept of catastrophic risk into the regulatory landscape: an AI-enabled incident resulting in the death of or serious injury to more than 50 people, or more than $1 billion in damage. Few laws in the world are this specific or this forward-looking.
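As a toy illustration of just how specific that definition is, the thresholds fit in a few lines of Python. Only the 50-person and $1 billion figures come from the law; everything else here is our own hypothetical framing.

```python
# Hypothetical check against SB 53's catastrophic-risk thresholds.
# Only the 50-person and $1 billion figures come from the law's definition.

CASUALTY_THRESHOLD = 50                  # death or serious injury to more than 50 people
DAMAGE_THRESHOLD_USD = 1_000_000_000     # more than $1 billion in damage

def meets_catastrophic_threshold(casualties: int, damage_usd: float) -> bool:
    """True if an incident crosses either statutory threshold."""
    return casualties > CASUALTY_THRESHOLD or damage_usd > DAMAGE_THRESHOLD_USD

# Example: no casualties, but $2.5 billion in estimated damage
print(meets_catastrophic_threshold(casualties=0, damage_usd=2_500_000_000))  # True
```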
In addition, SB 53 creates a pathway for international or federal safety regimes to be recognized by California. If a company is complying with a comparable system, and California formally accepts that system as equivalent, it can fulfill its obligations through that route. But this is not automatic. California will maintain control over which regimes qualify.
CalCompute: A Public Cloud for AI Research
The bill also authorizes the creation of CalCompute, a public cloud computing cluster designed to support ethical AI research. The plan is to house this resource within the University of California system and make it accessible to academic researchers and public-interest organizations.
The goal is to reduce reliance on private cloud infrastructure and give public institutions a foothold in high-performance computing. If funded, this could change who gets access to advanced AI research tools and under what terms.
Implications for Stillwater Clients
If you are building or deploying AI systems, now is the time to take these requirements seriously, even if you are not yet covered by the law. Here is what we recommend:
Create an internal risk framework aligned with SB 53’s requirements
Build clear policies for reporting and addressing safety incidents
Conduct regular assessments of high-risk models, especially foundation models
Document deployment plans and intended uses in clear language
Set up whistleblower protections and internal escalation paths
Monitor the development of CalCompute and other public sector initiatives
This is also an opportunity to show leadership. Companies that voluntarily adopt SB 53-style practices will be in a stronger position with regulators, partners, and the public.
The Bottom Line
SB 53 is one of the most comprehensive pieces of AI governance legislation passed at the state level in the United States. It is clear, specific, and focused on the right layer of the stack. If you work with advanced AI models, this law is now part of your operating environment.
California has taken a bold first step. Other jurisdictions will follow. The age of frontier AI transparency is no longer theoretical. It has arrived.
Stillwater Insights helps companies prepare for this new era of AI accountability. We offer assessments, governance frameworks, and fractional support for AI compliance. If your organization needs guidance, let’s talk.