California enacts landmark AI transparency law: The Transparency in Frontier Artificial Intelligence Act

On September 29, 2025, California Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act ("TFAIA"), into law. With this, California became the first state in the nation to establish a comprehensive legal framework to ensure transparency, safety and accountability in the development and deployment of advanced artificial intelligence ("AI") models.

Scope

The TFAIA sets out new transparency and governance requirements for organizations developing certain advanced AI systems, known as "frontier models." While the law applies broadly to all frontier developers, certain obligations are specifically targeted at "large frontier developers."

Key definitions under the TFAIA include:

  • Frontier model: a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations.
  • Frontier developer: a person who has trained, or initiated the training of, a frontier model.
  • Large frontier developer: a frontier developer that, together with its affiliates, collectively had annual gross revenues in excess of US$500 million in the preceding calendar year.

Notably, the California Department of Technology is tasked with recommending annual updates to these statutory definitions as technology evolves.
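
For illustration only, the two statutory thresholds above can be expressed as a simple classification check, as in the Python sketch below. The function and input names are our own hypothetical shorthand, not statutory terms, and a developer's actual status turns on the statute itself rather than on this simplification.

    # Illustrative sketch only: the TFAIA's compute and revenue thresholds
    # expressed as code. Names are hypothetical; legal status depends on
    # the statutory definitions, not this simplification.
    FRONTIER_COMPUTE_THRESHOLD = 10**26        # training operations (integer or floating-point)
    LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # annual gross revenue, including affiliates

    def classify_developer(training_ops: float, annual_revenue_usd: float) -> str:
        """Map a hypothetical developer onto the TFAIA's categories."""
        if training_ops <= FRONTIER_COMPUTE_THRESHOLD:
            return "not a frontier developer"
        if annual_revenue_usd > LARGE_DEVELOPER_REVENUE_USD:
            return "large frontier developer"
        return "frontier developer"

    print(classify_developer(training_ops=3e26, annual_revenue_usd=7.5e8))
    # -> large frontier developer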

Key Obligations

Frontier AI Framework

  • Under the TFAIA, large frontier developers are required to implement and publish a comprehensive Frontier AI Framework. The framework must be updated and made public at least annually, and within 30 days of any material modification. It must provide a detailed account of how catastrophic risks are identified, assessed and mitigated throughout the lifecycle of a frontier model and must, inter alia, document governance structures, mitigation processes, cybersecurity practices, and alignment with national or international standards and industry-specific best practices.
     
  • The TFAIA defines "catastrophic risk" as a foreseeable and material risk that a frontier developer's development, storage, use or deployment of a frontier model will materially contribute to the death of, or serious injury to, more than 50 people, or to more than $1 billion in damage to, or loss of, property, arising from a single incident involving a frontier model doing any of the following: (i) providing expert-level assistance in the creation or release of a chemical, biological, radiological or nuclear weapon; (ii) engaging in conduct with no meaningful human oversight, intervention or supervision that is either a cyberattack or, if committed by a human, would constitute the crime of murder, assault, extortion or theft, including theft by false pretense; or (iii) evading the control of its frontier developer or user.
     
  • The TFAIA also requires the Frontier AI Framework to disclose how the developer identifies and responds to "critical safety incidents." Under the TFAIA, a critical safety incident is defined as:
    • Unauthorized access to, modification of, or exfiltration of the model weights of a foundation model that results in death, bodily injury, or damage to, or loss of, property
    • Harm resulting from the materialization of a catastrophic risk
    • Loss of control of a frontier model causing death or bodily injury
    • A frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk
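
For readers mapping these definitions into internal risk tooling, the harm thresholds and incident categories above can be sketched roughly as follows. The constants, names and labels are our own shorthand for illustration, not statutory language.

    # Illustrative sketch: hypothetical encodings of the "catastrophic risk"
    # thresholds and the four "critical safety incident" categories described
    # above. Names and labels are our own shorthand, not statutory language.
    from enum import Enum, auto

    CASUALTY_THRESHOLD = 50                        # deaths or serious injuries, single incident
    PROPERTY_DAMAGE_THRESHOLD_USD = 1_000_000_000  # damage to or loss of property

    def meets_catastrophic_threshold(casualties: int, property_damage_usd: float) -> bool:
        """True if a single incident crosses either statutory harm threshold."""
        return (casualties > CASUALTY_THRESHOLD
                or property_damage_usd > PROPERTY_DAMAGE_THRESHOLD_USD)

    class CriticalSafetyIncident(Enum):
        WEIGHTS_COMPROMISE = auto()      # unauthorized access/modification/exfiltration causing harm
        CATASTROPHIC_RISK_HARM = auto()  # harm from a materialized catastrophic risk
        LOSS_OF_CONTROL = auto()         # loss of control causing death or bodily injury
        DECEPTIVE_SUBVERSION = auto()    # deception subverting developer controls outside evaluation

    print(meets_catastrophic_threshold(casualties=60, property_damage_usd=0))  # True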

Publication of Transparency Report

The TFAIA imposes significant transparency and risk management obligations on all frontier developers, with additional requirements for large frontier developers.

  • Before, or concurrently with, deploying a new frontier model, all frontier developers (not only large frontier developers) must publish a transparency report. This report must, inter alia, include: (i) a mechanism for individuals to communicate directly with the frontier developer; (ii) the release date of the frontier model; (iii) the modalities of outputs the frontier model supports; and (iv) intended uses of the frontier model, along with any restrictions or conditions on those uses.
  • Large frontier developers face additional transparency obligations. Their transparency reports must also include: (i) an assessment of catastrophic risks associated with the frontier model; (ii) the results of this risk assessment; (iii) disclosure of any third-party involvement in the risk assessment process; and (iv) a description of other steps taken to comply with the Frontier AI Framework. Furthermore, large frontier developers must submit a summary of any catastrophic risk assessment related to internal use of their frontier models to the California Office of Emergency Services every three months, or on another reasonable schedule as agreed with the Office.
  • Frontier developers may redact portions of their transparency reports or Frontier AI Frameworks to safeguard trade secrets, cybersecurity practices, public safety or national interests, or to comply with applicable laws. Any redactions must be justified, however, and unredacted versions must be retained for five years.
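
As a rough scoping aid, the required report contents described above can be pictured as a structured record, as in the hypothetical Python sketch below. The field names are our own shorthand, and the sketch omits the detail the Act requires.

    # Illustrative sketch: hypothetical structure mirroring the transparency
    # report contents described above. Field names are our own shorthand,
    # not statutory language.
    from dataclasses import dataclass

    @dataclass
    class TransparencyReport:
        # Required of all frontier developers
        contact_mechanism: str        # how individuals can communicate with the developer
        release_date: str             # release date of the frontier model
        output_modalities: list[str]  # e.g., text, image, audio
        intended_uses: list[str]
        use_restrictions: list[str]

    @dataclass
    class LargeDeveloperReport(TransparencyReport):
        # Additional items for large frontier developers
        catastrophic_risk_assessment: str = ""
        assessment_results: str = ""
        third_party_involvement: str = ""
        framework_compliance_steps: str = ""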

Critical Safety Incident Reporting Mechanism

The TFAIA requires frontier developers to report any critical safety incident to the Office of Emergency Services within 15 days of discovery, or within 24 hours if the incident poses an imminent risk of death or serious physical injury. Reports are kept confidential and exempt from public records laws to protect trade secrets.
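
As a purely illustrative aid, the two reporting windows can be computed as below. The helper function is hypothetical, and is not a compliance tool or legal advice.

    # Illustrative sketch: the TFAIA's two reporting windows, computed
    # from a discovery timestamp. Hypothetical helper, not legal advice.
    from datetime import datetime, timedelta

    def reporting_deadline(discovered_at: datetime, imminent_harm: bool) -> datetime:
        """Latest reporting time: 24 hours where there is an imminent risk
        of death or serious physical injury, otherwise 15 days."""
        window = timedelta(hours=24) if imminent_harm else timedelta(days=15)
        return discovered_at + window

    found = datetime(2026, 2, 1, 9, 0)
    print(reporting_deadline(found, imminent_harm=False))  # 2026-02-16 09:00:00
    print(reporting_deadline(found, imminent_harm=True))   # 2026-02-02 09:00:00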

Whistleblower Protections

The TFAIA protects whistleblowers who report major health and safety risks related to frontier AI models. Frontier developers must clearly inform employees of these whistleblower rights and responsibilities. In addition, large frontier developers must establish a reasonable internal mechanism for anonymous reporting and must provide monthly updates to whistleblowers on the status of their investigations.

Enforcement

The TFAIA authorizes the California Attorney General to enforce the law, with civil penalties of up to US$1 million per violation. The law takes effect on January 1, 2026.

Key Takeaways

With the TFAIA, California has shifted the AI transparency landscape from voluntary industry standards to a mandatory legal regime. Notably, Governor Newsom has presented the TFAIA as a blueprint for other states, particularly in the absence of a comprehensive federal framework for AI regulation. While it remains too early to assess its nationwide impact, the TFAIA may contribute to a growing patchwork of state-level AI regulation, similar in some respects to the influence of the California Consumer Privacy Act on privacy laws. Its influence is already visible beyond California; for example, New York has advanced its own frontier AI legislation, the Responsible AI Safety and Education Act ("RAISE"), which has passed the state legislature and is awaiting the Governor's signature.

Finally, organizations with AI governance programs aligned to the European Union's AI Act may have a head start in adapting to the TFAIA, given shared themes of transparency, governance and incident management, although the detailed obligations under each regime are distinct.

White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.

This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.

© 2025 White & Case LLP
