California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (SB 53) into law Monday, establishing the first state-level regulations in the United States to impose mandatory transparency and safety-reporting requirements on leading AI companies.
The bill is the first law in the United States to explicitly regulate the safety of powerful frontier AI models. While other states have passed laws addressing certain aspects of AI, SB 53 creates a distinct regulatory framework focused on top developers. Governor Newsom stated, “California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance.” The law’s provisions are intended to give the public insight into how these advanced systems are developed.
The law requires leading AI companies to publish public documents detailing how they are following best practices to create safe systems. It also creates a new pathway for companies to report severe AI-related incidents to the California Office of Emergency Services. The act strengthens protections for whistleblowers who raise concerns about health and safety risks, shielding employees from potential reprisal for coming forward. This new reporting channel and the whistleblower protections are designed to increase accountability for the systems being developed.
Non-compliance with the law’s mandates can trigger civil penalties, which will be enforced by the California Attorney General’s office. The legislation attracted intense criticism from industry groups like the Chamber of Progress and the Consumer Technology Association. In contrast, AI company Anthropic endorsed the bill. Another major developer, Meta, called it “a step in the right direction,” signaling a degree of industry division on the new regulations.
Even with some support, companies expressed a clear preference for federal legislation to avoid a “patchwork” of state-by-state laws. In a LinkedIn statement published several weeks ago, OpenAI’s chief global affairs officer Chris Lehane wrote, “America leads best with clear, nationwide rules, not a patchwork of state or local regulations. Fragmented state-by-state approaches create friction, duplication, and missed opportunities.” This reflects a broader industry concern about navigating multiple, potentially conflicting, regulatory environments across the country.
On the same Monday, U.S. Senators Josh Hawley and Richard Blumenthal introduced a federal bill that would require top AI developers to “evaluate advanced AI systems and collect data on the likelihood of adverse AI incidents.” The proposal would create an Advanced Artificial Intelligence Evaluation Program within the Department of Energy, and participation would be compulsory, a measure that echoes the mandatory nature of California’s new reporting requirements.
These domestic legislative actions coincide with growing international calls for AI regulation. At the United Nations General Assembly last week, President Donald Trump remarked that AI “could be one of the great things ever, but it also can be dangerous, but it can be put to tremendous use and tremendous good.” The next day, President Volodymyr Zelensky of Ukraine told the U.N., “We are now living through the most destructive arms race in human history because this time, it includes artificial intelligence.”