Newsom Working Group Calls for Vital Transparency into AI Development
In September of last year, California Governor Gavin Newsom created a working group, led by renowned academics and policy experts, to prepare a report on frontier AI models to inform regulators and create a framework for how California should approach the use, assessment, and governance of this advanced technology. In March, the working group released a draft of its report for public comment, and this week, CDT submitted our feedback.
Our comments commended the working group for emphasizing the critical importance of transparency in AI governance. For years, researchers and advocates have argued that transparency is a vital lever for managing the risks of AI and making its benefits available to a variety of stakeholders: it enables rigorous research and provides the public with critical information about how AI systems affect their lives. Transparency requirements can also create conditions for companies to develop AI systems more responsibly, and help hold those companies accountable when those systems cause harm.
But as of now, essentially all transparency in the AI ecosystem depends on purely voluntary commitments from AI companies, which are not a stable foundation for managing AI risks. AI companies have proven all too willing to backtrack on their transparency commitments when their business incentives change. Recently, for example, Google released the state-of-the-art AI model Gemini 2.5 Pro without a key safety report, violating a previous commitment it had made. This reveals a vital role for regulators: ensuring that AI developers provide the transparency needed for safe, responsible, and accountable AI development.
At the same time, we pushed the working group to add more detail to its transparency recommendations, given that the most useful transparency measures are precisely scoped and backed by a clear theory of change. We called for the disclosures emphasized by the draft report to be complemented by further crucial information, enabling visibility not only into the technical safeguards a developer implements, but also into the internal governance practices the developer relies on to manage risk. We also called for visibility into how developers assess the efficacy of their safeguards, and into how they decide whether risks have been sufficiently mitigated to justify deploying a model. In addition, we urged the working group to promote visibility into how developers intend to respond to significant risks that materialize. Finally, we urged the working group to consider how developers ought to be incentivized to grant pre-deployment access to qualified third-party evaluators where warranted.