

Ensuring NIST’s AI Safety Institute Consortium Lives Up to its Potential

Speaking alongside the UK’s AI Safety Summit last fall, Vice President Kamala Harris announced the creation of a United States AI Safety Institute (AISI) to help realize the vision of NIST’s AI Risk Management Framework and address the evolving risks posed by increasingly complex AI systems. As the White House described at the time, the purpose of the AISI would be to develop technical guidance for regulators on issues ranging from content authenticity to the identification and mitigation of algorithmic discrimination, in partnership with external experts from civil society, academia, and industry.

At CDT, we’ve long held that such multistakeholder engagement is critical — AI is already affecting all sectors of the economy, is having a profound impact on people’s rights, and is raising questions about safety and national security that both echo conversations surrounding prior general-purpose technologies and present new challenges. Multistakeholder engagement is essential to ensure that not only developers and deployers of technology but also impacted communities, particularly those who are most marginalized, have a say in how that technology is developed and governed. Too often, civil society organizations face barriers to participating in technology governance processes, so when NIST officially established the AI Safety Institute and a multistakeholder consortium to support its work, we were excited for CDT’s AI Governance Lab to plug into the effort.

With the Consortium finally shifting into gear after a few weeks’ delay, we’re taking the opportunity to share our hopes for NIST’s effort, and what we’ll be watching for when it comes to meaningfully involving public interest stakeholders.

First, the AI Safety Institute should continue to center the full spectrum of risks that NIST itself has identified through its work on trustworthy and responsible AI in recent years, the work that motivated the AI Risk Management Framework (RMF). While generative AI and foundation models diverge from more traditional AI systems in some ways and may present novel risks, many elements of AI governance apply to AI systems of all kinds. When soliciting participation in the Consortium, NIST described a broad interest in promoting the development of trustworthy AI and its responsible use, but the working groups identified to organize the Consortium’s involvement appear at first glance to focus primarily on a narrow subset of relevant issues, such as evaluations of capabilities related to chemical and radiological weapons and the security of dual-use foundation models. Foreseeing harms of new technologies is an important part of risk management — but more familiar, existing risks such as privacy invasion, ineffective systems, and discrimination remain unsolved challenges. We hope NIST will continue to prioritize the important goal of helping practitioners more effectively operationalize the RMF even as it takes on additional harms, applying the contextual lens that is needed when considering risks of all kinds.

Second, the AISI can demonstrate its commitment to a truly inclusive process by making a particular effort to integrate the perspectives of public interest Consortium members who have fewer resources to commit to the collective effort. These perspectives are necessary to spot and remediate issues that will directly affect communities, but they risk being crowded out by organizations that can afford to have multiple representatives commit significant time to following and shaping the Consortium’s agenda. The best outcome of these efforts will be recommendations and actionable practices that reflect a true consensus, balancing the interests of protecting rights, ensuring safety, and advancing innovation. Without care, however, the process is subject to inadvertent capture by motivated interests. We are optimistic that participating in the Consortium alongside civil society peers can enable public interest priorities to be valued and incorporated, and we look to NIST and the AISI leadership for support toward that outcome.

Finally, NIST should continue to prioritize, across all of its AI-related efforts, the development and adoption of rigorous methods to identify, measure, and remediate the risks and harms of AI. Trustworthy AI needs trustworthy measurement, but too often, the methods being deployed to support policy proposals or to justify the rapid and widespread release of new tools rest on shaky ground. Measurement validity is particularly important when it comes to AI safety, since consequential decisions about the launch and use of advanced systems will increasingly be informed by the results of these measurements — and improperly scoped measurements could lead to faulty decisions that threaten people’s safety, access to opportunity, and well-being. It can be tempting to focus measurement narrowly on model “capabilities” and security risks, but measurement must encompass a broader set of risks, and the definitions of those measures must be transparent. NIST has a long history of advancing measurement science, so we look forward to contributing to these efforts to ensure that measurements are valid, reliable, and contextualized in the sociotechnical nature of the AI systems they assess. (Read more about why we think measurement is a critical piece of the AI governance puzzle.)

We are encouraged that the U.S. has recognized the important role of NIST in developing actionable tools and guardrails to guide practitioners in deploying more responsible AI systems, and by the apparent commitment to making sure this ongoing work reflects the input of the broader ecosystem of stakeholders. At the same time, we hope to see stronger governance of the Consortium, including clarity around how decisions are made, how industry contributions will be incorporated while minimizing conflicts of interest, and commitments to centering not only the most active voices but also the perspectives of those most likely to be impacted by the recommendations and standards coming out of NIST’s efforts (and the broad topical focus that requires).

Inattention to this critical dynamic within the Safety Institute and Consortium could allow promising momentum to be eroded by too strong a focus on “hard” security issues and narrowly scoped definitions of safety, at the expense of tackling well-known but under-addressed areas of risk. And while there is clear value in including practitioner voices, incautious reliance on their contributions could fuel concerns about undue industry influence over the recommendations that emerge. These scenarios are not inevitable, but avoiding them will take careful leadership and attention; investment in a robust multistakeholder process will be an important tool toward this goal. Such care is all the more important as NIST navigates aggressive timelines for meeting its obligations under the executive order and the significant resource constraints it faces. We look forward to diving into a variety of workstreams to advocate for public interest priorities and to reflect impressions of the process back to the broader community as the work takes shape.