AI Policy & Governance, Equity in Civic Technology
Analysis of Federal Agencies’ Plans to Comply with Recent AI Risk Management Guidance: Inconsistencies with AI Governance May Leave Harms Unaddressed
Federal agencies are just one week away from their December 16th deadline to publish updated AI use case inventories detailing their implementation of the minimum risk management practices required by the Office of Management and Budget’s (OMB) memorandum on agency use of AI (M-24-10), titled Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. These practices matter because they ensure that agencies take appropriate steps throughout the planning, acquisition, development, and use of any AI system to identify and address potential harms as they grapple with whether and how to use this technology.
Agencies’ recently published plans for complying with M-24-10 lay the groundwork not only for meeting this deadline but for advancing robust and transparent AI governance practices across the whole of the federal government. CDT read all 38 publicly available compliance plans so you don’t have to. In reading them, we found several inconsistencies in AI governance that could lead agencies to under-identify, or fail to address, potential harms resulting from AI use. At the same time, several federal agencies are adopting innovative approaches to AI governance that warrant closer consideration as potential models for other agencies.
While some of these compliance plans incorporate encouraging practices, there is still significant room for improvement. Agencies should now take steps to build on these initial plans and establish the internal infrastructure to sustain this work going forward.
Federal agencies have inconsistent approaches to AI governance
M-24-10 charges each agency Chief AI Officer with “instituting the requisite governance and oversight process to achieve compliance with this memorandum and enable responsible use of AI in the agency.”
Agency compliance plans provide important insight into the overarching process that agencies are putting in place to govern their use of AI. Each compliance plan outlines how agencies will 1) solicit and collect information about AI use cases across the agency, 2) review all use cases for their impact on the rights and safety of the public, and 3) certify that each use case is in compliance with M-24-10. Indeed, an agency’s ability to manage the risks of AI will only be as strong as its governance and oversight practices.
However, the plans we reviewed reveal that agencies are adopting inconsistent approaches to fulfilling these obligations. This could be a result of the varied maturity levels between agencies’ AI governance programs, differing strategies for integrating these new obligations within existing agency operations, and significant differences in the level of rigor that agencies applied to fulfilling their M-24-10 requirements. AI governance plans vary widely in whether they:
- Create multi-phase, multidisciplinary AI governance processes;
- Establish new AI governance protocols;
- Review agency use cases beyond what is required in M-24-10; and
- Address civil rights and privacy explicitly.
Create multi-phase, multidisciplinary AI governance processes
Robust and accountable AI governance requires that multiple levels of review and expertise are engaged throughout the planning, acquisition, development, and use of an AI system. Some agencies have already taken steps towards achieving this goal by creating clear multi-phase, multidisciplinary processes for the review and certification of agency AI uses.
- The Department of Housing and Urban Development (page 10) adopted “review gates,” which establish clear stages of approvals throughout the deployment process that must be met before an AI use case can move forward;
- The Department of Labor (page 11) established a “Use Case Impact Assessment Framework” that guides all risk determinations, and requires that all such determinations are reviewed by policy offices throughout the agency including the Civil Rights Center, Office of Disability Employment Policy, the Privacy Office, and several others;
- The Department of Veterans Affairs requires that the agency’s AI Governance Council approve the agency’s complete annual AI inventory and use case determinations, following review by both the Chief AI Officer and agency subcomponents; and
- The Department of State (page 11) requires that any significant changes to an AI use case be independently reviewed and evaluated by agency governance bodies.
Together, these approaches help ensure that AI governance procedures are uniformly implemented across an agency and that AI systems are subject to multiple phases of review that involve different groups of experts within an agency.
But many other agencies are implementing these practices on a more ad hoc basis. Most concerning, some agencies have no specific cross-agency review process for making important decisions, such as assessing AI use cases and determining which ones impact rights or safety. Instead, these agencies leave this oversight authority solely to the Chief AI Officer, rather than following the more common practice of directly involving an agency’s AI Governance Board. For example, the Chief AI Officers at the Departments of Agriculture (page 15) and Defense (pages 3-4) are given sole responsibility for managing the documentation and validation of agency risk management practices.
Without additional agency-wide coordination and oversight, these deficiencies could result in significant gaps or errors in agencies’ AI governance practices, leaving the public vulnerable to potential AI harms.
Establish new AI governance protocols
Some agencies are embedding standardized decision-making processes for the review and approval of AI systems into existing governance processes. For instance, agencies like the Department of the Interior (page 10), Department of Veterans Affairs, Federal Trade Commission (page 2), and Office of Personnel Management (page 12) are integrating their AI governance work into existing agency-wide risk management and information technology review programs.
Other agencies — including the Department of Health and Human Services (page 4), Department of Labor (page 11), and General Services Administration — are taking a different approach by creating new systems to support the intake and review of AI uses. These agencies are developing standard operating procedures, questionnaires, and other standardized documentation to help their subcomponents fulfill their AI governance obligations.
Both approaches offer benefits and drawbacks. Creating new systems allows agencies to establish processes specifically tailored for AI technologies, whereas adapting existing processes enables agencies to leverage already available resources. We look forward to learning more as agencies continue to fulfill the requirements in M-24-10.
Review agency use cases beyond the requirements of M-24-10
Many agencies are also implementing oversight mechanisms to create routine review processes that supplement their required annual review of all AI use cases. For example, a subset of agencies have created procedures for the consistent, semi-annual review of all AI use cases, including the Department of Agriculture (page 4), Department of Labor (page 5), Nuclear Regulatory Commission, U.S. International Development Finance Corporation (page 17), and Department of the Treasury (page 3). Other agencies — like the Department of Homeland Security (page 6), Department of Transportation (page 6), Department of the Treasury (page 4), Social Security Administration (page 3), and Department of the Interior (page 4) — are standing up specific processes to audit and re-review, for compliance with M-24-10, any use cases excluded from the required risk management practices or non-public national security-related use cases.
Most agencies, however, have committed only to reviewing their use cases annually, which may be insufficient to keep pace with the rate of AI adoption in government, and many also lack specific protocols for auditing excluded or non-public use cases.
Address civil rights and privacy explicitly
A cornerstone of M-24-10 is its focus on protecting the rights and safety of the public. As such, it directs every Chief AI Officer to coordinate “with officials responsible for privacy and civil rights and civil liberties on identifying safety-impacting and rights-impacting AI within the agency.” The extent of such engagement, however, varies widely between agencies.
Promisingly, the vast majority of agencies have appointed senior civil rights, civil liberties, human rights, and privacy officials to their AI Governance Boards, which every agency is required to establish under M-24-10 to oversee both risk management and innovation. But this does not go far enough. Agencies need to take additional steps to embed these officials into the decision-making process for ensuring that AI systems are in compliance with M-24-10. For instance, several agencies — including the Department of Energy (page 1), Department of Labor (page 5), Department of Transportation (page 3), General Services Administration, and the Office of Personnel Management (pages 4-5) — have already done this by creating separate, dedicated working groups with civil and human rights, privacy, and sociotechnical expertise that are charged with the review and oversight of rights- and safety-impacting use cases. This structure ensures that civil rights and privacy experts have a dedicated seat at the table and are able to provide direct input about agencies’ highest risk use cases.
While this is a positive development, a majority of agencies have no civil rights or privacy officials in substantive decision-making roles within their broader AI governance process. Instead, many agencies only have representation from such officials in a purely advisory capacity through their AI Governance Boards.
To address this shortcoming, agencies should prioritize integrating senior civil rights and privacy officials into their decision-making processes, especially for any rights- or safety-impacting use cases. Agencies should also consider opportunities to upskill their offices of civil rights and to engage with external civil rights and privacy advocates.
Promising practices that others should consider
Agencies’ compliance plans also reveal a range of emerging and innovative practices that show promise as potential tools to increase the effectiveness of agencies’ AI governance. These include the following examples:
- Partnering with academia and other experts: The Department of Labor (page 3) partnered with Stanford University to develop the agency’s internal guidance on M-24-10 compliance, which includes agency-specific risk standards.
- Creating independent review processes: The Department of Labor (page 12) established a third-party review process for any use cases where the agency’s Chief AI Officer was directly involved in developing the AI system, ensuring the independence and accuracy of all use case evaluations.
- Centralizing permissions for staff to use AI: Several agencies, such as the Department of the Treasury (pages 7-8) and the Social Security Administration (page 5), established dedicated processes to prevent unauthorized use of online AI systems and to remove any unapproved systems from agency networks.
OMB should encourage agencies to continue experimenting with such approaches, and the CAIO Council should leverage its position as the central interagency forum on AI governance to facilitate sharing best practices and collaboration between agencies. As a starting point, other agencies should look to these practices as examples to inform and supplement their own AI governance work.
Conclusion
Agencies’ M-24-10 compliance plans are a promising start, and they reveal that many agencies are well underway with their work to complete updated use case inventories by December 16th. Ultimately, however, the impact of these compliance plans will only be as strong as their implementation. As we head into the new year, it is critical for agencies to keep up the momentum and urgency around implementing these safeguards.