AI Action Plan Should Promote AI Transparency, Accuracy, Effectiveness and Reliability, CDT Says
The Center for Democracy & Technology (CDT) submitted comments to the Networking and Information Technology Research and Development National Coordination Office on the highest priority actions that should be in the new AI Action Plan required under Executive Order 14179. As this executive order explains, the AI Action Plan would “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”
Our comments identify several well-established principles for trustworthy and effective AI that have bipartisan support and should form the basis of the AI Action Plan. During his first term, President Trump issued Executive Orders 13859 and 13960, and the Office of Management and Budget issued Memorandum M-21-06; together, these documents articulate principles that include:
- Evaluating and addressing risks to people’s privacy, civil rights, civil liberties, and safety;
- Improving transparency and accountability to the public;
- Ensuring accuracy, reliability, and effectiveness; and
- Incorporating public input.
These principles were incorporated into the National Institute of Standards and Technology’s (NIST) consensus-driven AI Risk Management Framework (AI RMF), and Congress has endorsed similar principles on a bipartisan basis, as illustrated by the Bipartisan House Task Force on AI’s report and the Bipartisan Senate AI Working Group’s roadmap. Companies have also voluntarily adopted similar principles in their own AI governance commitments.
Our comments recommend several elements that the AI Action Plan should include to advance these goals:
Continuing NIST’s work to develop guidance for AI governance
- NIST’s vital role in AI governance is to develop voluntary standards and evaluation and measurement methods grounded in technical expertise regarding how AI systems work, how AI systems can cause or contribute to risks to people’s rights, and how those risks can be mitigated.
- The AI Action Plan should direct NIST to continue developing standards through a process that meaningfully integrates civil society expertise to ensure that risks to communities are spotted and addressed and to support greater understanding of how a system’s design and capabilities affect its behavior and performance.
- The standards-development process should center not only prospective security risks, but also current, ongoing risks such as privacy harms, ineffectiveness of the system, and discrimination.
- NIST also provides necessary expertise on the valid and reliable measurement of different qualities of an AI system, such as safety, efficacy, or fairness, so standards development should include a multifaceted approach involving multiple methods to measure any given quality.
Ensuring the use of trustworthy AI in the federal government
- The AI Action Plan should advance safe, trustworthy, effective, and efficient AI in government service delivery and operations. Realizing the potential of AI to modernize government requires enabling responsible uses through robust guardrails that protect individuals’ safety, privacy, and civil liberties.
- The AI Action Plan should center six best practices to guide federal agencies’ use of AI: risk assessment and mitigation, testing and evaluation, centralized governance and oversight, privacy and security, public engagement, and transparency.
- In the specific context of law enforcement, the AI Action Plan should protect due process rights by requiring that people accused of a crime based in part on evidence or leads generated by an AI system be notified of that system’s use.
- Given alarming reports about DOGE’s use of AI systems to make a host of high-risk decisions across the federal government, these measures are more important now than ever.
Aligning the use of AI for national security purposes with civil liberties and the Constitution
- Many AI use cases in national security-related decisions are high risk because life or liberty is at stake, yet they are not made public, which can shield the abuse and misuse of AI systems from scrutiny.
- Where classification needs prevent public reporting of AI use cases in a national security setting, the AI Action Plan should ensure effective reporting to relevant congressional committees and support the establishment of an independent oversight body for such use cases.
- The AI Action Plan should also ensure effective governance and oversight through coordination between Chief AI Officers and through an independent oversight mechanism within the Executive Branch.
Advancing competitiveness by supporting openness in the AI ecosystem and investing in the National AI Research Resource
- The AI Action Plan should set a course that ensures America remains a home for the development of open models, which can accelerate AI innovation and facilitate the rapid and responsible adoption of AI by businesses.
- Open models also mitigate the concentration of power and control over the sharing of knowledge and expression that arises in a closed AI ecosystem, where many different entities rely on the same few companies’ closed models.
- The AI Action Plan should require robust standards for agencies to monitor developments in open models’ capabilities and identify potential public safety and national security risks, rather than prematurely imposing export restrictions that would undercut American competitiveness and cede AI leadership.
- The AI Action Plan should also prioritize implementation of the National AI Research Resource, which can strengthen the nation’s AI research infrastructure by democratizing access to the computational resources, data, and tools needed for cutting-edge AI development.
Shaping responsible private sector use of AI
- Agencies have the sector-specific expertise needed to help companies adopt practical governance measures that ensure their AI systems are effective, fit for purpose, and safe; do not undermine people’s rights; and comply with long-standing legal obligations.
- The AI Action Plan should direct agencies to take both regulatory and non-regulatory approaches, including pursuing new enforcement actions, adapting their regulations, and providing guidance, to hold companies accountable as they incorporate AI into their business practices.
- The AI Action Plan should include formal interagency coordination mechanisms to help agencies exercise their individual authorities while collectively ensuring that companies routinely apply principles of trustworthy AI.
To advance American AI leadership, the AI Action Plan should ensure that public and private sector development and use of AI advances fundamental American values.