AI Policy & Governance, Free Expression, Privacy & Data

CDT Comments to NTIA on AI Accountability

Pasted from the introduction:

***

The Center for Democracy & Technology (CDT) respectfully submits these comments in response to the National Telecommunications and Information Administration’s (NTIA) request for comments regarding artificial intelligence (“AI”) system accountability measures and policies. CDT is a nonprofit 501(c)(3) organization fighting to advance civil rights and civil liberties in the digital age. CDT’s focus includes the impact of data- and algorithm-driven discrimination, as well as accountability for the entities involved in developing and deploying such systems.

In addition to these comments, we invite the NTIA to refer to our recent agency comments and publications on worker surveillance, tenant screening, the relationship between trade negotiations and AI accountability, large language models’ performance in languages other than English, digital identity, risk assessments for automated decision-making, public housing, and the use of generative AI in education, all of which pertain to aspects of AI accountability in specific sectors. The recent testimonies of CDT President and CEO Alexandra Reeve Givens before the Senate Committee on Homeland Security & Governmental Affairs, the House Committee on Energy and Commerce, and the Senate Committee on the Judiciary also address relevant topics.

The comments below begin with an illustrative (though not exhaustive) overview of AI-associated harms. A key goal for accountability should be to minimize these harms and to provide redress when they occur. Because “artificial intelligence” is such a capacious and flexible phrase, we specifically address three types of algorithmic systems in turn: automated decision-making tools; systems for content analysis, moderation, and recommendation; and generative AI, including large language models (LLMs). We discuss how the algorithmic systems in question are used in the public and private sectors and the harms that ensue for individuals and groups of people. The seriousness of the potential harms makes clear that accountability for them cannot be left simply to industry, but must involve all key stakeholders, from the government and regulators to civil society and independent experts to communities and individuals harmed or otherwise affected by the use of AI.

Next, we present some high-level considerations for AI accountability policy. We discuss four major components of the AI accountability toolbox: transparency, explainability, and interpretability; audits and assessments; laws and liability; and government procurement reform.

  • Transparency is the foundation of accountability. That starts with the disclosure that AI is being used in, for example, a decision about benefits. Of course, merely knowing the role of AI is not sufficient. Transparency also requires the disclosure of information such as how an AI system was trained, how it arrives at decisions, and an explanation of the decision or output in a particular case.
  • Audits and assessments are widely understood as fundamental to accountability, but important questions remain unanswered. These include (a) how to ensure auditors have sufficient independence, expertise, and resources; and (b) how to develop the standards to be used by auditors, recognizing that they will embody value judgments that should be made only after input from all affected stakeholders.
  • In many cases, existing laws such as civil rights statutes provide basic rules that continue to apply, but those laws were not written with AI in mind and may require change and supplementation to serve as effective vehicles for accountability.
  • Government procurement laws and policies for the acquisition of AI systems and services can provide a model and drive development of best practices.

As an illustrative example, we then analyze how these accountability mechanisms work in practice and could be improved in the particular context of AI in hiring and employment, in which CDT has extensive internal expertise and has published standards for civil rights accountability.

We close by discussing accountability for harms that result from content produced by generative AI. Generative AI tools may be used by individuals for a wide range of creative and expressive purposes, at least some of which are protected by the First Amendment. Generative AI tools are developed and used over multiple phases and by multiple types of actors, which makes accountability complicated. Nevertheless, each of the four accountability tools described above applies to generative AI.

There likely is not a one-size-fits-all liability model that adequately protects individuals’ rights in relation to generative AI tools, but it will be vital to map how existing legal principles across criminal and civil law would apply to cases involving generative AI to ensure that we do not end up with a liability gap that leaves serious threats to individuals’ rights unaddressed. Policymakers should pursue accountability frameworks that focus on risk assessment and mitigation and incentivize the implementation of safeguards against abuse…

Read the full comments here.