CDT’s October 2020 Comments to NIST on Principles for AI Explainability

The Center for Democracy & Technology (CDT) thanks the National Institute of Standards and Technology (NIST) for laying the groundwork for this important aspect of developing trust in automated systems, and appreciates the opportunity to provide comments on NIST’s Four Principles of Explainable Artificial Intelligence. We offer the following comments in support of NIST’s principles, along with our hope that NIST will continue to lead this and other conversations to establish standards, best practices, and a common understanding around the development, use, and assessment of automated systems. As always, CDT looks forward to future engagements with NIST and offers the expertise of its staff and that of the GRAIL Network wherever it can be helpful.

CDT generally agrees with the four principles set forth by NIST. Together, the principles establish that an explanation must accurately describe the reasoning that led to a system’s output, and that the information disclosed should give its audience the right level of knowledge and understanding for their circumstances, including an understanding of whether particular inputs or outputs fall within the system’s range of competence. These principles should provide a solid foundation on which to build a larger, more detailed discussion of NIST’s approach to building trust in automated systems. Nevertheless, we offer the following suggestions to improve the utility of NIST’s principles.

Read our suggestions and the full comments here.