
CDT’s Comments to the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) on the Draft Ethics Guidelines for Trustworthy AI

The Center for Democracy & Technology supports the efforts of the High-Level Expert Group on Artificial Intelligence (AI HLEG) to develop guidelines for trustworthy AI and appreciates the opportunity to comment on this draft. In particular, we commend the group for affirming a rights-based approach to governing AI, for moving beyond the development of principles, and for acknowledging the need for context- and domain-specific implementation of the values discussed in these guidelines. While we agree that trustworthiness is a key objective for any system, the HLEG must also acknowledge the limitations of current methods for mitigating bias in machine learning models.

In many contexts and applications, truly trustworthy AI remains hypothetical. Moreover, trustworthiness depends not only on the ethical purpose and technical robustness of the model or application but also on the governance of the entire societal context or legal system within which an AI application sits. We recommend that the HLEG place greater emphasis on (1) the importance of mechanisms and processes for continually interrogating and challenging AI systems from both the inside and the outside and (2) the importance of assessing the entire system (including underlying policies, laws, and human-technology interactions) that surrounds the AI.