AI Policy & Governance, European Policy, Privacy & Data

Just Be Ethical: High Level Guidelines on AI Are Fine, But Offer Little Guidance

Last week, the European Commission's High-Level Expert Group on Artificial Intelligence (HLEG) released its report, "Ethics Guidelines for Trustworthy AI." The report is part of the Commission's efforts to address the risks and benefits of AI, and it aims to establish a framework through which developers, regulators, and other stakeholders can assess whether systems involving AI comport with fundamental human rights, function as intended, and are resilient to attacks and failure. The guidelines are not binding, but "put forward a set of seven key requirements that AI systems should meet in order to be deemed trustworthy." CDT provided comments on the HLEG's draft guidelines in February 2019.

Overall, the report represents a good-faith effort by the Commission to align developing technologies with more established norms of human rights and values. CDT is pleased that the guidelines incorporated our suggestion to acknowledge that assessments of automated decision-making systems should consider not only the technical aspects of a system but also its societal and institutional aspects. This broadened scope helps illustrate that systemic discrimination and other undesirable effects often become embedded in new applications of technology yet remain hidden beneath a veneer of unbiased objectivity associated with machine-based reasoning.

CDT also appreciates that the guidelines emphasize the importance of considering the impacts of AI systems on vulnerable groups and the need for continual assessment. Further, the guidelines suggest that developers and operators of AI systems should ensure that people have accessible ways to seek redress for any harmful impacts of those systems. This is particularly important for vulnerable and historically disadvantaged groups, for whom harmful impacts can be exacerbated while opportunities for meaningful redress are more limited.

However, one major drawback of the guidelines is that they offer only high-level guidance. As a result, they may be difficult to translate into practice, may not accommodate real-world tensions, and they avoid some critical and timely questions about whether certain applications of AI are so risky or so misaligned with human values that they should not be used at all. Although the guidelines designate four high-level concepts (respect for human autonomy, prevention of harm, fairness, and explicability), they offer little guidance as to which particular set of values is implied and no guidance as to who should decide among them. Without resolving these questions and offering more detailed guidance on assessing AI systems in the real world, the guidelines remain an expression of ideals without a mechanism for applying them.

The ambiguity around the use and application of the word "ethics" in the guidelines, and in the tech industry more generally, creates a separate problem that some are calling "ethical whitewashing": the idea that a company might claim that its products or services are "ethical," or developed according to ethical standards, without offering much detail about what those standards entail, how they apply to the product or service, or whether anyone else would agree with the assessment.

Given this trend, it is especially disappointing to learn that some of the clearest statements about ethical boundaries for AI were removed from the report. To effectively shape the development and use of AI systems, policy guidelines need to provide enough detail and certainty that they can be applied in the real world and that others can verify efforts to comply with them. To their credit, the guidelines acknowledge the need to apply different ethical principles depending on the application of a given AI system. However, they do not clarify which principles should apply to which applications, or how one should evaluate applications against any set of principles beyond the four high-level tenets above.

CDT hopes that, as it proceeds through a "pilot phase" toward updating the guidelines next year, the HLEG will strengthen and refine them with fewer implied assumptions about the application of ethical principles in AI assessments, more discussion of how to balance tensions between ethical principles, and examples of contextual application.