
EU’s AI Act: CDT Europe responds to the European Commission’s Proposal on High-Risk Classification

Also by Rachele Ceraulo, CDT Europe Advocacy Intern

We are approaching the final weeks of negotiations on the EU’s AI Act. Last week, the European Commission’s compromise text on the classification of high-risk AI systems was leaked to the media. The Commission’s decision to table this compromise is telling: it indicates that high-risk classification remains a point of intense negotiation between the parties. It is concerning that the provisions of the AI Act addressing the uses of AI that pose the highest risk to human rights are in serious danger of being watered down. This comes on top of proposed Article 6, which would allow for a self-assessment-based carve-out from high-risk classification. The combination of Article 6 and this Commission proposal could fundamentally undermine the vital protections that high-risk classification offers for certain uses of AI.

The Compromise Text

The compromise text introduces three criteria that would exempt certain AI systems used in high-risk contexts from the obligations of the high-risk regime. According to the compromise, entities would be exempt from compliance under the following circumstances:

  • Executing low-complexity tasks;
  • Confirming or improving tasks that are accessory to human assessments; or
  • Performing tasks preparatory to an assessment.

If the AI system meets one of these three exemptions, it would be deemed to not pose a significant risk of harm, and thus would not qualify as high-risk under the Act. 

The Commission provided some practical examples of these derogations. For the first two exemptions, it offered the examples of an “AI system for recruitment or selection of natural persons, notably for […] screening or filtering applications” and an “AI system intended to be used for recruitment or selection of natural persons, notably for advertising vacancies.”

The document also outlines a new self-assessment mechanism under Article 6, whereby a provider must assess whether their AI systems are high-risk and draw up documentation supporting that determination. Providers could also be compelled to submit these documents to the national competent authorities upon request.

Problems with the Commission’s Proposed Compromise Text

The problem with Article 6 and the Commission’s self-assessment proposal is that such an approach would put the burden on regulatory authorities to establish whether a company’s assessment was accurate. Regulatory authorities would have to sift through and make sense of companies’ own documentation, a process that would require massive resources that are not realistically going to be made available. Furthermore, such an approach incentivises companies to self-assess their systems as not high-risk in order to avoid the further requirements associated with that classification.

What’s more, providers are under no obligation to notify the supervisory authorities that they are choosing to exempt themselves from their obligations under the high-risk regime. There is also no penalty mechanism foreseen for abusive uses of this system, meaning that providers who miscategorise their systems would not be fined.

Criteria of High-Risk Systems

The proposed exemptions for systems that would otherwise qualify as high-risk likewise have the potential to create confusion or give rise to harm. Whether something is “a narrow procedural task of low complexity” is vague. It is true, to use the example cited by the Commission, that simply converting unstructured data (e.g., a scanned resume) into semi-structured or structured data (e.g., a searchable resume) would usually not pose a substantial risk of harm. However, there is a very blurry line between using AI to make job applications or similar materials more accessible and understandable to human recruiters, and using AI to analyse those materials in a way that influences recruiters’ decisions.

The “accessory” and “preparatory” exceptions are also cause for serious concern. Repeated studies have shown that humans are inherently likely to defer to algorithmic recommendations, and exempting systems cast as merely preparatory or as an adjunct to human decision-making would create exceptions that swallow the rule. A company could claim that recruiters have the final say, even though, in practice, an algorithmic recommendation could be the deciding factor. CDT’s own research and our analysis of the EU AI Act have previously made clear the high risk of discrimination posed by the use of AI in recruitment and hiring.

Recommendations

For the reasons outlined above, and in order to safeguard the core integrity of the AI Act’s human rights protections, it is crucial that the Article 6 self-assessment provisions be dropped and that the categorisation criteria for high-risk systems remain as initially envisaged in the European Commission’s text. As CDT has previously highlighted, a risk-based approach can help ensure proportionate regulation, but to appropriately protect human rights it needs to integrate a rights-based approach. That is why, given the above-mentioned challenges in categorising risks, mandatory inclusion of fundamental rights impact assessments (FRIAs) will be crucial to ensuring a robust, rights-protective approach.
