Also authored by Claire Fourcans
In 2018, it was revealed that an Amazon automated recruiting system had taught itself that male candidates were preferable to female ones by observing patterns in the male-dominated résumés of Amazon's workforce. Similarly, as the Center for Democracy & Technology (CDT) established in a report on disability discrimination in algorithm-driven hiring tools, such systems also tend to discriminate against disabled workers. Given these and other examples, it is not surprising that the Artificial Intelligence Act, currently under discussion at the European Union (EU) level, classifies as high-risk the artificial intelligence (AI) systems used at different phases of employment, including those used to make decisions in the course of employment (Annex III 4) and those used to recruit and select job candidates by screening and evaluating their applications (Annex III 4.a). The latter are the focus of this article.
AI systems used to screen CVs, test personality, aptitude and cognitive abilities, or establish whether candidates are a "cultural fit" can be particularly discriminatory against people with disabilities, as they may require applicants to use tools that are not accessible to them. For example, a test that requires spoken answers is not accessible to people who cannot speak (whether temporarily or permanently). Such systems may also test personality traits, such as optimism, that are not necessary to perform every job. In another recent employment-related example, an Uber AI facial recognition system locked Black drivers out of their accounts because the automated facial-verification software wrongly decided their selfie pictures were of someone else.
To prevent such discrimination and guide the development of employment decision technologies, CDT has – together with other fundamental rights and technology organisations – been leading the way by publishing the Civil Rights Standards for 21st Century Employment Selection Procedures, which expand on the Civil Rights Principles for Hiring Assessment Technologies. Those principles and standards are vital steps that can provide guidance for the EU AI Act. At the moment, however, the EU AI Act does not incorporate, much less make binding, any of those principles and standards. It also fails to ensure that the existing protections against discrimination in hiring provided by EU equality law remain effective when recruitment is done via an AI system. Under articles 9 and 10 of the 2000 Directive on equal treatment in employment and occupation and articles 17 and 19 of the Recast Directive, EU Member States are obliged to provide an effective right to remedy in cases of discrimination in hiring, with a reversed burden of proof requiring the employer to prove that recruitment was not discriminatory. Yet, as CDT previously pointed out, the EU AI Act neither includes avenues for redress that would ensure victims have access to judicial review in cases of discrimination, nor aligns its provisions with the existing obligations under these Equality Directives.
The questions therefore remain: how can the EU AI Act establish a legal framework that respects both EU law and international human rights standards, as enshrined in the UN Guiding Principles on Business and Human Rights as applied to the technology sector? And how can the negotiated legislation prevent and respond to discrimination by AI systems in hiring, a challenge made harder by the opacity and complexity of AI systems, the so-called "black box" effect? The EU AI Act must ensure, as underlined by Equinet, that affected persons can meaningfully challenge decisions taken by AI systems as discriminatory and, where discrimination is established, seek remedy before a judge.
The Limited Avenues for Remedies in the EU AI Act
Under the proposed EU AI Act, AI systems classified as high-risk will have to go through a conformity assessment. The proposed conformity assessment mechanism does not, however, offer access to any concrete right to remedy. This reflects how end-users are largely absent from, and not explicitly considered in, the proposal, as noted by the Ada Lovelace Institute.
To bridge that gap, the Council has proposed allowing complaints from individuals and NGOs to the national supervisory authority (art. 63), and the European Parliament rapporteurs have proposed an entirely new chapter on remedies for natural and legal persons. Under these amendments, claimants could lodge a complaint with their national supervisory authority. If that authority neither acts on the complaint nor informs the claimant about the proceedings that follow it, claimants would have an effective right to judicial remedy, which they could exercise by appealing the national authority's decision to a court. This proposal by the EP rapporteurs is supported – with some alterations – by many other MEPs, who have submitted 15 joint amendments on this point to the co-rapporteurs' draft report.
CDT supports this proposal from the co-rapporteurs and calls on the Council to support the integration of the proposed complaint and appeal mechanisms into the text of the EU AI Act. CDT also calls for these mechanisms to be extended, as proposed by the EP rapporteurs, so that remedy can be sought for infringements by both providers (e.g. those who develop the AI hiring system) and users (e.g. employers deploying it). In the context of hiring, it is crucial that employers can be held accountable for discrimination resulting from the use of AI recruitment systems. In addition, CDT urges the co-legislators to align these mechanisms with existing EU equality laws so that victims can access national employment courts as seamlessly as possible. As we have previously argued, national-level supervisory authorities risk being too removed from the day-to-day realities of discrimination, and from the scale at which AI is used in decision-making, to have a concrete impact on individual cases. This is why close interaction between the national supervisory authorities and labour courts will be key.
The Proposed AI Liability Directive, Another Missed Opportunity to Ensure Access to Effective Remedy and Equality Protection
The risk of failing to align with existing EU equality law or to provide an effective right to remedy is not confined to the EU AI Act. Other related legislative proposals similarly lack the harmonisation and completeness necessary to protect fundamental rights. This is the case for the proposals issued by the Commission in September 2022 on product and AI liability. Under the revised Product Liability Directive, companies developing AI systems, as well as those using them, can be held liable only when a person has died, been injured or lost data, which offers no progress in cases of discrimination in hiring.
The proposed AI Liability Directive establishes that deployers and users can be held liable when the victim establishes that they committed a fault that resulted in the damage. This fault can be an act or an omission, including where deployers or users have not respected their duty of care and have failed to adopt conduct that would avoid damage to 'legal interests' such as fundamental rights. The Directive also includes specific provisions for high-risk AI systems: national courts can order the disclosure of evidence and, where the defendant fails to disclose it, can presume non-compliance with the defendant's duty of care. National courts can also presume a causal link between the fault and the damage when certain conditions are met.
This proposed legislation is a step forward in ensuring effective access to remedy for discrimination in the use of AI hiring systems, as it could be established that an employer breached their duty of care by failing to ensure that their AI hiring system does not discriminate against certain categories of people. Yet the proposed AI Liability Directive does not refer to the existing protection under EU equality laws, which shifts the burden of proof onto the employer in cases of discrimination in hiring. This is an important gap because, despite its victim-friendly provisions for high-risk systems, the AI Liability Directive does not make it any less difficult for claimants to prove discrimination by the deployers or users of AI systems. As highlighted by a study by the KU Leuven Centre for IT and IP Law, victims will need legal advice and resources to open the "black box" of AI systems. The opacity and complexity of such systems will make it very difficult to prove the fault of the employer or the deployer and to show how the use of the AI system caused the discriminatory outcome. This comes on top of the opacity in how employers use a system's outputs, for example when they misrepresent how much weight those outputs carry in their hiring decisions. Furthermore, the proposed AI Liability Directive will need to be clarified, particularly around the notion of "duty of care", and aligned with existing rules, so that persons affected by AI systems have effective access in practice to remedy for the damages those systems cause.
The Proposed AI Legislative Framework Must Effectively Enforce the Existing EU Right to Not Be Subject to Discrimination in Hiring
In light of the gaps mentioned above, CDT calls for the proposed AI Liability Directive to be aligned with existing EU equality laws and to offer clear guidelines for employers using AI systems in hiring. These guidelines should be inspired by the Civil Rights Standards for 21st Century Employment Selection Procedures. In addition, CDT calls on the negotiators of the AI Liability Directive to adopt a strict liability regime for high-risk AI systems, as suggested by the European Parliament in its 2020 resolution on a civil liability regime for artificial intelligence.
It is imperative to create a legal framework for the deployment of AI systems in the EU that offers the highest level of protection for fundamental rights. This framework must ensure that EU citizens can effectively enjoy the protection of existing EU equality laws, in particular against discrimination in hiring, and benefit from an effective right to remedy as defined by international human rights law standards. Without this, the proliferation of AI systems threatens to create widespread denial of access to justice for EU citizens.