

Access to Justice and Effective Remedy in the EU AI Act: The State of Play

The EU AI Act, which seeks to regulate the use of artificial intelligence so that it respects human rights and the rule of law, is currently in the final stages of complex — and, thus far, delayed — negotiations. With over 3,000 amendments to the legislation tabled by relevant committees in the European Parliament, questions remain about whether the AI Act will adequately address access to remedy and justice for victims of any discrimination that results from AI decision-making.

The initial draft of the AI Act focused almost exclusively on remedies for AI vendors and professional users — including public authorities. While the Commission's proposal imposed transparency obligations on providers that deploy certain AI systems with limited risk, such as informing natural persons that they are interacting with an AI system (Article 52), it excluded a broad range of law enforcement applications from the scope of this obligation, and did not hold deployers of AI systems accountable in cases of discrimination resulting from AI decision-making. Most importantly, it failed to include avenues of redress for the very individuals most likely to be adversely affected by the use of AI systems, or for the public interest organisations that have a crucial role in representing those individuals.

While EU equality laws prohibit discrimination on protected grounds in access to services, employment, and education, and provide safeguards and remedies for people who are discriminated against, they predate the widespread use of AI systems in decision-making. To help address the real gaps that exist in protection against both direct and indirect discrimination in the context of AI decision-making, the AI Act should explicitly require entities using AI systems to prove that those systems are not discriminatory, which it currently does not do.

Such a requirement would be particularly impactful in the area of employment, where the use of AI in candidate selection has become widespread practice, heightening the risk of discrimination, in particular for people with disabilities and other groups at risk. The EU's Employment Equality Directive already places on employers the burden of proving that hiring discrimination did not occur, yet in the case of AI decision-making this requirement can be hard to fulfil. The AI Act must therefore further clarify the responsibilities of developers and vendors of AI systems, and spell out how an individual who suspects discrimination would access remedy and justice in practice. EU equality law already holds employers responsible for overseeing processes that ensure no discrimination takes place; it is now crucial to enable them to understand how the AI systems they use make decisions.

A joint draft report from the two co-rapporteurs for the file in the LIBE and IMCO Committees, Dragoș Tudorache (Renew) and Brando Benifei (S&D), does improve access to redress. It requires deployers of high-risk AI decision-making systems to inform individuals that they are subject to an AI system (Amendment 145). It also introduces a right for individuals, or groups of individuals, to lodge a complaint with the national supervisory authority against providers and deployers of all AI systems in case of a breach of their health, safety, or fundamental rights; a right to be heard and to be informed of the outcome of the decision in such a case; and a right to an effective judicial remedy following a national supervisory authority's legally binding decision in those cases (Amendments 262 and 263). This proposal has been widely welcomed, and there have been further calls to allow not-for-profits or other organisations to make complaints on behalf of impacted individuals or communities.

Along with the proposed redress mechanism, Tudorache and Benifei acknowledge that AI systems used by law enforcement authorities for predictive policing pose a particular risk of discrimination, justifying the new prohibition on this practice (Amendment 76), and specifically refer to existing EU non-discrimination law (Amendment 77). They also add a provision to ensure cooperation with the European Data Protection Board and the Fundamental Rights Agency on guidance on fundamental rights, including non-discrimination.

Some of the proposed amendments subsequently tabled by other rapporteurs go further, suggesting that deployers provide more information on high-risk AI systems to strengthen transparency, and submit to fundamental rights impact assessments. They also consider the situation of people with disabilities in the case of high-risk AI systems: they suggest paying particular attention to the accessibility, for persons with disabilities, of information on the use of high-risk AI systems that affect natural persons, and promoting AI research and development in disability accessibility. Some rapporteurs also propose including pecuniary and non-pecuniary damages in cases of breaches of individuals' or groups' rights, or requiring providers or deployers of AI systems to establish mechanisms for handling internal complaints. Finally, new definitions have been introduced to clarify the chain of actors involved in the AI lifecycle, which lacked legal certainty under the initial proposal.

While these amendments are a step in the right direction, it is not clear how many will make the final cut, and it is difficult to gauge the concrete impact they will have on offering actionable ways to make discrimination claims. In the case of AI systems used in an employment context, which would be classified as high-risk, employers would — under the proposed amendments — be required to inform people when they are subject to an AI system. Beyond that, though, the proposed redress mechanism is too general, leaving serious questions unanswered about accountability at each stage of a discrimination claim: to start, to whom would a potential complaint be addressed? Who would bear the burden of proof for such a claim? Even the mechanism for evaluating a claim could be improved: relying on national-level supervisory authorities risks being too removed from the day-to-day realities of discrimination, and from the scale at which AI is used in decision-making, to have a concrete impact on individual cases.

Conclusion

Individuals and communities, particularly those from marginalised or vulnerable groups who face greater risks, are in an inherently unequal relationship with those who deploy AI systems. The leading Committees in the Parliament have, at least partially, addressed some of the concerns that arise from this imbalance, but their solutions mean that some breaches of fundamental rights will be easier to prove than others. We already know from existing EU equality legislation that giving people concrete, actionable rights has been most effective at combating discrimination. This lesson must be carried through to the AI Act, with clear obligations on those who deploy AI in decision-making to demonstrate on an ongoing basis that discrimination is not occurring, or to cease using AI in decision-making where the risk to rights is too great.

For remedies to meet international standards, they must be effective. This means that the complaint mechanism must be easily and directly accessible to those potentially adversely impacted. An effective remedy is also a timely one; a national-level mechanism alone, with all its complexities, is unlikely to satisfy this need. If the AI Act does not integrate a meaningful, effective, and easily accessible redress mechanism for persons and groups discriminated against by AI systems, it will fail in its primary objective. EU lawmakers must push for these changes in the final weeks of negotiation to ensure that the ultimate goal of protecting human rights in AI decision-making is upheld.