Effective Remedies in AI: an Insufficiently Explored Avenue for AI Accountability

The right to an effective remedy, or effective redress, enshrined in the European Charter of Fundamental Rights, plays a crucial role in operationalising all rights in the EU — whether fundamental or not. It both makes the effective enforcement of legally backed rights in the EU a free-standing fundamental right, and allows EU institutions — and member states, when implementing EU law — to be held directly accountable by individuals.

The right to effective remedies lays a foundation for ensuring that the protections set out in legislation governing the digital sphere, including the AI Act, are effectively applied. When it comes to AI and other complex technologies, however, challenges inherent to how the technology functions make access to an effective remedy more difficult.

The Obstacles to Effective Remedies in AI 

AI poses a novel challenge to effective remedies because there is generally little or no information as to how an AI system functions. This lack of transparency, often compounded by a lack of disclosure that AI is being used, makes it difficult for individuals to know in what ways they have been affected by an AI system, or if they were subjected to an AI system in the first place. 

These challenges are amplified when legal procedures struggle to catch up with the reality of digital products. For example, traditional burden-of-proof rules (the procedural requirements individuals must satisfy to effectively make their case) have long required victims to explain the internal functioning of complex systems that operate with a high degree of opacity and autonomy. As the European Commission's recently published Digital Fairness report flagged, these obstacles can render the right to compensation practically ineffective, even when a complainant holds sufficient information.

The AI Act goes some way towards remedying the transparency issue: providers of AI systems that interact directly with individuals must disclose that an AI system is being used unless that is obvious, and deployers must notify individuals when they are subjected to decision-making supported by a high-risk AI system. While these are positive steps, transparency on its own is not sufficient to achieve effective remedies. Recognising the need for a two-pronged approach addressing both foundational transparency challenges and procedural difficulties, the European Commission began exploring a liability framework for AI-fuelled harms as early as 2020, in parallel with the development of the white paper that would lead to the AI Act. The result was the AI Liability Directive (AILD), a proposal that outlined basic steps towards easing procedural burdens for complainants in recognition of the hurdles posed by AI's opaque functioning. Despite the AILD's process-oriented nature and modest impositions, the draft law is struggling to get off the ground — even as the effective remedies issue in AI remains unaddressed.

The AI Act’s Lukewarm Approach to Individual Rights

It is clear from the parallel conceptualisation and development of the AI Act and the AI Liability Directive that the AI Act was never intended to solve the effective remedies issue on its own. Instead, the AI Act set out to strengthen the protection of fundamental rights in the AI era.

In its sparsely populated “Remedies” section, the AI Act modestly provides two rights: the right to obtain an explanation of any decision taken on the basis of the output of a high-risk AI system where that decision has significant effects, and the right to lodge a complaint with a market surveillance authority for an infringement of any provision of the AI Act, with the only requirement being that the authority take such a complaint “into account”. Anyone, be it an individual or an entity, can make a complaint to a regulator, regardless of whether they have been affected by an AI system. This is a welcome feature of the AI Act, which rightly widens the scope of actors that may raise concerns. Yet this expansion in the number of likely complainants seems to have come at the expense of the quality with which complaints are assessed and processed.

Nothing in the AI Act compels a market surveillance authority to engage with the substance of a complaint, or with the complainants themselves. In this regard, the complaint mechanism set out by the AI Act is not only tepid, but also a significant departure from the approach endorsed by data protection legislation. Both the General Data Protection Regulation and the Law Enforcement Directive set a higher standard, introducing a right to an effective judicial remedy against a binding decision of a supervisory authority. Both laws enable individuals to take action against a supervisory authority if it does not handle a complaint or does not inform the data subject of the outcome within three months. None of these safeguards operates in the AI Act, leaving individuals with no guarantee of meaningful engagement from the most significant regulators under the Act. At best, this leaves member states with ample flexibility to enable their regulators to get to work swiftly. At worst, it leaves national AI regulators — and by extension, AI providers and deployers — entirely unaccountable to individuals harmed by AI systems if they fail to take proper action. All in all, the right to complain to national AI regulators lacks teeth.

The Need for an Ongoing Focus on the Effective Remedies Issue

The conversation on effective remedies in the field of AI is both necessary and urgent, particularly as narratives pushing for unfettered innovation gain traction in the European Union. As we noted in our analysis of the written answers of key incoming European Commissioners, current signals point to a pro-industry approach that stands in tension with the expressed desire to uphold fundamental rights. Yet, as the European Charter of Fundamental Rights and the jurisprudence of the European Court of Justice make clear, the right to effective remedies is not optional.

CDT Europe will be exploring the different ways in which existing and proposed laws in the EU, above and beyond the AI Act, enable or strengthen access to effective remedies for AI-induced harms — with a view to identifying and exposing any gaps that need to be filled.