AI Policy & Governance, European Policy
The AI Liability Directive: The Next Sprint to Uphold Human Rights in AI in the EU
In the throes of the AI Act negotiations, a lesser-known yet significant legal proposal was put forward by the European Commission in September 2022: the AI Liability Directive (AILD), a draft framework for civil liability rules allowing individuals to bring claims for damages caused by AI systems.
Discussions on the Directive were placed on hold pending the AI Act’s adoption and the publication of a complementary impact assessment report by the European Parliamentary Research Service (EPRS). That report is now out, reopening the debate on the necessity and scope of the Directive. Its key finding is clear: the AI Liability Directive should not only proceed but be strengthened and broadened in scope.
The Relationship between the AI Act and the AILD
The AI Act relies on a risk-based approach to regulate AI systems, introducing safeguards and mitigations, as well as a web of oversight bodies and control mechanisms to ensure that providers and deployers follow the rules, subject to penalties for non-compliance. However, the AI Act fails to create private rights of action for individuals to seek compensation for harms caused by AI systems.
The AILD attempts to fill this gap by making it easier for individuals to seek compensation directly from providers or deployers of AI systems when they have suffered damage as a result of an AI system’s output, or its failure to produce an output, particularly where that system is high-risk. Recognizing the difficulties non-professional users face in grappling with AI systems, the AILD as drafted creates a mechanism facilitating access to disclosure of evidence where harm from a high-risk AI system is suspected, and introduces a rebuttable presumption of causality intended to ease individuals’ burden of proving a causal link between the harm suffered and the AI output or lack thereof.
Despite the AILD’s potential to make legal remedies more attainable for individuals adversely affected by AI systems, the proposal stalled while another legislative proposal on product liability, introduced at the same time, progressed: the Revised Product Liability Directive (PLD). The PLD, now adopted, created wide-ranging rules governing liability for defective digital products, including software, prompting the question of whether the overlap was significant enough to defeat the purpose of the AILD.
It was this question that the EPRS report sought to answer.
A Renewed Push for Strengthening the AILD
The report unreservedly recommends that the AILD be taken forward. While acknowledging the overlap between the AILD and the PLD, it identifies two key gaps that the AILD is uniquely positioned to address: securing access to remedies for non-professional users of AI (whereas the PLD is geared towards professional users), and facilitating routes to redress for types of damage not foreseen by the PLD.
The report also proposes significant amendments to the proposal. Briefly, it recommends:
- The AILD should cover more than just AI systems. The report recommends that the AILD provide compensation avenues for individuals harmed by general-purpose AI models with systemic risk, as well as by other technologies that fall outside the scope of the AI Act but are nonetheless relevant, such as autonomous vehicles.
- The AILD should introduce strict liability in some cases. The report, echoing a 2020 European Parliament resolution on AI liability, calls for providers to be strictly liable (i.e., regardless of fault) for any damage caused by AI systems prohibited under the AI Act.
- The presumption of causality should extend to non-compliance with AI Act rules on human oversight and on General Purpose AI (GPAI) models with systemic risk. Recalling the AI Act’s obligations to ensure human oversight of high-risk AI systems and its obligations for GPAI models with systemic risk, the report recommends that non-compliance with these provisions lead to a presumed causal link between the AI output and the damage caused.
- The AILD should be transformed into a Software Liability Regulation. The report highlights the benefits of applying the AILD’s provisions universally to all types of software, so that a broader range of technology users would benefit from the evidence disclosure mechanism and the rebuttable presumption of causality.
The Importance of the AILD for Fundamental Rights Enforcement
Product liability frameworks have long been understood to be a complex, free-standing area of law. The AILD, however, presents a unique opportunity for fundamental rights enforcement: it takes concrete steps to enable individual access to redress for the violation of a duty of care protecting life, physical integrity, property, and fundamental rights. In other words, the AILD as written facilitates individual rights of action for harms caused by AI system outputs that infringe fundamental rights, an angle entirely absent from both the PLD and the AI Act itself. If the recommendations in the EPRS report were taken forward, this remedy would likely become available on a much broader scale.
The EPRS report is a crucial contribution to the ongoing debate on the appropriate scope of the AILD. Co-legislators would be well advised to consider its recommendations carefully, as well as the potential the AILD holds for creating solid guardrails for the enforcement of individual rights.