Policymakers in Europe are hard at work on all things artificial intelligence, and we’re here to keep you updated. We’ll cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. If you’d like to receive CDT Europe’s AI Bulletin via email, you can sign up on CDT’s website.
- The Latest on the EU’s Proposed Artificial Intelligence Act
According to a timeline circulated in December, the mandated European Parliament Committees and the Parliament plenary are expected to vote on the AI Act in February and March 2023, respectively. Although the European Parliament has made good progress towards agreement on how to regulate biometric recognition and how to define an AI system, political groups remain largely divided on those issues, suggesting that the votes will be postponed.
- Since December, the 14 rapporteurs on the AI Act have made significant progress in technical negotiations on the text, especially on the scope of the Regulation — which entities it applies to. Notably, they discussed a proposal to remove Article 10(5), which originally granted exceptions from the rules on data confidentiality for the purpose of correcting AI biases. Following these discussions, leading co-rapporteurs Benifei and Tudorache abandoned their proposal to give the EU AI Board — the body charged with providing guidance to member states on implementation of the AI Act — the power to make binding recommendations on how national authorities implement the Regulation.
- The two co-rapporteurs circulated two new sets of compromise amendments at the beginning of January. The first set extends the list of high-risk AI systems to cover systems whose incidents or “malfunctions” give rise to a range of consequences, including infringements of fundamental rights. It also removes general-purpose AI (GPAI) from the list of high-risk AI systems in the AI Act’s Annex III, and specifies that GPAI systems will be treated separately “pending discussions” that have not yet occurred.
- In their second set of compromise amendments, the co-rapporteurs proposed requiring fundamental rights impact assessments of all users of high-risk AI systems, both public and private, along with minimum requirements for conducting such assessments. This proposal echoes long-standing calls from civil society, and appears to have the support of most political groups. The compromise text also expands the obligations on deployers of high-risk AI systems, who would now have to consult worker representatives, inform employees, and obtain their consent before deploying a high-risk AI system in the workplace. These amendments were discussed without much controversy at an expert-level meeting of rapporteurs on the details of the text.
Once these discussions conclude, lawmakers will still need to address the treatment of general-purpose AI, and what Annex III’s list of high-risk AI systems should include.
In the Council of the European Union, Sweden took over the Presidency from the Czech Republic, which secured agreement on a general approach on the AI Act at the end of last year. Some disagreements remain among EU member states: Germany, for example, will likely keep pushing to improve the Council’s text during interinstitutional negotiations. The text adopted by the Council significantly narrows the scope of the original Commission proposal, particularly by weakening the prohibition on the use of remote biometric identification systems and the safeguards on the use of AI systems in law enforcement and migration contexts.
The Swedish Presidency of the Council must now wait for the European Parliament’s plenary to vote on the text — which is unlikely to occur by the March 2023 deadline — before trilogue negotiations can begin. It remains to be seen whether those negotiations will start under the new Swedish Presidency or under Spain, which takes over the Presidency in July 2023.
- In Other ‘AI & EU’ News
- The proposals for a Product Liability Directive and for a Directive on AI Liability have been preliminarily allocated to the Legal Affairs Committee (JURI) in the European Parliament, raising discontent among other committees that would like to lead negotiations on the texts. The Internal Market Committee (IMCO) wishes to lead on the first proposal, and the Civil Liberties Committee (LIBE) wants to co-lead with JURI on the second. Negotiations are underway, pending a decision by the Conference of Committee Chairs.
- At a JURI Committee meeting on 9 January, Justice Commissioner Didier Reynders warned the Parliament that the 2024 European elections are fast approaching, calling on legislators to start negotiating on the proposals for a Product Liability Directive and for a Directive on AI Liability. He also raised concerns about the delays to the AI Act vote in the European Parliament, and the consequent delay to negotiations on the legislation with the Council, stressing that discussions in the Council of the European Union’s Working Party on Civil Law Matters (JUSTCIV) had already started.
- The Council of Europe’s CAI Process
Last year, the European Commission announced that it intended to lead negotiations on the Council of Europe’s draft Convention on AI on behalf of the 27 EU member states, delaying the entire process of the Council of Europe’s Committee on Artificial Intelligence (CAI) and raising concerns among EU institutions, civil society, and member states.
In the CAI’s long-awaited plenary, which finally took place from 11-13 January, the CAI Secretariat unexpectedly raised a proposal made by the U.S.: to delegate the CAI’s work to a drafting group that would exclude civil society organisations. The proposal, supported by the U.K. and Canada, was intended to avoid disclosing Washington’s negotiating positions to non-country representatives, and was heavily criticised by NGOs involved in the drafting process, including Fair Trials, Homo Digitalis, the Conference of International Non-Governmental Organisations (CINGO), and Algorithm Watch.
The U.S. proposal ultimately moved forward despite civil society objections, and the drafting group therefore began work behind closed doors on the conclusions chapter of the CAI draft treaty. While that chapter is less controversial than other provisions, countries remained divided on whether to exclude AI systems developed for national defence from the scope of the draft convention, and on access to remedy for individuals harmed by AI decision-making.
Just before the plenary, experts from across the world also gathered for a side webinar on the challenges of developing international and national frameworks for managing the risks and impacts of AI, and of making those frameworks mutually interoperable.
Content of the Month 📚📺🎧
CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.