European Policy, Privacy & Data
CDT Europe’s AI Bulletin: May 2023
European Parliament Committee Vote Edition
Also authored by CDT Europe’s Rachele Ceraulo and Vânia Reis.
Policymakers in Europe are hard at work on all things artificial intelligence, and we’re here to keep you updated. We’ll cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. If you’d like to receive CDT Europe’s AI Bulletin via email, you can sign up on CDT’s website.
I. The Latest on the EU’s Proposed Artificial Intelligence Act
The Artificial Intelligence Act (AIA) saw several delays in April. At the last round of political discussions, the European Parliament's shadow rapporteurs on the AIA set aside the law's remaining contentious issues and instead discussed a new article on general principles applicable to all AI systems, making the prospect of finalising the text and moving toward a vote in the IMCO/LIBE Committees by the end of the month seem unrealistic. Nonetheless, at their last meeting on 27 April, the rapporteurs reached a provisional agreement that paved the way for the joint committee vote on 11 May.
As highlighted in our previous AI Bulletin, yet another set of technical discussions was required in order to reach this tentative agreement in the relevant Committees. The discussions in three pivotal shadow meetings were centred on ironing out remaining hurdles on contentious issues, and developing proposals for general compromises, in order to move forward with a provisional agreement.
More specifically, the negotiating team:
- Expanded the list of prohibited AI practices, to include the use of emotion recognition systems in the fields of law enforcement, border management, employment, and education, and the deployment of AI systems for mass scraping of social media or CCTV footage to feed into facial recognition databases;
- Extended the ban on biometric identification to cover “post” remote biometric identification deployed in public spaces, except where such use is subject to prior judicial authorisation and conducted ex post for the prosecution of serious crimes;
- Clarified that AI systems used in the context of critical infrastructure, which fall under the AI Act’s Annex III, will be considered high-risk solely on the basis of whether they pose a significant risk of harm to the environment — as opposed to all other use cases, which are deemed high-risk if they pose a risk to health, safety, or fundamental rights;
- Agreed on general principles applicable to all AI systems — a set of non-binding principles intended to guide the development and use of AI systems. These include human agency and oversight, privacy and data governance, and non-discrimination and fairness. While this provision is not intended to create new obligations for AI operators, the general principles will have to be incorporated into technical standards and technical guidance documents;
- Clarified the provision introducing a right to explanation of individual decision-making, which grants individuals the ability to request that the deployer of a high-risk AI system explain the role of the system in the decision-making process. This provision applies where a decision taken on the basis of an AI system’s output produces legal effects, or affects an individual’s health, safety, fundamental rights, or socio-economic well-being;
- Introduced the future possibility of upgrading the AI Office to an agency to help support application of the Regulation, and cross-border enforcement — an option that is not yet feasible given the EU’s existing budget.
The rapporteurs also reached an agreement on the thorny definition of “significant risk”, the criteria that, under the Parliament’s position, will be applied to decide whether an AI system falling under Annex III is high-risk. The risk assessment process should revolve around a two-pronged methodology, considering both “the effect of such risk with respect to its level of severity, intensity, probability of occurrence and duration combined altogether”, and “whether the risk can affect an individual, a plurality of persons or a particular group of persons”.
The negotiators also revised the list of high-risk use cases under Annex III. The most significant amendments were made in the area of migration and border control management, which now includes AI systems used for monitoring and surveillance at borders for the purpose of detecting and recognising individuals, as well as AI-based predictive analytics systems deployed to forecast migratory movements and border crossings. The rapporteurs further expanded the list of high-risk uses to include recommender systems for user-generated content used by social media platforms designated as Very Large Online Platforms (VLOPs) under the DSA. This late addition goes further than the EU Digital Services Act (Article 27), which places mandatory transparency obligations on these systems so that recipients of a service can better understand, or change, the parameters of how such systems function. If adopted, the inclusion of social media recommender systems as high-risk AI systems would fundamentally change the implementation of the obligations under the DSA.
In addition, the negotiating team was able to finalise its position on its proposed three-layered approach to “powerful models”, namely general-purpose AI systems (GPAIs), foundation models, and generative AI.
- The first layer concerns GPAIs — AI systems that can be used in and adapted to a variety of applications for which they were not purposefully designed. The text provides that, in the event that a downstream operator modifies a non-high-risk system — including a general-purpose AI system — in a way that makes it high-risk, that operator becomes the new provider of the system and thus responsible for complying with its respective obligations under the Regulation. By the same token, the text requires the original (GPAI) provider to transfer the technical documentation and all other necessary information to the new provider.
- The second layer deals with foundation models — models trained on large amounts of data and used to accomplish a multitude of downstream tasks — and confirms previous proposals to subject these models to a stricter regime. Before making these models available, providers must comply with a series of obligations, including identifying and mitigating “reasonably foreseeable” risks to — among others — health, safety, and fundamental rights, and complying with their cooperation obligations, by providing downstream providers with technical documentation and clear instructions for use. The text also provides for data governance measures, among other things on the suitability of data and on bias identification and mitigation.
- Finally, the third layer assigns specific requirements to providers of “Generative AI systems” — systems specifically intended to generate content with varying levels of autonomy — such as complying with transparency obligations and putting in place safeguards against the generation of illegal content. Providers must also publish a summary of the use of training data protected under copyright law.
With the adoption of the text in the joint committee vote on 11 May, the European Parliament now has one final step: voting on the text in plenary, scheduled for 16 June, before entering the inter-institutional negotiations with the EU Council and Commission, commonly known as the Trilogues and now standard practice for the adoption of EU legislation. The task now is for the co-legislators to bridge their respective mandates and conclude a final text, an endeavour that may prove challenging given the disparities between the Council’s General Approach and the European Parliament’s report. Once negotiations have concluded, the final text of the AI Act will be formally adopted by the respective EU institutions and enter into force.
II. In Other ‘AI & EU’ News
- In an open letter addressed to the rapporteurs and members of the European Parliament sitting in the leading Committees, Amnesty International called for the AI Act to prohibit the use of certain AI systems that are incompatible with the human rights of migrants, refugees, and asylum-seekers. The letter called for prohibitions including automated risk assessment and profiling systems, and predictive analytic systems used to interdict, curtail and prevent migration.
III. The Council of Europe’s CAI Process
At the fifth Plenary meeting of the Council of Europe’s Committee on Artificial Intelligence (CAI), participants took a number of decisions, several of which will prove formative for the ongoing negotiations. The most notable was the granting of observer status to Pour Demain — a non-profit think tank — and the European Trade Union Confederation.
The plenary also agreed that the Drafting Group should proceed with redrafting the draft Preamble and Chapter VII, and should examine Chapters I, IV, and V in light of comments made by representatives during plenary, as well as written comments and suggestions submitted by individual Delegations. Crucially, having expressed concern about the feasibility of finalising the text by September 2023, the plenary also agreed to examine the timeline and working methods set by the Committee of Ministers. The relevant Secretariat has been instructed to prepare a proposal for a new organisation of the Committee’s work, including a proposed revised timeline, to be discussed at the next Plenary meeting scheduled for 31 May to 2 June.
Content of the Month 📚📺🎧
CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.
- Regulatory Sandboxes for Artificial Intelligence – Hype or Solution? (KUL, T. Moraes)
- As AI Act Vote Nears, the EU Needs to Draw a Red Line on Racist Surveillance (Euronews)
- French Lawmakers Challenge Olympics Surveillance Before Top Court (Politico)
- The EU and U.S. Diverge on AI Regulation: A Transatlantic Comparison and Steps to Alignment (Brookings)
- Spain’s AI Doctor: Investigation into Spanish Government Use of AI Systems (Lighthouse Reports)