

CDT Europe’s AI Bulletin: July 2023

Also authored by CDT Europe’s Rachele Ceraulo and Vânia Reis.

Policymakers in Europe are hard at work on all things artificial intelligence, and we’re here to keep you updated. We’ll cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. If you’d like to receive CDT Europe’s AI Bulletin via email, you can sign up on CDT’s website. Due to the summer break, there will be no AI Bulletin in August, but we will be back with you in September!

I. The Latest on the EU’s Proposed Artificial Intelligence Act

Negotiations between the European institutions — commonly known as the trilogues — on the AI Act have begun, launching the final stage of drafting the legislation. A first “handshake” meeting between the three institutions took place on 14 June, kick-starting the process that will result in the final version of the AI Act. Three trilogues have been officially scheduled, for 18 July and for 3 and 26 October, and will be led by Spain, which has assumed the Presidency of the Council for the latter half of 2023.

In addition to trilogue meetings, as many as 13 technical meetings to sort out details of the AI Act have been announced for the summer period. This commitment signals that the EU institutions aim to reach an agreement ahead of the 2024 European Parliament elections. Representatives at the technical level have already discussed several issues on which consensus may be reached swiftly, including the obligations of providers and users of high-risk AI systems; the conformity assessment procedure that verifies regulatory compliance of high-risk AI systems; and the requirements that standardisation bodies must meet when developing harmonised standards. Negotiators discussed the standards that organisations use to demonstrate that their AI systems comply with the Regulation, and agreed to make very few amendments to the text on conformity assessment.

This eagerness to move swiftly, however, may be hampered by how contentious the negotiations ahead will be. Disputed issues include the definition of an AI system, how AI systems should be classified as high-risk, the list of high-risk use cases of AI, and the fundamental rights impact assessment sought by the Parliament. On 29 June, the Spanish Presidency sought clarification on EU Member States’ respective negotiating positions on these issues, aiming to prompt discussion ahead of trilogue negotiations.

For this last edition of the AI Bulletin before the summer break, we explore how and where the Parliament and Council diverge on these key issues.  

Definition of “AI System”

Since the draft Regulation was published in 2021, the definition of artificial intelligence (AI) has been one of the most contentious issues. The European Commission’s initial draft defined the technology as software developed using one or more of the approaches and techniques – including machine learning, statistical, and logic- and knowledge-based approaches – listed in Annex I of the Regulation. By amending the list in the annex, the Commission could update the definition of AI in response to technological developments.

Both the Council and the Parliament, though, adopted a different approach, moving the definition from Annex I into the legally binding articles of the Regulation. The two institutions substantially diverge, however, in their wording of the definition, meaning that negotiations to bridge their respective positions will likely be tricky. The Parliament’s definition – which aligns with the OECD’s – encompasses machine-based systems that operate with varying degrees of autonomy from human control. The Council, on the other hand, narrowed its definition to cover only systems based on machine learning and logic- or knowledge-based approaches that autonomously convert inputs into outputs through inference.

The Spanish Presidency has already raised concerns that the Parliament’s definition is too broad. This concern was echoed by other Member States, which sought to maintain the definition reached in the Council’s General Approach. More broadly, consensus is growing among national governments to hold off on discussing the definition until September and wait for the OECD to revise its own definition.

CDT Europe is currently working on a paper to propose a better definition. In our view, both the European Parliament’s and the Council’s definitions use vague language that does not necessarily capture the full range of ways AI systems can operate, which could result in dangerous loopholes.

Classification of AI Systems as High-Risk

The AI Act adopts a “risk-based approach”, meaning that the requirements and obligations it sets out are tailored to the specific level of risk that an AI system presents. Under this logic, high-risk AI systems – deemed to pose significant risks to people’s health, safety, or fundamental rights – are subject to a stricter compliance regime, with mandatory requirements and a conformity assessment to be carried out before rollout on the EU internal market. 

The European Commission’s initial proposal automatically categorised as high-risk any system falling under the AI Act’s proposed Annex III — which lists high-risk applications and use cases of AI — but both the Council and the Parliament added a new means of making that determination. The Council’s General Approach classifies a system as high-risk when its output plays a significant role in shaping a decision-making process. The Council’s proposal further empowers the European Commission to outline the circumstances under which outputs of AI systems would be purely accessory to the decision or action taken.

The Parliament, on the other hand, introduced a system based on self-assessment, whereby AI providers would have to check whether their system poses a significant risk of harm to health, safety, or fundamental rights — or to the environment in the context of critical infrastructure systems. Additionally, it requires providers who deem their systems not to pose such a risk to notify the competent authority, which will review the notification and reply if it considers the system to be misclassified. If the competent authority finds that a provider rolled out a misclassified system, it can impose sanctions on that provider. While a majority of EU Member State delegations defended the Council’s text, others expressed interest in exploring the Parliament’s approach and tweaking it, by removing the notification mechanism or by introducing mandatory self-assessment criteria for AI providers.

List of High-Risk Use Cases in Annex III

The all-important Annex III lists a number of high-risk applications and use cases of AI that, in the European Commission’s draft proposal, are deemed to generate a high risk to people’s health, safety, or fundamental rights in their intended use. The original list contained eight categories, each containing at least one concrete use case. To accommodate future technological developments and emerging uses of AI, the AI Act empowers the European Commission to add to Annex III new use cases that pose an equivalent or greater risk than systems already covered.

While the Council and the Parliament largely conformed to the European Commission’s proposed approach — amending the text to also allow for removal of use cases — both co-legislators took significant steps to revise Annex III’s list. The Council removed the categories covering crime analytics and deepfake detection by law enforcement, as well as AI-enabled verification of travel documents, while introducing categories for AI systems used in critical digital infrastructure and for systems that assess risks and pricing in relation to life and health insurance.

Meanwhile, the Parliament considerably redrafted and expanded Annex III, particularly in the context of migration and border control. The Parliament’s version now includes AI systems used for monitoring and surveillance at borders, and AI-based predictive analytics systems for forecasting migratory movements and border crossings. It also includes AI systems used to influence voting results or behaviours, and recommender systems used by Very Large Online Platforms (VLOPs) to display user-generated content.

Fundamental Rights Impact Assessment

CDT Europe has consistently warned about the limitations of a risk-based approach to regulating AI systems, and has advocated for a rights-based approach. Fundamental rights impact assessments will thus be an important element of the Act, as they will increase accountability in the private and public sectors alike by establishing processes to identify, assess, and mitigate AI systems’ impacts on fundamental rights.

Although the AI Act originally required that high-risk AI systems undergo an assessment to verify conformity with the requirements set out in the Regulation, it did not obligate users and deployers to conduct and publish impact assessments specifically targeting fundamental rights. Following extensive pressure from civil society, the European Parliament introduced an obligation for deployers to conduct a fundamental rights impact assessment before deploying any high-risk AI system on the EU internal market. The obligation sets out minimum requirements for conducting such assessments, including outlining the intended purpose of the system; identifying the categories of persons likely to be affected by it; assessing the system’s impact on fundamental rights as defined by the EU Charter, particularly potential harms to marginalised or vulnerable communities; and drafting a risk mitigation plan.

The European Parliament also added a requirement for a six-week consultation period during which representatives of people likely to be affected by a high-risk AI system — equality bodies, consumer protection agencies, social partners, and data protection agencies — can provide input into the assessment. The Council’s General Approach currently includes no such obligation, though indications from recent technical meetings are that the Council has some willingness to accept a provision on fundamental rights risk assessments, given the AI Act’s original intended purpose of protecting individual rights from AI.

Overall, even though the Spanish Presidency of the Council of the EU is enthusiastic about reaching an agreement, the issues on the agenda are thorny. Given the difficulty each legislative body faced in concluding its own mandate on these topics, the upcoming negotiations may prove challenging.

II. In Other ‘AI & EU’ News

  • Policymakers in various jurisdictions are considering measures for governing artificial intelligence. Amid rising global attention on AI, and growing concerns about the risks of generative AI specifically, European Commission Executive Vice-President Margrethe Vestager announced EU-U.S. plans to release a Code of Conduct on AI. The Commission argues that, as many rulebooks intended to regulate AI are still going through the legislative process, this set of voluntary industry standards could bridge the gap until formal regulation takes effect. However, there is a risk that some jurisdictions will never go beyond voluntary measures, so such approaches should be viewed with caution.

III. Recommended AI Summer Reads 📚📺🎧

CDT Europe presents our freshly curated recommended reads for this summer. For more on AI, take a look at CDT’s work.