Also authored by CDT Europe’s Rachele Ceraulo.
Policymakers in Europe are hard at work on all things artificial intelligence, and we’re here to keep you updated. We’ll cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. If you’d like to receive CDT Europe’s AI Bulletin via email, you can sign up on CDT’s website.
1. The Latest on the EU’s Proposed Artificial Intelligence Act
After co-rapporteurs failed to conclude negotiations on critical amendments to the Artificial Intelligence (AI) Act at their February political meeting, they suggested replacing their mid-March political meeting — where they aimed to discuss which high-risk AI systems Annex III of the EU AI Act should contain, as well as how to align requirements on high-risk AI systems with technical standards on AI — with a series of broader technical discussions on these issues.
The rapporteurs are expected to reach a provisional agreement on the definition of artificial intelligence, by far the most contentious topic of discussion in the negotiations. Similar to the Council’s position, the current proposal defines AI more narrowly than the original did, in alignment with the OECD definition. It now encompasses only systems based on machine learning, and excludes software developed using knowledge-based and statistical approaches, which may lead to further challenges during the negotiations. Along the same lines, the rapporteurs also deleted Annex I of the draft regulation, which listed specific AI techniques such as machine learning or statistics.
The leading negotiators also revised the list of definitions at the beginning of the AI Act. The most significant amendments include a new definition of remote biometric verification systems, which are referred to as “AI systems used to verify the identity of individuals by comparing their biometric data against biometric samples in a reference database”. That definition further specifies that the verification process takes place only when the user (deployer) of the AI system has prior knowledge of whether the individual targeted by the system will be present and can be identified.
The rapporteurs differentiate these systems from remote biometric authentication systems, whose purpose is to authenticate the identity of an individual at that individual’s own request. In previous amendments, the rapporteurs already defined biometric categorisation AI systems as those used “for the purpose of assigning natural persons to specific categories”, but the latest amendments now add “inferring their characteristics and attributes” to that definition.
In addition, the rapporteurs revised almost all the compromise amendments that were shared last month. The rapporteurs:
- Clarified that open-source AI models — publicly available AI systems for commercial and non-commercial use under various open-source licences — do not fall into the scope of the draft Regulation, unless they are placed on the market or put into service as part of a larger high-risk system, a prohibited practice, or a system producing deepfakes.
- Extended the ban on social scoring — AI systems used to evaluate or classify the trustworthiness of persons based on their social behaviour or known or predicted personal characteristics — to private actors, in addition to public authorities or persons acting on their behalf, which were already included in the initial Commission proposal.
- Introduced a presumption of compliance with AI Act obligations for high-risk AI systems that have been tested through regulatory sandboxes — tools established under the AIA to test and experiment on AI systems under supervision.
- Revised the list of legal requirements for high-risk AI systems, including that providers set up a risk management system and use it throughout the entire lifecycle of their high-risk AI systems to assess the systems’ risks. They also added a requirement for providers to consider “reasonably foreseeable misuse” of high-risk AI systems, alongside assessing their “intended purpose” as originally required by the Commission’s proposal.
- Maintained the fundamental rights impact assessment for high-risk AI systems, but reduced the list of minimum elements to consider in such an assessment, and introduced a carve-out from this requirement for AI systems that manage critical infrastructure.
- Clarified that the European Commission should issue common specifications — a set of additional technical requirements to comply with the legal obligations — for high-risk systems related to protecting fundamental rights. These would be issued by means of implementing acts, which would be repealed once these requirements are included in a published “harmonised standard”.
- Removed the provision that required AI developers to verify that the dataset used for training their AI systems was legally obtained, reportedly in an attempt to prevent large language models such as ChatGPT from being affected by the proposal.
- Removed references to the principles of “data minimisation” and “data protection by design and by default” in the data governance requirements for high-risk AI systems. Co-rapporteurs also slightly simplified the provision requiring that technical documentation from high-risk AI systems providers be assessed by notified bodies — the bodies tasked with verifying that high-risk AI systems conform to the requirements set out by the AIA.
- Downsized the role of the AI Office, the EU body tasked with streamlining enforcement of the AIA at the EU level. Possibly to address concerns about lack of resources, the co-rapporteurs suggested that the AI Office intervene alongside national supervisory authorities only in joint investigations in cases of ‘widespread infringements’ of the draft regulation, and only on how requirements for high-risk AI systems apply.
The rapporteurs are currently holding technical discussions about their newly introduced standalone articles — provisions that are not necessarily linked to the rest of the draft regulation. These include:
- General principles applicable to all AI systems;
- A right to explanation of the role of an AI system in decision-making processes;
- Accessibility requirements for providers and users (deployers) of AI systems;
- Requirements on AI literacy for providers, users (deployers), and affected individuals;
- A right for natural persons not to be subject to non-compliant AI systems; and
- Requirements to mitigate the environmental impact of high-risk AI systems.
The leading negotiators are also working on proposals on general-purpose AI systems (GPAIs), such as ChatGPT-type generative AI. In their most recent amendments to these proposals, they suggest subjecting providers of these systems — and other actors involved in the AI value chain, such as distributors, importers, or deployers of AI systems — to lighter obligations, while maintaining some requirements for high-risk AI systems in certain instances.
Those requirements could include data governance and transparency obligations for providers of AI models that generate text which could be mistaken for human-made content. The amendments also clarify that any actor in the AI value chain that substantially modifies a GPAI, in such a way that the AI system becomes high-risk, would be considered a provider of such systems and would need to apply all related requirements accordingly.
Depending on progress on all these discussions, a new political meeting could be scheduled later this month or in early April. An agreement is close, but MEP Benifei insisted that the leading negotiators want a solid text and a strong majority, which means they might postpone the vote until this is achieved.
2. In Other ‘AI & EU’ News
- In a landmark decision on 16 February, the German Federal Constitutional Court ruled unconstitutional certain provisions allowing the police in Hesse and Hamburg to use automated data analysis software to prevent criminal acts. The court held that, because these provisions required the processing of stored data to create profiles of possible suspects as precautionary measures, the threshold of an identifiable danger that could have justified such interference was unmet, thus violating the right to informational self-determination.
- A coalition of 38 civil society organisations, led by ECNL, La Quadrature du Net, and Amnesty International France, signed an open letter calling on the French National Assembly to reject Article 7 of the proposed law on the Paris 2024 Olympic and Paralympic Games. The provision would create a legal basis for the use of untargeted algorithm-driven video surveillance to detect suspicious events in public spaces. The letter highlights how the proposed unjustified and disproportionate surveillance measures contravene international human rights law, and create a precedent for other countries to legalise similar biometric surveillance practices in the name of security.
3. The Council of Europe’s CAI Process
On 21 February, the Council of the European Union’s Telecoms Working Party held a joint meeting with the European Commission to discuss the European Union’s involvement in negotiations on the Council of Europe’s draft convention on Artificial Intelligence. According to Contexte, the EU has a coordinated position on the preamble, the final provisions, and Chapter VII on governance. These issues have been discussed, but not finalised, at the level of the Council of Europe’s Committee for Artificial Intelligence (CAI).
Through the Telecoms Working Party, the EU is expected to discuss the draft treaty again on 21 March, with the aim of preparing its position ahead of the next CAI meeting taking place on 19-21 April.
The treaty, which has not yet been publicly communicated except for a “zero draft”, could also be on the agenda of the Council of Europe’s summit of heads of state and government on 16-17 May in Reykjavik, Iceland. Watch this space!
Content of the Month 📚📺🎧
CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.
- Generative AI Systems in Education – Uses and Misuses (CDT)
- Moving from Empty Buzzword to Real Empowerment: A Framework for Enabling Meaningful Engagement of External Stakeholders in AI (ECNL)
- Meta- and Content Data in the Real World: Some Rule of Law Reflections (P. Levantino, The Digital Constitutionalist)
- New Study on AI in the Workplace: Workers Need Control Options to Ensure Co-Determination (Algorithm Watch)