
CDT Europe’s AI Bulletin: April 2023

Also authored by CDT Europe’s Rachele Ceraulo.

Policymakers in Europe are hard at work on all things artificial intelligence, and we’re here to keep you updated. We’ll cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. If you’d like to receive CDT Europe’s AI Bulletin via email, you can sign up on CDT’s website.

1. The Latest on the EU’s Proposed Artificial Intelligence Act

In March, points of contention still remained within the text of the AI Act, forcing rapporteurs to schedule a series of additional technical discussions. These additional meetings compounded delays in the political meetings, in which negotiators try to reach agreement based on the outcomes of technical meetings. During this set of technical discussions, rapporteurs aimed to determine what obligations would apply to providers of general-purpose AI systems (GPAIs).

The discussions also highlighted the strong differences of position amongst the political groups in the European Parliament, especially on new amendments that differentiate foundation models like ChatGPT — which are trained on large unlabelled datasets, and can be used for different tasks with minimal adjustment — from simpler models based on smaller amounts of data. The negotiating team was fortunately able to find common ground on other points, for instance agreeing that GPAI providers must register their systems in the EU’s database.

Because negotiators made significant progress in these additional technical discussions, they were able at the end of March to finalise their position on many previously discussed compromise amendments. In particular, the rapporteurs agreed on:

  • A clarification in the AIA’s recitals — or, statements of the law’s purposes — that AI systems prohibited under the draft AI Regulation cannot be exported outside of the EU; 
  • The contentious definition of artificial intelligence, which deviates from the original proposal and remains in line with the narrow OECD definition. They further proposed to clarify in the recitals the concepts of ‘machine-based system’ and ‘environments’;
  • The adoption of the latest versions of requirements for high-risk AI systems, including data governance requirements;
  • Their latest proposals on the ‘conformity assessment procedure’, which aims to verify that high-risk AI systems conform to the requirements set out by the AIA. Rapporteurs added a requirement for providers to consider ‘reasonably foreseeable misuse’ of high-risk AI systems, in addition to the duty set out by the Commission’s initial proposal for providers to assess the ‘intended purpose’ of those systems;
  • Newly introduced proposals on regulatory sandboxes, the tools established under the AIA to test and experiment on AI systems under supervision. Rapporteurs agreed that, if a high-risk AI system has been tested through a regulatory sandbox, that system is presumed to comply with the AIA’s requirements for such systems.

Two additional technical meetings were held in mid-April, which culminated in another political discussion where the rapporteurs reportedly largely agreed on:

  • Definitions of concepts key to and outlined in the draft Regulation, such as the much-discussed definition of ‘AI system’. The only definition not yet agreed upon is that of ‘significant risks’, which will eventually be used to identify high-risk AI systems; 
  • The provisions outlining how the AIA will be implemented. Rapporteurs downsized the proposed AI Office — the body tasked with streamlining enforcement at the EU level — to a purely coordinating role due to budgetary constraints, and put investigative powers back in the hands of the national supervisory authorities who are in charge of overseeing the AIA at the national level;
  • Common standards and specifications — which include technical solutions for complying with certain AIA requirements and obligations — to be issued by the European Commission;
  • Final provisions, including those that set out an obligation for all parties involved in the AIA to respect the confidentiality of information and data in carrying out their tasks; that set out the penalties that apply to infringements of the AIA; and that give the Commission the power to adopt delegated acts — non-legislative acts which aim to supplement or amend certain non-essential elements of the future legislation. 

In their latest meetings, the rapporteurs attempted to agree on the remaining provisions of the draft Regulation, namely:

  • Prohibited AI practices, in particular the recently proposed prohibition on use of emotion recognition systems in the fields of law enforcement, border management, employment, and education — except when used for medical or research purposes where subjects have consented;
  • The new proposal by the Renew group to ban AI systems ‘for the general monitoring, detection and interpretation of private content in interpersonal communication services, including all measures that would undermine end-to-end encryption’;
  • Provisions on GPAI, including obligations across the AI value chain;  
  • Stand-alone articles, or, provisions not necessarily linked to the rest of the draft regulation, such as general principles applicable to all AI systems;
  • The classification of high-risk AI systems, which needs a rewrite to include the definition of ‘significant risks’ once rapporteurs agree on it;
  • The draft Regulation’s recitals, or, the non-legally binding text that sets out the reasons for the Regulation’s legally binding articles.

Because negotiators did not make enough progress on these points, the rapporteurs cancelled the last political meeting — which was supposed to lead to a final agreement — and replaced it with yet another set of technical discussions. As a result, the vote in the joint LIBE and IMCO Committees, originally set for 26 April, has reportedly been rescheduled for 8 or 11 May.

The rapporteurs intend to agree on their final text in Plenary on 31 May, with the aim of being ready to start interinstitutional negotiations in June. However, further delays could still occur, and the EPP Group could choose not to vote in favour of the Regulation due to its strong reservations about the list of AI practices that the AIA prohibits, the AIA’s list of areas in which use of AI systems presents a high risk, and newly introduced proposals on GPAIs, which among other things lay out the responsibilities of actors across the AI value chain.

2. In Other ‘AI & EU’ News 

  • On 17 April, the co-rapporteurs on the EU AI Act launched a call to action on ‘very powerful Artificial Intelligence’. They underscored that the Members of the European Parliament are committed to using the AI Act to provide a human-centric framework that effectively governs foundation models like ChatGPT. They called on European Commission President von der Leyen and US President Biden to convene a high-level summit on AI within the context of the EU-US Trade and Technology Council, and to establish a set of guidelines for the development and deployment of GPAIs. The rapporteurs concluded by urging GPAI developers to increase transparency about their models, and to ensure that those models are safe and trustworthy.
  • Despite strong pushback from digital rights organisations and several Members of the European Parliament, on 28 March the French National Assembly passed the proposed Law on the 2024 Olympic and Paralympic Games. Article 7 of the law provides a legal basis for the use of untargeted, algorithm-driven video surveillance in publicly accessible spaces for the purpose of detecting suspicious behaviours. Following the approval of the bill, a group of over 60 MPs appealed to the Constitutional Council, arguing that the security measures contravene several constitutional principles, including the right to private life.
  • On 31 March, the Italian Data Protection Authority (DPA) imposed a temporary limitation on OpenAI’s processing of the personal data of ChatGPT users residing on Italian territory; in parallel, it also opened an investigation into the tech company over suspected breaches of the EU’s GDPR. The DPA held that, when training its language model, OpenAI failed to provide data subjects with information on its data processing practices. The Authority also raised concerns about the lawfulness of those practices. Following the order, OpenAI disabled access to its generative chatbot across Italian territory, and now has until 30 April to comply with the measures set out by the Italian privacy watchdog concerning transparency, the rights of data subjects, and the legal basis for processing users’ personal data to train its algorithms.
  • On 14 April, the European Data Protection Board (EDPB) launched a dedicated task force to assist European DPAs in coordinating enforcement actions against ChatGPT. The announcement came after mounting scrutiny by several European privacy regulators over the American generative chatbot.

3. The Council of Europe’s CAI Process

From 19 to 21 April, the Committee on Artificial Intelligence (CAI) — the body in charge of negotiating the Council of Europe’s Convention on AI — gathered again for a Plenary meeting. On the agenda for the CAI’s discussions were the following provisions of the zero draft, which is essentially the first draft of the CAI convention on AI:

  • The draft preamble;
  • General provisions, such as the purpose, object and scope of the draft convention, and definitions;
  • Provisions on how to assess and mitigate risks and adverse impacts of AI;
  • Fundamental principles for how to design, develop, and apply AI systems, which include equality and non-discrimination, transparency, and oversight across those processes;
  • Measures and safeguards to ensure accountability of AI systems, as well as redress for any harms that arise from those systems.

In addition to the provisions in the list above, the CAI Drafting Group — from which civil society organisations have been excluded — will reexamine provisions of the draft that establish how the Convention will be implemented and regulate how Council of Europe countries will cooperate to enforce the Convention. 

The CAI hopes to finalise the Convention’s text in September, and to submit it to the Committee of Ministers — the Council of Europe’s decision-making body — in November 2023 for approval.

Content of the Month 📚📺🎧

CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.