

CDT Europe’s AI Bulletin: November 2022

CDT Europe Advocacy Intern Rachele Ceraulo also contributed to this bulletin.

Policymakers in Europe are hard at work on all things artificial intelligence, and we’re here to keep you updated. We’ll cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. If you’d like to receive CDT Europe’s AI Bulletin via email, you can sign up on CDT’s website.

I. The Latest on the EU’s Proposed Artificial Intelligence Act

Negotiations on the AI Act are picking up pace, at least within the Council of the European Union. On 6 December, the Czech Presidency of the Council reached a general approach – the political agreement among all Member States that defines the Council’s negotiating position as it moves into the all-important Trilogue negotiations with the European Parliament. The final text, discussed by delegations at the Telecom Working Party at the end of October, deals with many contentious topics.

The Committee of the Permanent Representatives of the Governments of the Member States to the European Union (Coreper I) — composed of Member States’ deputy ambassadors — added a few minor clarifications before greenlighting the AIA’s final text on 18 November, despite some remaining reservations from Member States: 

  • Germany proposed including all provisions concerning law enforcement in a separate chapter of the AI Act, or even in another legislative proposal.
  • French ambassadors noted that some concepts in the text lack definitions, and criticised the AI Act’s reliance on implementing acts, its lack of objective criteria for defining high-risk AI applications, and its omission of a requirement for impact assessment studies of high-risk AI systems.
  • Other countries, such as Germany, Austria, Poland, and Denmark, expressed broader concerns with the final text, including issues related to data protection and governance, and submitted a declaration at the meeting.

For interinstitutional negotiations to start, the European Parliament will also need to finalise its own text, which some analysts say is unlikely to happen before March 2023. Though the two co-rapporteurs, Dragoș Tudorache (Renew) and Brando Benifei (S&D), announced at the end of last month that the rapporteurs were ‘halfway through’ negotiations on the text, they still must agree on many contentious issues, such as the prohibition on the use of remote biometric identification systems. Tudorache and Benifei proposed removing the broad exceptions to the prohibition that would have allowed law enforcement to use those systems, and extending the ban to both online and private physical spaces. That move has received broad support from the majority of political groups, with the exception of the European People’s Party (EPP).

Throughout November, the co-rapporteurs also shared new compromise amendments:

  • The first set of compromise amendments gives the Commission more power to analyse regulatory gaps between the AI Act and existing sectoral legislation, and to amend the Regulation by extending the lists of situations considered high-risk (laid out in Annex III), of prohibited practices (laid out in Article 5), and of AI systems requiring additional transparency measures (laid out in Article 52).
  • The second set of amendments proposes removing Annex I, which lists the AI techniques and approaches that fall under the definition of an AI system and can be updated via secondary legislation. These amendments suggest instead defining an AI system through a dedicated provision that sets out three overall conditions. They also shift the governance structure towards a more centralised enforcement architecture by replacing the Commission-led AI Board with an independent AI Office, inspired by the European Data Protection Board. Despite some resistance, once again from the EPP, Tudorache reported that the Parliament generally supports these amendments.
  • A last set of compromise amendments introduces a new complaint mechanism for consumers who ‘consider that their health, safety or fundamental rights have been violated’. It also updates the AIA’s post-market surveillance mechanism, which already requires providers to continually document and analyse the behaviour of high-risk AI systems for as long as they are in operation. Under these amendments, such analysis should also cover ‘other devices, software, and other AI systems that interact with the AI system, taking into account the limits resulting from data protection, copyright and competition law’.
  • The co-rapporteurs also suggest that supervisory authorities should be able to carry out unannounced on-site and remote inspections of high-risk AI systems, ‘reverse-engineer the AI systems’, and ‘require evidence to identify non-compliance’. Finally, the text broadens the scope of the pan-European database for high-risk systems, which would now require any AI system – rather than only standalone AI systems – to be registered. It also requires both public authorities and providers of high-risk systems to register those systems.

Critical issues still divide political groups, and will need to be tackled in December meetings before the Parliament reaches an agreement on its negotiating mandate. Watch this space!

II. In Other ‘AI & EU’ News 

  • The European Parliament appointed Pascal Arimont and Axel Voss – both from the EPP group and members of the Legal Affairs Committee (JURI) – as rapporteurs for the revision of the Product Liability Directive and the draft Directive on AI Liability, respectively.
  • On 5 December, the U.S. administration and the European Commission convened in Washington, D.C., in the context of the third U.S.-EU Trade and Technology Council (TTC). The long-awaited ministerial meeting was set to cover cooperation on new and emerging technologies, including AI regulation, and resulted in the publication of a joint roadmap on AI that prioritises risk management and tools for evaluating and measuring trustworthy AI. Although Brussels and Washington are eager to coordinate their actions, their approaches to regulating AI have often diverged. 
  • In a leaked October 2022 ‘non-paper’ regarding the revisions to the EU AI Act proposed by the Czech Presidency, the U.S. administration voiced concerns over several provisions of the proposed Regulation. Specifically, the U.S. pushed for a narrower definition of AI systems and a broader exemption from risk management obligations for general-purpose AI, citing burdensome compliance and technical costs for providers. Washington was also concerned that the list of systems considered high-risk under the AI Act is too broad in scope, and would like to see a more individualised risk assessment for high-risk systems based on four main criteria; under that approach, impacts on human rights would only be assessed in particular contexts. Finally, Washington advocated for wider exemptions from the prohibition on the use of biometric recognition technologies in cases where there is a ‘credible’ threat, such as a terrorist attack.

III. The Council of Europe’s CAI Process

Due to the strong overlap between the Committee on Artificial Intelligence (CAI) Draft Convention and the EU’s AI Act, the European Commission has made clear its intention to lead CAI negotiations on behalf of the 27 EU Member States. Because the Draft Convention must align with the AIA, the Commission’s decision has delayed the CAI’s progress, pushing its third plenary session from November 2022 to January 2023.

EU Member States have, however, expressed their ‘strong preference’ for the Commission not to intervene on national security aspects of the CAI Draft Convention that fall within their prerogative. The Czech Presidency will therefore need to adapt the mandate accordingly, and send the amended file to Coreper I in November in anticipation of the January negotiation meeting with the Council of Europe.

In line with the opinion of the European Data Protection Supervisor on the CAI negotiations process, civil society organisations called on the EU to avoid delaying the negotiation process. They reminded decision-makers that the purpose of the proposed Convention differs from that of the AIA, that the Council of Europe historically focuses on human rights, democracy, and the rule of law, and that the CAI convention and the AI Act are complementary in nature.

Other News on the Council of Europe

  • At its 43rd Session, on 25 October in Strasbourg, the Congress of Local and Regional Authorities of the Council of Europe adopted two reports: the first on the development of smart cities and regions, and the second on fighting hate speech and fake news.
  • The Council of Europe published a new study on artificial intelligence and education, which ‘presents an overview of [artificial intelligence and education] seen through the lens of the Council of Europe values of human rights, democracy and the rule of law’, and provides a ‘critical analysis of the academic evidence and the myths and hype’.

On 21 October 2022, the Council of Europe’s Digital Development Unit co-organised a meeting of Globalpolicy.AI, a cooperation platform involving multiple intergovernmental organisations with complementary mandates on AI. There, participants discussed their ongoing efforts to create risk and impact assessments for AI systems.

Content of the Month 📚📺🎧

CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.