
CDT Europe’s AI Bulletin: November 2023

Also authored by CDT Europe’s Rachele Ceraulo.

Policymakers in Europe are hard at work on all things artificial intelligence, and we’re here to keep you updated. We’ll cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. If you’d like to receive CDT Europe’s AI Bulletin via email, you can sign up on CDT’s website.

I. The Latest on the EU’s Proposed Artificial Intelligence Act

On 2-3 and 25 October, the European Parliament, Council, and Commission held two rounds of “trilogue” negotiations on the AI Act, making headway on several provisions of the legislation. In particular, they finalised the bill’s mandatory requirements for high-risk AI systems and its penalties for non-compliance, both of which had already been agreed at the technical level. However, the remaining sticking points — including rules for foundation models and general-purpose AI systems, and full or partial exemptions for AI systems used in law enforcement or national security contexts — could scupper the lawmakers’ plan to strike a deal by 6 December.

In this month’s CDT AI Bulletin, we take a deep dive into the current status of trilogue negotiations, analysing agreed-upon and still-pending issues and exploring how developments at the technical level throughout October and November might affect the negotiations ahead.

High-Risk AI Systems

At the fourth trilogue on 25 October, EU policymakers reached an agreement on the high-risk classification mechanism for AI systems listed under Annex III of the Regulation. The compromise largely maintains and builds upon the text proposed by the European Commission, and would allow providers of AI systems used in high-risk contexts to bypass the AI Act’s obligations for high-risk AI systems if their systems fall within one of four “filter” conditions. Negotiators agreed that it would be up to the European Commission to update those filter conditions through delegated acts, and to outline the “use cases” that would be exempted from the high-risk regime.

The tentative political agreement to move ahead with this new “filter” system stands in stark contrast to a damning legal opinion from the Parliament’s Legal Service, which concluded that the newly proposed regime would be at odds with the Regulation’s aim of enhancing legal certainty and would run counter to the principles of equal treatment and proportionality. The legal opinion echoes CDT Europe’s concerns: we called on the EU institutions to reject exemptions from requirements for high-risk AI systems and to stick with the risk-based approach envisaged in the original AI Act.

The Council’s proposal to test high-risk AI systems in real-world conditions, with few safeguards and outside of AI regulatory sandboxes — supervised pre-market environments that allow providers to test and experiment on AI systems — was also on the agenda. Although the Spanish Presidency pressed Member States to accept additional safeguards, in hopes of reaching a compromise with the European Parliament, none was reached.

Real-time Remote Biometric Identification (RBI)

Negotiators failed to agree on whether to exempt certain high-risk use cases — specifically, cases related to biometrics and their use by law enforcement agencies — from the AI Act’s requirements, indicating that the issue remains highly controversial. Nonetheless, the European Parliament seems prepared to backpedal on its full ban on real-time remote biometric identification (RBI): exceptions to the ban, initially proposed by the European Commission, found their way into a compromise position put forth by co-rapporteurs Dragoș Tudorache and Brando Benifei.

These exceptions cover the use of high-risk AI systems in criminal investigations for offences — including terrorism, trafficking in human beings, drugs and weapons, exploitation of children, murder, and rape — punishable by a maximum sentence of at least five years. Whilst such limits on exceptions to the RBI ban could in theory be helpful, in practice it would be impossible to confine the use of RBI to crime suspects: the technology by its nature scans everyone within a camera’s view, not only the people being sought.

Other proposed safeguards include a requirement for law enforcement agencies to obtain judicial authorisation before deploying a high-risk AI system, though in exceptional circumstances authorisation could be sought up to 48 hours after deployment. Real-time use of an RBI system would also be allowed only if the system is registered in the public EU-wide database of high-risk AI systems and has undergone a fundamental rights impact assessment. The Parliament is rumoured to be hoping that, if it drops the complete ban on real-time RBI, other negotiators will accept prohibitions on other practices, including AI systems that scrape social media or CCTV content to feed facial recognition databases, biometric categorisation on the basis of sensitive data, and the use of emotion recognition systems in work and educational settings.

In step with the Parliament, the Spanish Presidency of the Council sought to gauge whether Member States would accept narrower exceptions to the ban on real-time remote biometric identification – an apparent step back from the Council’s decision to expand the cases in which real-time RBI technologies could be used by law enforcement authorities. Specifically, the Spanish Presidency proposed restricting use of real-time RBI to searching for victims of abduction, investigating trafficking and sexual exploitation of women and children, preventing terrorist attacks, responding to imminent threats to the life and physical safety of individuals, and protecting critical infrastructure. It further proposed allowing use of such systems to prosecute 16 of the most serious crimes under the European Arrest Warrant. 

These attempts to narrow exceptions to the AI Act and introduce additional safeguards – including subjecting uses to judicial review and transparency obligations – appear to be steps in the right direction. But, as CDT and our civil society partners have previously argued, remote biometric identification raises significant human rights concerns, particularly around the rights to freedom of expression and peaceful assembly. Such systems demand the collection, analysis, and review of biometric information from very large numbers of people who are not suspected of any crime, violating individual anonymity in public spaces and thereby exerting a significant chilling effect on decisions to participate in public gatherings. The Council’s latest proposals to ‘narrow’ the scope of allowed uses of RBI do not address this fundamental problem: any use of the technology amounts to an intrusive form of mass surveillance, irrespective of the crime for which it is deployed.

General-Purpose AI Systems and Foundation Models

At the same trilogue meeting, negotiators also laid the groundwork for a new approach to governing foundation models and general-purpose AI systems (GPAIs) – a persistently hot topic in negotiations. The Spanish Presidency made the case for keeping both foundation models and GPAIs firmly within the remit of the AI Act. A new idea, however, was floated: introducing the concept of “high impact foundation models”, meaning foundation models whose capabilities go beyond the current state of the art and may not yet be fully understood, and which should be subject to additional transparency obligations. Negotiators proposed developing several criteria for deeming a foundation model “high impact”, including the computing power used in training and the amount of training data; a rough sketch of how such a tiered test might operate appears below. This late addition highlights that negotiations on the Regulation remain highly technical, and that future-proofing the AI Act is a challenge for negotiators.
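
To make the tiered logic concrete, here is a minimal sketch of how such a “high impact” test might operate. It is purely illustrative: the field names and thresholds below are invented for the example, as the actual criteria were still undecided at the time of writing.

    from dataclasses import dataclass

    @dataclass
    class FoundationModel:
        """Hypothetical descriptor for a foundation model under assessment."""
        name: str
        training_compute_flops: float  # total compute used in training
        training_tokens: int           # amount of training data

    # Placeholder thresholds only -- the AI Act's criteria had not been
    # agreed at the time of writing.
    HIGH_IMPACT_COMPUTE_FLOPS = 1e25
    HIGH_IMPACT_TRAINING_TOKENS = 1_000_000_000_000

    def is_high_impact(model: FoundationModel) -> bool:
        """Treat a model as 'high impact' if it exceeds either threshold."""
        return (model.training_compute_flops >= HIGH_IMPACT_COMPUTE_FLOPS
                or model.training_tokens >= HIGH_IMPACT_TRAINING_TOKENS)

    model = FoundationModel("example-model", 3e25, 2_000_000_000_000)
    if is_high_impact(model):
        print(f"{model.name}: subject to additional transparency obligations")

Under a scheme like this, a model crossing either criterion would face the extra obligations, while smaller models would remain under the Act’s general regime.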

On the topic of governance, there appears to be a growing consensus that enforcement and supervision of new rules for “powerful models” should be centralised at the EU level. To this end, the Spanish Presidency revamped the Parliament’s idea of an AI Office, which would be in charge of — among other things — carrying out compliance controls, defining auditing procedures, monitoring risks, and collecting citizens’ complaints about these systems.

France, Germany, and Italy, however, are now expressing concerns over the Spanish Presidency’s proposed two-tiered approach to regulating foundation models. The three countries oppose statutory regulation of foundation models, arguing that it is incompatible with the AI Act’s technology-neutral and risk-based approach. Instead, they propose mandatory self-regulation through codes of conduct, under which developers would have to create “model cards” – technical documentation describing the performance and characteristics of a trained model – and publish one for each of their foundation models, thereby encouraging greater transparency.
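
As a rough illustration of what such documentation typically contains, the sketch below encodes a minimal model card as a plain Python dictionary. The fields are assumptions drawn from common industry practice for model cards, not from the three countries’ proposal, which does not specify the required content.

    import json

    # A minimal, hypothetical model card. Field names follow common
    # practice; the codes of conduct under discussion do not prescribe
    # a particular format.
    model_card = {
        "model_name": "example-foundation-model",
        "version": "1.0",
        "developer": "Example Lab",
        "intended_use": "General-purpose text generation",
        "out_of_scope_uses": ["Automated legal or medical decisions"],
        "training_data_summary": "Publicly available web text",
        "evaluation": {
            "benchmarks": {"example-benchmark": 0.78},
            "known_limitations": ["Factual errors", "Training-data bias"],
        },
    }

    # Publishing a card like this for every foundation model is what
    # would create the transparency the proposal aims for.
    print(json.dumps(model_card, indent=2))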

Paris, Berlin, and Rome are also proposing an AI governance body, which would help develop guidelines and monitor the application of model cards. The trio also proposes a deferred sanction regime, under which penalties would apply only after systematic infringements of the codes of conduct and a “proper” analysis and impact assessment of the identified failures. This approach to foundation models was at the centre of discussion at a 21 November meeting of the Telecom Working Party.

Nine more technical meetings were scheduled to nail down the most intricate and significant aspects of the Regulation, in preparation for the fifth trilogue on 6 December, which was expected to be the last. But with discussions at a standstill on multiple contentious issues, including the Council’s proposed exceptions for AI systems developed or used for national security purposes, derogations for law enforcement use of high-risk AI systems, and the recent breakdown in negotiations on foundation models, the prospect of an agreement in December seems unrealistic.

II. In Other ‘AI & EU’ News

  • On 13 October, G7 members published the draft International Guiding Principles for Organizations Developing Advanced AI Systems, developed under the Hiroshima Artificial Intelligence Process with the aim of establishing guardrails for AI systems and promoting the safety and trustworthiness of the technology globally. The draft builds on the existing OECD AI Principles and sets out 11 guiding principles covering, among other things, safety measures, risk assessment policies, and security controls. As an immediate follow-up to the publication, the European Commission launched a stakeholder survey on the draft principles that remained open for only seven days. As a next step, G7 leaders will seek to endorse a final version of the guiding principles and a voluntary international Code of Conduct for AI developers by the end of the year.
  • A new study by AlgorithmWatch and AI Forensics, published in October, analysed the potential impact of large language model-based tools such as Bing Chat on elections. In the joint research project, technologists extensively examined the quality of Bing Chat’s answers to questions about recent elections in the German states of Bavaria and Hesse and in Switzerland. Finding that the tool answered even the most basic questions incorrectly or misleadingly, the researchers concluded that Bing Chat’s search feature would be woefully unreliable as a tool for seeking relevant information during an election, and may therefore undermine the integrity of elections, and thus a cornerstone of democracy.
  • On 30 October, the Biden Administration published the long-awaited Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Order aims to ensure the development of responsible AI practices by both the private and public sectors, and calls for actions by several agencies to prevent uses of AI that subject people to adverse decisions and treatment. Commenting on the Executive Order, CDT President and CEO Alexandra Reeve Givens stated, “It’s notable to see the Administration focus on both the emergent risks of sophisticated foundation models and the many ways in which AI systems are already impacting people’s rights.” Following the Executive Order, the U.S. Office of Management and Budget released guidance to govern federal agencies’ use of AI.

III. Content of the Month 📚📺🎧

CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.