
CDT Europe’s AI Bulletin: December 2023

Also authored by CDT Europe’s Rachele Ceraulo.

Policymakers in Europe are hard at work on all things artificial intelligence, and we’re here to keep you updated. We’ll cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. If you’d like to receive CDT Europe’s AI Bulletin via email, you can sign up on CDT’s website.

The EU’s Proposed Artificial Intelligence Act: The Devil Is in the Details

Late in the evening of Friday 9 December, following a record-long trilogue negotiation, a political deal was struck on the EU AI Act. This edition of the AI Bulletin gives an initial analysis of what was agreed, with two caveats: first, the actual text of the AI Act has not yet been published; and second, further technical discussions may still significantly shape how the law is applied.

Rights advocates are united in their concern that some of the trade-offs in the negotiations have come at the expense of the Act’s initial goal of protecting human rights. The two issues that kept the negotiations running longer than expected were law enforcement exceptions and foundation models. We delve into both topics below.

Summary of What Seems to Have Been Agreed

The main body of the AI Act will maintain its risk-based approach. It includes four categories of risk, introduces fines, and has a dedicated section on general-purpose AI:

  • Minimal risk: Uses falling into this category — for example, spam filters — do not come with obligations, but companies can voluntarily commit to additional codes of conduct for such systems.
  • High risk: These systems are subject to requirements on risk mitigation, quality of the data sets used to train them, detailed documentation, and human oversight. Examples include systems used by employers to recruit, systems used in democratic processes, and some uses in the fields of law enforcement and border management. 
  • Unacceptable risk: Systems deemed a threat to human rights were to be subject to bans, though it appears that the final Act does not truly prohibit these uses, given its broad exemptions for national security and for forms of remote biometric surveillance, including facial recognition. Parliament negotiators also added databases built on bulk scraping of facial images to this category.
  • Specific transparency risk: Specific transparency requirements apply to systems such as chatbots, where obligations come into play to ensure that users know they are interacting with a machine. AI-generated content will also have to be labelled as such, despite warnings from rights advocates about the impact this can have on free expression, and the difficulty of determining such content’s origin.
  • General-purpose AI: The AI Act also has a section, introduced only in the final weeks of negotiations, on general-purpose AI. Models deemed ‘very powerful’ on the basis of computing power will be subject to further obligations regarding adversarial testing, risk mitigation, and transparency. These obligations will be operationalised through codes of practice developed by industry, with some consultation of other stakeholders via the European Commission.
  • Governance: National competent market surveillance authorities are tasked with supervising implementation of the Act at the national level. At the European level, a new European AI Office within the European Commission will supervise the implementation and enforcement of the new rules on general-purpose AI models.

Surveillance & Law Enforcement Use in the AI Act

Prohibitions and limitations on law enforcement use of AI in the context of surveillance were one area of intense debate during the final days of negotiations. The Council pushed back strongly against the European Parliament, which had voted to prohibit the use of live facial recognition (remote biometric surveillance) entirely. As CDT Europe’s Iverna McGowan pointed out ahead of the deal, the Council failed to recognise that, for remote scanning systems to work, they must scan everyone, including innocent people, posing a threat to fundamental rights. Although EU governments have repeatedly pushed the boundaries of facial recognition use, Europe’s national and regional privacy regulators have been consistent and clear: there is no legal basis in Europe for invasive scanning.

From the press conference immediately following the deal, it appears that a prohibition on remote biometric surveillance is still in place, but with exceptions that threaten to be so broad as to swallow the rule. For instance, an earlier leaked draft suggests that exceptions would include threats of terrorism and searches for missing people. Given how broadly some EU and national laws in these areas are drafted, this could, if not carefully worded, end up allowing continuous surveillance of European public spaces.

A serious-crime threshold was maintained for retrospective use of remote biometric surveillance; here again, precision will matter, as leaked drafts suggested that the list of ‘serious crimes’ was long and included categories that could reasonably be challenged as not serious enough to justify such use.

Parliamentarians also managed to introduce a ban on predictive policing, but here again the exceptions could swallow the rule, so we will need sight of the final text before passing judgement.

The prohibition on emotion recognition AI was unfortunately maintained only for the areas of work and education. This is particularly concerning, as there is no sound scientific evidence that this technology is effective, and its use has proven discriminatory. Furthermore, an earlier leaked draft text suggested that the technology could be used in the context of migration. A human rights-based approach necessarily means more protections for vulnerable groups, not fewer.

Fundamental Rights Impact Assessments 

High-risk systems, as defined by Article 6 of the AI Act, that are used by public bodies and by private entities providing essential public services, such as hospitals, schools, banks, and insurance companies, will have to undergo fundamental rights impact assessments (FRIAs). It is certainly welcome to see that FRIAs made it into the final text. As CDT and other rights advocates have consistently argued, FRIAs are needed to ensure mitigation of risk and protection of rights throughout the AI lifecycle and across different use cases, as it is not always obvious how a system might adversely affect rights.

It also appears that a controversial loophole in Article 6, which allows companies to self-assess whether their AI system is high-risk, has remained in the text. Should this loophole remain, a company could, ironically, avoid conducting an FRIA based on its own assessment that its AI system is not high-risk. A more robust approach would be to require that an FRIA be submitted as proof that a system is not high-risk.

General-Purpose AI Systems and Foundation Models

According to the negotiators on Friday night, the AI Act has maintained the so-called ‘tiered approach’ to foundation models, whereby ‘very powerful models that could pose systemic risks’ will face additional obligations such as adversarial testing and model evaluation. It seems that these risks will be defined on the basis of computing power and number of users, but this approach could be inadequate: it would not, for example, mitigate discrimination, an outcome possible even with less computing power and fewer users, and one that would still have significant societal effects.

Leaks of the text in relation to foundation models suggested a strong focus on ‘red-teaming’, though this term carries no legal meaning in EU law. Further, red-teaming is an incomplete remedy for identifying risks, as it is by nature exploratory and involves only internal company deliberations. It will be important to see a stronger focus on auditing in the final text: unlike red-teaming, auditing checks practices against a predefined set of standards, or validates that the results of a given evaluation are indeed true results.

While the AI Act won’t apply to open-source models that are made publicly available, those models will still have to comply with rules on copyright, and with the appropriate obligations should they fall into the category of ‘powerful models’ or ‘systemic risk models’.

EURACTIV has reported that, until technical standards are put in place, the codes of practice for high-risk models would complement the binding obligations. Should there be any undue delays with putting these technical standards and codes of practice in place, the Commission could still intervene through a delegated act. The codes of practice will be discussed by industry, civil society, and other stakeholders — however, it is not yet clear whether civil society would have a decisive say in the final codes. 

When Does the AI Act Take Effect?

The political agreement now has to be formally approved by the European Parliament and the Council. French President Macron has been publicly critical of the deal, raising some concerns over how smoothly Council approval might go. Furthermore, technical discussions that could significantly affect parts of the text have continued this week. Once formal agreement is reached, the law would enter into force 20 days after its publication and become applicable two years after entry into force. There are exceptions, however: prohibitions take effect six months after the law enters into force, and rules on general-purpose AI come into play after the first year. The European Commission has also indicated that it will launch an AI Pact to seek voluntary commitments on key elements of the Act ahead of its full implementation.

Content of the Month 📚📺🎧

CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.