CDT Europe’s AI Bulletin: October 2023

Also authored by CDT Europe’s Rachele Ceraulo.

Policymakers in Europe are hard at work on all things artificial intelligence, and we’re here to keep you updated. We’ll cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. If you’d like to receive CDT Europe’s AI Bulletin via email, you can sign up on CDT’s website.

I. The Latest on the EU’s Proposed Artificial Intelligence Act

After a month-long summer recess, EU negotiators resumed work on the AI Act. Technical meetings occurred in quick succession in the run-up to the most recent trilogue negotiation — between the European Parliament, Council, and Commission — which took place on 2-3 October in Strasbourg. A fourth trilogue has also been officially scheduled for 25 October. 

In addition to several provisions discussed before the summer break, negotiators are seeking consensus on a number of issues. These include confidentiality requirements for national competent authorities charged with enforcing the AI Act, penalties for non-compliance with the AI Act, and requirements for high-risk AI systems. Member states are also considering whether to accept some points already discussed with the Parliament, including measures in support of innovation and the EU database for high-risk AI systems.

Other issues still under intense negotiation include the criteria for classifying AI systems as high-risk, whether to mandate that AI deployers perform fundamental rights impact assessments, and whether the Regulation should cover general-purpose AI systems and foundation models.

In this month’s AI Bulletin, we explore some of these sticking points, laying out the diverging positions of EU lawmakers and civil society recommendations on how to bridge the gaps through a human-rights lens. 

Human Rights Protections at Risk Due to Changes in High-Risk Classifications

On 22 September, a leaked European Commission proposal was circulated that presented a compromise on how to classify AI systems listed under Annex III of the Regulation as high-risk. The proposal departs from the original Commission proposal, which automatically categorised any system falling under Annex III as high-risk. The compromise text introduces three criteria that would exempt certain AI systems used in high-risk contexts from the obligations of the governance regime for high-risk AI systems.

The leaked proposal also adds a problematic self-assessment procedure to the AI Act’s proposed Article 6, which would incentivise companies to assess their systems as non-high-risk to avoid complying with the high-risk regime, and effectively put the burden on regulatory authorities to try to establish whether a company’s assessment was accurate. CDT Europe argues in our latest blog post that, in order to safeguard the core integrity of the AI Act’s human rights protections, these Article 6 self-assessment provisions must be dropped, and the criteria for categorising an AI system as high-risk should remain as initially envisaged in the European Commission’s text. 

Fundamental Rights Impact Assessments (FRIAs)

Unlike self-assessment procedures, mandatory fundamental rights impact assessments (FRIAs) help ensure a robust rights-protective approach. As CDT previously highlighted, a risk-based approach can be helpful in ensuring proportionate regulation, but must also integrate a rights-based approach in order to appropriately protect human rights.

While the European Parliament introduced an obligation to perform FRIAs, that provision is currently missing from the Council’s General Approach. In June, however, the Spanish Presidency of the Council of the EU sought clarification on how willing EU member states would be to accept a provision on FRIAs. Member states have shown no signs of acceptance, instead questioning the added value of FRIAs for AI deployers. Their arguments against FRIAs were that the AI Act already places risk management obligations on AI providers; that FRIAs would duplicate the GDPR’s data protection impact assessment requirements; and that FRIAs would increase administrative costs for the AI industry across the EU.

Discussions on FRIAs are likely to be next on the agenda for negotiations, and will remain a key area of divergence between the Council and Parliament in the coming weeks. While the Parliament sees this issue as a key battle, reports indicate that a blocking minority in the Council – spearheaded by France and Germany – is shaping up to reject any compromise that would require public and private-sector deployers of high-risk systems to assess how their AI systems affect fundamental rights. 

This divergence of perspectives on FRIAs has been particularly high on the agenda for civil society organisations and academics, who have stressed that FRIAs are essential if the Regulation is to respect human rights. For example, over 60 organisations including CDT Europe, coordinated by the European Center for Not-for-Profit Law (ECNL), urged negotiators to require both public and private-sector deployers of high-risk AI systems to conduct an FRIA ahead of deployment. Similarly, more than 150 academics called on the European institutions to introduce in the final text of the Regulation a mandatory FRIA for both public and private entities deploying AI systems.

While the AI Act already has some provisions that would ensure that providers respect human rights, entities that purchase and use these systems are currently exempt from complying with such obligations. Civil society organisations have stressed that responsibility towards users should not end when the provider sells an AI system; rather, it should encompass everyone impacted. 

General Purpose AI & Foundation Models

Since the release of ChatGPT, and the renewed media hype around these technologies, “general-purpose AI systems” (GPAIs) remain a hot topic of debate in the negotiations on the EU AI Act. Currently, however, whether to include them in the final text of the Regulation remains unresolved.

Because GPAIs do not have a predetermined purpose or use, they would not automatically qualify as high-risk according to the European Commission’s purpose-based criteria for a high-risk AI system – even if applied in high-risk situations. This means that GPAI providers would be under no obligation to comply with the mandatory requirements for high-risk AI systems, including documentation and data governance requirements, prior to market rollout. Both the Council and the Parliament sought to close this gap in their respective positions, primarily by addressing power imbalances and the distribution of responsibilities among actors in the AI value chain. Despite this common goal, the two institutions adopted significantly different approaches to regulating these technologies.

The Council’s General Approach proposed a specific regime for regulating general-purpose AI systems used as high-risk AI systems, or as components of high-risk AI systems, by applying certain requirements for high-risk AI to GPAI providers. How these requirements would be tailored to GPAIs is left to the European Commission’s discretion. While the Council’s General Approach includes dedicated obligations for GPAI providers, it introduces an exception that would allow providers to exempt themselves from responsibility by excluding all high-risk uses in their instructions for use through a standard legal disclaimer. Civil society organisations and international AI experts have warned that this approach not only disincentivises due diligence at the original developer stage, but also shifts liability onto entities that build user-facing applications on top of pre-existing GPAI systems, even though they lack sufficient resources and capability to assess and mitigate risks stemming from design and development.

Meanwhile, the Parliament has focused on foundation models – AI models trained on large datasets and used to accomplish a multitude of later-to-be-defined tasks – irrespective of their use. The Parliament’s regime governing foundation models is not dissimilar to the one for high-risk AI systems, in that it requires providers of these models to comply with a series of obligations. Those obligations include conducting tests to identify and mitigate risks to health, safety, and human rights, and providing downstream providers with technical documentation and clear instructions for use, in order to empower those downstream providers to comply with the AI Act.

The idea of regulating foundation models, as proposed by the Parliament, is facing strong opposition from a coalition of member states. This coalition argues that such an approach would depart from the original scope of the AI Act, which seeks to regulate AI systems and not the models that form the basis for those systems. Adamant about defending the logic of the Council’s General Approach, the coalition supports the adoption of a clear definition of general-purpose AI, while leaving the Commission some room for manoeuvre in defining requirements for GPAI providers. Arguing that over-regulating GPAIs could stifle innovation, they further recommend giving the market some latitude over the distribution of responsibilities across the AI value chain. It is clear that a more robust approach, combining legislation, company practices, and continuous auditing, will be needed to ensure that the regulation of general-purpose AI protects people’s rights in practice.

II. In Other ‘AI & EU’ News

  • Tomorrow, CDT Europe – in partnership with the Open Government Partnership, and under the auspices of the Spanish EU Presidency – will hold a civil society roundtable dialogue on the EU’s AI Act. The event will be held under the Chatham House Rule, and will bring together EU member state negotiators and key civil society organisations to discuss the text. While this event is invite-only, if you are from an EU member state and have not already received an invitation, please get in touch with us at [email protected].
  • On 13 September, President of the European Commission Ursula von der Leyen gave the annual State of the European Union address at the European Parliament in Strasbourg. In this last year of the current European Commission college (2019-2024), President von der Leyen outlined her ambition for the EU to “lead the way on a new global framework for AI”, which is to be built upon three main pillars: guardrails that encourage responsible and human-centric AI development; global AI governance; and guiding innovation. In the speech, the Commission President urged EU lawmakers to conclude negotiations on the AI Act, so that it may become a blueprint for AI regulation globally. Maintaining the global perspective, President von der Leyen shared plans to work with global partners on developing an approach to understanding the social impacts of AI, including the creation of an “IPCC for AI”, modelled on the Intergovernmental Panel on Climate Change, before concluding with the announcement of a new initiative to open up high-performance computers across the EU for AI start-ups to train their models.
  • In September, UNESCO launched the first annual Digital Learning Week, and to mark the occasion, released its guidance for generative AI in education and research. The guidance proposes seven key steps for governments to take in order to regulate generative AI, and encourages the establishment of policy frameworks for its ethical use specifically in education and research. The guidelines recommend a human-agency-centred approach – maintaining a “human element” in oversight mechanisms – and an age-appropriate approach to the ethical validation and pedagogical design processes. They also place emphasis on the need to adopt global, regional, or national data protection and privacy standards.
  • Five months after OpenAI first clashed with the Italian Data Protection Authority over data protection concerns, the company found itself once again under intense media scrutiny. A series of French media outlets, including France24.com, Radio France, and TF1, blocked OpenAI’s web crawler, GPTBot, after the company announced that the tool would automatically collect publicly accessible online data to improve the accuracy of its AI models. They followed in the footsteps of many English-language media groups, like the New York Times and the Guardian, which disabled GPTBot in an effort to protect their content and avoid copyright infringement.