AI Policy & Governance, European Policy
CDT Europe’s AI Bulletin: March 2025
Policymakers in Europe are hard at work on all things artificial intelligence, and CDT Europe is here with our monthly Artificial Intelligence Bulletin to keep you updated. We cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. To receive the AI Bulletin, you can sign up here.
Third GPAI Code of Practice Draft Excludes Discrimination
The third version of the General-Purpose AI (GPAI) Code of Practice – and the last to be put to multistakeholder consultation – was published on 11 March, alongside a FAQ page on the Code of Practice. The draft is now split into four parts dealing with commitments, transparency, copyright, and safety and security, respectively. The latter section addresses obligations related to risk assessment and mitigation, and has undergone significant changes that weaken the draft’s fundamental rights protections.
As we covered in our initial reaction to the draft, the list of risks that are mandatory to assess — also known as the “selected” systemic risks taxonomy — now excludes discrimination and is largely focussed on existential risks. Discrimination was moved to the list of risks that are optional to assess, joining other risks to fundamental rights such as privacy harms and the increased spread of child sexual abuse material or non-consensual intimate imagery. The draft instructs GPAI model providers to assess these risks only when they are specific to models’ high-impact capabilities.
As we addressed in fuller comments on the draft, the explanations given – that fundamental rights risks don’t arise from high-impact capabilities, and that the EU digital rulebook better accounts for these risks – do not stand up to scrutiny and fail to justify the changes. A wide range of organisations have reacted critically to the changes in the systemic risk taxonomy, while also acknowledging some positives: the draft’s provisions concerning external assessment were strengthened, and it now requires model providers to give greater consideration to whether risks are acceptable.
This draft Code of Practice will undergo one final round of review before the final version is presented and published by 2 May. The AI Office and the AI Board will subsequently review the draft and publish their assessment, but the decision to go forward with the Code ultimately rests with the European Commission. The Commission can choose either to approve the Code through an implementing act or – if the Code is not finalised or is deemed inadequate – to provide common rules for how GPAI model providers should meet their obligations by 2 August, the same date those obligations become applicable. Independently of this process, the European Commission can request standardisation of the rules for GPAI models. Once those standards are finalised, covered providers of GPAI models will be presumed to comply with their obligations under the AI Act.
Spain Takes a Robust Approach to Prohibited AI Practices
The Spanish government approved a bill implementing the AI Act at the national level, marking the first step towards its formal adoption. Notably, the bill sets out narrow conditions under which remote biometric identification (RBI) may be lawfully used for law enforcement purposes. The practice is in principle prohibited by the AI Act, but technically allowed for three law enforcement purposes – searching for missing persons and victims of specified crimes, preventing threats or terrorist attacks, and identifying suspects of specified criminal offences. It can only be lawfully carried out in a member state where it is explicitly authorised by implementing national legislation, which can be stricter – but not broader – than the terms set by the Act. The Spanish bill as written authorises RBI use for only one of the three purposes the AI Act lists, namely to locate and identify individuals suspected of committing criminal offences of a given degree of seriousness, as specified in Annex II of the Act.
The bill builds on the AI Act by classifying infringements into three categories: minor, severe, and very severe. Any use of an AI practice the law prohibits, including RBI use outside of the draft law’s sole exception, is deemed very severe. Failure to notify users when they directly interact with an AI system, or to label AI-generated content in line with the AI Act’s requirements, will constitute a “severe” infringement.
Italian Draft Law Aspires to Set Limits on AI Uses in Critical Sectors
A draft law put forward by the Italian government and approved by the Senate sets general conditions for the use of AI, delineating and limiting its uses in critical sectors.
The law specifies that minors under 14 years old may only access AI systems with parental consent. It identifies key areas that stand to benefit from AI – such as the healthcare sector and the working environment – and also emphasises key safeguards, such as creating an AI Observatory within the Ministry of Labour, and limiting uses of AI systems in the judicial sector to administrative purposes, specifically excluding legal research.
Further, the law amends aspects of Italian criminal law to cover the use of AI in committing criminal offences. The law notably introduces a new criminal offence for the dissemination of AI-generated content – ostensibly including deepfakes – without a person’s consent where it results in unjust damage, punishable by imprisonment of one to five years.
The law remains in draft form and will still need to be approved by the Italian Chamber of Deputies.
AI Identified as a Key Priority in Europe’s Defence Strategy
A joint white paper released last week by the European Commission and the High Representative for Foreign Affairs and Security Policy on AI in European Defence identified AI as a key area of priority defence capability, noting that new ecosystems and value chains for cutting-edge technologies such as AI “can feed into civilian and military applications”. The paper highlights AI-powered robots as a concrete area of opportunity.
The white paper announces a strategic dialogue with the defence industry to identify regulatory hurdles and address challenges ahead of presenting a dedicated Defence Omnibus Simplification proposal by June 2025. This new simplification proposal adds to the five recently announced simplification initiatives — reviews of legislation from the digital, agricultural, and other domains — outlined in the Commission’s communication on simplification.
The paper further announces a forthcoming European Armament Technological Roadmap to be published this year — “leveraging investment into dual use advanced technological capabilities at EU, national and private level” — that will focus on AI and quantum in an initial phase.
In Other ‘AI & EU’ News
- Digital rights NGO noyb filed a second complaint against OpenAI after a Norwegian user queried ChatGPT for information related to his name, and the chatbot inaccurately responded that an individual by that name was a convicted murderer. The complaint, filed with the Norwegian data protection authority Datatilsynet, argues that OpenAI violates the GDPR’s data accuracy principle by allowing ChatGPT to produce defamatory outputs about users.
- A proposed amendment to the Hungarian Child Protection Act seeks to allow using facial recognition to identify Pride protest attendees, and to ban Pride events. The proposal would likely be precluded by the AI Act’s prohibition on conducting remote biometric identification for law enforcement purposes in publicly accessible spaces, which became applicable in February this year.
- The European Commission is building a network of model evaluators to define how general-purpose AI models with systemic risk should be evaluated in accordance with the legal requirements of the AI Act and the GPAI code of practice.
Content of the Month 📚📺🎧
CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.
- CSIS, The Future of Transatlantic Digital Collaboration with EU Commissioner Michael McGrath (transcript of event)
- CIVIO, Spanish prisons use a 30-year-old algorithm to decide on temporary releases
- EUACT, AI Project: Call for Experts
- The Atlantic, The Unbelievable Scale of AI’s Pirated-Books Problem
- Tech Policy Press, The EU AI Policy Pivot: Adaptation or Capitulation?