CDT Europe’s AI Bulletin: February 2025

Policymakers in Europe are hard at work on all things artificial intelligence, and CDT Europe is here with our monthly Artificial Intelligence Bulletin to keep you updated. We cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. To receive the AI Bulletin, you can sign up here.

EU Mantra at the French AI Summit: Innovation and Deregulation

The third global summit on AI — and the first since the Artificial Intelligence Act entered into force — presented an ideal opportunity for the European Union to promote its hard-fought regulatory framework before a global audience. 

Disappointingly, the opposite happened. The European Commission’s statements at the Summit emphasised innovation and deregulation, as opposed to robust implementation and enforcement of the AI Act. Introducing a panel tackling the Code of Practice process, Commissioner Henna Virkkunen — responsible for tech policy — promised an “innovation-friendly” implementation of the AI Act. These remarks were consistent with European Commission President Ursula von der Leyen’s later remarks to the AI Summit, which emphasised innovation in making the case for European leadership in the global AI race. The AI Act was only mentioned in passing as von der Leyen enumerated the EU’s strengths enabling AI development, while promising to cut red tape for companies.

In that speech, von der Leyen also announced €200 billion for AI investment, including €20 billion for AI gigafactories that would provide the infrastructure for training large AI models. 

The French AI summit statements have been followed by further indications of the EU bloc’s shifting approach to AI. This week, von der Leyen announced plans to boost defence spending for targeted European capability areas, including military uses of AI. 

First AI Act Implementation Guidelines Published

Early February saw two separate AI Act implementation milestones: the publication of the guidelines outlining prohibited AI practices, and guidelines defining AI systems under the AI Act. 

The prohibited AI practices guidelines build on and further interpret the prohibitions set out in Article 5 of the AI Act. The guidelines provide several examples of practices likely banned under Article 5, as well as practices falling outside its scope. The guidelines overall apply a robust interpretation of the AI Act’s prohibitions, and clarify the interplay between the AI Act and existing legal frameworks such as the General Data Protection Regulation, the Law Enforcement Directive, the Unfair Commercial Practices Directive, and the Digital Services Act.

The guidelines defining AI systems clarify which types of AI systems come within the scope of the AI Act. The guidelines exclude four types of systems: systems for improving mathematical optimisation, basic data processing systems, systems based on classical heuristics, and simple prediction systems. Early reactions have criticised the AI systems guidelines for their lack of clarity, and raised questions as to the extent of these exclusions.

While neither set of guidelines is binding, they will likely steer the interpretation of the AI Act by regulators and courts.

AI Liability Directive To Be Withdrawn

On 11 February, hot on the heels of the French AI Summit, the European Commission announced in an annex to its 2025 work programme the intended withdrawal of its proposal for an AI Liability Directive (AILD), stating that there was no foreseeable agreement on the file. As we’ve explained, the proposal aimed to address the difficulties individuals face in making liability claims for AI-induced harms. Its withdrawal inevitably delays development of robust avenues enabling the right to an effective remedy. 

The largely unexpected withdrawal was announced just as European Parliament discussions on the file resumed, and after the file’s rapporteur launched a public consultation to collect multistakeholder input. Even more glaringly, Commissioner Michael McGrath defended the AI Liability Directive to MEPs the same day the withdrawal was announced, raising questions as to the level of internal communication on the merits and viability of the file.  

The withdrawal is not yet final: under the interinstitutional agreement for better law-making, co-legislators could still ask the Commission to revisit its decision or reissue a proposal. As a first step in this direction, lawmakers have invited Commissioner Virkkunen to explain the withdrawal before Parliament. Further, as the withdrawal notice stated, the Commission will also assess whether another proposal should be tabled or another type of approach should be chosen. 

Third Code of Practice Draft Delayed

The publication of the third draft of the Code of Practice, set to take place last week, was delayed following the AI Office’s announcement that it had approved the drafters’ request for additional time to ensure that the third draft — the final draft to be put to public consultation — reflected stakeholder groups’ comprehensive feedback. The AI Office has yet to announce a publication date for the third draft, but the CoP process’ official timeline suggests that it could be published as late as March. Despite this timeline shift, the deadline for finalising and publishing the Code of Practice remains 2 May 2025 — a deadline set by the AI Act itself.

Postponement of the draft’s publication comes as industry players such as Meta and Google expressed discontent with the draft Code of Practice, and civil society organisations threatened to withdraw from the process altogether. 

In Other ‘AI & EU’ News 

  • The AI Act’s impact on businesses will be assessed as part of the European Commission’s simplification agenda. Among other simplification proposals, the European Commission will undertake a broader assessment of the “digital acquis” — which includes several laws beyond the AI Act, including the General Data Protection Regulation — to establish whether it “adequately reflects the needs and constraints of businesses” such as small and medium enterprises. 
  • The European Parliament Research Service published a short brief on the interplay between the AI Act and the GDPR on the subject of algorithmic discrimination, noting the possible legal bases that could be leveraged under GDPR to enable the processing of sensitive data for the purposes of detecting and addressing discrimination — itself an objective under the AI Act.
  • The Italian data protection authority, the Garante, ordered DeepSeek to block its chatbot in the country two days after requesting information from the company. The relevant companies reportedly argued that they had no obligation to provide information to the Garante as they were not subject to its jurisdiction. The Garante cited the “entirely unsatisfactory” response from the companies in the order requiring DeepSeek to block the chatbot, noting that it would open a formal investigation. Several other data protection authorities are already investigating DeepSeek.
  • The French data protection authority, the CNIL, published new guidance on how the GDPR applies to AI models, focusing on the exercise of data subject rights.

Content of the Month 📚📺🎧

CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.