
CDT Europe’s Second Contribution to the Code of Practice Process on GPAI Models 

The Centre for Democracy and Technology Europe has provided feedback on the AI Office's second draft of the Code of Practice for General-Purpose AI (GPAI) Models. This round of feedback, submitted by CDT Europe through a closed survey as a participant in the Code of Practice process, follows our first set of comments and responds to the second of at least three drafts to be produced in the coming months. The final version of the Code is expected to be announced in May 2025.

In this round of feedback, we focused our comments on the systemic risk taxonomy set out in the draft Code. We stressed the following points:

  • The addition of several new considerations underlying the identification of systemic risks can easily cause confusion, not least because they depart from the systemic risk definition set out in the AI Act and may be read as an exhaustive list of considerations. The draft should state that the listed elements are merely indicative, and that a risk can be considered systemic within the meaning of the AI Act for reasons not listed, even if it does not satisfy the listed considerations.
  • The scoping of the risk of “large-scale” and “illegal” discrimination is unduly narrow. The notion of “large-scale” discrimination is at odds with the rationale underlying anti-discrimination law, which seeks to protect minority groups, and the focus on “illegal” discrimination fails to capture the full breadth of characteristics that give rise to discrimination in practice.
  • The “large-scale, harmful manipulation” systemic risk remains broadly scoped and raises significant freedom of expression concerns. For instance, the example given of “coordinated and sophisticated manipulation campaigns leading to harmful distortions” could be interpreted expansively and legitimise censorship. Every political or advertising campaign is an attempt to persuade; yet the risk as drafted gives developers wide latitude to decide what constitutes manipulation or a harmful distortion, enabling undue restrictions on the right to freedom of expression.
  • Privacy and data protection risks should be included in the mandatory “selected systemic risks” category, rather than relegated to the optional “additional risks” category. Privacy and data protection risks feature in the risk taxonomies of multiple global AI governance instruments, and their regulatory relevance in the AI model context was underscored in the European Data Protection Board’s opinion on AI models.