AI Policy & Governance

Third Draft of the General-Purpose AI Code of Practice Misses the Mark on Fundamental Rights

The third draft of the General-Purpose AI Code of Practice was published last week. 

As we expressed in our recent statement, CDT Europe is disappointed by the changes made to the systemic risk taxonomy in the third draft of the Code of Practice.

Under the AI Act, two sets of obligations apply: one to general-purpose AI (GPAI) models generally, and another to those that pose systemic risks. A model is understood to pose a systemic risk either on a case-by-case basis, by reference to predetermined criteria, or by presumption where it exceeds a specified training-compute threshold. If a GPAI model poses systemic risks on either basis, additional risk assessment and mitigation obligations apply to the model's provider. However, the Act does not specify which risks providers should assess and mitigate – instead, it leaves this important task to the Code of Practice, which sets out to define those risks in its systemic risk taxonomy.

Including fundamental rights risks in the Code of Practice's systemic risk taxonomy is crucial to compel GPAI model providers to assess and mitigate the risks their models may pose to fundamental rights. Since its first draft, the Code has taken a two-tiered approach to systemic risks, maintaining a list of "selected systemic risks" – which providers must assess – in Appendix 1.1, and a list of optional risks "for potential consideration" in Appendix 1.2. Most fundamental rights risks appear on the optional list under Appendix 1.2, with the third draft newly adding the risk of illegal, large-scale discrimination to that list.

This analysis considers the implications of the systemic risk taxonomy as currently scoped, and addresses some of the arguments the Code's text raises to justify the latest version of the taxonomy.

Read CDT Europe’s full analysis.