AI Policy & Governance, European Policy
General-Purpose AI Models and the EU Code of Practice: a Process for Civil Society to Watch
In a world first, the EU Artificial Intelligence Act created a new framework for regulating AI technologies, including general-purpose AI (GPAI) models. Also referred to as foundation models, GPAI models are subject to unique treatment in the AI Act: while the law is now final, the text allows for acceptable compliance methods to be identified and agreed post-adoption through a multi-stakeholder consultative process.
This Code of Practice process, steered by the European Commission’s AI Office, brings together general-purpose AI model providers, downstream providers, academics, rightsholders, and civil society organisations including CDT Europe to produce a final Code by May 2025, ahead of the AI Act’s provisions on GPAI models becoming enforceable in August 2025.
Once finalised and approved by the European Commission, the Code of Practice will play a complementary role to the AI Act. While adherence to the Code of Practice itself will be optional, following the measures it sets out will allow GPAI model providers to demonstrate compliance with their obligations under the Act until a harmonised standard is published, which will likely take several years.
CDT Europe is honoured to have been selected to participate in the working groups that will be helping to develop the Code of Practice, and is eager to help ensure that it robustly protects fundamental rights.
General-Purpose AI Models in the AI Act
Provisions governing GPAI models were brought within the scope of the AI Act during the latter stages of negotiations, following the surge in popularity of large language models such as the one underpinning ChatGPT. GPAI models raise important questions, not least owing to their propensity to produce illegal or inaccurate (“hallucinated”) content; to use and reproduce copyrighted materials in whole or in part; to show bias, whether based on religion, race, or gender; and to regurgitate personal data, with privacy and data protection implications currently being explored by regulators.
As discussed in our initial AI Act explainer, the AI Act sets distinct rules for GPAI models using a two-tiered system, with default rules applicable to all GPAI models (with an exception for those released under a free and open-source license) and specific rules applicable to models presenting “systemic risks.” The default rules for GPAI models centre on documentation, whereas rules for GPAI models with systemic risks focus on requiring risk assessments, mitigations, and notifications to regulators. Under the Act, a model may be presumed to present systemic risks automatically on quantitative grounds, upon reaching a set level of training compute (the Act sets the initial threshold at 10^25 floating-point operations), or upon designation by the European Commission against mixed quantitative and qualitative criteria, where the model is considered to have a significant impact based on actual or likely effects on public health, safety, public security, and fundamental rights.
The Code of Practice Process: Ensuring GPAI Model Requirements in the AI Act Live Up to Their Potential
Codes of Practice are not a novel regulatory instrument: the European Commission previously steered the development and subsequent strengthening of the Code of Practice on Disinformation, in 2018 and 2022 respectively. Like its predecessor on disinformation, the Code of Practice on GPAI models will include specific objectives, commitments, and key performance indicators. However, it will be the first to enable entities regulated by EU law to comply with legislation by following the rules set in the Code, a role previously reserved to harmonised standards produced by European standardisation organisations.
According to the AI Act, the Code of Practice can cover any aspect of the obligations the Act sets for GPAI model providers. The process as announced will focus on four thematic areas, each covered by a dedicated working group: transparency and copyright, risk identification and assessment, risk mitigation, and internal governance. Once finalised, the resulting Code of Practice must be approved by the European Commission in order to be applicable.
Although the Code of Practice will not be the only avenue for GPAI model providers to achieve compliance with their obligations, the Code will have an important role in providing legal certainty to GPAI model providers that they are effectively complying with the requirements set in the Act. These rules become applicable in August 2025, though GPAI models placed on the market prior to that date have until August 2027 to come into compliance. Adhering to the Code of Practice will demonstrate compliance with the provisions in the AI Act governing GPAI models for several years, until a harmonised standard is published.
A Role for Civil Society
The AI Act explicitly invites the involvement of multiple stakeholders in the Code of Practice process, calling on the AI Office to take into account the needs and interests of all interested parties, including affected persons. Direct civil society participation, however, will only be possible for those organisations and individuals formally approved as participants following an open call for expressions of interest.
Active civil society involvement in the process will be crucial to ensure that the rules governing GPAI models in the AI Act are robustly observed, and to develop proposals that promote high levels of transparency, thorough risk mapping, and robust mitigations and safeguards. Setting a high bar for GPAI models, in line with the rules set by the AI Act, is a necessary prerequisite both for individuals to be able to enforce their rights and for GPAI models to be held accountable in a manner serving the public interest, not least because the resulting Code of Practice will likely influence any subsequent harmonised standards.
CDT Europe will be officially participating in the development of the Code of Practice and is committed to following the process closely, together with other civil society organisations, with a view to ensuring that voices advocating for fundamental rights and the public interest are heard.