AI Policy & Governance, European Policy
CDT Europe’s AI Bulletin: November 2024
Consultation on Applying the AI Act’s AI System Definition and Prohibited AI Systems
This month, the AI Office launched a combined multistakeholder consultation on how to apply the AI Act’s definition of an AI system and its list of prohibited AI practices. The consultation inputs are intended to inform forthcoming guidelines from the European Commission, which will be prepared before the relevant provisions of the Act enter into application on 2 February 2025.
Unlike previous European Commission consultations on guidelines for digital regulations – such as the consultation on guidelines for the integrity of electoral processes under the Digital Services Act – the current consultation does not include a draft text for stakeholders to comment on. Instead, respondents are asked to flag areas needing clarification, both in the existing definition of an AI system and in the listed prohibited practices, and to give examples of AI systems that would fall within the prohibitions.
The consultation is now open, and closes on 11 December.
The Draft General-Purpose AI Code of Practice
The AI Act’s final text established that methods for complying with the law would be identified and agreed through a multi-stakeholder consultation process. This month, the AI Office published the first output of that process: the draft General-Purpose AI (GPAI) Code of Practice, which lays out concrete measures that general-purpose AI model providers must take to comply with the AI Act. The publication was followed by closed working group discussions, each dedicated to a core thematic area of the Code. You can read more about the process in a CDT explainer.
The draft Code addresses several key compliance issues, including:
- Transparency: The measures in the draft Code focus on two dimensions of transparency: documentation to be provided by GPAI model providers to the AI Office, and documentation to be provided to downstream providers. While the measures do not currently require providers to make information available to the general public, the AI Office will prepare – as required by the AI Act – a template for providers to publish sufficiently detailed information about their training data. The content of the template, as indicated by the AI Office, will be informed by the Code of Practice discussions on this issue.
- Risk identification and assessment: One of the draft Code’s novel creations is a taxonomy of systemic risks. The proposed taxonomy – which differs from the systemic risks identified in the Digital Services Act – prominently features forward-looking risks, including loss of control of models and chemical, biological, radiological, and nuclear risks. Persuasion and manipulation, as well as large-scale illegal discrimination, are also listed as systemic risks; the draft raises questions about the impact of the former on the right to freedom of expression, and about the rationale for limiting instances of discrimination to legally protected categories. Large-scale privacy infringements and mass surveillance are mentioned in the draft but left off the list itself, while environmental risks and child sexual abuse material are absent altogether.
- Risk mitigation: The draft Code asks providers to map systemic risk indicators and detail safety mitigations in a Safety and Security Framework, as well as to create a Safety and Security Report for any GPAI model they develop with systemic risk. Importantly, those reports are to detail mitigations undertaken, the results of any effectiveness assessments, and cost-benefit analyses to justify the given model’s deployment.
- Internal governance: The draft Code requires providers’ executives to be responsible for oversight of systemic risks produced by GPAI models, and asks GPAI model providers to enable meaningful independent expert assessment of risks and mitigations throughout the lifecycle of a GPAI model as appropriate. The draft Code also asks providers to identify and report serious incidents, and implement whistleblowing channels.
Individuals and entities formally participating in the Code of Practice process are required to provide feedback by Thursday, 28 November.
In other AI Act developments:
- The consultation on rules for establishing and operating the scientific panel of independent experts, which we covered in our previous bulletin, has now closed. In their responses, civil society organisations highlighted the importance of clarifying and strengthening the independence requirements for selected experts, including by strictly requiring them to be financially independent from both AI providers and deployers, and called for provisions enabling the effective operation of the scientific panel.
- The appointment of fundamental rights authorities, which will have powers to request and access any documentation created pursuant to the AI Act, has been slow and inconsistent across member states, even though the deadline imposed by the AI Act has passed. Only seven countries have publicly announced their nominated authorities, among them Cyprus, Ireland, Greece, Lithuania, Malta, and Poland. Several of these countries (Cyprus, Ireland, Greece, and Malta) nominated their national data protection authority, an ombudsperson, and/or at least one national human rights body.
In Other ‘AI & EU’ News
- The European Data Protection Board (EDPB) hosted a stakeholder event on AI models on 5 November 2024 to gather inputs for a forthcoming EDPB opinion requested by the Data Protection Commissioner. The opinion, which is due by the end of the year, is set to address the extent to which personal data is processed at the various stages of an AI model’s training and operation, and consider the appropriate legal basis for data controllers to carry out that processing.
- The AI Office will hold a webinar on the architecture of the AI Act on 28 November, to be followed by another on 17 December; both will be open to all. Separately, the AI Office will host a specialist workshop on GPAI models and systemic risks on 13 December, accessible only to organisations and university-affiliated research groups with “demonstrated experience” in GPAI model evaluations. Those interested in attending must apply by 8 December.
Content of the Month 📚📺🎧
CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.
- European Disability Forum, A Disability Inclusive AI Act: A guide to monitor implementation in your country
- Lighthouse Reports, Sweden’s Suspicion Machine
- Tech Policy Press, The Real “Brussels Effect” and Responsible Global Use of AI
- The Digital Constitutionalist, The AI Act’s Right to Explanation: A Plea for an Integrated Remedy