AI Policy & Governance, European Policy
CDT Europe’s AI Bulletin: September 2024
The AI Act’s First Few Implementation Steps
Less than two months after the AI Act entered into force, oversight bodies have started convening around its implementation. The AI Board held its first official meeting, where members discussed the Commission’s first deliverables relating to the Act’s implementation and exchanged views on best practices for national approaches to AI governance, among other topics. The AI Board remains the only oversight body created by the AI Act that is currently active; no public information is available about the development of the multistakeholder Advisory Forum or the implementing act that would set up the Scientific Panel.
Despite these key oversight actors’ absence, implementation of the AI Act keeps moving forward, with co-regulatory efforts — conducted under institutional oversight but largely reliant on the engagement of regulated entities — taking centre stage. Last week, the consultation on the Codes of Practice for General-Purpose AI (GPAI) models closed; the Codes will address information and documentation requirements as well as measures for risk assessment and mitigation. The consultation responses will serve as a starting point for separate co-regulatory codes specifying voluntary obligations for GPAI providers, and will inform the kick-off plenary for the Codes of Practice process on 30 September. While all stakeholders approved by the AI Office — including those from civil society — will participate in the process, GPAI model providers are anticipated to dominate.
In parallel, several hundred companies have reportedly expressed interest in joining the AI Pact — an information exchange and knowledge-sharing network with voluntary pledges, designed to foster early compliance with the AI Act — ahead of a signatory event taking place on 25 September.
A Pro-Industry EU Agenda on AI for the Next Mandate
Earlier this month, a report by former European Central Bank president Mario Draghi cast the AI Act and the General Data Protection Regulation (GDPR) as obstacles to European competitiveness, citing their perceived “complexity and risks of overlaps and inconsistencies”. The report drew significantly on input from private companies, and was apparently written without input from digital rights organisations, with the exception of one consumer rights organisation. In a joint open letter, some of the report’s private-sector contributors endorsed it, decrying the “huge uncertainty” created by European data protection authorities and calling for a modern interpretation of the GDPR.
Last week, when Henna Virkkunen was nominated as leader of the European Commission’s Tech Sovereignty, Security and Democracy portfolio, which includes enforcement of the AI Act, the mission letter addressed to her endorsed several of the recommendations in Draghi’s report. Namely, it endorsed Draghi’s proposals to draft an EU Cloud and AI Development Act to increase computational capacity, and to create an EU-wide framework for providing “computational capital” to innovative small and medium-sized enterprises. The mission letter also asked the commissioner-designate to ensure access to tailored supercomputing capacity for AI startups and industry through the recently launched AI Factories Initiative within her first 100 days, to develop an Apply AI strategy to boost industrial uses of AI and improve delivery of public services, and to help set up a European AI Research Council.
Other European Commission portfolios will touch on AI-related issues: the Commissioner for People, Skills and Preparedness will work on algorithmic management, and the Commissioner for Intergenerational Fairness will work to develop an AI strategy for the creative industries.
A Renewed Push for the AI Liability Directive
Last week, the European Parliament Research Service published its much-anticipated impact assessment of the AI Liability Directive (draft proposed rules for non-contractual civil liability in connection with artificial intelligence systems). The report, which comes after negotiations on the Directive were put on hold during the last stages of AI Act negotiations, highlights the Directive’s unique potential to provide redress for a range of harms not otherwise addressed, including harms to fundamental rights. It suggests not only taking the Directive forward, but also expanding it into a Software Liability Regulation with provisions on strict liability for failure to comply with the AI Act’s rules on prohibited AI systems. The report also suggests broadening the existing presumption of a causal link between harmful AI systems’ outputs and the provider’s non-compliance with their obligations.
In Other ‘AI & EU’ News
- The Irish Data Protection Commission launched an inquiry into whether Google complied with the General Data Protection Regulation in processing personal data to train its foundation AI model, PaLM 2. The probe will specifically investigate whether Google carried out a data protection impact assessment, which is required by law when the processing of personal data is likely to result in a high risk to individual rights and freedoms.
- The United Kingdom’s data protection agency, the Information Commissioner’s Office (ICO), announced that Meta would resume its plans to train its generative AI on Facebook and Instagram UK user data, months after it requested that Meta suspend these plans. The ICO noted that, while it had “not provided regulatory approval for the processing”, Meta had applied changes to its approach, including making it simpler for users to object to processing.
- The Dutch data protection agency published its third report on AI and algorithmic risks, highlighting challenges in implementing the AI Act. The report notes with concern that some high-risk AI systems intended for public-sector use are not required to comply with the AI Act until 2030, and calls for a three-step transition plan under which member states would require compliance sooner than the Act mandates.
🆕 Job Opportunity: Legal and Advocacy Officer (Equity and Data)
Interested in working with our team in Brussels to promote human rights in the digital age? We are seeking a Legal and Advocacy Officer (Equity and Data). The position offers an exciting opportunity to engage with the AI Act in its implementation phase, as well as with other legislative efforts: assessing the current legal landscape’s ability to guard against AI-enabled human rights harms and to provide effective remedies for them, and formulating recommendations for robust regulation and governance of AI. The position is based in CDT Europe’s office in Brussels, Belgium, and the deadline to apply is 27 September.
Content of the Month 📚📺🎧
CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.
- ECNL, Towards an AI Act that serves people and society
- Tech Policy Press, Challenging the myths of generative AI
- EPRS, Briefing on the AI Act
- Access Now, Why human rights must be at the core of AI governance