AI Policy & Governance, European Policy
CDT Europe’s AI Bulletin: Summer 2024
Policymakers in Europe are hard at work on all things artificial intelligence, and we’re here to keep you updated. We’ll cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. If you’d like to receive CDT Europe’s AI Bulletin via email, you can sign up on CDT’s website.
The AI Act Comes Into Force
The final text of the AI Act was published in the Official Journal of the European Union on 12 July and comes into force on 1 August, meaning that different sections of the Act will become applicable in stages over the coming months and years. For a breakdown of when key sections enter into applicability, see the April issue of our AI bulletin.
A New Political Ecosystem Around the AI Act
While the AI Act is set to enter into force under a different Parliament than the one that negotiated its text, signs point to continued parliamentary scrutiny. The Parliament has established a new cross-committee group — made up of representatives from the Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) committees — to monitor the law’s implementation.
The European Commission is also set to keep its focus on AI, according to the political guidelines for the next Commission. In contrast to the position taken by Parliament, however, the Commission's guidelines are largely silent on implementation of the AI Act, mentioning only deepfakes' impact on elections and the need to ensure observance of the Act's transparency requirements. Given that the AI Act's roadmap includes forthcoming implementing and delegated acts, the guidelines' lack of further detail is noteworthy.
The guidelines instead emphasise the importance of AI innovation, announcing two new AI initiatives. The AI Factories initiative and the Apply AI strategy aim, respectively, to ensure access to supercomputing capacity for AI start-ups and to boost new uses of AI to improve, among other things, the delivery of public services. The guidelines also hint at the creation of a European AI Research Council modelled on the existing European Organization for Nuclear Research (CERN), leading to speculation about the source and level of funding likely to be available under this programme. Lastly, the guidelines emphasise the need to support the development of AI and other frontier technologies through “the untapped potential of data”, highlighting the need to ensure high data protection standards and improve access to open data.
Codes of Practice: A Litmus Test for the Act’s Implementation
The Commission has officially called for individuals and entities to express interest in participating in the process for drawing up the first AI Codes of Practice, which will produce co-regulatory standards for general-purpose AI (GPAI) models. The highly anticipated call comes after European lawmakers asked the AI Office to robustly involve civil society in the Codes' development, following concerns that civil society participation would be insufficient and reports that consultancies would be involved.
Civil society organisations, academia, and independent experts, among other stakeholders, will be able to participate in developing the Codes of Practice, but the Commission's call sets eligibility criteria that vary by applicant type. For example, while independent experts are not required to be based in the EU, they must show proof of relevant expertise. Conversely, “other stakeholders” — the category that includes civil society organisations — must have a presence in the EU, show a legitimate interest in participating, and be representative of a relevant stakeholder group. Interested stakeholders must express interest by filling out the online form provided in the call no later than 25 August 2024.
The process for developing the AI Codes of Practice will be structured around four working groups, which will look respectively at transparency and copyright-related issues, risk identification and assessment, risk mitigation, and internal governance processes. Each working group will have a designated chair and co-chair, selected from among the participating independent experts, who will hold workshops with providers of general-purpose AI models with the goal of securing those providers' buy-in.
Simultaneously, the Commission launched a multistakeholder consultation on GPAI models, covering the same subjects that the Codes of Practice process will address. The consultation results are expected to “form the basis” of the initial draft Codes of Practice. Participation in the consultation consists only of a one-off submission and is not subject to eligibility requirements. The deadline for contributions is 10 September.
In Other ‘AI & EU’ News
- In a statement on the role of data protection authorities in the AI Act, the European Data Protection Board (EDPB) recognised that the AI Act complements pre-existing laws such as the General Data Protection Regulation and the Law Enforcement Directive. The EDPB also highlighted that processing of personal data is core to AI-fuelled technologies, and recommended that data protection authorities be designated as market surveillance authorities under the Act.
- The AI Pact, a Commission initiative to foster collaboration among industry actors on AI best practices and encourage early compliance with the AI Act, made a series of draft pledges available. Three of those pledges are mandatory for actors seeking to formally join the Pact: adopting a strategy to foster the uptake of AI in the organisation and to work towards future compliance with the Act; mapping which of their AI systems are developed or used in high-risk areas; and promoting AI literacy among staff and other persons involved in deploying AI systems, as well as awareness of which groups of people those systems affect. Participating organisations are asked to publicly share which commitments they intend to meet, and to report on their implementation within 12 months of making them.
Content of the Month
- Tech Policy Press, AI lawsuits worth watching, a curated guide
- EDRi, How to fight biometric mass surveillance after the AI Act — a legal and practical guide
- Access Now, Generative AI and election disinformation: Much ado about nothing?
- Ada Lovelace Institute, Under the radar: Examining the evaluation of foundation models