

CDT Europe’s AI Bulletin: June 2024

Policymakers in Europe are hard at work on all things artificial intelligence, and we’re here to keep you updated. We’ll cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. If you’d like to receive CDT Europe’s AI Bulletin via email, you can sign up on CDT’s website.

An Updated Timeline for the AI Act

The Artificial Intelligence Act is set to be published in the EU Official Journal on 12 July, and to enter into force on 1 August. These dates will determine when the various sections of the Act will begin to apply, and trigger deadlines for specific action items under the Act. 

  • The section on prohibited AI systems will start to apply six months after entry into force. 
  • In the shortest deadline established by the Act, Member States must nominate, within three months (so by November 2024), the public authorities protecting fundamental rights under the Act. Those authorities will have powers to access any documentation produced pursuant to the Act in connection with high-risk AI systems listed under Annex III, where such access is necessary for them to fulfil their mandate.
  • Within 12 months — by August 2025 — Member States must designate their national market surveillance authorities (MSAs), the key regulators under the Act. This will be an important area to watch, particularly as Member States’ prospective choices are already raising concerns about the independence of these authorities.

    The Italian draft law on AI, for instance, appoints the Agency for Digital Italy and the National Cybersecurity Agency as MSAs, prompting an outcry from civil society and leading the Italian data protection authority (DPA) to stress the need for independent enforcement authorities, given the important role they will play in enforcing the Act. Other countries appear to share this view: a French parliamentary report on artificial intelligence earlier this year recommended that the national DPA, the CNIL, be the national regulator on artificial intelligence. More recently, the Dutch DPA and the Dutch Authority for Digital Infrastructure jointly asserted that the DPA would be the appropriate regulator for AI systems not already captured by product safety legislation.
  • In addition to formal deadlines, internal documents made public suggest that within the first six months post-entry into force, the European Commission will issue an implementing act establishing the scientific panel of independent experts — an entity with technical expertise primarily tasked with monitoring general purpose AI (GPAI) models and supporting the AI Office in this regard — as well as guidelines on how to practically implement the prohibitions on AI systems posing unacceptable risk under the Act.  

A Start for the AI Act’s Regional Ecosystem

The AI Act’s governance ecosystem has also been set in motion. The AI Office — the regional entity tasked with enforcing the AI Act, whose structure has now been formally announced — hosted its first webinar on the AI Act’s approach to AI governance through the lens of risk management. The discussion focussed on the interplay between the Act and the EU processes through which EU-wide standards for products and services are established; those standards will play a significant role under the Act, even though the standardisation process remains largely inaccessible to many civil society organisations.

The AI Board, another entity created by the Act and steered by Member State representatives, held its first meeting last week to discuss initial deliverables and priorities, as well as its internal rules of procedure. In light of the various deadlines and tasks the Act reserves for the AI Board, its priorities in the first few months are likely to include development of the Codes of Practice and designation of the fundamental rights authorities.

Two other governance entities under the AI Act have yet to be formally set up. As noted above, within six months of the Act entering into force, the Commission plans to issue an implementing act formally establishing the scientific panel of independent experts. As for the Advisory Forum, at the time of writing there are no public updates on any steps, planned or already taken, to launch it. This silence raises questions about the Forum’s current place in the governance ecosystem, and sits uneasily with the importance of its mission: bringing a multistakeholder perspective — as the only entity set to include civil society representatives — to the Commission and the AI Board.

Civil Society Scrutiny 

As the AI regulatory environment takes shape, civil society organising and coordination will remain crucial to protecting human rights. While the AI Act’s text is now final, it still provides for the Commission to issue a multitude of delegated and implementing acts, and to develop Codes of Practice and Codes of Conduct.

On 18 June, CDT Europe hosted a workshop that brought together civil society organisations to discuss collective priorities and next steps in the AI Act’s implementation roadmap. A consensus emerged that key moments for engagement will include the development of the following outputs required under the AI Act: 

  • The guidelines on prohibited practices;
  • The template for fundamental rights impact assessments, which will be mandatory for deployers of high-risk AI systems who are also public authorities;
  • The Codes of Practice to govern GPAI models, which will allow providers to demonstrate compliance with the Act; 
  • The guidelines on high-risk AI systems, which will include a list of practical examples of use cases.

A key shortcoming of the AI Act is that its text does not compel the various entities tasked with producing these outputs to hold open consultations, leaving it entirely to the discretion of the EU institutions whether to provide meaningful engagement opportunities for civil society organisations and the public. Ensuring that these engagement avenues materialise will be a key advocacy priority for civil society going forward, and an opportunity for the EU institutions to set the bar higher than the Act requires by consulting openly and robustly.

In Other ‘AI & EU’ News 

  • Meta paused the use of personal data belonging to users of its services for AI training purposes, only weeks after the company announced a change to its privacy policy that would enable it to use public content on its platforms to train AI. The move came after civil society organisation noyb filed multiple complaints against the change with DPAs across Europe, leading the Irish DPA and the UK DPA to welcome the resulting pause.
  • Data protection agencies are grappling with the impacts of AI on data protection. The UK’s Information Commissioner’s Office recently closed a series of consultations on AI, while France’s CNIL has opened its own consultation, which will remain open until 1 September.

Content of the Month 📚📺🎧
CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.