

EU AI Act Brief – Pt. 1, Overview of the EU AI Act  


NEW AI ACT SERIES! Over the coming weeks CDT Europe will be publishing a series of blog posts and briefing papers on the EU AI Act and what it means for human rights. To receive these briefings in your inbox, don’t forget to subscribe to our AI Bulletin here. Below is the first post in the series, in which we provide a summary and overview of the structure of the AI Act, as well as the key human rights issues at play.

***

The European Commission first proposed the AI Act in April 2021. Nearly three years, five trilogue negotiation rounds, and countless amendments later, the final text was adopted today by the European Parliament in plenary session. The AI Act has undoubtedly come a long way since its first version, but the resulting compromise text leaves many of civil society’s previously articulated concerns and demands unaddressed. As the dust settles, interested observers and advocates collectively face the difficult task of unpacking a complex, lengthy, and unprecedented law.

In this blog post, we highlight and summarise some of the key features of the AI Act, along with the human rights considerations that are crucial to placing the Act in context.

Our more detailed briefing paper [PDF version] looks closely at the relevant obligations created for actors throughout the AI supply chain, and maps the various mechanisms and actors that play a role in ensuring oversight of AI systems. 

A Risk-Based Approach for AI Systems

The novelty of the AI Act lies in its horizontal approach to regulating AI. Adopting a risk-based model, the AI Act categorises AI systems as posing either unacceptable, high, or minimal risk. The Act identifies eight specific AI practices as posing an unacceptable risk, and bans them outright. While these prohibitions aim to provide a straightforward understanding of what is and is not allowed, the wording of many of them falls short of that objective. A clear example is the prohibition of law enforcement use of live facial recognition (termed “real-time remote biometric identification” in the Act). The ban, which the European Parliament, echoing calls from civil society, argued should be total, now contains a number of exceptions that threaten to swallow the prohibition altogether. Other prohibitions require a showing of at least a risk of “significant harm” or “detrimental treatment” – two notions that the Act leaves undefined.

Unlike the “unacceptable risk” category, the classification of an AI system as high risk does not depend on its specific functions, but on the area in which it is to be deployed. Annex III of the Act identifies eight such areas, which may be modified, added to, or removed over time. Even where an AI system squarely falls within a high risk category by virtue of being deployed in a given area, the Act allows AI providers to self-assess their way out of that category based on the specific features of the system, without having to report the assessment to any authority.

Where an AI system is high risk, a series of obligations applies to the various entities involved in its development and deployment, including a general obligation to demonstrate compliance with the Act’s substantive requirements by way of a conformity assessment, carried out by a third party or, in some cases, by the AI provider itself. Importantly, a high risk AI system must generally be registered in a publicly accessible EU database.

Governance

The AI Act creates a number of actors to oversee its implementation. While the Act reserves a key role for the Commission, which will issue further rules in the form of delegated acts, and for the AI Office, which will monitor and develop standards for general purpose AI models, it also envisages a role for three new entities: an AI Board, an advisory forum, and a scientific panel of independent experts. The AI Board and the scientific panel are empowered to issue guidance on their own initiative. The advisory forum, which will include members of the EU Fundamental Rights Agency and the standardisation bodies CEN, CENELEC, and ETSI, can only issue guidance at the request of the Board or the Commission.

A Different Approach to GPAI Models 

General purpose AI (GPAI) models are regulated differently. The Act sets baseline requirements for all GPAI models to prepare and make available technical documentation and information, unless the model is open source – i.e., made accessible to the public under a free and open licence that allows the model to be accessed, used, modified, and distributed, with its parameters made publicly available.

The Act imposes additional obligations on GPAI models that are deemed to pose systemic risk. GPAI models fall within this category if they have high impact capabilities – currently presumed when the cumulative amount of compute used for their training, measured in floating point operations (FLOPs), is greater than 10^25 – or if they are designated as presenting a systemic risk by the Commission. Obligations for providers of GPAI models with systemic risk focus on model evaluation, as well as risk identification and mitigation. The Act acknowledges the changing nature of AI and enables the Commission to adopt further rules to fine-tune the definition of GPAI models posing systemic risk.
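To make the threshold concrete, the sketch below checks a hypothetical model against the 10^25 FLOP presumption. It is a rough illustration only: the Act sets the threshold but does not prescribe how training compute is estimated, and the “6 × parameters × tokens” rule of thumb used here is a common heuristic from the scaling-law literature, not a method defined in the Act. The model size and token count are invented for illustration.

```python
# Illustrative sketch only. The AI Act sets the systemic-risk presumption at
# 10^25 training FLOPs but does not prescribe how compute is to be estimated;
# the "6 * parameters * tokens" rule of thumb below is a common heuristic
# from the scaling-law literature, not a method defined in the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS: float = 1e25  # the Act's presumption threshold


def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer
    (~6 FLOPs per parameter per training token; heuristic only)."""
    return 6.0 * n_parameters * n_training_tokens


def presumed_systemic_risk(training_flops: float) -> bool:
    """True where cumulative training compute exceeds 10^25 FLOPs,
    triggering the Act's presumption of high impact capabilities."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical model: 1.8e12 parameters trained on 1e13 tokens.
flops = estimate_training_flops(1.8e12, 1e13)
print(f"Estimated training compute: {flops:.2e} FLOPs")            # ~1.08e+26
print(f"Presumed systemic risk: {presumed_systemic_risk(flops)}")  # True
```

Since the Commission can adopt further rules to fine-tune which GPAI models pose systemic risk, any such check reflects only the current presumption, not a fixed boundary.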

Where Do Human Rights Come In? 

Human rights have rightly featured heavily in the debates surrounding the AI Act. However, this was not always the case. The AI Act was originally grounded solely in Article 114 of the Treaty on the Functioning of the European Union (TFEU), the general legal basis for measures ensuring the establishment and functioning of the EU internal market. Following civil society pressure, Article 16 TFEU – which enshrines the individual right to the protection of one’s personal data – was later added as a legal basis. Since that key change, the Act has included several references to human rights throughout the text and created mechanisms for individuals subjected to an AI system to obtain more information and to lodge complaints.

A key feature of the AI Act, introduced after sustained civil society advocacy, is the obligation for deployers of high risk AI to conduct fundamental rights impact assessments (FRIAs). However, this obligation is limited in scope: it only applies to public sector bodies and a narrow subset of private bodies. While the result of a FRIA must be reported to a national authority, nothing in the Act makes the deployment of a high risk AI system conditional on the FRIA being reviewed or approved by the authorities. In other words, once carried out and reported, the FRIA does not appear to have any meaningful impact on the roll-out of a high risk AI system.

In addition, the AI Act empowers national public authorities and bodies overseeing compliance with human rights to request and access any information created or maintained pursuant to the Act where that documentation is necessary for effectively fulfilling their mandate.

While the above provisions are the only two to tackle human rights head-on, human rights considerations run through the entire Act. The right to privacy inspired the – now much diluted – prohibition on the law enforcement use of facial recognition; the prohibition on manipulative AI seeks to protect freedom of opinion and expression; and the prohibition on predictive policing based on individuals’ personal characteristics builds on a large body of research documenting discrimination risks.

Notwithstanding the above, the AI Act’s impact on human rights cannot be solely determined by reference to the human rights protections it includes – a comprehensive analysis must also look to what it deliberately excludes. The Act creates what could easily be an over-exploited exemption for national security. The prohibitions also fall short of comprehensively protecting human rights. The limitations on the use of live facial recognition only apply to law enforcement use in publicly accessible spaces, and explicitly exclude borders, which are known sites of human rights abuse. In a similar vein, the ban on the use of emotion recognition technologies only applies to education and the workplace. Lastly, while the creation of a publicly accessible EU database on high risk AI is positive in terms of transparency and accountability, the Act excludes from the public eye high risk AI systems deployed in the areas of law enforcement, migration, asylum, and border control management, which are due to be registered in a non-public section of the database that is only fully accessible to the Commission.

Conclusion

The AI Act creates an unprecedented framework of regulation for AI, which will undoubtedly set the tone for tech legislation to come. It will be crucial for members of the public and civil society to achieve a robust understanding of this regulation, and of the opportunities and challenges it creates for effective human rights advocacy.

The Commission faces a busy few months as it concludes the hiring process for the newly established AI Office and prepares to issue guidelines on the practical implementation of the Act. The Act’s final version fuels a reasonable concern among advocates that too much ground may have been lost in the compromise negotiations. Close coordination with experts and civil society will be crucial to ensure that the AI Act is interpreted and applied in a way that preserves its effectiveness and remains consistent with the Act’s own articulated goals: protecting fundamental rights, democracy, and the rule of law.

Read the full explainer [PDF].