CDT Europe’s AI Bulletin: June 2023

Also authored by CDT Europe’s Rachele Ceraulo and Vânia Reis.

Policymakers in Europe are hard at work on all things artificial intelligence, and we’re here to keep you updated. We’ll cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. If you’d like to receive CDT Europe’s AI Bulletin via email, you can sign up on CDT’s website.

In the two years since the EU AI Act was first published by the European Commission, debate on the draft Regulation has touched upon some of the most complex and challenging topics in digital and tech policy. The European Council and European Parliament have finally reached internal compromises and approved their respective positions on the text, and are now ready to enter the inter-institutional negotiations commonly known as the Trilogues, which will reconcile the positions of the Council and Parliament into the version of the AI Act that will become EU law. 

In this special edition of CDT Europe’s AI Bulletin, we take a deep dive into the positions of the European Parliament and European Council ahead of these important negotiations, and explore how vital human rights issues such as access to justice and safeguards for vulnerable groups have been and still need to be addressed. 

European Council 

The Council reached its general approach – the political agreement among all Member States that defines the Council’s negotiating position for the inter-institutional negotiations – on 6 December 2022, six months ahead of the European Parliament. The text adopted significantly restricts the scope of the original Commission proposal, particularly by creating exemptions for law enforcement and migration authorities from requirements for high-risk AI systems, and by weakening the prohibition on the use of “real-time” remote biometric identification (RBI) systems. 

More specifically, the Council’s general approach differs from the Commission’s draft as follows:

  • Extends allowed uses of untargeted remote biometric identification (RBI) systems by law enforcement and migration authorities, to include cases where there are threats to critical infrastructure or to the health of individuals; 
  • Authorises the use of “real-time” remote facial and other biometric identification systems to investigate and prosecute all offences carrying a sentence of at least 5 years in EU Member States, and explicitly excludes border control areas from the definition of “publicly accessible spaces”, thus allowing for use of remote biometric surveillance tools in these areas;
  • Completely waives the obligation to register AI systems used in the areas of law enforcement, migration, asylum, and border control management in the public EU-wide database of high-risk AI systems. Also exempts law enforcement authorities from the obligation to request and obtain “informed consent” of individuals when testing AI systems in real-world conditions outside of regulatory sandboxes (supervised pre-market environments that allow providers to test and experiment on AI systems);
  • Introduces a number of exemptions to prevent “sensitive operational data” relating to law enforcement activities from being disclosed. This means that law enforcement authorities, among others, do not have to: inform providers when they become aware of risks and/or incidents related to the use of their high-risk systems; make public annual reports on lessons learned from regulatory sandboxes; and collect and analyse data about how AI systems behave after they’re put on the market;
  • Under exceptional circumstances, such as public security or threat to life, empowers law enforcement authorities to use high-risk AI systems that have not undergone the conformity assessment procedure – a pre-market procedure to demonstrate legal compliance of an AI system – without judicial authorisation. 

The Council also amended the all-important Annex III, which lists high-risk applications and use cases of AI. The initial proposal automatically categorised any system listed in Annex III as high-risk, but the Council added a new means of making that determination: whether the system’s output is significant in the decision-making process. Now, an AI system can only be considered high-risk when its output plays a major role in shaping a final decision and is not purely accessory. Later, the European Commission will be tasked with outlining the circumstances under which outputs of AI systems would be purely accessory to the decision-making process. 

The Council’s general approach also substantially strengthens the European AI Board’s role: it extends the number of tasks assigned to the new body, and bolsters its ability to support Member States and the Commission in implementing and enforcing the AI Act. 

European Parliament 

After months of intense debate, the European Parliament approved its final text during its June 2023 plenary session. While the co-rapporteurs initially adopted a cautious approach, agreeing only on the least controversial parts of the Regulation in the hope of building momentum before tackling more contentious issues, in early October they addressed the thorny issue of biometric recognition systems. Overall, the Parliament aimed to ensure that the AI Act upholds human rights and the rule of law, primarily by enhancing transparency and accountability to individuals affected by the use of AI systems, particularly those from marginalised or vulnerable communities who face greater risks. 

Overall, the Parliament’s report: 

  • Imposes a full ban on use of “real-time” remote biometric identification systems in public spaces, and bans deployment of retroactive, or “post”, remote biometric identification in public spaces, with narrow exceptions: such use must be pre-approved by a judge and form part of an investigation following a serious crime; 
  • Expands the list of prohibited practices to include use of AI-powered predictive policing systems; use of emotion recognition systems in the fields of law enforcement, border management, employment and education; deployment of AI systems for mass scraping of social media or CCTV footage to feed into facial recognition databases; and biometric categorisation systems on the basis of sensitive data; 
  • Following extensive pressure from civil society, redrafts and significantly expands Annex III — which lists high-risk applications and use cases of AI — to more comprehensively list high-risk use cases, particularly in the context of migration and border control. Additions include AI systems used for monitoring and surveillance at borders, and AI-based predictive analytic systems deployed to forecast and predict migratory movements and border-crossings;
  • Creates an obligation for deployers of high-risk AI systems to, before deploying those systems, conduct a fundamental rights impact assessment that includes: identifying the categories of persons likely to be affected by the system; assessing the system’s impact on fundamental rights as defined by the EU Charter, particularly potential harms to marginalised or vulnerable communities; and drafting a plan for mitigating the risks posed by the system. Deployers must also inform employees when they will be subject to a high-risk AI system at work;
  • Establishes an individual right to receive explanations of decisions made based on the output of a high-risk AI system, when those decisions affect health, safety, fundamental rights, or socioeconomic well-being;
  • Grants an individual right to lodge a complaint with a national supervisory authority based on the belief that an AI system violates the Regulation, as well as an individual right to an effective judicial remedy against a decision by a national supervisory authority. 

How the Council’s and Parliament’s Positions Differ

These priorities, though not exhaustive, highlight areas of divergence between the Council’s and Parliament’s positions, and indicate that reaching a compromise may prove challenging for the co-legislators. Prohibitions on remote biometric surveillance and exemptions for law enforcement agencies, for instance, will likely be central and contentious issues in the upcoming inter-institutional negotiations. The Parliament’s text substantially deviates from both the Council and Commission positions, which proposed broad exceptions to the ban on law enforcement uses of biometric identification.

Negotiators will also need to bridge the gap on the definition of AI. The Parliament defines the term broadly, in alignment with the OECD definition, which encompasses machine-based systems that operate with varying degrees of autonomy from human control, whereas the Council’s definition covers only machine learning and logic- or knowledge-based approaches.

One area where the Parliament’s and Council’s positions clearly converge is AI regulatory sandboxes. Both institutions worried that the AI Act’s strict requirements for providers of AI systems would unduly deter innovation and stifle the development of the EU AI market. To address these concerns, they provided more detail on how sandboxes and competent authorities — the authorities tasked with supervising the application and implementation of the Regulation at the national level — can assist providers in ensuring regulatory compliance. 

Namely, competent authorities can provide assistance by identifying the risks a system poses to fundamental rights, health, and safety, as well as effective measures for mitigating those risks. They can also enable and facilitate pre-market testing of innovative AI systems to determine compliance with the AI Act, and use evidence gleaned from regulatory sandboxes as a basis for future regulation.

EU lawmakers will need to reach an agreement on the text whilst navigating an environment of strong disagreement both between and within the institutions. Germany, for instance, will likely keep pushing to improve the Council’s text during the inter-institutional negotiations. Discord over the Parliament’s position may also persist: despite successful pushback against efforts by the European People’s Party to reintroduce exceptions to the ban on the use of real-time remote biometric identification, the Parliament’s agreement is still tenuous, and thus vulnerable to erosion during the upcoming negotiations as the Council pushes for carve-outs for law enforcement.

What still needs to be addressed from a human rights perspective? 

AI systems pose significant risks to human rights, making it crucial that this legislation provides a strong framework for protecting individuals and their rights. It is imperative that the final text of the AI Act reflects the values and fundamental rights enshrined in the EU Charter, and that it integrates a strong human rights approach.  

With this in mind, while the Commission’s proposal omitted a right to remedy for harms caused by an AI system, the European Parliament and Council have addressed this gap to a certain degree. CDT welcomes both institutions’ efforts to provide a right to remedy, and urges the Council to go further by expanding its proposal for complaint mechanisms. We believe that the Council should include a mechanism for individuals to lodge a complaint with a national authority if their rights have been infringed by an AI system, as well as penalties for these harms, as proposed by the European Parliament.

However, a primary area of concern for civil society is the use of untargeted facial recognition AI systems by law enforcement and migration control authorities, a use that the Council has only expanded. This is a serious cause for concern: law enforcement’s use of facial recognition AI systems poses a particularly high threat to human rights, given the risks of improper deprivation of liberty, racial profiling, and indiscriminate surveillance that may result from such use. The Council’s general approach would dilute the prohibition on untargeted use of facial recognition to the point that it becomes the exception, exposing EU citizens to severe risks of mass surveillance and control. 

In line with our analysis, CDT calls on the EU Council to revise its approach and take heed of the European Parliament’s position regarding the risk that law enforcement’s use of AI systems poses to already at-risk and vulnerable groups. CDT urges the Council to include a ban on law enforcement and migration authorities’ use of untargeted facial recognition, and moratoriums on targeted use by law enforcement of facial recognition systems until robust safeguards and effective limitations are in place. Negotiators should similarly reject the Council’s efforts to exempt AI systems used in the areas of law enforcement, migration, asylum and border control management from the obligation to register in the public EU-wide database of high-risk AI systems. 

Together with other civil society organisations, CDT calls for law enforcement and migration control authorities to be fully transparent about and accountable for their uses of AI systems, and for regulators to prioritise a human rights-based approach in the negotiations to come. While the Trilogue negotiations are known to be an opaque process with little room for civil society consultation, it would be pertinent for negotiators to opt for more transparency on this occasion and consult with civil society experts. 

In Other ‘AI & EU’ News 

  • Ahead of key votes on the AI Act, campaigning organisation Avaaz published “From Harms to Hope”, an initiative to raise awareness of the human cost of AI systems and the need for a human-centric framework for governing AI. The booklet outlines concrete recommendations for how EU policymakers ought to regulate the future EU AI ecosystem, and calls for regulators to require multi-stakeholder human rights impact assessments that consider the specific risks of harm to marginalised and vulnerable people.
  • On 17 May, France’s Constitutional Court upheld a controversial provision of the Law on the 2024 Olympic and Paralympic Games, which allows the use of untargeted algorithm-driven video surveillance in public spaces until March 2025. In its decision, the Court held that the contested provisions do not infringe upon the right to private life, and that the algorithmic processing of images aligns with the Constitution because it aims to prevent breaches of public order. The Court further set out some guardrails for implementing its decision, notably that the type of surveillance at hand will require permanent human oversight and risk management measures to prevent and correct biases and misuse, as well as mechanisms for individuals to exercise their right to notice before collected images are algorithmically processed.
  • On 31 May, during a stakeholder panel event on Generative AI at the fourth TTC Ministerial Meeting in Sweden, European Commission Executive Vice-President Margrethe Vestager announced Commission plans to start working on a Code of Conduct for Artificial Intelligence — a set of (ideally global) standards providing safeguards on the application of AI systems. Speaking on the same panel, Alexandra Reeve Givens, CDT’s CEO, urged EU and U.S. policymakers and standard-setting bodies to carefully consider impacts on fundamental rights when developing AI risk-management frameworks.
  • At a plenary committee session of the Council of Europe’s Committee on Artificial Intelligence (CAI) held in Strasbourg from 31 May to 2 June, the U.S. administration – which has observer status with the organisation – presented a proposal to narrow the scope of the Convention to only apply to public bodies, which would significantly water down the first international treaty on AI. As a compromise, it suggested an “opt-in” solution for private parties, which would leave it up to individual countries to decide whether the Treaty would also apply to companies within their jurisdiction.