AI Policy & Governance, European Policy, Privacy & Data

CDT Europe’s AI Bulletin: February 2023

Also authored by CDT Europe’s Rachele Ceraulo.

Policymakers in Europe are hard at work on all things artificial intelligence, and we’re here to keep you updated. We’ll cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. If you’d like to receive CDT Europe’s AI Bulletin via email, you can sign up on CDT’s website.

📚 NEW BRIEF: The EU AI Act Must Properly Recognise Human Rights Risks of Using AI-Based Facial Recognition Tech in Policing and Immigration

1. The Latest on the EU’s Proposed Artificial Intelligence Act

Negotiations on the EU AI Act (AIA) are at a critical stage in the European Parliament. Rapporteurs have sped up the timeline for discussions on the file, in hopes of reaching a common position in the coming weeks after months of negotiation, but the latest developments shed light on remaining points of contention.

The negotiating team reportedly reached a compromise on requirements for nationally designated ‘notified bodies’, which are tasked with verifying that high-risk AI systems conform to the requirements for these systems set out by the AIA. The two co-rapporteurs also held technical discussions that covered what AI applications the AIA prohibits, what criteria define a high-risk AI system, how national competent authorities would go about setting up regulatory sandboxes (tools created under the AIA to test and experiment on AI systems under supervision), and how to define several concepts core to the future legislation.

The most significant outcomes of these negotiations were as follows: 

  • According to a new set of compromise amendments from co-rapporteurs Brando Benifei (S&D) and Dragos Tudorache (Renew), the latest list of prohibited applications of AI would include social scoring tools, intentionally manipulative AI systems, and AI-powered predictive policing models. The scope of the social scoring ban was expanded beyond individuals, to add groups ‘targeted over inferred personal characteristics that could lead to preferential treatment’. A majority of the Parliament’s negotiating team also seems to favour a total ban on real-time remote biometric identification in public spaces, in contrast to the Council, which proposed broad exceptions to the ban on law enforcement uses of biometric identification.
  • Benifei and Tudorache have significantly revised the rules for classifying AI systems as high-risk, and for amending the AIA’s Annex III, the section of the legislation that lists high-risk uses of AI. The changes would authorise providers that believe their AI system does not pose a significant risk of harm to people’s health, safety, or fundamental rights to submit a “reasoned request” to the national supervisory authority (the authority in charge of overseeing the AIA at the national level) or to the AI Office (the EU body tasked with streamlining enforcement at the EU level) to exempt their AI system from the obligations related to high-risk systems. The revised list of high-risk systems in the all-important Annex III would include:
    • Categorisation of individuals or groups based on biometric data, in publicly accessible spaces (though subsequent amendments suggest prohibiting these systems entirely);
    • Real-time and retroactive (‘ex-post’) remote biometric identification in privately accessible spaces, and retroactive remote biometric identification in publicly accessible spaces;
    • Emotion recognition systems, which the draft proposal classified as limited-risk, and ‘biometric-based systems’ such as the app Lensa, which were missing from the original proposal;
    • Generative AI systems such as ChatGPT;
    • Certain AI systems ‘likely to influence democratic processes like elections’;
    • AI systems that ‘may have serious effects on a child’s personal development’;
    • AI systems which deploy subliminal techniques for scientific and therapeutic research purposes;
    • Some additions in high-risk areas such as critical infrastructure, education, employment, law enforcement and migration, and access to public services.
  • The co-rapporteurs also clarified the objectives of regulatory sandboxes, namely that they should:
    • Assist in regulatory compliance;
    • Enable and facilitate the testing of innovative solutions; and
    • Test potential adaptations of the Regulation in a controlled environment. 

Under the latest amendments, providers would be authorised to use regulatory sandboxes to test AI systems before assessing whether they are high-risk. Benifei and Tudorache suggest that the Commission should wait a year after the AI Act enters into force before defining how sandboxes should be established and supervised. More controversially, the amendments would also authorise developers of AI systems to process personal data, and data covered by intellectual property law, for public interest reasons such as the protection of the environment or disease detection.

  • The two co-rapporteurs revised the conformity assessment procedures, which aim to determine whether providers and developers of high-risk AI systems meet requirements set out in the AIA for placing these systems on the EU market. 
    • Rapporteurs also clarified under what circumstances providers of high-risk AI must resort to an assessment by a ‘notified body’ (a third party that would approve the quality management system and technical documentation established by high-risk providers) or to internal control (which does not involve an external audit). The co-rapporteurs agreed that any provider that does not use harmonised standards ‘in full’ would be required to conduct a third-party assessment, and would not be authorised to resort to internal control.
    • The new text also proposes allowing AI developers to ask for a third-party assessment, regardless of the level of risk of their AI systems, and requiring third-party bodies to consider ‘the specific interest of small AI providers’ in the calculation of compliance fees. 
    • The co-rapporteurs suggested granting the Commission the power to amend the conformity assessment procedures, if it provides ‘substantial evidence’ for doing so after consulting with the AI Office and affected stakeholders.
    • They also restored the ability of national authorities to deviate from the conformity assessment procedure and put a high-risk AI system into service within their national territory, on exceptional grounds such as the ‘protection of life and health of persons’. They did, however, remove ‘public security’ from these grounds, to limit possible harms. Such deviation would also require mandatory authorisation from judicial authorities, as well as notification to the other member states and the Commission.
  • Rapporteurs held a critical political discussion on 15 February, with the goal of concluding negotiations on highly sensitive amendments that cover how an AI system is defined, to which entities the Regulation applies, which systems should be categorised as high-risk AI, which AI practices are prohibited, when a provider must register the use of an AI system with the EU, what general principles apply to all AI systems, and whether to require AI developers and deployers to ensure AI literacy for their staff.

    The planned meeting agenda also included discussion of new amendments that would prohibit the use of biometric categorisation systems in publicly accessible spaces and real-time biometric identification in privately accessible spaces; these uses were previously only listed as high-risk. 

    Unfortunately, the 14 rapporteurs barely managed to get through a third of the agenda, failing to reach a conclusion on any of the above points. In light of this, Tudorache told Contexte that the negotiating team was willing to go beyond the deadline for the joint committee vote — originally set for 28 and 29 March — to obtain an agreement on these issues. 

2. In Other ‘AI & EU’ News
  • The Swedish Presidency decided to put the proposal on an AI Liability Directive on hold while the work on the AI Act progresses, and to concentrate on the revised Product Liability Directive.
  • On 27 January, the U.S. Administration and the European Commission signed an Administrative Agreement on Artificial Intelligence for the Public Good. Signalling a renewed willingness to step up transatlantic collaboration in the field of AI, the agreement will bring together experts from across the two blocs to further AI research and development projects across five areas of global and societal focus, including climate forecasting and emergency response management. In the spirit of leading global efforts to further research on societal applications of AI, the Commission stressed that, as part of the agreement, they will share findings and resources with like-minded international partners that may lack the necessary capabilities to manage these challenges. 
  • The Commission’s ‘Initiative on Virtual Worlds’ (aka the Metaverse) has been pushed back from 3 May to 31 May, according to a newly published tentative agenda for the forthcoming Commission meeting.

3. The Council of Europe’s CAI Process

At its third Plenary session in January, the Committee on Artificial Intelligence (CAI) endorsed the proposal made by the U.S. — which has observer status in the Council of Europe — to delegate the CAI’s work to a drafting group that would exclude civil society organisations, a move those organisations strongly denounced.

The CAI’s fourth Plenary session took place from 1-3 February, where the drafting group is said to have discussed the preamble, provisions on the implementation mechanism of the future legislation, and the cooperation mechanisms between the negotiating parties. Following the Plenary, the CAI published the draft Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, the document that serves as the basis for negotiations.

In an attempt to promote effective multilateralism and strong cooperation between the European Union and the Council of Europe, the Council of the European Union published its list of priority issues for cooperation with the Council of Europe in 2023-2024, which includes supporting the Council of Europe’s work on human rights and AI. In the context of the CAI Committee’s draft Convention on AI, the EU recalled that the Council of Europe’s draft Convention on AI must be consistent with existing EU law and the EU AI Act, taking into account developments during the legislative process.

The next CAI Plenary will take place on 19-21 April 2023, with the agenda not yet announced. Watch this space!

Content of the Month 📚📺🎧

CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.