Public Authorities: What Role in the AI Act?
The EU’s Artificial Intelligence Act (AI Act) covers a broad taxonomy of actors, ranging from providers and deployers of AI systems to importers and distributors. The obligations pertaining to each of these actors have been the subject of extensive compliance coverage. Often lost in the conversation are the obligations the AI Act imposes on public authorities in the European Union, including the changes they may need to make to their processes when considering the use of AI.
Public authorities and entities acting on their behalf are not exempt from any of the Act’s obligations; rather, they are subject to additional ones. In this brief, we explore key considerations for public authorities contemplating the deployment of AI.
Providers vs deployers: a permeable divide
Public authorities can be providers and/or deployers under the AI Act. While both providers and deployers have clear, concurrent obligations, providers bear the largest share of obligations under the Act. Resource-mindful public authorities contemplating the use of AI may therefore prefer the comparatively lighter compliance burden borne by deployers (in theory, they can achieve this by acquiring an AI system rather than developing one themselves). However, authorities should consider that deployers of AI may nonetheless be categorised as “providers” of a system if they do any of the following (illustrated in the sketch after this list):
- Make a substantial modification to a high-risk AI system such that it remains high-risk
- Modify the intended purpose of an AI system – including a general-purpose AI system – that was not previously high-risk, such that it becomes high-risk
- Put their trademark or name on a high-risk AI system, e.g. a local authority placing its name – whether registered as a trademark or not – on a high-risk AI system integrated into its products or services, without prejudice to contractual arrangements to the contrary
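To make these three triggers concrete, here is a minimal Python sketch of the reclassification logic. The class and field names are illustrative shorthand of our own, not terms defined in the Act, and real determinations turn on legal analysis rather than boolean flags.

```python
from dataclasses import dataclass

@dataclass
class DeploymentChange:
    """Hypothetical record of how a deployer has altered an acquired AI system."""
    substantial_modification: bool   # substantially modified a high-risk system (still high-risk)
    repurposed_to_high_risk: bool    # changed the intended purpose so the system becomes high-risk
    rebranded_under_own_name: bool   # placed the deployer's name or trademark on the system

def becomes_provider(change: DeploymentChange) -> bool:
    """Any one trigger is enough to shift provider obligations onto the deployer."""
    return (
        change.substantial_modification
        or change.repurposed_to_high_risk
        or change.rebranded_under_own_name
    )
```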
Public authorities as deployers of high-risk AI systems
Despite the possibility for public authorities to act as providers under the Act, they will largely fall into the category of deployers. In this regard, there are two sets of obligations for deployers to be mindful of: obligations that apply to deployers generally, and obligations that apply to deployers that are specifically public authorities.
The general obligations that public authorities acting as deployers must observe vary in complexity and the resources they demand. A few of the most resource-intensive obligations for deployers include:
- Follow instructions for use, exercise human oversight over high-risk AI systems, and report on risks. The AI Act requires deployers of high-risk AI systems to put appropriate measures in place to follow the instructions prepared by providers, and to assign human oversight at deployer level to individuals with the necessary competence, training and authority. Deployers must also monitor for risks to the health, safety or fundamental rights of persons, and, should these risks materialise, suspend use of the system and notify the provider and the market surveillance authority – the AI regulator at national level.
- Ensure input data is sufficiently representative. Where deployers exercise control over the input data – i.e. the data provided to or acquired by an AI system on the basis of which an output is produced – they must ensure it is sufficiently representative in view of the intended purpose of the high-risk AI system.
- Observe approval processes for biometric identification used for law enforcement purposes. Different authorisation requirements apply depending on whether the proposed biometric identification is undertaken in “real-time” or “post” (for more information, see CDT’s AI Act explainer on security and surveillance). In any case, deployers must obtain an authorisation prior to deployment, observe the safeguards outlined in the Act, and report annually on the use of biometric identification technologies.
- Provide individual notifications in cases of high-risk AI use in support of decision-making. Deployers must inform individuals if a high-risk AI system included in the AI Act’s Annex III is used to make decisions or assist in making decisions in relation to them.
- Provide explanations on decision-making upon request. The AI Act creates a right for individuals to obtain a clear and meaningful explanation of decisions taken on the basis of the output of a high-risk AI system. This right applies to most high-risk AI systems listed in the AI Act’s Annex III, where the decision produces legal effects or similarly significant effects on a person’s health, safety or fundamental rights.
- Disclose the use of AI systems. Deployers must disclose the use of the following AI systems (sketched in code after this list), unless their use is permitted by law to detect, prevent or investigate criminal offences:
- The use of emotion recognition or biometric categorisation systems
- The use of deepfakes, namely AI-generated or manipulated image, audio or video content resembling existing persons, objects, places or events that would falsely appear authentic
- The use of text-generating AI with the purpose of informing the public on matters of public interest, unless the AI-generated text has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility for the publication
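As a rough illustration of how these disclosure triggers combine with the law-enforcement carve-out, consider the following Python sketch. The category labels and the function are hypothetical simplifications of our own, not a faithful encoding of the Act’s transparency provisions.

```python
def disclosure_required(system_type: str,
                        law_enforcement_exemption: bool = False,
                        human_editorial_control: bool = False) -> bool:
    """Simplified sketch of the transparency triggers listed above.

    system_type is one of: "emotion_recognition", "biometric_categorisation",
    "deepfake", "public_interest_text" (illustrative labels only).
    """
    if law_enforcement_exemption:
        # Use permitted by law to detect, prevent or investigate criminal offences
        return False
    if system_type in ("emotion_recognition", "biometric_categorisation", "deepfake"):
        return True
    if system_type == "public_interest_text":
        # Human review or editorial control with a responsible person lifts the duty
        return not human_editorial_control
    return False
```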
Additional obligations apply specifically for deployers who are public authorities:
- Refrain from using a high-risk AI system not already registered in the EU database. The AI Act requires providers of high-risk AI systems to register them in an EU-wide database set up and managed by the European Commission, and prohibits deployers who are public authorities, or persons acting on their behalf, from using a high-risk AI system that is not so registered. Further, public authorities who are deployers must take the additional step of registering themselves, selecting the relevant high-risk AI system from the database, and registering its use. For all other deployers, such registration is voluntary.
- Undertake a fundamental rights impact assessment. Public authorities are required to undertake a fundamental rights impact assessment (FRIA) prior to deploying any high-risk AI system, following a template to be published by the AI Office. The impact assessment will address, among other aspects, the deployer’s processes pertaining to the AI system and its intended purpose, and a description of the period of time within which, and the frequency with which, each high-risk AI system is to be used. Crucially, the assessment must identify the categories of natural persons likely to be affected and the specific risks of harm, as well as include a description of human oversight measures and mitigations, such as internal governance and complaint mechanisms. Once undertaken, the FRIA results must be shared with the market surveillance authority.
- Submit specific information to the EU database of high-risk AI systems. The AI Act requires deployers of high-risk AI systems who are public authorities, as well as those acting on their behalf, to input information about the system as detailed in the AI Act’s Annex VIII. This complements the information already submitted by the provider of the system. Deployers will need to include their relevant contact details, a summary of the findings of the fundamental rights impact assessment, and a summary of their data protection impact assessment (the key fields are sketched after this list).
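A minimal sketch of the record-keeping these two obligations imply might look as follows. Every field name is our own paraphrase of the elements listed above, not the official Annex VIII wording or the FRIA template (which the AI Office has yet to publish).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FundamentalRightsImpactAssessment:
    """Illustrative container for the FRIA elements described above."""
    deployer_processes: str                 # how the deployer will use the system
    intended_purpose: str
    usage_period_and_frequency: str
    affected_person_categories: List[str]   # categories of natural persons likely affected
    specific_risks_of_harm: List[str]
    human_oversight_measures: List[str]
    mitigations: List[str]                  # e.g. internal governance, complaint mechanisms

@dataclass
class DeployerDatabaseEntry:
    """Illustrative Annex VIII-style entry complementing the provider's record."""
    contact_details: str
    fria_summary: str                       # summary of the FRIA findings
    dpia_summary: str                       # summary of the data protection impact assessment
```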
Checklist for deployers
Considering the range of obligations imposed by the AI Act, deployers should be mindful of several factors prior to deploying an AI system – particularly if the AI system is high-risk under the Act. CDT offers a three-step framework for public authorities considering deployment of AI systems.
Step 1: Assessment of risk level in accordance with the AI Act
- Establish whether the AI system being considered is high-risk within the meaning of the AI Act. AI systems are considered high-risk under the Act if they are listed in Annex III, or if they are used as a safety component of a product – or are themselves a product – covered by the legislation listed in Annex I and required to undergo a third-party conformity assessment. This assessment of risk is crucial to understanding and observing the relevant obligations under the Act, as the sketch below illustrates.
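A deliberately simplified decision rule, assuming a boolean view of the Annex III and Annex I tests and ignoring the Act’s derogations (such as the Article 6(3) filter for Annex III systems that do not pose significant risk), might read:

```python
def is_high_risk(listed_in_annex_iii: bool,
                 safety_component_or_product_under_annex_i: bool,
                 third_party_conformity_assessment_required: bool) -> bool:
    """Simplified classification sketch; a real assessment needs legal analysis."""
    return listed_in_annex_iii or (
        safety_component_or_product_under_annex_i
        and third_party_conformity_assessment_required
    )
```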
Step 2: Ensure that providers have taken all relevant steps prior to deployment
- Verify that the high-risk AI being considered for adoption has been registered by the provider in the EU-wide database. Should a public authority ultimately choose to deploy such an AI system, it will need to register itself in the database.
- Ensure instructions of use for high-risk AI systems are sufficiently clear and robust. Instructions for use prepared by providers must be concise, complete, clear and accessible. They must cover, among other things, the capabilities and limitations of the AI system, any known or foreseeable circumstance – including misuse – which may lead to risks to safety or rights, as well as the maintenance and care measures necessary to ensure the proper functioning of the AI system. Deployers should hold providers to the high standards set in the Act before proceeding with any AI system.
Step 3: Assessment of AI impacts as well as internal capacity for compliance with the various obligations for deployers of high-risk AI systems
- Undertake a fundamental rights impact assessment for high-risk AI systems. The FRIA must not only be reported to the relevant market surveillance authority – a summary of its findings must also be included in the EU database of high-risk AI systems.
- Assess institutional readiness for human oversight to be properly conducted. Human oversight will be crucial to prevent and minimise risks to health, safety, and fundamental rights.
- Consider the future availability and effectiveness of mechanisms to ensure individuals are sufficiently informed and are able to exercise their rights.
- Ensure there is a process in place for individual notifications to be provided when a high-risk AI system is used to make decisions about individuals.
- Assess whether disclosure of the operation of an AI system will be required and develop a process for ensuring disclosure happens. As previously mentioned, deployers may have to observe disclosure obligations depending on the nature of the AI system being considered.
- Explore whether there is a team and process in place to receive and manage requests for explanations.
- Assess whether there is a mechanism for fielding complaints for fundamental rights harms.
- Notify the relevant workers and representatives if an AI system is planned to be deployed in the workplace.
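Taken together, the Step 3 items amount to an internal readiness checklist. A minimal sketch of how an authority might track them in Python follows; the item wording is our paraphrase of the bullets above, not language from the Act.

```python
# Paraphrased from the Step 3 bullets above; wording is illustrative only.
READINESS_CHECKS = [
    "FRIA completed, reported to the market surveillance authority, summarised in the EU database",
    "Human oversight assigned to staff with the necessary competence, training and authority",
    "Mechanisms planned for informing individuals and enabling them to exercise their rights",
    "Notification process in place for AI-assisted decisions about individuals",
    "Disclosure process established where the system type requires it",
    "Team and process identified for handling requests for explanations",
    "Complaint mechanism available for fundamental rights harms",
    "Workers and their representatives notified of planned workplace deployment",
]

def readiness_gaps(completed: set[str]) -> list[str]:
    """Return the checklist items not yet marked complete."""
    return [item for item in READINESS_CHECKS if item not in completed]
```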
The decision to deploy an AI system is a complex one for any actor even in the absence of the AI Act, not least because of the importance of ensuring a system’s effectiveness and cost-efficiency, and the need to comply with existing legal frameworks such as the General Data Protection Regulation. In light of the obligations imposed by the AI Act, public authorities should be prepared to take an especially considered approach to the deployment of AI, focusing on likely risks and necessary mitigations, as well as institutional readiness to robustly and effectively implement the processes required by the Act.