

CDT Europe’s Response to the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) Consultation on a Legal Framework on AI

The Centre for Democracy & Technology (CDT) was grateful for the invitation to submit comments in response to the consultation process set up by the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI). The consultation process is intended to examine the feasibility and potential elements of a legal framework for the development, design and application of artificial intelligence, based on the Council of Europe’s standards for human rights, democracy and the rule of law. The consultation survey covers a very broad range of potential applications of AI and raises many complex policy questions. Due to how the survey was structured, it did not allow for the nuance and in-depth analysis that addressing these topics will ultimately require. Accordingly, we used this opportunity largely to share some of our overarching concerns about ways in which AI can result in discrimination and exploitative uses. CDT understands that this is an initial starting point for policy conversations and looks forward, alongside our civil society partners, to engaging in and contributing to these discussions. In order to provide further context and clarity to our responses, we have summarised some of our key points and concerns below.

Legal Mechanisms and Binding Instruments 

The survey asks respondents to reflect on which elements could be useful to include in a Council of Europe legal framework on AI. As CDT stressed in its response, given the broad range of potential applications of AI, at times the appropriate approach will simply be to close gaps in existing legislation, whilst in other instances a fresh approach will be necessary. The survey also seeks responses to important overarching questions about the merits of different legal mechanisms and tools. CDT has considered and responded to these questions and offered some ideas, including the factors that should be considered in a risk-based approach, human rights impact assessments, and the use of auditing.

Risk-Based Approaches

A risk-based approach evaluates the risks that an AI application poses in order to help set the parameters for whether and how that application should be regulated. Although risk-based approaches can be useful, risk needs to be analysed in a nuanced way.

Key factors for inclusion in a risk assessment (an illustrative sketch of how they might be combined follows this list):

(1) the likelihood/probability that the use of the application will result in harm;

(2) the impact of that harm; an application of AI that may appear not to cause significant harm may in fact do so. For example, recommender systems in music streaming might be categorised as causing little harm, in the sense that they may result in the user hearing a song they don’t particularly like; but should a streaming app use speech recognition to detect emotional state, gender, etc., the invasive nature of that evaluation might itself cause more significant dignitary harm.

(3) user choice; whether an individual has the ability to choose not to be subject to the AI application and thereby avoid the risk of harm, or has no choice but to take that risk (e.g., a person applying for a job they need has little choice but to be subject to a recruitment process that may deploy AI).
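To make these factors concrete, below is a minimal, purely illustrative sketch of how likelihood, impact and user choice might be combined into an indicative risk score. The scales, weighting and example values are hypothetical assumptions made for illustration only; they are not drawn from CDT’s submission or any proposed framework, and any real risk-based approach would require far more nuanced, context-specific analysis.

```python
# Illustrative only: a toy heuristic combining the three risk factors discussed
# above (likelihood of harm, severity of impact, and user choice). The scales,
# weighting and example values are hypothetical assumptions, not part of CDT's
# submission or any Council of Europe framework.

from dataclasses import dataclass


@dataclass
class RiskAssessment:
    likelihood: float   # probability that use of the application results in harm (0.0-1.0)
    impact: float       # severity of that harm if it occurs (0.0 = negligible, 1.0 = severe)
    user_choice: float  # degree of meaningful choice to avoid the system (0.0 = none, 1.0 = full)

    def score(self) -> float:
        """Combine the factors into a single indicative score in [0, 1].

        Lack of user choice amplifies the underlying risk, reflecting the point
        above that people who cannot opt out (e.g. job applicants) bear the
        risk involuntarily.
        """
        base_risk = self.likelihood * self.impact
        choice_multiplier = 1.0 + (1.0 - self.user_choice)  # 1.0 (full choice) to 2.0 (no choice)
        return min(1.0, base_risk * choice_multiplier)


if __name__ == "__main__":
    # A music recommender that adds invasive emotion detection: moderate
    # likelihood, meaningful dignitary impact, little awareness or choice.
    emotion_recommender = RiskAssessment(likelihood=0.6, impact=0.5, user_choice=0.2)

    # An AI-driven recruitment screen: applicants who need the job effectively
    # cannot opt out, so even moderate likelihood and impact score highly.
    hiring_screen = RiskAssessment(likelihood=0.4, impact=0.8, user_choice=0.0)

    for name, assessment in [("emotion recommender", emotion_recommender),
                             ("hiring screen", hiring_screen)]:
        print(f"{name}: indicative risk score = {assessment.score():.2f}")
```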

Auditing and Impact Assessments of AI

Risk-based approaches are based on predicted outcomes. Given the complexity and constant evolution of AI applications, ex post human rights impact assessments can, in addition to such ex ante analysis, be a crucial tool to assess actual impact. These impact assessments should be analysed for trends that can inform future risk assessments.

Auditing applications of AI for discriminatory and other adverse impacts is also an important tool, including potentially as part of a risk-based approach. National authorities and regional laws can and should set out the parameters of such an audit, as well as the specific types of harm the audit should, at a minimum, seek to uncover. Companies may have overall responsibility for ensuring that such an audit is carried out, but an independent third party with relevant expertise should conduct it. Governments should set out clear rules to ensure the independence and competence of such third-party auditors. The obligation and basic procedures to guarantee a multistakeholder consultative process should also be mandated by law. There will be situations where it is more appropriate for state authorities to have investigatory powers to check certain applications of AI. For example, national equality bodies could be mandated to investigate discrimination in the allocation of social security benefits by a government department.

In addition, the Convention should provide a legal framework that enables privacy-preserving access to data for research by third parties such as academic researchers and civil society. This can provide an additional layer of oversight.

Discriminatory Impact of AI

The survey repeatedly asks respondents to provide examples of how AI can help promote and protect democracy and human rights. While lawmakers should have as a goal AI systems that protect and advance human rights, the first and immediate step must be to understand and avoid the potential for harm these systems are already exhibiting. Through our work, CDT has unfortunately found repeated examples in which the use of AI can perpetuate and even cause discrimination.

CDT has done research on the use of AI in hiring tools and in access to disability benefits, and found evidence of discrimination in both cases. Because algorithms learn by identifying patterns and replicating them, algorithm-driven tools can reinforce existing inequalities in our society. Algorithmic bias can also be harder to detect than human bias, because many people think of technology as “neutral.” So although AI can increase the efficiency of certain tasks, mitigating the risk of discrimination will require ensuring that humans are able to understand, question, test, verify, and challenge the output and function of these systems, and recognising that the use of such technologies is not neutral and needs further safeguards in place to protect human rights.

In our response, we therefore advise that, across all applications of AI, any legal framework should give dedicated focus and consideration to providing the processes and tools (including auditing, explainability and transparency) necessary to identify the risk of bias and discrimination, to mitigate and prevent such harms, and to provide access to remedy for those affected by the discriminatory impacts of AI.

Content Moderation and AI

Content moderation and the use of recommender systems are only briefly touched upon in the survey. However, in CDT’s view this is an area that raises particularly pertinent questions in relation to protecting democracy and human rights.

AI/machine learning and other forms of automation are sometimes incorporated by online intermediaries/social media platforms to enable them to manage the massive quantities of user-generated content that people upload onto their systems. These automated tools can be useful for some aspects of sorting and organizing user-generated content, but they also have distinct limitations that can present human rights risks. For example, in the automated analysis of online content, tools or techniques may not be robust; that is, they may perform well in an experimental or training environment but poorly in the real world. Data quality issues can mean that tools are trained on unrepresentative data sets that end up baking bias into the algorithmic processes. Automated tools for analyzing user-generated content typically assess a limited degree of context; they may evaluate a given image, for example, but not understand crucial information about the caption, account, or commentary around the image that is essential to its meaning. (CDT has a forthcoming report on the technology behind automated multimedia content analysis that discusses these limitations in greater detail.)

In addition to these technical limitations in the use of AI for content moderation, it is important to recall that “automation” in these circumstances is typically a form of content filtering. Content filtering raises significant threats to human rights, particularly when mandated by law. Filtering is a form of prior restraint on speech, where all statements by anyone using a service must be pre-approved by the filter in order to be posted. Filtering requires a form of total surveillance of people’s communications to ensure that whatever is being said abides by the filter’s standards. Filtering should never be mandated in law, and policymakers should focus on ensuring that any private, voluntary use of content filters is accompanied by robust safeguards for human rights.

CDT also stresses the importance of robust data protection rules, and enforcement of those rules, in the context of content moderation, and notes that whilst EU member states have the GDPR, other Council of Europe states do not currently have as high a standard of privacy and data protection. This is relevant to content moderation and democracy because recommender systems should not be based on personal or pervasive tracking. Such abuse of user data and privacy can drive the spread of disinformation online. In addition, given the risks that micro-targeting and profiling pose to democracy, particularly in the context of elections, CDT has further agreed with the European Data Protection Supervisor (EDPS) that advertising based on pervasive tracking should be phased out.

Biometric Identification Including Facial Recognition

At various points throughout the survey, rather binary choices are offered on whether certain technologies that pose a high risk to human rights and democracy should be banned or not. In CDT’s view, in order to truly tackle human rights harms, it is important to be more specific and nuanced, and to draw out exactly which applications of a technology (not just the technology itself) are at issue. In the Council of Europe context in particular, CDT strongly cautions against the use of technologies such as remote biometric identification systems and facial recognition in high-risk applications.

The European Data Protection Supervisor (which has jurisdiction over EU countries, not all Council of Europe member states) has called for a moratorium on the use of remote biometric identification systems – including facial recognition – in publicly accessible spaces. This arises from the data protection body’s concern that a stricter approach is needed to automated recognition in public spaces of human features – of faces, but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals – whether these are used in a commercial or administrative context, or for law enforcement purposes. In the EDPS’s view, a stricter approach is necessary in light of the extremely high risks of deep and non-democratic intrusion into individuals’ private lives. In non-EU Council of Europe member states there is an even higher risk of use and adverse impact of these technologies, given the lack of equivalent data protection rules. For example, it has recently come to light that Russian authorities are using facial recognition to identify and arrest people who attend protests, including those protesting peacefully. Such use of the technology has a chilling effect on freedom of association and expression. Such developments in non-EU states make it even more pertinent that a Council of Europe Convention ensure a higher layer of protection for human rights across the Council of Europe jurisdiction and potentially beyond.

CDT concurs that law enforcement’s use of facial recognition in particular can pose an especially high threat to human rights, given the risks of racial profiling and indiscriminate surveillance. In such cases, where there is a high risk of rights violations, it would therefore be desirable to consider a moratorium until such a time as robust safeguards and effective limitations are in place.