
CDT Europe’s Response to the European Commission’s Questionnaire on Prohibited AI Practices

The Centre for Democracy and Technology Europe (CDT Europe) recently participated in the public stakeholder consultation on prohibited AI practices under Article 5 of the AI Act. The consultation feeds into guidelines that the European Commission is developing on the practical implementation of these prohibitions, which will apply from 2 February 2025, earlier than most other provisions of the Act. We regret that no full draft of the guidelines was made public beforehand, and that submissions were limited to responding to a questionnaire with pre-set questions and character limits.

Despite these challenges, CDT Europe responded to the consultation. In our response, we highlighted that the prohibitions as set out in the final text of the AI Act require additional clarification to ensure that they cover all potential scenarios where fundamental rights may be impacted, and that exceptions to these prohibitions must be interpreted narrowly.

In particular, we called for greater alignment of the prohibition of subliminal, purposefully manipulative or deceptive practices with the Digital Services Act (DSA), specifically its ban on dark patterns under Article 25(1). The guidelines should clarify that dark patterns, defined under the DSA as “practices that materially distort or impair, either on purpose or in effect, the ability of recipients of the service to make autonomous and informed choices or decisions”, also fall under the AI Act’s prohibition. We furthermore advocated that the examples of manipulative behaviour highlighted in the 2021 Commission Notice concerning unfair business-to-consumer commercial practices in the internal market be explicitly included as prohibited practices under the AI Act.

With regard to the prohibition of AI systems that exploit vulnerabilities, we stressed that the guidelines should provide a non-exhaustive list of examples illustrating the wide array of vulnerabilities that may derive from a person’s or a group’s “socio-economic situation”. In particular, we referred to anti-discrimination laws in different EU Member States, which have recognised grounds such as level of education, wealth, property status, housing assistance, social status and social origin.

We advocated for a broad definition of unacceptable social scoring practices, one which acknowledges that a social score can be a dynamic value rather than a precise number, includes any type of categorisation, and also covers situations where individuals’ or groups’ performance relative to others is classified. For example, while some fraud-detection AI systems deployed in welfare systems, such as those in the Netherlands and Sweden, rely on scores, other recently investigated AI systems are known to use a set of metrics. Moreover, under the AI Act’s definition, one element of social scoring is that it leads to “detrimental or unfavourable treatment […] in social contexts that are unrelated to the contexts in which the data was originally generated or collected”. With regard to this definition, we argued that clarification is needed as to what makes contexts unrelated to each other, highlighting, for example, that a shared objective between government agencies does not mean that the contexts are related, and emphasising that the guidelines should endorse a genuine differentiation between social contexts.

We also highlighted that the scope of the ban on using AI systems to make individual crime risk assessments and predictions should be defined broadly. Hence, it should be clarified that the ban also applies to re-offending predictions in the administration of justice, such as in parole hearings. We also called for the AI Act’s exception to the ban, which covers circumstances where AI is used to support a human assessment, to be expanded upon, arguing that the guidelines should clarify the minimum standard a human assessment must meet to fall within this exception, as well as any necessary safeguards such as internal controls, approval or review processes, and accountability measures.

Finally, we argued that the guidelines should clarify the meaning of “criminal activity” to ensure that data such as being a suspect, having been arrested, being a victim of a crime or living in a high-crime area would not be covered by that term. Failure to exclude these types of involuntary contact with crime would mean that systems relying on such data could benefit from the exception, so that systems such as ProKid, which predicts offending by children and young people based on whether they have been a crime victim or a witness, and Top400, which considers factors such as being placed under surveillance, being arrested as a suspect or being a victim of domestic violence, would fall outside the ban.

We called for clarification of the meaning of “untargeted” scraping of facial images, which is prohibited under the AI Act. In particular, the guidelines should make clear that scraping does not become targeted as soon as any type of restriction is in place. We therefore argued that the guidelines should ensure that mere compliance with robots.txt instructions, which indicate whether and to what extent a webmaster agrees to a website being crawled, would not remove an AI system from the scope of the ban. Moreover, we argued that the meaning of “targeted” should be interpreted by reference to the GDPR data minimisation principle: scraping would therefore need to be adequate, relevant, and limited to what is necessary in relation to the purposes for which the images are processed.
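To make concrete why robots.txt compliance alone says nothing about targeting, the short Python sketch below shows a hypothetical scraper that checks a site’s crawl directives before collecting images indiscriminately. It is an illustrative example, not part of our submission; the domain, crawler name and page paths are placeholders.

```python
# Hypothetical sketch: a scraper that merely respects robots.txt.
# The directives only say which paths may be crawled by which bots;
# they say nothing about *whose* facial images are being collected,
# so compliance alone does not make the scraping "targeted".
from urllib.robotparser import RobotFileParser

SITE = "https://example.org"          # placeholder domain
USER_AGENT = "example-image-crawler"  # placeholder crawler name

robots = RobotFileParser()
robots.set_url(f"{SITE}/robots.txt")
robots.read()  # fetch and parse the site's crawl directives

# Placeholder pages the crawler would sweep indiscriminately.
candidate_pages = [f"{SITE}/photos/", f"{SITE}/staff/gallery/"]

for url in candidate_pages:
    if robots.can_fetch(USER_AGENT, url):
        print(f"robots.txt permits crawling {url}; images would be scraped")
    else:
        print(f"robots.txt disallows {url}; skipping")
```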

The AI Act bans the use of emotion recognition systems in the areas of the workplace and educational institutions, except for medical or safety reasons. In our submission, we advocated for a narrow interpretation of this exception, with a particular emphasis on proportionality. Hence, to fall under the exception, evidence of the system’s effectiveness, both in identifying emotions and in advancing medical and/or safety ends, should be required. Furthermore, the guidelines should clarify how medical and safety reasons should be identified, justified and documented prior to the deployment of any emotion recognition technology.

We also argued that the guidelines should affirm the applicability of Article 6(3)(c) of Directive 89/391/EEC, which obliges employers to ensure that the introduction of new technologies is subject to consultation with workers and/or their representatives. Likewise, the guidelines should recall the obligation in Article 50(3) AI Act to notify individuals of the deployment of emotion recognition technologies. Finally, the guidelines should outline mitigations that deployers can take to counterbalance the fact that certain groups, such as people with disabilities, are disproportionately flagged by some emotion recognition systems.