On International Human Rights Day, we should celebrate the key wins that have been made to further protect people’s human rights in relation to technology in the EU, and as a knock-on effect globally. The EU’s landmark Digital Services Act deserves a special mention. The law provides a once-in-a-generation opportunity to protect democracy and human rights in the digital age. It introduces new rules on transparency, accountability and due diligence obligations for large tech companies. It is imperfect, so to mitigate the remaining risks to human rights, thoughtful monitoring and effective enforcement and implementation will be key.
The international day to celebrate human rights is also an opportunity to reflect on how we as a society are protecting the rights of the most vulnerable. In what feels like an extraordinary situation, this week CDT Europe has had to join our civil society partners in appealing to EU leaders not to exclude some of the very most at-risk and vulnerable groups – migrants, asylum-seekers and refugees – from the protections that the EU’s AI Act will offer. To give just one example, the draft law includes a carve-out for AI systems that form part of the EU’s large-scale IT systems used in migration, such as Eurodac and the Schengen Information System.
The entire purpose of the AI Act is, in theory, to protect fundamental rights when it comes to AI systems. Yet we find ourselves in a situation where exceptions are being sought for the very most vulnerable. This problem stems from a more general issue with the structure of the draft Act, which CDT highlighted when it was first proposed: whilst a risk-based approach can be helpful in ensuring proportionate regulation, appropriately protecting human rights requires integrating a rights-based approach. This combined method would recognise that risk increases as the likelihood, or the seriousness, of an infringement on rights increases.
The draft AI Act at times focuses on the technology and at times on the context. But it needs to consistently account for both. The particular context of migration has deep implications for human rights. People arriving at Europe’s borders are often fleeing conflict, poverty or persecution. The migratory route to Europe is one of the deadliest in the world, which gives some insight into how difficult a situation people must be in before they put their families in a boat to risk the journey. Depending on where they arrive, people are at risk of detention and violence at the borders. They are also already at a high risk of racism, xenophobia and discrimination.
As the work of CDT and others has repeatedly demonstrated, AI decision-making is often not neutral. In fact, it can compound and exacerbate discrimination, as demonstrated by historic and current examples of rights abuses involving the use of AI systems by immigration authorities. The use of AI systems for biometric categorisation, predictive purposes, social media scraping or facial recognition is current practice in migratory and anti-trafficking contexts. These uses raise serious fundamental rights concerns, as they tend to be directed at those who have already been overwhelmingly targeted by state surveillance under the auspices of fighting trafficking or ‘detecting risks of tensions related to migration’.
The AI Act has the crucial and laudable aim of protecting fundamental rights with regard to AI applications. However, if the law fails to protect the most vulnerable, it will fail in this overall objective. It is therefore urgent that EU lawmakers ensure that the context of a given situation is taken into account in the AI Act, and in this case that should result in the correct categorisation of many AI applications in the context of migration as either high-risk or posing an unacceptable risk.