

Landmark EU AI Act Must Reject Russian and Chinese-Style Surveillance Laws

China’s ruling Communist Party has previously claimed that its facial recognition system can scan the faces of China’s 1.4 billion people in a matter of seconds. The chilling outcome is the effective criminalisation of dissent and the elimination of the right to protest. Similarly, in Russia, hundreds of protesters have been detained simply for joining anti-government protests after being identified by the state’s facial recognition systems.

The EU has consistently positioned itself as a guardian of human rights, vociferously distancing itself from what might be seen as the Orwellian surveillance of China and Russia. However, EU leaders are currently toying with the idea of placing core rights at risk by legalising the untargeted use of facial recognition by law enforcement in its landmark AI Act.  

Picture facial recognition technology deployed during Pride marches in Hungary or anti-racism demonstrations in France or the Netherlands. The very essence of free expression and the right to protest would be compromised. The EU AI Act risks allowing untargeted scanning with exceptions that, in effect, permit constant surveillance. The EU should resist the temptation to borrow pages from China and Russia’s playbook on surveillance. The decision on mass facial recognition is not just a technological debate; it is a fundamental question about the kind of society we want to live in. 

Although EU governments have repeatedly pushed the boundaries of the use of facial recognition, Europe’s national and regional privacy regulators have been consistent and clear – there is no legal basis in Europe for invasive scanning. The European Court of Human Rights, in the Glukhin case against Russia, which involved a peaceful protester arrested after being identified through facial recognition, ruled: “The processing of Mr. Glukhin’s personal data in the context of his peaceful demonstration, which had not caused any danger to public order or safety, had been particularly intrusive.”

In other words: you cannot process innocent people’s personal data, particularly that of people exercising their lawful right to protest. Yet if the EU’s AI Act passes without an explicit prohibition on untargeted facial recognition, the law pitched to protect people’s rights against the harms of AI would ironically end up legalising one of its most nefarious uses.

The intense negotiations in Brussels now centre on which exceptions to allow for law enforcement’s use of this high-risk technology. But the focus on the list of crimes, rather than on the risks inherent in the technology itself, means that negotiators are missing the point. Untargeted facial recognition poses, by its very nature, an unacceptable risk to human rights: it must scan everyone – including innocent people – in order to function, and it is known to produce deeply discriminatory outcomes. Persistent errors in this dragnet surveillance technology will put innocent individuals in harm’s way and chill participation in public activities that are critical to a democratic society.

In 2017, the use of facial recognition by UK police at the Notting Hill Carnival led to the mistaken arrest of an individual attending the event and also threw up many false positives. This echoes repeated studies finding that some algorithms are 100 times more likely to misidentify Asian and Black individuals than white men. And even if the algorithmic bias in facial recognition were addressed, surveillance technologies would still perpetuate existing disparities in policing. At a time of reckoning in Europe regarding systemic racism in policing, it is surely reckless to legalise a technology infamous for compounding discrimination.

The Chinese authorities appear to agree with some EU governments’ proposed approach. Over the summer, China’s regulator introduced new rules framed as curbing the use of facial recognition technology – but crucially, the rules allow wide exceptions for national security and public safety. It is deeply concerning that the EU AI Act’s text appears to be heading in a similar direction, with exceptions so broad that they could allow continuous surveillance of European public spaces and a consequent infringement of privacy, free expression, and other fundamental rights.

Understanding the risks to democracy and human rights, the European Parliament earlier this year took the commendable step of voting to formally prohibit such invasive uses of facial recognition. However, under the political pressure of striking a deal, Parliamentarians appear to be at risk of caving, and the prospect of this dystopian technology being used to surveil society is back on the table.