EU Tech Policy Brief: Summer 2024
Welcome back to the Centre for Democracy & Technology Europe's Tech Policy Brief, the last edition before the summer break! We will be back in September.
This edition covers the most pressing technology and internet policy issues under debate in Europe, the U.S., and internationally, and gives CDT's perspective on their impact on digital rights. To sign up for CDT Europe's AI newsletter, please visit our website. Do not hesitate to contact our team in Brussels: Silvia Lorenzo Perez, Laura Lazaro Cabrera, Aimée Duprat-Macabies, and David Klotsonis.
👁️ Security, Surveillance & Human Rights
Weighing in on the EU-U.S. Data Privacy Framework
This month, CDT Europe met with the European Commission and submitted comments for the first annual review of the EU-U.S. Data Privacy Framework, which facilitates the transfer of personal data between the EU and the U.S. while aiming to maintain high data protection standards. Its predecessor, the "Privacy Shield" agreement, was invalidated by the Court of Justice of the European Union in 2020 over concerns that U.S. surveillance laws and practices did not align with EU privacy standards, and that EU citizens lacked effective legal remedies. Our comments shed light on significant updates in U.S. legislation that may affect the implementation of the framework, with a particular emphasis on the April 2024 reauthorisation of Section 702 of the Foreign Intelligence Surveillance Act (FISA 702).
What's new with FISA 702? The reauthorisation has broadened its reach: nearly any U.S. company that offers services and handles communications data can now be required to comply with FISA 702 directives. This expansion has created considerable uncertainty about the scope of FISA 702 surveillance and heightened concerns over its inadequate safeguards. It consequently raises serious questions about whether U.S. surveillance laws can genuinely align with EU privacy and data protection standards.
Calling on the EU to Limit Use of Spyware Technologies
Governments from the European Union and beyond have been illegally using spyware against their citizens, democratically elected politicians, journalists, and human rights defenders. Given the widely recognised harms this technology causes, we call on the new Parliament to push the EU to urgently introduce stringent limits on the use of spyware, and to adopt stronger safeguards to protect citizens' rights.
Other governments are choosing different paths, and the harms to rights are explicit: This month, the government of Pakistan adopted new legislation to enable generalised, warrantless interception and tracing of citizens’ communications by its intelligence services in the “interest of national security or in the apprehension of any offence.” The move has sparked intense debate amongst opposition leaders, citizens, and digital rights activists, with the Human Rights Commission of Pakistan stating that the new law grossly violates citizens’ basic rights to liberty, dignity, and privacy, and it is likely to “be used to clamp down on political dissent through means of blackmail, harassment and intimidation.”
The EU should avoid following this path, in which "national security" becomes a blank cheque for draconian surveillance measures. Implementing robust guardrails on the use of surveillance technologies by law enforcement and intelligence services is crucial to prevent national security from being abused as a pretext for infringing fundamental rights. Without stringent regulation and oversight, such technologies can be misused to stifle dissent, monitor political opponents, and intimidate journalists and activists, ultimately eroding public trust in government institutions.
Recommended read: Politico reported on a European Commission document that takes a stance on spyware.
💬 Online Expression & Civic Space
Highlighting Civil Society’s Important Role in Application of the DSA
As the new European Parliament forms, it is important that MEPs continue to monitor enforcement of the Digital Services Act (DSA) through the dedicated working groups, to ensure its effectiveness and its coherence with the application of existing regulations. In doing so, continued collaboration with civil society organisations will be key to meaningful oversight.
Civil society research has also been instrumental in informing the European Commission's requests for information and investigations under the DSA, by presenting evidence of possible non-compliance and raising awareness of likely risks, notably in the cases of Temu, LinkedIn, and Meta.
A research report by AI Forensics and Interface, published just this month, raised important concerns about the way online platforms continue to shape civic discourse. The report found that TikTok search suggestions promoted Germany's far-right party, Alternative for Germany, to young voters ahead of the EU elections. It also highlighted the importance of continued collaboration between the European Commission, Digital Services Coordinators (the national authorities responsible for supervising, enforcing, and monitoring the DSA), and civil society in identifying blind spots in DSA enforcement, even during ongoing investigations. Following the report, the European Commission opened formal proceedings under the DSA and requested information to assess TikTok's compliance with the law in areas including the protection of minors.
Recommended watch: In a short video, CDT Europe interviewed representatives of civil society organisations — who participated in our Fifth DSA Civil Society Roundtable Series event — about implementation of the DSA.
⚖️ Equity & Data
Analysing the AI Act’s Obligations for Public Authorities
In our most recent analysis of the EU's AI Act, we examined the obligations that the Act creates for public authorities. The brief focuses on how the AI Act applies to local authorities, and identifies the key obligations that public authorities must account for when deploying high-risk AI systems. The AI Act imposes obligations on all deployers of high-risk AI, many of which will require particular attention from resource-conscious public authorities. These authorities must upskill staff to adequately monitor the functioning of AI systems, increase internal capacity to comply with notification and transparency obligations, and introduce new systems to field complaints and queries.
Additional obligations apply to public authorities by virtue of their role, including ensuring that any high-risk AI adopted is listed in the EU’s database, that they register as deployers, and that they carry out fundamental rights impact assessments. These assessments must describe the specific harms that deployment of the high-risk AI system could cause, which groups of people are likely to be affected, and how possible harms would be mitigated.
Next Steps for the AI Act’s Implementation
While the text of the AI Act is final and its entry into force is imminent, several aspects of the law are yet to be fully developed, as they will be covered in guidelines, Codes of Practice, and implementing and delegated Acts. The Parliament’s role in ensuring that these documents robustly affirm human rights will be crucial, and the recent creation of a cross-committee working group to continue monitoring the AI Act’s implementation is a step in the right direction.
Recognising the AI Act’s Limits in Preventing Harms
Despite the AI Act’s many safeguards and compliance exercises, the law only focuses on mitigation and cannot prevent an algorithm — even a high-risk AI system — from performing poorly and creating problems as a result.
Recently, a New York Times investigation demonstrated how major harms can result from insufficient oversight of an AI system, even one that would comply with the AI Act. It found that an algorithm called VioGén, used by law enforcement authorities in Spain to assess the likelihood of domestic violence victims being abused again, wrongly assessed victims as facing low or negligible risk of harm, including in cases where victims were ultimately killed by their partners. Despite being trained to overrule VioGén's recommendations where the evidence warranted, Spanish police accepted the automated risk scores 95% of the time.
Clearly, law enforcement cannot rely on algorithms alone, particularly those like VioGén that the AI Act would classify as high-risk, to perform essential functions. AI safeguards are no cure for under-resourced authorities, deeply ingrained institutional practices, or the cultural stigma around domestic violence, all of which can degrade an AI system's performance through the quality of its input data.
Recommended read: EDRi developed a living guide for how to keep resisting biometric mass surveillance practices now and in the future.
🆕 Job Opportunity: Communications Trainee
Interested in working with our team in Brussels to promote human rights in the digital age? We are seeking a Communications Trainee to help support the office’s work in strategic communications. Gain valuable experience in social media management, event organisation, and strategic advocacy, and join a growing team of experts in digital rights. This is a paid opportunity starting in early September, and the deadline to apply is 8 August. For more information, visit our website.
⏫ October Event: Tech and Society Summit
In partnership with EDRi and several other civil society organisations, CDT Europe is co-hosting the Tech and Society Summit on 1 October. This event aims to foster dialogue and debate between civil society and recently elected EU decision-makers, focusing on the intersection of technology, society, and the environment. Our panels will focus on spyware regulation and on civil society's engagement in the enforcement of the Digital Services Act.