
AI Policy & Governance, European Policy, Free Expression, Government Surveillance

EU Tech Policy Brief: April 2025

Welcome back to the Centre for Democracy & Technology Europe's Tech Policy Brief! This edition covers the most pressing technology and internet policy issues under debate in Europe and gives CDT's perspective on their impact on digital rights. To sign up for CDT Europe's AI newsletter, please visit our website. Do not hesitate to contact our team in Brussels.

👁️ Security, Surveillance & Human Rights

Citizen Lab Unveils Surveillance Abuses in Europe and Beyond                                       

The recent Citizen Lab report on the deployment of Paragon spyware in EU Member States, particularly Italy and allegedly Cyprus and Denmark, highlights a concerning trend of surveillance targeting journalists, government opponents, and human rights defenders. The invasive monitoring of journalist Francesco Cancellato, members of the NGO Mediterranea Saving Humans, and human rights activist Yambio raises serious concerns about press freedom, fundamental rights, and the broader implications for democracy and the rule of law in the EU.

The Italian government's denial that it authorised the surveillance, despite reports indicating otherwise, points to a lack of transparency and accountability. The Undersecretary to the Presidency of the Council of Ministers reportedly admitted that Italian intelligence services used Paragon spyware against activists from Mediterranea Saving Humans, citing national security justifications. This admission underscores the urgent need for transparent oversight mechanisms and robust legal frameworks to prevent the misuse of surveillance technologies.

Graphic for Citizen Lab report, which reads, "Virtue or Vice? A First Look at Paragon's Proliferating Spyware Options". Graphic has a yellow background, and a grayscale hand reaching through grey message bubbles.

The lack of decisive action at the European level in response to these findings is alarming. Efforts to initiate a plenary debate within the European Parliament have stalled due to insufficient political support, reflecting a broader pattern of inaction that threatens civic space and fundamental rights across the EU. This inertia is particularly concerning given parallel developments in France, Germany, and Austria, where legislative measures are being considered to legalise the use of surveillance technologies. In light of the European Parliament's PEGA Committee findings on Pegasus and equivalent spyware, it is imperative that EU institutions and Member States establish clear, rights-respecting policies governing the use of surveillance tools. The normalisation of intrusive surveillance without adequate safeguards poses a direct challenge to democratic principles and the protection of human rights within the EU.

Recommended read: Amnesty International, Serbia: Technical Briefing: Journalists targeted with Pegasus spyware

💬 Online Expression & Civic Space

DSA Civil Society Coordination Group Publishes Analysis on DSA Risk Assessment Reports

Key elements of the Digital Services Act’s (DSA) due diligence obligations for Very Large Online Platforms and Search Engines (VLOPs/VLOSEs) are the provisions on risk assessment and mitigation. Last November, VLOPs and VLOSEs published their first risk assessment reports, which the DSA Civil Society Coordination Group, convened and coordinated by CDT Europe, took the opportunity to jointly assess. We identified both promising practices to adopt and critical gaps to address in order to improve future iterations of these reports and ensure meaningful DSA compliance.

Our analysis zooms in on key topics like the online protection of minors, media pluralism, electoral integrity, and online gender-based violence. Importantly, we found that platforms have overwhelmingly focused on identifying and mitigating user-generated risks, paying comparatively little attention to risks stemming from the design of their services. In addition, platforms do not provide sufficient metrics and data to assess the effectiveness of the mitigation measures they employ. In our analysis, we describe what data and metrics future reports could reasonably include to achieve more meaningful transparency.

Graphic with a blue background, with logo for the DSA Civil Society Coordination Group featuring members' logos. In black text, graphic reads, "Initial Analysis on the First Round of Risk Assessments Reports under the EU Digital Services Act".

CDT Europe’s David Klotsonis, lead author of the analysis, commented, “As the first attempt at DSA Risk Assessments, we didn’t expect perfection — but we did expect substance. Instead, these reports fall short as transparency tools, offering little new data on mitigation effectiveness or meaningful engagement with experts and affected communities. This is a chance for platforms to prove they take user safety seriously. To meet the DSA’s promise, they must provide real transparency and make civil society a key part of the risk assessment process. We are committed to providing constructive feedback and to fostering an ongoing dialogue.”

Recommended read: Tech Policy Press, A New Framework for Understanding Algorithmic Feeds and How to Fix Them 

⚖️ Equity and Data

Code of Practice on General-Purpose AI Final Draft Falls Short

Following CDT Europe's initial reaction to the release of the third Draft Code of Practice on General-Purpose AI (GPAI), we published a full analysis highlighting key concerns. One major issue is the Code's narrow interpretation of the AI Act, which excludes fundamental rights risks from the list of selected risks that GPAI model providers must assess. Instead, assessing these risks is left optional, required only where such risks arise from a model's high-impact capabilities.

This approach stands in contrast to the growing international consensus, including the 2025 International AI Safety Report, which acknowledges the fundamental rights risks posed by GPAI. The Code also argues that existing legislation can better address these risks, but we push back on this claim. Laws like the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act lack the necessary tools to fully tackle these challenges.

Moreover, by making it optional to assess fundamental rights risks, the Code weakens some of its more promising provisions, such as requirements for external risk assessments and clear definitions of unacceptable risk tiers. 

In response to these concerns, we joined a coalition of civil society organisations in calling for a revised draft that explicitly includes fundamental rights risks in its risk taxonomy.

Global AI Standards Hub Summit 

At the inaugural global AI Standards Hub Summit, co-organised by the Alan Turing Institute, CDT Europe's Laura Lazaro Cabrera spoke at a session exploring the role of fundamental rights in the development of international AI standards. Laura highlighted the importance of integrating sociotechnical expertise and meaningfully involving civil society actors to strengthen AI standards from a fundamental rights perspective. She also emphasised the need to create dedicated spaces for civil society to participate in standards processes, tailored to the diversity of their contributions and their resource limitations.

Image featuring Programme Director for Equity and Data Laura Lazaro Cabrera speaking at a panel with three other panelists on the role of fundamental rights in standardisation, at the Global AI Standards Hub Summit.

Recommended read: Tech Policy Press, Human Rights are Universal, Not Optional: Don’t Undermine the EU AI Act with a Faulty Code of Practice

🆕 Job Opportunities in Brussels: Join Our EU Team

We’re looking for two motivated individuals to join our Brussels office and support our mission to promote human rights in the digital age. 

The Operations & Finance Officer will play a key role in keeping our EU office running smoothly: managing budgets, coordinating logistics, and ensuring strong operational foundations for our advocacy work.

We’re also seeking an EU Advocacy Intern to support our policy and advocacy efforts, with hands-on experience in research, event planning, and stakeholder engagement. 

Apply before 23 April 2025 by sending your cover letter and CV to [email protected]. For more information, visit our website.

🗞️ In the Press

⏫ Upcoming Event

Pall Mall Process Conference: On 3 and 4 April, our Director for Security and Surveillance Silvia Lorenzo Perez will participate in the annual Pall Mall Process Conference in Paris.