European Policy, Free Expression, Government Surveillance
EU Tech Policy Brief: January 2025
Welcome back to the Centre for Democracy & Technology Europe's Tech Policy Brief, where we highlight some of the most pressing technology and internet policy issues under debate in Europe, the U.S., and internationally, and give CDT's perspective on their impact on digital rights. To sign up for this newsletter, or for CDT Europe's AI newsletter, please visit our website.
📢 2025 Team Update
CDT Europe's team is back together! We're thrilled to kick off the new year with the full team back in action. This January, we welcomed two new team members: Joanna Tricoli, who joins the Security, Surveillance and Human Rights Programme as a Policy and Research Officer, and Magdalena Maier, who joins the Equity and Data Programme as a Legal and Advocacy Officer. Plus, our Secretary General, Asha Allen, has returned to the office – we're so glad to have her back!

👁️ Security, Surveillance & Human Rights
PCLOB Dismissals Put EU-U.S. Data Transfers At Risk
On 27 January, the Trump Administration dismissed three Democratic members of the Privacy and Civil Liberties Oversight Board (PCLOB), an independent government entity that facilitates transparency and accountability in U.S. surveillance. The dismissals left the body without a quorum, preventing it from commencing investigations or issuing reports on intelligence community activities that may threaten civil liberties. It is unclear when replacements will be appointed and operations will resume, but if past vacancies are any guide, the process is likely to take considerable time.
The PCLOB plays a crucial role in protecting privacy rights and keeping intelligence agencies in check. It is also a key part of the EU-U.S. Data Privacy Framework (DPF), established in 2023 after years of negotiations following the Court of Justice of the EU's invalidation of Privacy Shield. The DPF provides EU citizens with rights to access, correct, or delete their data, and offers redress mechanisms including independent dispute resolution and arbitration. Under the Framework, the PCLOB is responsible for overseeing and ensuring that U.S. intelligence follows key privacy and procedural safeguards. As we pointed out in a Lawfare piece, weakening this oversight board raises serious concerns about the DPF's validity, since the EU now faces greater challenges in ensuring that the U.S. upholds its commitments — putting the entire DPF, and transatlantic data flows with it, at risk.
Venice Commission Asks for Strict Spyware Regulations
In its long-awaited report released last December, the Venice Commission addressed growing concerns about spyware use, and the existing legislative frameworks regulating the technology across Council of Europe Member States. The report examines whether those laws provide enough oversight to protect fundamental rights, and was prepared in response to a request from the Parliamentary Assembly of the Council of Europe following revelations about concerning uses of Pegasus spyware.
In the report, the Commission emphasised the need for clear and strict regulations, given spyware's unprecedented intrusiveness and its capacity to interfere with the most intimate aspects of our daily lives. To prevent misuse, it laid out clear guidelines for when and how governments may use such surveillance tools, to ensure that privacy rights are respected and abuse is prevented.
Recommended read: The Guardian, WhatsApp says journalists and civil society members were targets of Israeli spyware
💬 Online Expression & Civic Space
Civil Society Aligns Priorities on DSA Implementation
Last Wednesday, CDT Europe hosted the annual in-person meeting of the DSA Civil Society Coordination Group at its office, bringing together 36 participants from across Europe to strategise and plan for 2025, with a focus on several aspects of Digital Services Act (DSA) enforcement.

The day began with a focused workshop by the Recommender Systems Task Force on the role of recommender systems in annual DSA Risk Assessment reports, which Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) must complete to assess and mitigate the systemic risks posed by their services. The workshop addressed key challenges in interpreting these reports, particularly in the absence of data to substantiate claims about how effective mitigations are.
That session was followed by a broader workshop on DSA Risk Assessments. With the first round of Risk Assessment and Audit reports now published, constructive civil society feedback on those reports can help improve each iteration, pushing towards the ultimate goal of meaningful transparency that better protects consumers and society at large.
Transparency and Accountability Are Needed from Online Platforms
Recently, at a multistakeholder event on DSA Risk Assessments, CDT Europe's David Klotsonis facilitated a session on recommender systems. With the first round of Risk Assessment reports widely considered unsatisfactory by civil society, much of the conversation focused on how to foster greater and more meaningful transparency through these assessments. Participants highlighted that, without data to underpin the risk assessments, robust and informed evaluation by the public is impossible. Even in the absence of such data, however, the discussion underscored that consistent and meaningful engagement with relevant stakeholders — including digital rights organisations in the EU — remains crucial. Civil society input is key to making these reports more useful, and to driving the transparency and accountability necessary for better platform safety.
Recommended read: Tech Policy Press, Free Speech Was Never the Goal of Tech Billionaires. Power Was.
⚖️ Equity and Data
CDT Europe Responds to EC Questionnaire on Prohibited AI Practices
CDT Europe participated in the public stakeholder consultation on the practices the AI Act prohibits, to inform the European Commission's development of guidelines for practically implementing those prohibitions (which apply beginning 2 February 2025). In our response, we highlighted that the prohibitions — as set out in the final AI Act text — should be further clarified to cover all potential scenarios where fundamental rights may be impacted. We also argued that exceptions to these prohibitions must be interpreted narrowly.
Second Draft of the General-Purpose AI Code of Practice Raises Concerns
In December, the European Commission published the second draft of the General-Purpose AI (GPAI) Code of Practice (CoP). Despite significant changes and some improvements, several aspects of the draft continue to raise concerns among civil society. The systemic risk taxonomy, a key part of the draft that sets out the risks GPAI model providers must assess and mitigate, remains substantially unchanged.
In earlier feedback, CDT Europe suggested key amendments to bring the draft in line with fundamental rights, such as including the risk to privacy or the prevalence of non-consensual intimate imagery and child sexual abuse material. On a different front, organisations representing rights-holders have called for critical revisions to the draft to avoid eroding EU copyright standards, noting that the CoP in its current form fails to require strict compliance with existing EU laws.
Our comments on the second draft's systemic risk taxonomy and its approach to fundamental rights are available on our website. CDT Europe will continue to engage with the process; the next draft is expected on 17 February, when it will simultaneously be opened for comments from CoP participants.
EDPB Opinion on Personal Data and AI Models: How Consequential Is It?
In an early January IAPP panel, our Equity & Data Programme Director Laura Lazaro Cabrera discussed the role of the latest EDPB opinion on AI models and the GDPR in settling a long-running debate: does the tokenisation process underlying AI models prevent data processing, in the traditional sense, from taking place? Taken to its conclusion, this line of reasoning would place AI models entirely outside the scope of the General Data Protection Regulation (GDPR).

The panel unpacked the opinion’s nuances, noting that it allowed for situations where a model could be considered legally anonymous — and thereby outside the GDPR’s scope — even when personal data could be extracted, if the likelihood of doing so using “reasonable means” was “insignificant”. As the panel highlighted, the opinion is strictly based on the GDPR and did not refer to the AI Act, but will inevitably inform how regulators approach data protection risks in the AI field. Those risks are currently under discussion in several AI Act implementation processes, such as those for the GPAI Code of Practice and the forthcoming template for reporting on a model’s training data.
Recommended read: POLITICO, The EU’s AI bans come with big loopholes for police
🦋 Bluesky
We are on Bluesky! As more users join the platform (including tech policy thought leaders), we're finding more exciting content, and we want you to be part of the conversation. Be sure to follow us at @cdteu.bsky.social, and follow our team here. We have also created a starter pack of 30+ EU tech journalists, to help you catch the latest digital news in the bubble.
🗞️ In the Press
- Euractiv, Civil society rallies for human rights as AI Act prohibitions deadline looms
- The Record, Politicization of intel oversight board could threaten key US-EU data transfer agreement
- Lawfare, Trump’s Sacking of PCLOB Members Threatens Data Privacy
- IAPP, First EU AI Act provisions now in effect
⏫ Upcoming Events
AI Summit: On 10 and 11 February, France will host the Artificial Intelligence Action Summit, gathering heads of state and government, leaders of international organisations, CEOs, academics, NGOs, artists, and members of civil society to discuss the development of AI technologies across the world and their implications for human rights. CDT President Alexandra Reeve Givens and CDT Europe Programme Director Laura Lazaro Cabrera will attend the conference, and Laura will deliver the closing remarks at an official side event to the Summit hosted by Renaissance Numérique. Registration is open here.
RightsCon: Our Security, Surveillance and Human Rights Programme Director Silvia Lorenzo Perez will participate in a panel discussion on spyware at the 2025 edition of RightsCon, taking place from 24 to 27 February in Taipei. Each year, RightsCon convenes business leaders, policymakers, government representatives, technology experts, academics, journalists, and human rights advocates from around the world to tackle pressing issues at the intersection of human rights and technology.