
What Should the EU Do to Better Protect Democracy in the Digital Age? – CDT’s Key Takeaways

On November 12, the Centre for Democracy & Technology Europe Office organised a high-level discussion, in partnership with the United Nations Human Rights Regional Office for Europe, on 'What Should the EU Do to Better Protect Democracy in the Digital Age?' [recording available].

There are a number of conclusions we can draw from the questions posed during the debate. First, lawmakers in Europe have the opportunity to set the stage for what the internet will look like globally over the next ten years. The European Democracy Action Plan (EDAP) and the Digital Services Act (DSA), as well as current debates about terrorist content, child sexual abuse material (CSAM), and disinformation, are highly interconnected: what legislators institute in one affects the others. We must therefore take a broad view and recognise that we cannot weigh free expression, minority rights, or any other rights against one another. All human rights are universal and should be protected equally. The EU thus needs to be very precise with its definitions, and establish due process and clear procedures that respect fundamental rights and are rooted in the rule of law.

Secondly, lawmakers should seek solutions that go beyond pure content takedowns and do not threaten the right to free expression or associated rights. The EU must think globally as it moves to set these rules. Across the world, we see trends towards the criminalisation of journalism and cases of human rights defenders being imprisoned for sharing their opinions online; these cases are the result of overly broad definitions of illegal content. It is important to remember that there are ongoing rule of law challenges in EU Member States, and ill-defined laws could have profound negative consequences. That is why a clear liability framework for content moderation is needed. To be rights-compliant, such a system will need to provide adequate information to users and ensure an effective appeals process. People should be empowered to have more control over their data and the content they see online.

Thirdly, there is a lot to be gained by increasing meaningful transparency. For transparency to be meaningful, we must ask: transparency for whom, and for what purpose? We could start with further transparency over algorithms. It is well documented at this stage that algorithms replicate and even compound offline prejudices and discrimination. Given the role they play in our information ecosystem, it is vital that victims of such discrimination have effective access to remedy. Furthermore, we must understand the extent of the use of behavioural advertising and challenge, in particular, the deep well of personal data being used to drive such algorithms. Has the GDPR failed to protect us against such complex uses of data, or are data protection rules simply not being enforced rigorously enough?

We also call for increased transparency on content moderation from EU institutions and governments, especially when the request or encouragement to remove content comes from them. Finally, a careful approach will be needed with regard to political advertising. While we should aim to better understand trends in political ads and bring more transparency to their financing, we caution against attempting to distinguish “political” from “non-political” advertisements, as doing so could pose high risks to the fundamental rights of individuals and civil society organisations by raising consequential questions about which advocacy issues count as “political”. Future debates on this topic will require a multistakeholder approach involving companies, states, and civil society.

Watch the full discussion on CDT’s YouTube channel.

Learn about CDT’s positions on the EDAP and the DSA in its responses to the European Commission’s public consultations.