

CDT Responds to EC Public Consultation on Tackling Illegal Content Online

In October of last year, the European Commission published its Communication on tackling illegal content online. We criticised it for pushing private companies to police, on governments’ behalf, content that may be considered harmful or illegal. A few months later, the Commission issued a Recommendation doubling down on this approach. It relies heavily on several mechanisms intended to speed up takedowns: trusted flaggers with government backing, Internet Referral Units, and hash databases. Safeguards and appeal processes are mentioned, but only as voluntary measures.

The Commission has now conducted a public consultation aiming to ‘gather evidence and data on current practices, respondents’ experiences and organisations’ policies for tackling illegal content online’. In our response to the preliminary Inception Impact Assessment, we called on the Commission to conduct and publish a comprehensive analysis of the nature and volume of the content it targets. Without such analysis, we consider it premature to propose legislative action. Moreover, when the Commission assesses ‘progress’ in tackling various types of content, it cannot measure only how much content is taken down and how quickly. It must also consider the extent to which its policy captures legal content and results in de facto censorship of lawful political speech.

In our response to the public consultation, we reiterated the points above and emphasised the following principles:

  • The need to protect and encourage free speech online should be prioritised over the perceived need to prevent and remove possibly illegal content. The principles of limited liability embedded in the E-Commerce Directive (ECD) are the foundation of the internet as a space for free expression, access to information, entrepreneurship, and innovation. If, instead, intermediaries such as hosts and platforms are discouraged from allowing users to post content because of liability risk or content-policing burdens, the full benefits of the information society will remain unrealised.
  • The role of courts in determining the legality of someone’s speech cannot be circumvented. A fundamental principle of the rule of law is that courts and judges, not private companies, determine when speech violates the law. Such decisions about online expression demand expert evaluation and public scrutiny. Any mechanism proposed to speed up the removal of allegedly illegal speech, including “trusted flaggers”, Internet Referral Units, and hash databases, should respect this principle. In this respect, we have been consistently critical of the Commission’s approach in the debates surrounding the Audiovisual Media Services (AVMS) Directive, the Hate Speech Code of Conduct (CoC), and most recently, the Commission’s Communication and Recommendation on Tackling Illegal Content.
  • The “effective and appropriate” safeguards that companies should put in place against abuse of their content-removal mechanisms should not be voluntary. Any measure concerning online speech should be weighed carefully against the risk of censorship. Recommendations should focus not on the fast removal of “illegal” content, but on ensuring a solid scheme for mitigating abuse of those mechanisms.
  • Use of automated content analysis tools to detect or remove illegal content should never be mandated in law. Thorough consideration and understanding should also be given to the significant limitations of automated content analysis tools. Parsing context in the many forms of human communication is a complex challenge. The legality of content often depends on context, including the intent or motivation of the speaker, which these tools currently cannot recognise. Moreover, the accuracy of natural language processing (NLP) tools for analysing the text of social media posts and other online content depends on clear, consistent definitions of the type of speech to be identified, and such precise definitions simply do not exist for extremist content and hate speech. For these reasons, human review is fundamental to content moderation that employs automated tools (the sketch following this list illustrates the context problem).
  • Unrealistic time limits for takedowns should not be mandated in law. In its recent Recommendation, the European Commission set out a one-hour time limit for taking down terrorist content. That timeframe is effectively incompatible with the “human-in-the-loop” principle the Recommendation itself claims to embrace for automated filtering tools. Emphasis should be placed on the accuracy and quality of takedowns, not their quantity. Any time limits therefore need to be flexible enough to accommodate these nuances.
  • Obligations for hosting service providers to promptly inform law enforcement authorities of any evidence of alleged serious criminal offences should not be mandated in law. Such a proposal jeopardises the fundamental rights to privacy and freedom of expression, and is unlikely to yield useful information for law enforcement. Mandatory reporting laws would create strong incentives for providers to over-report their users’ information and communications to law enforcement in order to avoid penalties. This would inundate law enforcement with information of little value to investigators of serious crimes.
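
To make the context problem concrete, here is a minimal, hypothetical sketch (in Python) of the kind of keyword-based flagging that automated filters reduce to when no precise legal definition of the targeted speech exists. The blocklist phrase, the example posts, and the naive_flag function are all invented for illustration; real systems are more sophisticated, but they face the same underlying limitation.

```python
# Hypothetical illustration: a naive phrase-based "flagger". All names and
# examples here are invented; this is not any platform's actual system.

BLOCKLIST = {"smash the system"}  # hypothetical "extremist" phrase

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted phrase, ignoring context."""
    text = post.lower()
    return any(phrase in text for phrase in BLOCKLIST)

posts = [
    # Possibly illegal incitement (depending on jurisdiction and intent):
    "Join us tonight and smash the system by force.",
    # Counter-speech quoting the same phrase in order to condemn it:
    'The flyer told readers to "smash the system". We must report this group.',
    # News reporting on the phrase:
    "Police say the banner read 'smash the system'.",
]

for post in posts:
    print(naive_flag(post), post)
```

All three posts are flagged identically, although only the first could plausibly be illegal: the tool sees the string, not the speaker’s intent, the quotation, or the reporting context. That gap between matching text and judging legality is precisely why human review remains essential.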