

EC Recommendation on Tackling Illegal Content Online Doubles Down on Push for Privatized Law Enforcement

Today, the European Commission published its “Recommendation on measures to effectively tackle illegal content online”, which puts forward a number of non-binding guidelines and principles for online platforms and hosts of user-generated content on how to respond to illegal content. The Recommendation goes beyond the ill-defined approach the Commission took in its “Communication on Tackling Illegal Content Online”, published in September 2017, despite the widespread criticism that approach received. In a joint letter, we recently urged the Commission to reflect further on its approach to tackling illegal content online.

While we recognize the Commission’s interest in seeking effective enforcement of national law, we continue to have significant concerns about the Commission’s overall approach and a number of its specific recommendations.

Companies are not the appropriate arbiters of the illegality of content

A major theme that runs throughout the European Commission’s broader Digital Single Market (DSM) Strategy is the push for internet companies to police and monitor their platforms for content that may violate various laws restricting speech. However, a fundamental principle of the rule of law is that courts and judges, not private companies, determine when speech violates the law. Such decisions about online expression demand expert evaluation and public scrutiny. In this respect, we have been consistently critical of the Commission’s approach in the debates surrounding the Audiovisual Media Services (AVMS) Directive, the Hate Speech Code of Conduct (CoC), and most recently, the Commission’s “Communication on Tackling Illegal Content”.

The Recommendation notably states that “online platforms should be able to take swift decisions as regards possible actions with respect to illegal content online”. Not only does this circumvent the role of courts; it also makes only vague reference to “effective and appropriate” safeguards that companies should put in place voluntarily. It thereby forgoes any governmental remedy when speech has unjustifiably been deemed illegal. The Commission fails to describe the processes and criteria companies are to use when taking such consequential decisions about content that may or may not be illegal. With incentives skewed towards takedowns, and a mere hope that companies will put in place adequate safeguards for users, the approach the Commission suggests is bleak and unsustainable from the outset.

The Commission recommends a variety of structures for speeding up content removal that lack adequate accountability mechanisms

In its pursuit of faster removal of allegedly illegal speech, the Commission recommends several approaches that circumvent the traditional court-order process, including “trusted flaggers”, Internet Referral Units, and hash databases. (See our in-depth look at these structures here.) But these methods for speeding up content removal are undeniably open to abuse. For one, by increasing pressure on online platforms to take down more content, faster, the Commission is creating incentives for companies to err on the side of caution and suppress flagged content without proper review. Moreover, these structures are particularly vulnerable to abuse because the Commission assumes that flagged content is generally illegal, despite the lack of public data to support that assumption. A minimal sketch of how hash matching works appears below.
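To make concrete why hash matching cannot weigh context, here is a minimal, hypothetical sketch of how a shared hash database operates. The code, names, and entries are ours for illustration only; real hash-sharing systems typically use perceptual rather than cryptographic hashes, but the context-blindness is the same.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 fingerprint of an uploaded file."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical shared database: fingerprints of files that one platform
# (or a "trusted flagger") has already judged to be illegal content.
known_hashes = {fingerprint(b"<bytes of a previously removed image>")}

def auto_flag(upload: bytes) -> bool:
    """Flag an upload if its fingerprint appears in the shared database.

    The match is purely mechanical: the same image embedded in a news
    report, a counter-speech post, or a piece of propaganda yields an
    identical fingerprint, so context plays no part in the decision.
    """
    return fingerprint(upload) in known_hashes

# Once one participant adds a hash, the file is flagged on every platform
# that consumes the database, with no fresh review of who posted it or why.
print(auto_flag(b"<bytes of a previously removed image>"))  # True
print(auto_flag(b"some unrelated upload"))                  # False
```

The design choice matters: one flagger's judgment propagates automatically across every participating platform, which is exactly why safeguards against erroneous or abusive entries are essential.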

Against this background, it is safe to assume that these structures will be used in attempts to silence lawful speech that some simply disagree with. Despite this reality, the Recommendation once again makes only scant and vague reference to safeguards against potential abuse: “effective and appropriate measures should be taken to prevent […] notices or counter-notices that are submitted in bad faith and other forms of abusive behaviour”. It is clear that the Commission has not given thorough consideration to the risks of censorship its recommendations present. It focuses on the fast removal of “illegal” content, but it should spend at least as much effort on building a solid scheme for mitigating abuse of those mechanisms. A rebalancing of priorities and responsibilities is in order.

Emphasis on speed and use of automation ignores limits of technology and techniques

The Commission also places much emphasis throughout its Recommendation on encouraging the use of automatic filtering technologies to detect and prevent the sharing of “illegal” content. As we previously pointed out, it is wrong for policymakers to give the impression that solving all content moderation problems is just a matter of investing resources in these tools and sharing best practices. They are not a “silver bullet”. Regardless of how far this type of technology advances, important limits will always remain on the utility of automated content analysis, because parsing context in human communication is a complex challenge. The legality of content often depends on context, including the intent or motivation of the speaker, which lies far beyond what these tools can recognize. Moreover, the accuracy of natural language processing (NLP) tools for analyzing the text of social media posts and other online content depends on clear, consistent definitions of the type of speech to be identified, and such precise definitions simply do not exist for extremist content and hate speech. For these reasons, human review is fundamental to content moderation that employs automated tools — a point not emphasized strongly enough in the Recommendation.
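As a deliberately crude illustration of the context problem, consider a keyword-based filter. The word list and posts below are invented, and production classifiers are far more sophisticated than this, but the gap between textual patterns and speaker intent persists at every level of sophistication.

```python
import re

# Hypothetical keyword list standing in for a far more sophisticated model;
# the underlying problem (no access to intent or context) is the same.
FLAGGED_TERMS = {"bomb", "attack", "destroy"}

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any listed term, ignoring all context."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    return bool(words & FLAGGED_TERMS)

posts = [
    "My stand-up set was a total bomb last night",          # self-deprecating joke
    "Eyewitness report on the attack against the convoy",   # journalism
    "We will destroy them in the finals on Saturday",       # sports hyperbole
]

for post in posts:
    # Every one of these lawful posts is flagged; the humor, reporting, or
    # hyperbole that makes them lawful is invisible to the tool.
    print(naive_flag(post), "-", post)
```

All three lawful posts are flagged, and nothing in the text alone reveals the humor, reporting, or hyperbole that makes them lawful. That is the work human review has to do.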

The lack of understanding of how these tools function, and of their limitations, is further evident in the Recommendation’s “clarifications” on platform liability. The Commission encourages providers to implement automated filtering technologies by stating that their proactive use “do[es] not automatically lead to the hosting service provider concerned losing the benefit of the liability exemption provided for in Article 14” of the eCommerce Directive. This is misleading: while deploying such tools does not itself forfeit the liability exemption, the moment online service providers obtain actual knowledge of illegal activity through these tools, they must act expeditiously to remove it or they lose the exemption and can be held liable. Online service providers need legal certainty to be able to operate efficiently and to support a broad range of users’ speech.

The Commission goes on to encourage cooperation between hosting service providers “through the sharing and optimisation of […] technological tools, including such tools that allow for automated content detection”. The intention is to help smaller platforms that lack the resources to adopt such technology. In practice, however, this will consolidate the strength of the biggest players, whose policies will then set the course for every other market participant. Given the limitations of the technology, encouraging this kind of cooperation is unwise.

The Commission also emphasizes removal speed, particularly in the case of terrorist content, where “referrals should be assessed and, where appropriate, acted upon within one hour, as a general rule”. Combine the aforementioned risks of overbroad censorship posed by the limits of automated filtering tools (particularly for extremist content and hate speech), the pressure of unrealistic takedown deadlines, and a general lack of safeguards against abuse, and you have a recipe for disaster. A one-hour deadline is also essentially incompatible with the “human-in-the-loop” principle that the Recommendation itself endorses for the use of automated filtering tools. The Commission should emphasize the accuracy and quality of takedowns, not just their quantity, and any time limits need to be flexible enough to accommodate these nuances.

Requiring reporting on users to law enforcement invades privacy and chills free expression

Finally, the Commission calls on Member States “to establish legal obligations for hosting service providers to promptly inform law enforcement authorities […] of any evidence of alleged serious criminal offences” they obtain as they police content on their sites. Mandatory reporting to law enforcement officials jeopardizes the fundamental rights to privacy and freedom of expression, and would be unlikely to yield useful information for law enforcement. (The U.S. Congress considered and ultimately rejected such a mandate for apparent “terrorist activity” in 2016; CDT and a coalition of human rights and civil liberties organizations and trade associations opposed the bill.)

National law may already permit hosting providers to report information, including private communications, to law enforcement officials in cases of emergency. (For example, U.S.-based hosting providers are permitted to make such voluntary disclosures to U.S. authorities under 18 U.S.C. § 2702(b)(7)-(8).) The difference between voluntary and mandatory reporting is stark. Mandatory reporting laws would create strong incentives for providers to over-report their users’ information and communications to law enforcement, in order to avoid penalties under the law. This would inundate law enforcement with information that would likely provide little value to investigators of serious crimes — substantially expanding the haystack makes it more difficult to find the needle.

At the same time, such laws would present enormous incursions into individuals’ fundamental rights. The notion that a provider may turn over even private communications to law enforcement at any time will exert a strong chilling effect on people’s communications. Examples already abound of misinterpreted jokes, greetings, and satire leading to scrutiny of innocent individuals. Vaguely-worded reporting obligations could give authorities leverage to pressure platforms into reporting on political organizing and advocacy by opposition parties.  And, given the Commission’s push for providers to use more (faulty, overbroad) automated monitoring tools, the amount of user communication and activity that would come under scrutiny and be forwarded to law enforcement under a better-safe-than-sorry approach would be staggering.

It is appropriate for data privacy laws to include exceptions that enable providers to share information with law enforcement under emergency circumstances, where there is an imminent risk of death or serious bodily harm to a person. But a legal mandate to police and forward anything that hints of risk or suspicion is too broad an approach, and one that will not yield positive outcomes for either law enforcement or users. The Commission should investigate whether current national laws provide insufficient scope for voluntary disclosures in emergency circumstances, and should clarify the rules around voluntary sharing of information with law enforcement. Providers should, as a general rule, not disclose user information absent appropriate legal process.

What’s next? EC to assess progress and once again raise the possibility of legislation

While these recommendations are not legally binding, online platforms now have three months to take measures on terrorist content that satisfy the Commission’s stance: take more content down, and faster. On illegal content in general, the Commission will assess progress within six months. Based on the “progress made”, the Commission will decide whether to legislate in this field. France has already warned that it will push for EU rules on removing illegal content online if platforms do not, as a general rule, remove such content within one hour.

Thus, companies are facing added pressure to “do something” to speed up content removal. In addition, Member States will be asked to “report to the Commission on the referrals submitted […] and the decisions taken by hosting service providers”. Hosting service providers are also instructed to submit to the Commission “upon its request, all relevant information to allow” for monitoring of progress.

As we have argued repeatedly, it is essential to ensure independent review by courts of content that is flagged and/or removed as illegal. There is no indication that the Commission aims to ensure this sort of judicial oversight. This means that it remains impossible to judge whether content removal tracks the limits of the law, which should be a key objective of the Commission’s policy.

This extremely pressurized environment can only lead to unfair decisions in content moderation, and ultimately, overbroad censorship of internet users. The Commission is failing to demonstrate that it values and safeguards free expression, and thus failing in its responsibility to protect fundamental rights. We’ll continue to voice these concerns in the upcoming public consultation that the Commission plans to launch shortly.