

Tackling ‘Illegal’ Content Online: The EC Continues Push for Privatised Law Enforcement

The European Commission’s Communication on Illegal Content Online, released last week, is the latest in a long line of EU policy initiatives aimed at addressing the availability of (possibly) illegal content online. It envisages replacing decision-making by courts with that of private companies, and misses an important opportunity to provide EU-wide guidance on how notice-and-action processes should work. The Commission could have explained in more detail the obligations of those notifying content for removal, and the evidentiary standards notifications should meet to be actionable. The guidelines could have stressed the principle that decisions of legality and illegality are fundamentally for courts and judges to make, and that any notice-and-action process should provide for due process, including recourse to courts.

Instead, the Communication encourages more comprehensive and faster measures by hosts to prevent content from appearing that may (or may not) be illegal. It urges expanded use of technical measures, such as content filters, with scant attention paid to the shortcomings of available technologies. And it provides its recommendations for “voluntary” action by intermediaries alongside a promise of “possible legislative measures” to be issued by May 2018.

While the issues the Commission intends to address, notably illegal hate speech and terrorist content, are real and serious concerns for European societies, the Communication’s policy direction is problematic in several respects and could have a significant impact on online expression and debate in Europe.

Guidance presumes illegality of content and circumvents role of courts

The core problem with the Communication appears in the title: “Tackling Illegal Content Online”. Before any content can be tackled, a determination must be made about whether it is actually illegal. It is a fundamental principle in democratic societies that these decisions should ultimately be made by courts. As is the case with other legislative and non-legislative policy initiatives on online content (many of which are referred to in the Communication), the Communication ignores this principle. The absence of judicial oversight is particularly problematic for the categories of content that are at the centre of the Commission’s concerns: illegal hate speech and terrorist propaganda. These issues are top of mind for politicians and the public in many countries, and warrant serious attention from policy makers. But this type of speech poses difficult challenges in determining legality. It is far from straightforward to determine whether a statement constitutes hate speech (illegal or not) or is a strongly worded expression of a controversial political, ideological or religious view.

Further, in these areas, removal of content may also have negative consequences that the Communication does not consider. Removing expressions because they are considered illegal hate speech does not remove the views that lie behind those expressions, but it does eliminate the possibility for others to counter the statements. Similarly, certain religious statements may be considered terrorist content and removed. The speaker’s radical religious views do not disappear as a result, but society at large is deprived of the knowledge that those views exist, and others lose the opportunity to respond with counter-speech.

The Communication encourages and endorses intensified proactive measures by intermediaries to prevent, detect and remove content that may (or may not) be illegal. But it describes a regime of privatised law enforcement that does not attempt to draw a bright line between content that violates platforms’ terms of service (TOS) and content that breaks the law. In the Commission’s ideal world described in the Communication, sensitive decisions about legality and illegality will be made by platform moderators, prompted by “trusted” flaggers and regular users, under significant pressure from public authorities.

This creates considerable risk that platforms will be incentivised to err on the side of caution and to suppress flagged content without proper review, treating notices as valid and justified. The Communication’s reference to the Hate Speech Code of Conduct (and other Commission statements on the matter) makes it clear that the Commission assumes that flagged content is generally illegal. This assumption is not supported by publicly available data.

The Communication calls for a high level of transparency and accountability on the part of platforms, but it is silent on similar obligations for government entities seeking content removal, either directly (through Internet Referral Units, or IRUs) or through private entities such as NGOs (as is the case with the Code of Conduct, or CoC). These two types of initiatives are characterised by very little transparency, no standards for due process and no judicial oversight. It is therefore not possible to assess whether the content removed under these schemes is illegal or not. The Communication does not point out these shortcomings and simply endorses both initiatives.

The thrust of the approach taken in the Communication is to remove public accountability from private entities’ decisions about online expression in areas that require detailed legal analysis and scrupulous political even-handedness. The more widespread this approach becomes, the fewer disputes about controversial, offensive and possibly illegal speech will reach courts, and these important decisions about the limitations on free expression will be taken without public scrutiny.

Guidance relies heavily on use of automated filtering, which has its limitations

The Communication strongly encourages platforms to use automated detection and filtering technologies to identify and prevent the sharing of “illegal” content, and advocates for increased investment and research in this field. While some content hosts have incorporated automation into various stages of their moderation processes, it is important to caution policy makers against assuming that these tools can work as a silver bullet for the challenges of content moderation at scale.

Automated filtering tools are notoriously prone to both over- and under-inclusive results, taking down perfectly lawful speech while being fairly easily circumvented. Due to the complexity of human communication and the weakness of existing tools, human involvement in content moderation remains essential. The Communication should state more clearly the need for human review in algorithmic decision-making across the board. Currently available filtering technologies meant to detect uploading of copyrighted content without permission are effective to a point, but have great difficulty in recognising copyright exceptions and limitations. These technologies are not sophisticated enough to sift controversial content in the categories that motivate the Communication.
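
To make this limitation concrete, the sketch below shows the kind of exact-match, hash-based filter that underlies many upload-screening systems; the blocklist entry and function names are purely illustrative and are not a description of any particular platform’s tooling.

```python
import hashlib

# Hypothetical blocklist of fingerprints of previously removed files (illustrative only).
BLOCKED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint: changing a single byte of the file changes the hash."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Block an upload only if it is byte-for-byte identical to known removed content.

    Two failure modes follow directly from this design:
      - under-inclusive: re-encoding, cropping or watermarking the file evades the filter;
      - over-inclusive: a verbatim excerpt reused for news reporting, criticism or parody
        is blocked, because a hash carries no information about context or intent.
    """
    return fingerprint(upload) in BLOCKED_HASHES
```

More sophisticated perceptual-hashing or machine-learning classifiers reduce the evasion problem, but they inherit the same fundamental gap: they compare content against content, not content against legal context.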

The Communication also clearly states that “[i]llegal content, once detected and taken down, should not reappear online”. This amounts to a mandate for automatic technologies to enforce “notice-and-staydown” and would create a de facto obligation to monitor all content. This contradicts the Communication’s stated intention to avoid prejudice to the EU acquis, in particular the E-Commerce Directive (ECD). How guidance that instructs platforms to institute “notice-and-staydown” squares with Article 15 of the ECD, which prohibits the imposition of general monitoring obligations, is not obvious.
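
Read operationally, “should not reappear online” implies something like the following (again a purely illustrative sketch, not a description of any deployed system): once content has been removed after a notice, keeping it down requires checking every subsequent upload, from every user, against the removal database.

```python
import hashlib

# Illustrative sketch of what "notice-and-staydown" implies in practice.
removed_fingerprints: set[str] = set()

def _fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def handle_notice(content: bytes) -> None:
    """A successful notice adds the content's fingerprint to the staydown list."""
    removed_fingerprints.add(_fingerprint(content))

def handle_upload(content: bytes) -> bool:
    """Must run on every upload, from every user, before publication.

    Herein lies the tension with Article 15 ECD: keeping notified content down
    requires screening all content, not only the items the platform was actually
    notified about.
    """
    return _fingerprint(content) not in removed_fingerprints
```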

Guidance creates heavy burdens for smaller intermediaries

The Communication claims that many elements of the guidance have been drafted taking into consideration the specific needs of smaller online platforms, and that the Commission plans to “explore further means to support take-up of the guidance for smaller platforms”. However, it is hard to see this consideration reflected in the requirements placed on intermediaries.

It will be costly for small businesses to employ the legal resources necessary to make sensitive and sophisticated determinations about the possible illegality of content under the laws of every country in which their services can be accessed. This will likely stifle innovation by curtailing the development of new services and platforms, and will probably entrench existing platforms that have sufficient resources to navigate the very different speech-restriction regimes in different countries.

The Commission also expects all online platforms to allow for context-related exceptions when carrying out automatic staydown procedures. Intermediaries of all sizes would therefore have to invest in context-sensitive automated filters, which are in themselves extremely complex, technical and costly tools to develop. This will be particularly out of reach for smaller platforms.

The “clarification” of the liability safe harbour is really a call for increased takedowns

Given online platforms’ increasing role in facilitating access to information services, the Communication repeatedly emphasises the societal responsibility online platforms have towards their users in this digital age. According to the Commission, platforms have a “duty of care” to identify and remove illegal content, and are strongly encouraged to take voluntary, proactive measures. The Commission clarifies in the Communication that when taking voluntary, proactive measures, platforms are not automatically considered to be playing an active role, and would still benefit from the liability exemption in Article 14 of the ECD.

While actively searching for illegal content does not render intermediaries liable, it is important to note that the moment a platform receives a notice concerning allegedly illegal content, it has to act “expeditiously” to take down or disable access to that content in order to continue to benefit from the liability exemption. Regardless of the Commission’s “clarification” of the liability safe harbour, the call for more proactive measures, coupled with the Commission’s presumption that any type of notice ought to be actionable, is highly problematic. Removing content on the basis of bare (or algorithmic) allegations of illegality raises serious concerns for free expression and access to information. Intermediaries are likely to err on the side of caution and comply with most, if not all, of the notices they receive or generate, since evaluating notices is burdensome and declining to comply may jeopardise their protection from liability.

We continue to stress that protecting intermediaries from liability and from broad or open-ended obligations to control the behaviour of users and third-parties is crucial to fostering innovation, access to information services and free expression on the internet.

The Communication raises the possibility of legislation in 2018

It is clear that the Communication intends to exert substantial pressure on online platforms to take down more allegedly illegal content faster. This is clearly stated: “The Commission expects online platforms to take swift action over the coming months…it will monitor progress and assess whether additional measures are needed, in order to ensure the swift and proactive detection and removal of illegal content online, including possible legislative measures to complement the existing regulatory framework. This work will be completed by May 2018.”

This may be read as a warning to online platforms to duly follow the Commission’s guidelines or else face legislation. In practice, this seems likely to mean: installation of filtering technologies, and demonstrable ‘progress’ in removal or prevention of content that public authorities and flaggers – trusted or not – allege could be hate speech or terrorist content. Whether the content that is removed or prevented from appearing actually breaks the law is impossible to tell, and nothing in the Communication suggests that the question interests the Commission. There is no indication that the Commission intends to ensure transparency, accountability, due process and – most importantly – judicial oversight. Neither the IRUs nor the CoC maintains a public record of content that is flagged and removed, and no judge is ever consulted on the merits of the notifications. We and other organisations and policy-makers have raised these points repeatedly. The European Commission and Member State policy makers should ensure that the many policy initiatives designed to sanitise online content scrupulously track the limits of the law. The legal and technical systems we put in place today to handle illegal and problematic content will shape access to information and opportunities to speak for years to come.