Free Expression Online At Risk with EU High Court’s Pro-Filtering Decision

The highest court of the European Union has opened the door to global content-filtering orders with its decision last week in Eva Glawischnig-Piesczek v. Facebook Ireland. This decision, based on a politician’s demand to make Facebook proactively filter defamatory statements about her and her political party, demonstrates a distressing lack of technical analysis or consideration of broader freedom of expression principles. Along with the recent passage of the Copyright Directive, it points to a greater willingness of EU policymakers to embrace filtering mandates and further raises the stakes of the upcoming negotiations around the Digital Services Act.   

Background of the case

In 2016, Glawischnig-Piesczek, a politician and former chair of the Green Party in Austria, sued Facebook to get the company to take down a public post that linked to a news article about her and added commentary calling her a “miese Volksverräterin” (lousy traitor of the people) and a “korrupten Trampel” (corrupt bumpkin/oaf) and calling the Green Party a “Faschistenpartei” (fascist party). Facebook complied with an initial court order, blocking access to the post in Austria. The politician appealed, seeking a blocking order for “identical and equivalent content” that would apply worldwide. The Supreme Court of Austria referred the case to the Court of Justice of the European Union (CJEU) to determine whether the E-Commerce Directive (ECD) applies to such blocking orders.

Article 15 of the ECD specifies that Member States cannot impose on intermediaries “a general obligation . . . to monitor” the information they transmit or store, or “actively to seek facts or circumstances indicating illegal activity.” In other words, Article 15 is a strong prohibition against filtering mandates. The ECD notes that Member States may still issue injunctions “in a specific case,” but prior CJEU case law had found that orders imposing broad-based filtering for copyrighted works were not permitted.

The Supreme Court of Austria asked the CJEU to interpret whether the ECD prevented it from issuing injunctions that order the blocking 1) of identical content, 2) of equivalent content, and 3) of content worldwide. Unfortunately, the CJEU answered “no” to all three questions, saying that the ECD and Article 15 pose no barrier to global blocking orders that require ongoing monitoring and filtering by content hosts.

The technical limits of filtering

Given the significance of this decision, the CJEU spends surprisingly few words engaging with the important technical questions that any discussion of filtering must raise. (As legal scholars like Daphne Keller have discussed, this is likely due in part to the limited briefing on the issue and an under-developed record in the Austrian courts.) 

The core of the concern with filtering is that, by its very nature, it requires scanning every bit of uploaded content to determine whether it contains the content being filtered out. A blocking order targeting a specific image or phrase means that the content host has to proactively monitor all content that users post. But, in evaluating the legitimacy of filtering orders for “identical” content, the CJEU simply asserts that, “in view of the identical content of the information concerned,” a blocking order “cannot be regarded as imposing on the host provider an obligation to monitor generally the information which it stores.” 
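To make the scale of that obligation concrete, consider a purely illustrative Python sketch (not any platform’s actual system; the function names and blocklist contents are hypothetical) of what even an “identical content” filter entails. The blocklist may contain a single court-specified item, but the check still has to run against every post from every user.

```python
# Illustrative sketch only -- not any platform's actual pipeline. Even a filter
# limited to "identical" content must inspect every upload on the service.
import hashlib

# Hypothetical blocklist: hashes of the exact items a court has ordered blocked.
BLOCKED_HASHES = {
    hashlib.sha256("example of the exact enjoined post".encode("utf-8")).hexdigest(),
}

def is_blocked(uploaded_text: str) -> bool:
    """Return True if the upload exactly matches a court-ordered item."""
    digest = hashlib.sha256(uploaded_text.encode("utf-8")).hexdigest()
    return digest in BLOCKED_HASHES

def handle_upload(uploaded_text: str) -> str:
    # This check runs on every post from every user: a "specific" injunction
    # still implies monitoring all of the information the host stores.
    return "suppressed" if is_blocked(uploaded_text) else "published"

print(handle_upload("an unrelated post"))                   # published
print(handle_upload("example of the exact enjoined post"))  # suppressed
```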

In discussing the question of filtering “equivalent” content, the CJEU likewise engages in minimal technical analysis, merely stating that an injunction ordering filtering of “equivalent” content is not “an excessive obligation” because intermediaries “ha[ve] recourse to automated search tools and technologies” and will be searching for “defamatory content of an equivalent nature [that] does not require the host provider to carry out an independent assessment.” The Court relies heavily on the idea that injunctions blocking equivalent content will “include specific elements … such as the name of the person concerned by the infringement determined previously, the circumstances in which that infringement was determined and equivalent content to that which was declared to be illegal.” This, the Court appears to reason, means that hosts receiving these injunctions will simply be able to program a reliable filter to cleanly and automatically suppress specific content and not have to worry about determining whether that content is actually illegal.

There is no indication that the Court has considered the current state of content filtering technologies or the capabilities of existing tools. It may have in mind keyword filters, URL blocking, and image hashing for identifying “identical” content, but it never says as much. And it’s not at all clear what kind of tool the Court imagines will carry out this filtering for “equivalent” content. It’s possible that the CJEU intends for injunctions to proactively identify specific words and phrases equivalent to whatever was deemed defamatory, creating a sort of blacklist of terms that may not appear alongside a plaintiff’s name. But that raises other questions of fundamental rights (discussed below), and as anyone who has ever encountered this sort of filter online knows, it will be easy to circumvent. Alternate spellings, synonyms, or simply picking different insulting terms will all be employed by users who want to get around these blacklists.
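To illustrate how easily such a blacklist fails, here is a minimal, hypothetical Python sketch of the kind of keyword filter an equivalency injunction might imply; the listed terms and the example posts are invented for the illustration.

```python
# Purely illustrative: a naive keyword blacklist, and trivial variants that
# sail straight past it. The terms and example posts are hypothetical.
BLACKLIST = ["korrupten trampel", "miese volksverräterin"]

def violates_blacklist(post: str) -> bool:
    """Flag a post only if it contains a blacklisted phrase verbatim."""
    text = post.lower()
    return any(term in text for term in BLACKLIST)

print(violates_blacklist("Sie ist ein korrupten Trampel"))       # True: exact phrase
print(violates_blacklist("Sie ist ein k0rrupten Tr@mpel"))       # False: character swaps
print(violates_blacklist("Sie ist ein bestechlicher Trampel"))   # False: synonym
print(violates_blacklist("Was für ein korrupter Trampel"))       # False: different inflection
```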

The CJEU’s brief reference to “automated search tools and technologies” may instead reflect its sense that there are more sophisticated tools available than simple keyword filters. While there are tools that rely on machine learning to attempt to identify novel examples of problematic content, they are by no means reliable enough for hosts to set and forget, as the decision seems to imply (see CDT’s full report, Mixed Messages: The Limits of Automated Social Media Content Analysis). Machine learning tools for image and natural language processing are, at a high level, designed to identify content “equivalent” to whatever they have been trained to detect. But such tools are limited in their utility: they fail to consider the broader context of the content they evaluate, work well primarily when assessing content similar to what they were trained on, and are vulnerable to errors that disproportionately affect already under-represented speakers. All of these factors cut against the CJEU’s conclusion that a filtering injunction for “equivalent” content can—or should—be carried out without any additional assessment from the content host.
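The context problem can be seen even in a toy example (a deliberately simplistic sketch, not a description of any deployed system; the training sentences and labels are invented): a simple text classifier trained to spot an insult will also flag a post that quotes the insult in order to refute it, because it sees only word patterns, not speaker, intent, or context.

```python
# Toy example only: word-pattern classifiers cannot tell an insult from a
# post that quotes the same words in order to refute or report on them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, hand-labelled training data (1 = matches the enjoined message).
texts = [
    "she is a corrupt oaf",
    "what a lousy traitor she is",
    "the new park opens on tuesday",
    "here is the council budget for next year",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# A rebuttal that quotes the insult is likely to be flagged as well, because
# the model has no notion of who is speaking or why.
rebuttal = ["it is false and unfair to call her a corrupt oaf"]
print(model.predict(rebuttal))
```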

Challenging free expression issues aren’t easily dismissed

The CJEU also gives minimal consideration to the implications of its decision for freedom of expression. In discussing “identical” posts, it says it is legitimate for courts to order blocking of information “irrespective of who requested the storage of that information,” glossing over completely the fact that the identity of the poster could be a significant factor in whether the content was illegal. Other users posting “identical” content could be doing so to comment on, analyze, refute, satirize, quote, or otherwise re-contextualize that content, in a way that transforms it from a statement intended to harass, defame, or incite violence, into something else entirely. We often see examples of social media companies failing to take context into account, when they block black activists from sharing—and speaking out against—the hate they face or when they decide a world-renowned photograph is inappropriate content and prohibit anyone from posting it. The blocking orders the CJEU permits with this decision will only increase these kinds of mistakes.

The CJEU’s discussion of “information with an equivalent meaning” raises additional problems. The CJEU states that the illegality of a statement does not stem from “the use of certain terms combined in a certain way, but from the fact that the message conveyed by that content is held to be illegal.” This is deeply concerning: Shifting the focus of the legal analysis from the specific statement, conveyed in a specific context, to the overall “message conveyed” by a post will significantly broaden the amount of speech potentially deemed unlawful. What is the message conveyed by calling a politician a “korrupten Trampel” (corrupt bumpkin)? Is it the same as calling her a “fraudulent oaf”? What about merely “corrupt”? If the time, place, or identity of the speaker were key factors in determining that the message conveyed by a statement was unlawful (for example, in cases of incitement to violence), how do these factors translate into a court’s consideration of what is equivalently unlawful?

The CJEU embraces injunctions against “equivalent” speech out of concern that blocking orders limited to identical content would be too easy to circumvent, but this creates a tension: The Court cannot require hosts to assess whether posts have an “equivalent meaning” to statements deemed illegal, because this would be an inappropriate delegation of adjudicatory authority from courts to private companies; this sort of injunction could easily be considered an “excessive obligation” for a host. But this leaves the CJEU bound to say that injunctions will “include specific elements . . . such as . . . equivalent content to that which was declared to be illegal.” It appears from this that the CJEU intends for injunctions to identify certain words, phrases, or images that are preemptively declared to be illegal and which are to be blocked from circulation on the site. This type of order, however, could amount to a prior restraint, or a ban on expression before it is produced. The European Court of Human Rights (ECHR) has noted, “[T]he dangers inherent in prior restraints are such that they call for the most careful scrutiny on the part of the Court.” What exactly can be included in these equivalency injunctions will be the subject of significant litigation in Member State courts, and possibly the ECHR, in the future.

Continuing struggle over global takedown orders

The CJEU also concludes that the ECD does not describe any territorial limitations on injunctions and thus does not prevent blocking orders from “producing effects worldwide”–in other words, there is nothing in the ECD to prohibit the Austrian courts from insisting that Austrian defamation standards apply to all speech on a global platform. The Court goes on to say that Member State courts issuing such extraterritorial injunctions should ensure that they “are consistent with the rules applicable at the international level”, but this is an uncertain check at best, given that there is not a clear international consensus around when, if ever, it is appropriate for national courts to order extraterritorial censorship. Indeed, this very question, of the appropriate application of national law to a potentially borderless communications medium, is at the heart of many unresolved law and policy questions about the internet. 

For decades now, content hosts have had to determine how to implement orders to block and remove content from governments around the world, with many governments seeking global remedies to violations of national law. Many social media companies and other user-generated content hosts have developed essentially the same procedures as Facebook applied in this case: Companies typically assess content under their own Terms of Service/Community Guidelines first, and if they determine that the content does not violate the company’s rules, they respond to a legal order to remove content by blocking it only in that jurisdiction. While this sort of geoblocking can raise its own concerns about freedom of expression online, in general this approach aims to ensure that the effect of a country’s more-restrictive speech laws is felt primarily within the boundaries of that country. While this type of localized blocking has been cheered when the blocking orders come from Turkey or Thailand to prevent criticism of political leaders, increasingly Western democratic governments are seeking similar remedies for issues ranging from copyright enforcement to privacy and data protection–including, in the Glawischnig-Piesczek case, defamation of a political figure and her political party.
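The decision flow behind that common practice can be sketched roughly as follows (a simplified, hypothetical illustration; the function and field names are invented and do not describe any particular company’s workflow).

```python
# Simplified, hypothetical sketch of the geoblocking practice described above.
from dataclasses import dataclass

@dataclass
class LegalOrder:
    country: str   # jurisdiction that issued the order, e.g. "AT"
    post_id: str

def violates_platform_rules(post_text: str) -> bool:
    # Stand-in for the host's own Terms of Service / Community Guidelines review.
    return False

def handle_order(order: LegalOrder, post_text: str) -> str:
    # Step 1: assess the content under the platform's own rules first.
    if violates_platform_rules(post_text):
        return "removed globally under platform rules"
    # Step 2: if the platform's rules allow the content, comply with the legal
    # order only within the issuing jurisdiction, so one country's speech law
    # is not exported to the rest of the world.
    return f"blocked in {order.country} only"

print(handle_order(LegalOrder(country="AT", post_id="12345"), "example post"))
```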

What next?

Coupled with the decision in Google v. CNIL a few weeks ago, which similarly found that nothing in the GDPR either requires or prevents extraterritorial de-listing orders under Article 17’s “right to be forgotten”, this ruling may signal that the CJEU will not render a judgment on the legality of extraterritorial injunctions until the EU legislative process addresses them directly. Thus, we may expect to see the question of extraterritorial injunctions feature prominently in the ongoing debates around the Terrorist Content Regulation and the development of the Digital Services Act.

It seems certain that this issue will soon appear again before courts across Europe. The CJEU left open many opportunities for unwieldy injunctions under its interpretation of the ECD, and the directions that various national courts take could well end up back before the CJEU. They might also end up in the European Court of Human Rights, in a test of how well a national legal framework that allows equivalency injunctions with worldwide scope upholds the country’s obligations under Article 10 of the European Convention on Human Rights.

One bright spot in all of this is that, while the CJEU has ruled that Article 15 of the ECD does not prevent national courts from ordering these broad filtering injunctions, national law still might. As legal expert Graham Smith discusses in this thread, some EU Member States already have strong precedents against broad, imprecise filtering injunctions. So, while this decision from the CJEU is disappointing in its lack of technical analysis and evaluation of the impact on the fundamental right to freedom of expression, national courts can–and must–take these crucial factors into consideration.