European Policy, Free Expression
First report on the EU Hate Speech Code of Conduct shows need for transparency, judicial oversight, and appeals
When European Commissioner for Justice Věra Jourová opened a session of the High-Level Group on Racism and Xenophobia (HLG) on online hate speech, she spoke passionately about her desire to address illegal hate speech online, and was convinced that the European Commission’s approach is the right one.
While the Commissioner’s intentions are sincere, and the problem she seeks to address is real and important, the strategy, its stated objectives, and its implementation are questionable on several levels. This blog post notes a series of concerns about the strategy and proposes – in fact, repeats – some recommendations for addressing them.
The meeting, held on 7 December, delivered the first progress report on the Code of Conduct on Countering Illegal Hate Speech Online, signed by Facebook, Microsoft, Twitter, and YouTube in May 2016. Under the CoC, the companies commit, among other things, to reviewing notifications of illegal hate speech within 24 hours and taking appropriate action according to their own terms of service (TOS) and, where necessary, applicable law transposing the 2008 Framework Decision on Racism and Xenophobia.
When the CoC was agreed, CDT raised a number of concerns in a letter to Commissioner Jourová. We questioned the scope of content to be targeted for removal: most importantly, the text of the CoC conflates ‘illegal’ speech with content that, while legal, may violate a company’s TOS. We noted that Member State law and practice differ considerably in the boundaries they set on permissible speech. (Moreover, Article 19 has analyzed the underlying 2008 Framework Decision and found that it does not comply with international law.) We expressed concern that the CoC does not foresee any oversight by courts or independent arbiters. We stressed the need for full transparency as to the criteria used to flag and possibly remove content, and asked that the Commission ensure public records be maintained of content being flagged and/or removed. Our concerns echoed those expressed by Thorbjørn Jagland, Secretary General of the Council of Europe: where legal standards are vague and poorly defined, administrative decisions taken by government authorities or private companies risk capturing legitimate expressions of views.
In an admirably thorough response, Commissioner Jourová assured us that our concerns were unfounded. She was confident that ‘the definition of illegal hate speech is not unclear’, and that implementation can be aided by ‘a long list of case law, starting with the jurisprudence of the European Court of Human Rights’. But the Commissioner also stressed that ‘aggressive and abusive’ speech that may not violate the law can still have a chilling effect on the willingness of citizens, journalists, and others to engage in public debate online.
First report on takedown efforts raises many questions
What conclusions, then, can be drawn from the initial report and its presentation? We begin with our own, and then comment on those the Commission seems to draw.
The report was drawn up on the basis of work done by 12 NGOs in 9 Member States, which notified the companies of content they considered to be illegal hate speech (based on the applicable national criminal codes). The report covers a sample of 600 notifications made during a six-week period. We understand that the NGOs receive funding from the European Commission for this activity.
As we had expected, there appears to be little consensus about what constitutes illegal hate speech. The participating NGOs flagged only content they believed to be illegal, yet only 28.2% of notifications led to content removal. Some NGOs reported a ‘success rate’ of nearly 60%, others below 5%. Even ‘trusted flaggers’ achieved notification-to-removal ratios of only 29% to 68%. There were also wide differences between Member States, which is not surprising in light of the differences in the underlying laws. In more than two-thirds of the cases, companies’ content reviewers not only disagreed that the notified content was illegal, they did not even consider it to be in violation of the TOS. The latter is obviously a lower bar for taking down material, since TOS can be considerably more restrictive than the law. None of the companies achieved the target of responding to all notifications within 24 hours.
The data do not justify firm conclusions about the reasons for the differences in assessment of legality (beyond an unspecified number of cases in which notifications simply received no response). Unfortunately, there is no record of successful and unsuccessful notifications. NGO presentations included some illustrative examples of notified material, much of it very offensive but not obviously illegal. It is not clear whether any of the notifications were also made to law enforcement authorities and could lead to court proceedings; no such example was mentioned. Unless the Commission ensures that a public record of flagged content can be scrutinized by judges, courts, and/or experts specializing in free expression and hate speech, it is not clear what accounts for the lack of consensus about the limits of the law.
On the basis of the report, different interpretations are possible. Perhaps companies’ processes for dealing with notifications are not sophisticated enough. Perhaps NGOs flag too much content that is offensive and provocative, but permissible. Without access to the content, it is not possible to say.
An NGO representative said that it was exceedingly difficult for them to make determinations in a short space of time, and noted that court rulings can run over dozens of pages. This comment illustrates a fundamental problem of the CoC approach: outsourcing to private actors difficult legal assessments that should rightly be carried out by courts. One company representative said that determining what is and what is not illegal speech is a highly subjective matter, and that there is a scarcity of case law to guide decisions. One illustrative example: in May 2016, a Danish High Court (appeals court) acquitted a citizen who, in a Facebook discussion, made comparisons between a particular religious ideology and Nazism. The local court of first instance in the town of Elsinore had found the person guilty of violating Art. 266b of the Danish Penal Code on denigration based on religious faith. That ruling was appealed and sparked a vigorous public debate among legal experts and commentators. It is of course ironic that had the CoC been in place at the time, such a dispute would never have reached the court. The comment would most likely have been flagged and removed, and a nuanced and difficult judgment about free expression would have been left to company and NGO representatives, rather than courts.
The lack of data in the report has not stopped the European Commission from making some fairly strong statements. Commissioner Jourová has called for more vigorous enforcement by companies, both in terms of removal time and ‘success rate’, and threatened unspecified legislation if progress is not made. The Commission seems to assume that all flagged content is by definition illegal and must be suppressed. It is hard to see how the data supports this conclusion, and this response demonstrates the risk that the Code will create a presumption in favour of takedown. Jourová’s comments certainly give impetus to participating NGOs to intensify flagging, and create incentives for companies to comply with more removal requests, and to do so more rapidly.
We understand the difficult political and social reality European societies are facing, including the tensions and challenges related to the migration and refugee situation, and the threat of radicalization and terrorism. These issues are serious and difficult to manage, and like the rest of society, policy makers and companies need to deal with the problems. But we have fundamental objections to the Commission’s approach in this area, which we will discuss in more detail in future blog posts. For now, and given that Commissioner Jourová has apparently committed herself to this programme for the foreseeable future, we offer the following preliminary recommendations.
Transparency: A public record of flagged material should be compiled, including NGOs’ justifications for deeming the content illegal and companies’ decisions on whether to remove it. The database should be accessible to, and analysed by, independent experts familiar with free expression law and policy, with regular reports published about the scope and impact of the programme.
Judicial oversight: The CoC process should involve judges familiar with free expression and hate speech jurisprudence who can review implementation of the CoC and its compliance with free expression rights. NGOs should systematically copy notifications to law enforcement agencies so that those authorities can decide whether to pursue the matter in court.
Remedy and appeal: The CoC process must evolve to include clear provisions for remedy to individuals whose speech is targeted and removed from these platforms unjustly. Companies should require NGOs or government officials operating pursuant to the CoC to identify themselves when flagging content, and should notify individuals whose content is flagged that this Code is the impetus behind the flag. Companies should provide content-level appeals of their takedown decisions and should include in their transparency reports information about when they have restored content following appeal. The CoC should include a mechanism through which individuals whose speech was unjustly flagged can notify the Commission; and contested flaggings/takedowns should be included in ongoing reporting.