EC Initiative on Disinformation Must Not Curb Free Expression

The European Commission has, as expected, published a Communication on “Tackling Online Disinformation: A European Approach”. The Communication comes on the back of a public consultation and a report from a “High-Level Group on Fake News”.

It is understandable that the Commission and Member State leaders take these issues seriously. There is significant evidence of Russian state-sponsored efforts to influence the outcome of the 2016 U.S. Presidential Election, although the exact impact of those efforts is impossible to know with certainty. The Communication states that Russian agencies run various types of disinformation campaigns in European countries, efforts that the EU’s External Action Service attempts to counter. Presumably, EU Member States are taking additional coordinated action to deter the Russian regime’s activities. This would seem to be the most urgent priority.

Beyond what is known about Russian activities, there is very little European data on the volume of material that qualifies as disinformation and, more importantly, on the impact it may have. In the absence of such data, the Communication cites as justification the results of surveys in which people were asked whether they believe they have encountered disinformation. Respondents overwhelmingly say that they have, and that it is a problem. This does not meet any evidentiary standard: no attempt is made to verify whether the content respondents referred to was actually disinformation. It is likely that survey respondents categorise biased news reporting as disinformation (most if not all reporting is biased to some extent), especially if they disagree with the bias.

Making policy in the absence of evidence about the scale and scope of the problem is problematic, as we point out in our consultation response. The Commission nevertheless argues that “inaction is not an option”. Whether this is true is hard to tell (because there is no data), but what is clear is that the actions the Commission proposes involve significant risks to free expression and access to information. Without a clear and precise definition of the content of concern, and without an assessment of its volume and reach, these initiatives are likely to target all manner of news reporting and other content that may be considered offensive, biased or simply unwelcome.

EC is wrong to use concerns about disinformation to justify its ancillary copyright proposal

First, the Communication highlights “the need to rebalance the relations between media and online platforms” and claims that the “swift approval of the EU copyright reform [will] improve the position of publishers and ensure a fairer distribution of revenues between right holders and platforms”. It is unfortunate and disingenuous that the Commission uses valid concerns about disinformation campaigns and the integrity of elections to push its widely discredited publishers’ rights/ancillary rights proposal. Such schemes have been tried in Germany and Spain, and they failed completely in both cases. The Commission’s own Impact Assessment for the DSM Copyright Directive fails to produce any evidence of the economic benefits the proposal might confer on news publishers. Most recently, in an open letter, more than 160 European scholars warned that there is no basis for believing that Article 11 will provide any protection against ‘fake news’, and that it might even “play into the hands of producers of ‘fake news’”. A neighbouring right would give publishers of disinformation exclusive rights of their own, while doing nothing to prevent them from disseminating inaccurate news.

Caution is required in the setup of an EU-wide Code of Practice

In the short term, the Commission wants to establish “an ambitious Code of Practice” by July 2018, building on the Key Principles proposed by the High-Level Expert Group. The aim is to commit online platforms and the advertisers that fund them to help users evaluate the veracity and reliability of news and content, while exposing users to different political views. These objectives will then form the basis of key performance indicators to assess progress.

Some of these ideas have merit, but extreme caution is needed. Most importantly, this self-regulatory initiative must include input and representation from free expression experts and digital rights advocates. Such representation was lacking in the High-Level Expert Group, and in other EC processes such as the Hate Speech Code of Conduct and the EU Internet Forum. This is especially important given the tight timeframe the Commission has laid out, with a launch in July 2018 and a review of results in October 2018, which is likely to lead to rushed decisions with unintended consequences. ‘Progress’ should not be measured in numbers of takedowns, deprioritisations, or accounts suspended or demonetised. Success means that only demonstrably and verifiably false information, presented as actual reporting with intent to deceive, is captured, and nothing else. The importance of targeted and narrow definitions also applies to the objective of identifying and closing ‘fake’ accounts. There are many reasons why people need to be able to express themselves anonymously online, and this measure must not curtail that possibility.

An independent European network of fact-checkers should span the political spectrum

Several social media platforms have attempted to deal with disinformation by collaborating with fact-checkers and other third-party validators. We do not know how well this works, but the Commission is proposing to establish an independent European network of fact-checkers. The Communication recognises that their “credibility depends upon their independence and their compliance with strict ethical and transparency rules”. With the right set-up, and with full transparency and accountability, an independent fact-checking network could add value. Its effectiveness and credibility would require clear and objective criteria for selection and qualification. The network would also need balanced representation, reflecting the broadest possible range of political and ideological views. The performance of fact-checkers should be monitored: records must be kept of the content they flag, and their precision and accuracy should be evaluated by experts.

There are many other issues to comment on in the Commission’s Communication. Overall, we understand the concerns that motivate the initiatives it lays out. But we worry that the speed with which the Commission wants to proceed, and the lack of clarity about the scope of the problem it wants to address, will push online service providers, aided by technology tools and fact-checkers, to curtail free expression, political debate and access to information. We will continue to engage with the Commission as it proceeds with these initiatives, and to push for strong safeguards.