
Tackling Disinformation: Proposed EU Code of Practice Should Not Lead to Politically Biased Censorship

In April 2018, the European Commission published a Communication on “Tackling Online Disinformation: A European Approach”. The Communication outlines the overarching principles and objectives that should guide short- and long-term actions to tackle the phenomenon of disinformation. Among its short-term actions is the creation of “an ambitious Code of Practice”, building on the Key Principles proposed by the High Level Expert Group. The Commission’s aim is to commit online platforms and the advertisers that fund them to helping users evaluate the veracity and reliability of news and content, while exposing them to different political views. These commitments will then form the basis of Key Performance Indicators (KPIs) to assess “progress”.

In our response to the Commission’s Communication, we cautioned against the potential risks to free expression posed by this self-regulatory initiative. The Working Group of the multi-stakeholder forum on online disinformation, composed of online platforms, leading social networks, advertisers and the advertising industry, has now delivered a draft Code of Practice to tackle online disinformation. The draft sets out commitments in five main areas: scrutiny of ad placements, political and issue-based advertising, integrity of services, empowering consumers, and empowering the research community. While many of the specific commitments in the draft Code are benign, or even positive, we remain concerned that the overall process is oriented toward pressuring platforms to remove or suppress content and accounts without meaningful safeguards.

Online platforms and ad industry representatives were under great pressure to deliver a draft of their commitments unreasonably fast. Once the Code is approved in late September of this year, they will also face pressure to periodically demonstrate progress in abiding by their commitments. The first hurdle is therefore defining what measuring progress means in the context of tackling disinformation. In any circumstance, it is important that progress not be measured in numbers of takedowns, pieces of content deprioritised, or accounts suspended or demonetised.

This should be a lesson learned from the Commission’s Code of Conduct on countering hate speech. The progress reports under that similar initiative demonstrate that it is impossible to say whether the content flagged and reviewed under the code is, in fact, illegal. Because the Commission presumes that flagged content equals illegal content, progress rates based on takedown numbers do not provide any meaningful insight. What they do instead is put pressure on social media platforms to take down more content, faster. These companies in turn err on the side of caution and take down flagged content that may be perfectly legal, in order to demonstrate higher takedown rates. Commissioner Jourova has played into this narrative by demanding more and quicker takedowns with no factual basis. The proposed Code of Practice to tackle disinformation unfortunately runs the risk of going down this same erroneous path. While the Commission thoughtfully does not jump the gun and propose legislative measures to tackle disinformation, the results of the progress reports may provide justification to do so in the near future. This only adds a layer of pressure to censor.

Given that the Hate Speech Code of Conduct also demonstrates that there is little consensus about what constitutes illegal hate speech across Member States, one can only imagine the lack of consensus on defining speech, such as disinformation, that is not illegal per se. This highlights the importance of targeted and narrow definitions: only demonstrably and verifiably false information, presented as actual reporting with intent to deceive, ought to be captured, and nothing else. Narrow definitions are equally important when it comes to the objective of identifying and closing “fake” accounts. There are many reasons why people need to be able to express themselves online anonymously, and this measure should not curtail that possibility. We are pleased to see the draft Code of Practice recognize that “[r]elevant Signatories should not be prohibited from offering anonymous or pseudonymous services to the public.” But there is still a risk that cautious platforms will interpret the Code of Practice as pressure to voluntarily require more stringent identity verification.

It is evident that the proposed Code of Practice runs the grave risk of forcing platforms into a trade-off between intervening to counter alleged disinformation and safeguarding free speech online. Any policy initiative in this area therefore deserves much caution. Self-regulatory measures to tackle disinformation should commit online platforms to meaningful transparency obligations regarding how they develop and enforce their content policies. The focus should be on providing increased transparency about how these decisions are made and implemented, given that they have a substantial impact on individuals’ right to freedom of expression and everyone’s ability to access information. While the draft Code describes an annual public report of KPIs relating to its specific commitments, it is essential that platforms provide more comprehensive reporting on their content moderation activity overall. Without this, it will be difficult to understand the Code’s KPIs in context, leaving the KPI report vulnerable to the same sort of political maneuvering that we have seen with the Commission’s reports under the Hate Speech Code of Conduct.

While increased transparency is necessary, it will not mean much on its own unless users are given the opportunity to appeal erroneous content takedowns. The draft Code of Practice lacks specific commitments to an appeals system or to review of decisions made under the Code. Any commitment on deactivating accounts, deprioritizing, or removing content needs to go hand in hand with a commitment to improve appeals and review processes so that systematic errors can be identified and individual mistakes remedied. We continue to stress that any meaningful content moderation system must include a robust appeals process, one that also guards against abuse of the system.

Enhanced transparency and accountability will serve the ultimate aim of empowering users to understand the context of the information these services provide, without restricting their right to access information of all kinds. Additional voluntary measures, such as investing in and promoting media literacy, are of course welcome and needed to foster a healthy information ecosystem. However, platforms’ adherence to additional commitments should not be tied to the threat of legislation. That will only lead to increased pressure for politically and ideologically biased censorship. All in all, while many of the commitments in the draft Code of Practice are viable, the way they are being delivered is a recipe for disaster.

The Sounding Board of the multi-stakeholder forum, composed of associations representing the media sector, civil society and fact-checking organisations, as well as academia, will provide feedback on the Code by early September. We sincerely hope to see our concerns addressed in their forthcoming advice.