
European Policymakers Continue Problematic Crackdown on Undesirable Online Speech

One of the biggest technology policy debates on the European agenda in 2017 was the question of how societies should respond to a variety of online speech issues. Terrorist content, hate speech, copyright infringement, and ‘fake news’ – however defined – were key topics. This debate is set to continue in 2018.

These issues certainly warrant attention from policymakers, the companies that host the speech, and society at large. But the direction these policy responses are taking raises serious concerns about censorship and free expression. European policymakers continue to push online platforms to determine upfront what speech is illegal and what is not, with little transparency about the processes and criteria being used. Incentives are skewed towards takedowns, and there is no judicial oversight. It is not a sustainable approach for governments to delegate sensitive decisions on restrictions on speech to private companies, ‘trusted’ flaggers, and automated tools. We have cautioned against these proposals repeatedly, including in October when the European Commission issued its Communication on “Tackling Illegal Content Online”, and many others have expressed similar concerns.

Unfortunately, European policymakers are clearly not taking these concerns on board. Quite the contrary: the European Commission convened a private meeting on 9 January with more than 20 internet companies to demand proactive prevention and faster takedown of content that might be deemed illegal, or else face legislation. According to one media report, EU Home Affairs Commissioner Avramopoulos demanded that illegal content be removed within 120 minutes. Evidently, for a large quantity of content, it is impossible to make determinations about legality in such a timeframe. But it appears as though companies will be compelled to remove more content faster to demonstrate proactive compliance with the Commission’s demands.

Pressure is mounting in Member States as well. The most prominent example is the German social media law, known as the “NetzDG”, which entered into force on 1 October of last year and is being implemented as of 1 January 2018. Among other things, it subjects social media companies and other providers that host third-party content to fines of up to €50 million if they fail to remove “obviously illegal” speech within 24 hours of it being reported. The law puts heavy pressure on hosts of third-party content to censor speech.

A week into the new year, there are now calls for the law to be repealed. This follows an incident in which the German satirical magazine, Titanic, was apparently blocked from social media platforms for parodying controversial statements made by politicians from the right-wing party Alternative für Deutschland (Alternative for Germany). This is a clear-cut example of how the threat of a fine creates strong incentives for companies to err on the side of taking down speech. By authorising privatised law enforcement, the law puts “trusted flaggers” and private companies in the position of deciding the extent of, and limits to, free expression under the law.

As Germany struggles with the predictable consequences of NetzDG, French President Emmanuel Macron announced plans for a new law to tackle ‘fake news’ in France. Particularly during election periods, this law would reportedly give judges, or possibly ‘authorities’, the power to remove content or even block entire websites deemed to carry such content. The new legislation would also require more transparency about sponsored content, forcing websites to state the sources and amounts of their funding. Details about the prospective law are scarce, but the free expression risks are obvious: the concept of ‘fake news’ lacks any agreed definition, and is easily misused to suppress legitimate political speech.

It is bad enough when liberal, democratic societies try to regulate speech in this way, even if the stated intentions are honourable. But when repressive regimes copy these approaches for much more nefarious purposes, the consequences can be disastrous. At that point, European political leaders will have no argument to deploy against rulers who use EU-sanctioned methods to repress political opponents and dissenting voices. Such regimes can claim they are “just enforcing speech restrictions,” online as well as offline, an argument that proponents of the NetzDG have frequently brought forth.

Democratic countries need better ways to deal with online speech challenges. During 2018, we must make progress towards finding those solutions.