

Doing the Wrong Thing for the Wrong Reasons: Article 13 Replaces Safe Harbors with Upload Filters, Won’t Help Artists but Will Hurt the Internet

For the last couple of years, the European Union has been trying to solve a made-up problem. Parts of the music and movie industry convinced European regulators that there is a “value gap” created by some online services. Basically, they were unsatisfied with their royalty revenues from sites like YouTube. More specifically, they felt that their cut was too small relative to the overall revenue earned by those services. Although several methods of closing the purported gap were proposed, the worst possible solution, in the form of a copyright reform package, survived and is now close to finalization in the trilogue procedure. As part of a larger renovation of EU copyright policies, Article 13 takes aim at legal safe harbors that internet companies fundamentally depend on and imposes obligations, such as automated content filtering, that would change the nature of services that profit from user-uploaded content.

What are safe harbors?

In the nautical world, a safe harbor is a protected place to moor your ship to avoid storm damage. In the legal world, safe harbors are provisions that reduce liability for a limited set of actions, usually preconditioned upon certain behavior by the party seeking legal protection. For example, section 512 of the Digital Millennium Copyright Act creates safe harbors for providers of services involving the transmission or storage of copyrighted works. In exchange for adhering to a “notice-and-takedown” procedure, service providers cannot be held liable for monetary or injunctive relief for any copyright infringement potentially committed by their users.

The notice-and-takedown provisions of section 512 require providers to “expeditiously remove or disable access to” any material about which the service provider has received notifications indicating its allegedly infringing nature. In the EU, the E-Commerce Directive sets up a similar safe harbor, reducing liability for providers so long as they remove “illegal content” about which they have actual knowledge.

Why do we have them?

These particular safe harbors (section 512 of the DMCA and the E-Commerce Directive article 14) make it possible for providers to allow users to upload materials or otherwise contribute to their websites or platforms. Without the safe harbors, no provider would willingly take on the potentially business-ending liability for hosting (or even transmitting — anything that requires making a digital copy) material that infringes copyright. The scale of the internet and the popularity of user contributions mean that, even if a provider wanted to review each and every upload for potential infringements, it would be humanly impossible to do so.

The EU and U.S. safe harbors were designed to keep the incentives and obligations for each party aligned with their interests and abilities. Rightsholders are responsible for policing the uses of their works because they are best suited to identify them and any potential infringements. Service providers are obligated to respond to rightsholders’ notifications, as they ultimately control the availability of content on their services. Users face the consequences of uploading unauthorized material, whether that is having their posts removed, facing lawsuits, or even having their accounts terminated. They are also responsible for contesting the removal of material they believe to be non-infringing. Although this system has its own problems, it is still preferable to either making service providers liable for the copyright infringements of their users or requiring the use of automated filters to remove potentially infringing content.

Content ID and automated filtering

Some of the largest services focused on user-uploaded content have developed sophisticated, automated systems that compare each bit of uploaded content to a vast database of copyrighted works to identify any potential infringements. These systems, like YouTube’s Content ID, are impressive in their ability to monitor content at scale, but are imperfect in their assessments of the legality of content. For example, automated systems are not yet capable of assessing whether a given post includes a fair use of a copyrighted work. Even if automated systems were near-perfect, removing the small fraction of content incorrectly identified as infringing would still amount to a significant dent in otherwise legitimate free expression. To illustrate: at roughly 300,000 new YouTube videos per day, a system that incorrectly flagged only one tenth of a percent of them would still remove or block 300 lawful posts. Every day.
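
A quick back-of-the-envelope check of that figure (a minimal sketch in Python; the upload volume is the figure cited above, and the 0.1% false-positive rate is a hypothetical illustration, not a measured property of any real filter):

```python
# Rough estimate of lawful posts a filter would wrongly block, using the
# upload volume cited above and a hypothetical 0.1% false-positive rate.
uploads_per_day = 300_000        # approximate new YouTube videos per day
false_positive_rate = 0.001      # 0.1% of uploads wrongly flagged as infringing

wrongly_blocked_per_day = uploads_per_day * false_positive_rate
wrongly_blocked_per_year = wrongly_blocked_per_day * 365

print(f"Lawful posts blocked per day:  {wrongly_blocked_per_day:,.0f}")   # 300
print(f"Lawful posts blocked per year: {wrongly_blocked_per_year:,.0f}")  # 109,500
```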

Why voluntary measures are ok, but mandatory filters are bad

Voluntary measures to reduce the availability of infringing material online, such as automated content filters, may leave much to be desired for both rightsholders and users. However, because companies are free to use them or not, they avoid the pressures that come with a legal obligation to prevent infringing activity. In contrast, forcing service providers into a policing role, especially with the threat of increased liability, shifts the service providers’ incentives towards more restrictive content policies and more aggressive takedown practices. For many services, neither human reviewers nor automated systems are economically feasible, which would mean the end of user contributions on those sites. YouTube spent upwards of $60 million developing Content ID, and mandating this technology gives big companies that have already invested in it a double advantage — they are already capable of automated filtering, and they can license their systems to would-be competitors, further entrenching their dominant positions. Even for those who can afford automated systems, the tendency would be to set them to block any post that presents any risk of liability, severely decreasing the value of platforms built on user content as fora for free expression.

What the EU is trying to do: remove safe harbors and require filtering

Unfortunately, the EU is in the final stages of cutting back safe harbors and mandating automated content filters. Although the final outcome of the trilogue negotiations is uncertain, the fact that there is still a proposal to dramatically change the legal landscape for much of the internet is troubling. Article 13 of the proposed copyright directive (part of the EU’s Digital Single Market initiative) would require “online content sharing service providers” (OCSSPs) to conclude licensing agreements with the rightsholders of all potentially uploaded material. This obligation alone is likely impossible to satisfy; how can companies even identify the millions of individual rightsholders implicated? If I post a music video of my friends’ band, does the host site need to already have a license agreement with each band member? The only solution is the creation of an extended collective licensing system (a massive undertaking on its own), which would result in lots of money paid to collecting agencies, but little paid to actual creators.

But wait, it gets worse. In addition to the licensing obligation, OCSSPs must make sure no unauthorized content becomes available on their sites, or face direct liability for copyright infringement if they fail. This obligation entirely undermines the E-Commerce Directive’s safe harbor, at least for any service provider that “optimises” user-uploaded content in any way, such as any kind of sorting or organization. Since this is a directive, each member state will need to implement its own laws to achieve the directive’s goals. But the proposed language sets up a conflict: member states must ensure that OCSSPs prevent the availability of unauthorized or infringing content without imposing upon them a general obligation to monitor content, as that would directly conflict with Article 15 of the E-Commerce Directive. In other words, the proposal sets up obligations that are only possible to satisfy with the full-time use of automated upload filters, which cannot be mandated by law. If this sounds like the proposal essentially asks both OCSSPs and member states to perform magic, that’s because it does.

The negotiating parties are still haggling over some of the provisions, such as exemptions for small businesses and what other kinds of measures an OCSSP might take to reduce its risk of liability, but these provisions cannot offset the dramatic shift in policy that Article 13 represents.

How this might affect the rest of the world

Here’s the thing about internet policy — legal changes in one country create global effects. In theory, OCSSPs can apply different policies and use different technologies to meet geographically diverse legal obligations. In practice, though, service providers have to make complex decisions about whether and to what extent they adjust their policies and practices to accommodate differing laws. For some, it may be desirable to comply with the most stringent of the 27 national implementations of the directive so that they can continue to offer services in the EU, but economically infeasible to institute different policies outside of Europe. Users of those services will feel the effects worldwide. Other services may choose to simply cease operations in the EU. For those companies capable of differentiating their services (assuming it is possible to comply at all), this represents a step toward a more fragmented internet — a legal wall put up in defiance of the idea of the internet as a borderless space for exchanging information.

A glimmer of hope

After years of debate, proposals, amendments, and negotiations, Article 13 appears to be nearly finalized, but no one outside the trilogue participants is happy with it. We hope that Parliament will ultimately reject this proposal for what it is: a set of impossible obligations that won’t achieve their supposed purpose.