CDT Letter in Response to Senator Tillis’ Digital Copyright Act Discussion Draft

A portion of the text of this letter is pasted below; the letter can be read in full here.

***

CDT appreciates the opportunity to comment on the Digital Copyright Act of 2021 (DCA) discussion draft.  As set forth below, in our view the approach proposed in the DCA discussion draft, particularly as to Section 512 of the Digital Millennium Copyright Act (DMCA), would undermine privacy and free expression rights and is based on incorrect premises about the internet ecosystem and the capabilities of automated mechanisms to detect infringement.  Although the draft proposes changes to other aspects of copyright law, any benefits those might bring pale in comparison to the harms that would result from the proposed changes to Section 512.

Proposed Changes to Section 512

Our overall view of Section 512 remains that, although individual aspects of the law may offer incomplete solutions against infringement or insufficient protections against abuse of the notice-and-takedown system, it has generally achieved its objective of balancing the burdens and protections for internet users, intermediaries, and rightsholders.  Even minor adjustments to Section 512 could upset this balance, and the DCA discussion draft goes further than that.  Not only would it upend the way Section 512 works, it would significantly harm people’s rights to privacy and free expression in the pursuit of an impossible goal: eliminating online infringement.  Fundamentally, the changes proposed to Section 512 are based on a series of incorrect assumptions.  We offer our perspectives on those here:

Service Providers on the Internet Vary Widely

Any change to Section 512 needs to account for the variation among the websites and other service providers involved in delivering content to users.  The incredible growth of the web as a whole has been overshadowed by that of a few websites, like YouTube, Facebook, and Twitter, which often distract attention from the other 1+ billion sites on the web.  Their size makes them the center of many conversations, the targets of many legislative proposals, and the default destinations for many creators and users of works.  But to treat them, through legislation or regulation, as though they are the entire web is a costly mistake, one only these giants can afford. 

Legislation aimed at curbing unwanted activity should not tailor its approach to address the specific practices, technologies, or business models of only the largest social media platforms.  Those practices are not always common to other sites and may be commercially infeasible or practically impossible for them to adopt.  As a result, legislation that fails to account for the diversity of websites can cause sites to restrict or even eliminate their services and can serve as a barrier to new entrants.  Treating the entire web as though it were only made up of the largest, most popular sites jeopardizes its diversity and reduces its value as a forum for creativity, innovation, and celebrating the differences among internet users’ preferences.

Just as websites offer vastly different kinds of content, services, and interaction models to diverse internet users, the infrastructure providers who enable and improve our uses of the internet also vary widely.  The positioning, technical capabilities, size, and function of these intermediaries differ from each other as much as they differ from the websites at the edges of the internet. Section 512 acknowledges these differences and broadly distinguishes providers based on their primary function, and, at least in part, sets out differing obligations for providers based on their ability to single out and address infringing activity.  This system is not perfect; for example, it places upon ISPs an obligation to terminate the accounts of so-called “repeat infringers” even though ISPs are poorly situated to combat infringement, and account termination is an excessive penalty that often impacts many innocent internet users alongside alleged infringers.  To further collapse Section 512’s division of providers would force many intermediaries to take excessively broad actions in response to notices of alleged infringement, such as refusing to resolve DNS queries for entire domains and thereby effectively blocking legitimate access to websites even if only a small portion of a domain contains allegedly infringing material. 
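To make the granularity problem concrete, below is a minimal sketch, in Python, of a resolver-level block; the domain names and blocklist entry are hypothetical.  A resolver sees only hostnames, never URL paths, so the smallest unit it can refuse is an entire domain:

BLOCKED_DOMAINS = {"example-site.com"}  # hypothetical blocklist entry

def resolve(hostname: str):
    """Refuse to answer for a blocked domain or any of its subdomains."""
    if hostname in BLOCKED_DOMAINS or any(
        hostname.endswith("." + d) for d in BLOCKED_DOMAINS
    ):
        return None  # effectively NXDOMAIN: the entire domain disappears
    return "203.0.113.10"  # placeholder for a real upstream lookup

# One allegedly infringing page takes every page on the domain offline:
print(resolve("example-site.com"))       # None -> blocked
print(resolve("blog.example-site.com"))  # None -> blocked as well
print(resolve("other-site.org"))         # resolves normally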

Moreover, proposals to assign even greater obligations to ISPs and other providers of “transitory network communications” as a condition of maintaining the limitations on their liability for copyright claims would force them to spy on all network transmissions, to make large portions of the internet inaccessible, or to terminate the accounts of their subscribers based on mere suspicion or allegations of infringement.  These are high prices to pay for marginal reductions in online infringement.

The Concept of Notice-and-Staydown is Based on Incorrect Assumptions About Copyright and Technology

The internet and digital technologies multiply the scope of both distribution and infringement of copyrighted works because they provide the ability to effortlessly reproduce and transmit digital copies at very low cost.  So while rightsholders have leveraged these abilities to reach larger audiences and monetize more copies at significantly lower marginal cost, they have also experienced more frequent acts of infringement.  Even though Section 512 provides a mechanism by which rightsholders can request the removal of infringing material from websites, they remain frustrated by the reappearance of infringing material.  While this “whack-a-mole” problem is real, each of the alternative proposals to address it, such as upload filters, “notice-and-staydown,” and DNS-based site blocking, disproportionately diminishes other rights, such as free expression and privacy, in exchange for marginal additional copyright protections that do not eliminate the “whack-a-mole” problem.

Section 512’s notice-and-takedown system balances the freedoms and obligations of internet users, rightsholders, and intermediaries.  It does so by assigning to each relevant party a role appropriate to their capabilities: rightsholders are best suited to identify their own works and to know whether uses of them were authorized, internet users are best suited to dispute allegations of infringement, and user-generated content hosting services are best suited to forward notices or counter notices and to disable access to posts when notified of alleged infringement. 
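The sequence below is a minimal Python sketch of that division of roles; the data structures are hypothetical placeholders, while the 10-to-14-business-day window for restoring access after an undisputed counter notice comes from Section 512(g):

def receive_takedown_notice(post: dict, notice: dict) -> None:
    # The rightsholder identifies the work and alleges unauthorized use;
    # the host disables access and forwards the notice to the user.
    post["accessible"] = False
    post["notice"] = notice

def receive_counter_notice(post: dict, counter: dict) -> None:
    # The user disputes the allegation; the host forwards the counter
    # notice and, absent an infringement suit, restores access within
    # 10 to 14 business days under Section 512(g).
    post["counter"] = counter
    post["restore_in_business_days"] = (10, 14)

post = {"accessible": True}
receive_takedown_notice(post, {"from": "rightsholder", "work": "song-123"})
receive_counter_notice(post, {"from": "uploader", "basis": "licensed use"})
print(post)  # access disabled, restoration pending the statutory window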

In contrast, the concept of notice-and-staydown imposes on intermediaries additional obligations to proactively identify potentially infringing material, based on flawed assumptions about both copyright and technological capabilities.

About copyright, it assumes a) that works are inherently unique and easily distinguishable from other similar works, b) that assertions of copyright ownership are easily authenticated, and c) that infringing uses of a work are clearly apparent.  None of these is a valid assumption.  Many works, especially images, are so similar to others as to be virtually identical; other works are variations on a common theme.  The vast majority of works remain unregistered, making it very difficult to authenticate claims of ownership.

Even if copies of works could be correctly identified and matched with their authors in an environment in which billions of works and authors exist, determining whether the use of any of these works constitutes an infringement is neither obvious nor possible to perform proactively at scale.  Determining whether a work infringes copyright is intensely fact-based and contextual; although some instances of infringement may be relatively obvious, many more are not.  Determinations of fair use, for example, are subject to a four-part test that requires contextual analysis and regularly sparks disputes that require courts to intervene.  It would be unreasonable to expect intermediaries to make these determinations, even using human reviewers, at any kind of scale.

Nor can this problem be solved by automated systems.  In fact, merely identifying and matching identical or nearly identical copies of a work presents substantial technical complexity.  And while a few of the largest companies have developed and deployed automated matching systems, they are expensive, proprietary, and flawed.  So far, these systems have been deployed voluntarily to help rightsholders identify, control, and monetize uses of their works.  But because even the most sophisticated of these systems regularly flags non-infringing content as infringing, resulting in the erroneous removal of legitimate content, they have also been criticized for their negative impacts on free expression, fair use, and even uses of works in the public domain.   And matching is the easy part.  There are currently no automated systems capable of conducting at scale the kind of contextual analyses, such as for fair use, that are necessary to make a determination of infringement.  As a result, measures seeking to address the “whack-a-mole” problem through automated processes would negatively impact internet users’ privacy and freedom of expression.
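To show how little even a successful match establishes, below is a toy Python sketch of exact-hash identification; the registry and work identifiers are hypothetical, and deployed systems use proprietary perceptual fingerprints rather than cryptographic hashes.  The limitation, however, is the same: a match answers only “is this the same content?”, never “is this use authorized or fair?”:

import hashlib

reference_index = {}  # fingerprint -> work identifier (hypothetical registry)

def register(work_id: str, data: bytes) -> None:
    reference_index[hashlib.sha256(data).hexdigest()] = work_id

def match(upload: bytes):
    return reference_index.get(hashlib.sha256(upload).hexdigest())

register("song-123", b"reference audio bytes")

print(match(b"reference audio bytes"))  # "song-123": byte-identical copy
print(match(b"same song, re-encoded"))  # None: exact matching is trivially evaded

# Even a positive match says nothing about license status, fair use, or the
# public domain -- context that no fingerprinting system ever sees.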

Given the absence of an automated way to reliably screen out infringing content, mandating a staydown approach through legislation would have numerous deleterious consequences.  First, the resources needed to even begin to comply with such a mandate, potentially including developing or licensing multiple types of content matching systems, would put smaller companies at a significant competitive disadvantage.  Second, given the threat of liability and the cost of litigation, a provider would have little choice but to err on the side of taking down content whenever it could not be sure that content was not infringing, which would, among other things, likely mean the elimination of many otherwise fair uses of copyrighted content.  That would significantly expand the scope of harms to free expression.  Third, such a regime would not provide equal benefits even among rightsholders: for example, it would exclude many creators whose art either does not lend itself to automated identification or uses portions of other works (e.g., mashups).

Finally, the concept of notice-and-staydown imposes on service providers an obligation to proactively monitor all communications passing through their networks or posted to their sites.  For all providers, this would force them into a defensive posture in which every transmission carries potentially devastating liability.  For infrastructure providers, it would mean proactively inspecting the content of every transmission, undermining the privacy of their users.

Automated identification of content is expensive, unreliable, and inequitable, and using it to determine infringement is virtually impossible.  Machines are not yet capable of extracting meaning or understanding context, both of which are crucial when assessing the validity of uses of copyrighted works.  Although a handful of judges and copyright experts may be able to make semi-consistent determinations as to whether something is “likely to be infringing,” most people and all machines are ill-suited to this task.  The one exception: the original authors of works are fairly well equipped to assess whether uses of their own works were authorized, even if they may dispute whether a use was fair. 

Hence, the structure of Section 512’s notice-and-takedown system, while imperfect in many respects, remains the fairest way to reconcile the competing rights of creators and users of works on the internet, because on the internet, most people are both.

Proposed Changes to DMCA Section 1201

CDT has participated in the last three triennial rulemakings held by the Library of Congress under Section 1201 of the DMCA.  Each time, we have advocated for a broader exemption for computer security research that would provide more clarity and certainty to researchers.  Although the discussion draft contains some modest reforms to Section 1201, in CDT’s view more substantial changes are necessary to provide long-term mitigation of the problems the statute causes.

The most straightforward and effective way to address the broad range of problems caused by Section 1201 would be to adopt legislation establishing a “nexus” requirement between copyright infringement and liability under Section 1201.  As it is now, Section 1201 prevents many legitimate uses of copies of works purchased by American consumers, including for research, repair, modification, improved accessibility, and preserving the functionality of older software.  Section 1201 also enables makers of software and devices to implement vertical restraints on trade through consumer lock-in and liability-backed barriers to interoperability.  These barriers harm competition, resulting in higher prices and fewer choices for consumers for everything from coffee pods to tractor repair.  But tying liability under Section 1201 to infringement of an exclusive right created under Section 106 would solve many of these problems by allowing consumers to make lawful uses of the works they purchase without fear of incurring liability.

Smaller reforms to some aspects of Section 1201 and its triennial review process might produce improvements for stakeholders, such as presumptive renewals of temporary exemptions, shifting the burden of proof to opponents of exemptions, and addressing the usability issues raised by NTIA in 2018.  But the larger issue is that Section 1201 is unmoored from legitimate copyright concerns.  Reforms should focus first on that fundamental problem, rather than on minor adjustments that address only a few of the statute’s problems.

Finally, any legislative fixes to Section 1201, large or small, would not justify a trade for changes to Section 512 such as those proposed in the discussion draft.  The scope of impact of changes to Section 512 dwarfs that of changes to Section 1201 in terms of the number of constituents affected, the economic cost, the effects on the structure and function of the internet and the web, and more.

Read the full letter here.