Computer pioneer and programmer Grace Hopper found the first computer “bug” in 1947. That bug was an actual insect found stuck to one of the parts of the Harvard Mark II, an electromechanical calculator weighing 23 tons that could perform up to eight additions per second (or about one multiplication per second). In the 73 years since then, both computers and bugs have come a long way. The chipset that powers your smartphone is smaller and lighter than the moth stuck in the Mark II, and performs computations two or three hundred million times faster. All that speed is necessary to process the millions of lines of code instructions your device reads to do everything from telling time, to taking high-def video, to monitoring your vital signs. Some people even use them for making phone calls!
But with millions of lines of code come more software and hardware bugs, and phones aren’t the only computing devices we depend on. As computers and computer programs become more and more complex, their designers face the increasingly difficult challenge of finding and fixing the inevitable flaws that come from building immensely complex systems (often with repurposed chunks of code and hardware from older, but still buggy, systems). Despite the best efforts of the teams that build these systems, many still contain flaws and vulnerabilities when they are deployed. Likewise, even a previously debugged chunk of code can become part of a vulnerability when re-used in a different context, or when new kinds of attacks emerge.
Finding and fixing flaws in computers and computer programs is a never-ending job, one that requires robust testing and updating. But two federal statutes make this task even harder than it already is: the Computer Fraud and Abuse Act (CFAA), which was designed to prevent malicious actors from accessing government-controlled computers, and Section 1201 of the Digital Millennium Copyright Act (DMCA). The CFAA will be reviewed by the Supreme Court for the first time this year, in Van Buren v. United States. The U.S. Copyright Office will soon begin its triennial rulemaking to authorize exemptions to the DMCA. Both opportunities create an important chance for the Court and the Copyright Office to clarify the legality of much-needed security research.
The Van Buren Case
The CFAA prohibits knowingly accessing a protected computer without authorization, or exceeding authorized access and either obtaining (or altering) information or causing damage. The term “protected computer” is broadly defined, and includes essentially any computer connected to the internet. The phrase “exceeding authorized access,” however, has been interpreted differently by different courts.
On one end of the spectrum, some judges (in the 2nd, 4th, and 9th Circuits) have said that to exceed authorized access requires getting past some kind of technical control, such as a password. On the other end, some judges have said that even unauthorized use of information that a person is authorized to access can be a violation. For example, an employer might authorize an employee to access certain information for some, but not all, purposes, or to use their computers for work-related, but not personal purposes. According to the 1st, 5th, 7th, and 11th Circuits, contravening such use limitations can be considered “exceeding authorized access” and can therefore be a CFAA violation.
The Supreme Court will hear its first-ever CFAA case this fall, and the facts of the case are well suited to resolving these disparate interpretations. In Van Buren v. United States, the petitioner, a police officer, used his department-issued computer to access information for a purpose that was not authorized. The 11th Circuit upheld his conviction under the CFAA, but now the Court is set to opine on whether the law applies to such purpose-based limitations, or whether “authorized access” is defined by an actual limitation on one’s ability to access a computer or information.
This case could have significant ramifications for the scope of the CFAA – in particular, addressing whether the bounds of a federal crime should be set by private entities when they write their terms of service. The case has particular significance for white hat security researchers and other academics, for whom the risk of potential criminal liability for violating a website’s terms of service has a tremendous chilling effect. That’s why CDT, along with a host of other civil society and professional organizations, has weighed in on the case.
Security Research Under the CFAA
For independent security researchers, the risk of liability under the CFAA (and potential for major fines and prison time) is a major deterrent—many choose to completely avoid research that might involve any “protected computer” rather than take a chance that their work might cause them to exceed authorized access. Under the narrower interpretation (based on one’s authorized ability to access), security researchers have greater certainty that what they do stays on the right side of the law, because some technical barrier lies between them and material they are not authorized to access.
However, under the broader, purpose-based interpretation of authorization, researchers face significant uncertainty and risk. This interpretation allows computer owners to define the limits of authorized access with things like acceptable use policies, terms of service, or other non-technical mechanisms. Not only does this reading give private entities the power to define criminality, it also lets them leverage the CFAA to threaten any uses of information they deem improper. For security researchers, this means that the rules controlling what they may access (and for what purposes) are not always clearly defined, are subject to change, and often require legal assistance to interpret.
This uncertainty for researchers has real-world consequences. As a result of the CFAA’s chilling effects, many internet-based or connected systems are never tested by independent researchers, which means that any flaws or vulnerabilities they have are more likely to be found and exploited by bad actors than found by researchers committed to identifying vulnerabilities so they may be fixed. In a 2018 report, CDT interviewed 20 academic and independent researchers about their perceptions of the risks associated with their work, and over half of them mentioned the CFAA as a major risk factor. This report was cited in the amicus brief filed by the Electronic Frontier Foundation on behalf of security researchers and CDT this week, which illustrates for the Court the importance of independent security research and the CFAA’s chilling effect.
Why Van Buren Also Implicates the DMCA
If the Court adopts the narrower interpretation of “exceeds authorized access,” it may also improve copyright law. Section 1201 of the DMCA prohibits the circumvention of any effective technological protection measure restricting access to a copyrighted work. If that strikes you as similar to the CFAA’s standard of “exceeding authorized access,” we agree. Section 1201 also has the same chilling effect on security research: many researchers simply avoid circumventing any type of access control rather than face liability under the DMCA. Notably, the DMCA does not require any actual infringement of copyright for a person to be found liable for a costly legal violation — the act of circumvention is enough. Access controls are ubiquitous, but they do not necessarily indicate whether someone is authorized to access the material they protect. For example, companies use access controls to prevent interoperability of devices like garage door openers and to prevent farmers from diagnosing and repairing their own tractors, but few would argue that garage door and tractor owners are not authorized to access parts of their own purchases.
Although the DMCA grants a few exemptions for things like reverse engineering and “security testing,” these exemptions are so poorly worded that no one relies on them. The statute also creates a process by which the Copyright Office and Librarian of Congress can issue temporary exemptions. CDT and others have worked to obtain and improve a broad temporary exemption for security research since 2015. Unfortunately, both the statutory and temporary exemptions contain a clause saying that violating any other law, specifically including the CFAA, makes one ineligible for the exemption. An overbroad reading of the CFAA thus impedes even research and security activities that the Copyright Office agrees should be permitted in its triennial DMCA rulemaking. In Van Buren, the Supreme Court has a chance to clean up this damaging uncertainty.
Like everyone else, researchers must obey the law. But the standards of criminal liability under the CFAA should not turn simply on the terms of service adopted by private website owners, when those terms of service are often vague, overbroad, and subject to frequent change. Similarly, security researchers shouldn’t face liability under the DMCA—and its threat of significant monetary penalties—for engaging in conduct that the Copyright Office’s triennial rulemaking process expressly recognizes as having a legitimate, non-infringing value.
The world needs more researchers working to improve the security of computers and computer systems, because we all benefit when flaws are fixed rather than exploited. The Copyright Office will soon begin the 8th triennial process for granting exemptions, and CDT will once again ask for a broader, clearer exemption under the DMCA in hopes of creating greater certainty for the security research community. Meanwhile, the Van Buren case creates a crucial opportunity for the Supreme Court to clarify the bounds of the CFAA and, consequently, the DMCA as well.
We hope the Court and the Copyright Office will help researchers feel more certain that they may perform beneficial research without running afoul of the law.