How far is too far?
This is the question we’ve been asking over and over at CDT while interviewing security researchers and drafting CDT’s new white paper, which surveys “hard questions” in the world of computer security research.
Many security researchers deliberately push boundaries – sometimes to characterize the full extent of a vulnerability, and sometimes to publicize the seriousness of a vulnerability they have discovered.
For instance, Charlie Miller and Chris Valasek made news in 2015 when they remotely accessed critical systems of a Jeep through its entertainment system. They demonstrated the attack on a willing, but eventually terrified, Wired reporter, taking control while he drove the Jeep on an interstate. Likewise, researcher Chris Roberts was removed from a United flight and questioned by authorities for allegedly attempting to access the airplane’s systems during flight.
Although these “stunts” are controversial among security researchers themselves, there is a reasonable argument to be made that these efforts resulted in positive change. Jeep implemented a recall of 1.4 million affected vehicles and United Airlines launched a bug bounty program (with rewards paid out in airline miles, to the consternation of some).
Similarly, many researchers now engage in internet scanning. A number of research teams regularly scan the internet to measure how many vulnerable systems are deployed in the field; their results power services such as censys.io, a network search engine for researchers, and shodan.io, which styles itself as a search engine for the Internet of Things. As another example, Rapid7, a public security research company, runs Project Sonar, which pings all public IPv4 addresses looking for open ports. The data collected is made publicly available in order to promote security research – the idea being that we can’t fix problems we don’t know about. Project Sonar can find situations where a privileged piece of software – for example, your antivirus program – is exposing more of your home or office to the open internet than it should. Although Rapid7 takes extensive steps to minimize the effects on remote computers, Project Sonar does access – in an automated fashion – every publicly reachable IPv4 address, which is in and of itself controversial to some, since it uses resources without permission.
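At bottom, the kind of port probe a project like Sonar performs is just an attempted TCP connection: if the remote machine accepts, the port is open. Here is a minimal sketch of that idea – the function name is ours, and the demonstration deliberately targets a listener we create on our own machine rather than anyone else’s system:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; an accepted connection means the port is open."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a listener we control, rather than scanning someone
# else's machine: bind an ephemeral port on localhost and probe it.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]

print(port_is_open("127.0.0.1", open_port))  # True: the port we just opened
listener.close()
```

Multiplied across the roughly four billion IPv4 addresses, this simple probe is exactly the “automated access to resources without permission” that makes internet-wide scanning controversial.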
These and similar examples show that the ethics of computer security research is, at the risk of understatement, complicated. By definition, remotely accessing a system or data – even if it is publicly available on the internet – means accessing someone else’s computing resources without express permission.
Indeed, many of these efforts, including Project Sonar, run counter to the express desire of the system or data owner, who will often purport to bar automated scanning (“spidering” or “scraping”) of publicly available data through their terms of service. We should note that in the technical community, the understanding can be very different: the basic architecture of the internet involves servers that accept signals from disparate sources and must then decide whether or not to respond. That is, by placing something on the internet, you must necessarily be ready to accept or block traffic from other internet destinations. Problems arise when the recipient of scanning traffic takes issue with the fact – or the amount – of traffic being sent their way, which is why scanning teams have developed extensive guidelines for how scanning tools should behave.
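Those guidelines generally reduce to a few mechanical courtesies: identify yourself, honor opt-out requests, and throttle your probe rate so targets see a trickle rather than a flood. A sketch of the opt-out and throttling logic – the excluded network and the addresses are illustrative (drawn from the reserved documentation range), not anyone’s real opt-out list:

```python
import ipaddress
import time

# Networks whose owners have asked not to be scanned (illustrative entry:
# TEST-NET-1, a documentation-only range that is never routed).
EXCLUDED = [ipaddress.ip_network("192.0.2.0/24")]

def should_scan(addr: str) -> bool:
    """Honor the opt-out list: skip any address inside an excluded network."""
    ip = ipaddress.ip_address(addr)
    return not any(ip in net for net in EXCLUDED)

def scan_politely(addrs, probes_per_second=10):
    """Yield only permitted targets, pacing probes instead of flooding."""
    delay = 1.0 / probes_per_second
    for addr in addrs:
        if should_scan(addr):
            yield addr          # a real scanner would send its probe here
            time.sleep(delay)   # throttle between probes

print(list(scan_politely(["192.0.2.5", "198.51.100.7"],
                         probes_per_second=100)))  # → ['198.51.100.7']
```

Courtesies like these don’t resolve the underlying question of consent, but they are how scanning teams try to keep the “amount of traffic” complaint off the table.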
Given the moral spectrum of security research, some argue that it is unwise to identify ethical “redlines” – such as experimenting on live systems in use – that clearly demarcate how far is too far. They argue that to do so would invite generalist judges and lawyers to adopt these redlines as normative legal boundaries in the course of, among other things, clarifying ambiguities in laws like the Computer Fraud and Abuse Act (“CFAA”).
Others believe that these fears are overblown. Unauthorized experimentation on commercial aircraft in flight, for instance, would be a clear bridge too far. Likewise, exploiting a vulnerability in an application to track an individual’s location without their knowledge poses a serious problem. And anything with the potential for physical harm would be verboten.
Through our conversations with security researchers, representatives of the technology industry, and experts in civil society and law enforcement, we are developing a basic set of ethical spectra – essentially, axes along which security research activities become more or less ethically questionable. In our recent white paper, we note a few possible options for better mapping the ethical landscape of the security research world.
For instance, an amended CFAA could actually provide some guidance to researchers. By defining access “without authorization” to require the circumvention of some access control, and by more realistically quantifying the damage or loss that triggers liability, Congress could outline something like a safe harbor for researchers. Similarly, researchers could adopt a pre-existing ethical framework like that of the Association for Computing Machinery, which, among other things, cites privacy violations and the unauthorized use of another system’s resources as practices to avoid. We discuss adopting only the brightest of bright lines, such as a prohibition on experimentation on live systems that poses a danger to others. And finally, we suggest that computer science and engineering courses could better integrate ethics education into their curricula, much like the core role professional responsibility plays in law school, or medical ethics in medical school.
While we do not yet endorse any of these options, we suggest them in the hope of provoking discussion; clearly these and other questions will have to be answered in the near term, as security research steadily advances toward being a profession akin to medicine, law, and engineering. Please take a moment to skim our white paper, available here, for more on this and other pressing questions in the world of computer security research. And keep an eye out for the results of our interview study closer to the end of this year.