“The Cyber” Part III: The Role of Vulnerability Disclosure and Bug Bounties
Written by Michelle Richardson
The Harvard Mark I, technically known as the IBM Automatic Sequence Controlled Calculator, was a beast of the early computer age. Used during World War II to figure out whether the atomic bomb would actually explode, the Mark I weighed about five tons and had more than three-quarters of a million working parts.
The Mark I’s line also produced the most famous instance of a literal computer “bug.” As told by U.S. Rear Admiral Grace Hopper, a pioneer in the field of computer science who first suggested that computer languages could be written using English words, operators of the Mark I’s successor, the Mark II, were perplexed one day in 1947 when the machine suddenly broke down. Digging around inside the device, they found – jamming up the works – a moth, which they taped to a page in their logbook with the notation: “[f]irst actual case of bug being found.”
Software bugs have come a long way since then. They’ve been the source of comedy: in the films Office Space and Superman III, characters exploited a flaw to skim the fractions of a cent left over when transactions were rounded. They’ve also been the source of tragedy: from plane crashes and malfunctioning medical devices to a Cold War pipeline explosion that could reportedly be seen from space, software flaws have been responsible for real-world harm.
The dangers bugs pose have led to a long-running debate between proponents of “full disclosure” (meaning security researchers who discover flaws should disclose them publicly to force vendors to patch them) and skeptics in industry who argue that researchers should not disclose unless or until the vendor has fixed the problem.
In the middle of these two camps is “coordinated disclosure,” where researchers and vendors work together to identify and fix the flaw within a reasonable amount of time. Some argue that coordinated disclosure is useless unless backed up by the threat that the researcher will publish the flaw in the face of a dilatory vendor. Others have pushed back, arguing that publication in the absence of a fix is irresponsible, and can increase the likelihood that active exploits show up in the wild using these flaws.
That debate is, by now, old hat. What is relatively new is the proliferation of formalized programs among software vendors to create incentives and processes for the disclosure of bugs. Many of these include “bug bounties,” monetary or reputational awards for researchers who discover important flaws and report them through a disclosure process to the vendor.
Bug bounties have become so popular (in some quarters) that companies like Bugcrowd, HackerOne, and Cobalt have emerged to allow smaller vendors to outsource their bug bounty and vulnerability disclosure processes. Even the government has entered the game. In 2016, the Defense Department launched “Hack the Pentagon,” a pilot bug bounty program for the department’s public-facing websites. Many expect similar programs to proliferate.
As CDT has heard from some quarters, however, actual bug bounty programs aren’t for everyone. Because they affirmatively invite researchers to hunt for bugs, some vendors are wary of prompting a flood of disclosures, some of which may be minor or non-exploitable, and others of which could be so serious that disclosure without a fix would be catastrophic. Similarly, researchers may not consider the bounty adequate reward for the hard work that goes into finding flaws, and they may sell the vulnerability, disclose it outside of the bounty program, or conclude that disclosure is more trouble than it’s worth and not disclose at all.
Even so, virtually all vendors have lauded the value of having a public, coordinated disclosure policy, even without a bounty. Researchers, too, have said that it is very helpful to know what a vendor considers the rules of engagement and that staff are available to handle reported vulnerabilities.
As part of CDT’s new white paper, “The Cyber: Hard Questions in the World of Security Research,” we look in depth at the current state of bug bounties and vulnerability disclosure, and note a few themes and trends in the space.
We explore, for instance, the dispute between Google and Microsoft over bug disclosure (Google’s Project Zero has disclosed Microsoft bugs in the absence of a patch on a couple of occasions, which has generated some controversy).
We also look at whether emerging best practices can be gleaned from existing bug bounty programs and standards (e.g., ISO/IEC 29147 on vulnerability disclosure and ISO/IEC 30111 on vulnerability handling) now that vulnerability disclosure programs are maturing, and whether the government could participate more through a public program that collects vulnerabilities and discloses them to vendors. And we ask whether such a program could act as a counterweight to the military, intelligence community, and federal law enforcement agencies that purchase sensitive vulnerabilities and exploits for sabotage, espionage, or investigative purposes.
We hope that this paper will generate a discussion about these and other issues that will eventually lead to concrete policy proposals.