“The Cyber” Part I: Legal Impediments to Security Research

Written by Joseph Lorenzo Hall

(This is the first of a series of four blog posts where we’ll detail some of the interesting elements of CDT’s recent report on chilling effects to security research.)

Last week, CDT released a white paper exploring some of the hardest questions facing cyber policy today, including – for instance – whether we can or should place ethical constraints on hacking.  (For a broad summary of the report, please see our first post here.)

Two stories in the New York Times illustrate the fraught territory occupied by our first hard question: whether and how the law needs to be changed to give computer security researchers breathing room to do their work.

The first story about the discovery of a way to hack accelerometers (the things that figure out which way your iPhone is moving) using music neatly shows the profound social value of security researchers in, among other things, hardening connected devices against attacks. The second, about the FBI’s unprecedented $3 million bounty on a professional cybercriminal in Russia (who allegedly moonlights as an asset for Russian intelligence), illustrates the need for government intervention in cybercrime, and the legitimate concern with an “anything goes” hacking culture.

Let’s take the first: Security researchers at the University of Michigan and the University of South Carolina announced that they had created what they termed a “musical virus” that could lead an accelerometer to deliver false readings to, for instance, a Fitbit device or a remote-controlled car.

In essence, an accelerometer works like a rock attached to one end of a slinky. If you hold the other end of the slinky and jump or hop, your body moves and the slinky bounces the rock around. The chip in your phone can’t directly “see” what the phone is doing, so instead it watches its own tiny equivalent of the rock-and-slinky combination and infers the phone’s motion from how the “rock” moves. It’s this kind of device that prompts an airbag to deploy, for instance, in the rapid deceleration of a car crash. The “musical virus” uses sound waves, which can be produced by a speaker on or near the same device as the accelerometer, to “confuse” the mass pushing on the spring, leading to bad output.
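To make the rock-and-slinky intuition concrete, here is a minimal simulation sketch (not the researchers’ actual code; all constants are illustrative, not real MEMS values). It models the accelerometer’s proof mass as a damped mass-spring system and shows how an acoustic tone at the system’s resonant frequency produces nonzero “acceleration” readings even though the device itself never moves:

```python
# Hypothetical sketch: a MEMS accelerometer modeled as a damped
# mass-spring system. The chip infers acceleration from the proof
# mass's displacement, so a resonant acoustic tone that shakes the
# mass yields false readings while the device is stationary.
import math

MASS = 1e-9      # proof mass (kg) -- illustrative, not a real MEMS value
SPRING_K = 1.0   # spring constant (N/m)
DAMPING = 1e-6   # damping coefficient (N*s/m)
DT = 1e-6        # simulation time step (s)

def simulate(forcing, steps):
    """Integrate the mass-spring equation (semi-implicit Euler) and
    return the accelerations the chip would report at each step."""
    x, v = 0.0, 0.0  # displacement and velocity of the proof mass
    readings = []
    for i in range(steps):
        t = i * DT
        # Newton's second law: spring restoring force + damping + external forcing
        a = (-SPRING_K * x - DAMPING * v + forcing(t)) / MASS
        v += a * DT
        x += v * DT
        # The chip measures displacement and reports x*k/m as "acceleration"
        readings.append(x * SPRING_K / MASS)
    return readings

# Natural (resonant) frequency of the mass-spring system
resonant_hz = math.sqrt(SPRING_K / MASS) / (2 * math.pi)

# The device is stationary: the only force is an acoustic tone at resonance.
tone = lambda t: 1e-12 * math.sin(2 * math.pi * resonant_hz * t)
readings = simulate(tone, 50000)

# Despite zero true motion, the resonant tone pumps energy into the
# proof mass and the inferred acceleration grows well above zero.
print(max(abs(r) for r in readings))
```

With no forcing at all, the simulated readings stay at zero; with the resonant tone, they climb until damping limits them, which is exactly the kind of spoofed output the researchers demonstrated against real hardware.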

As the story points out, fooling a Fitbit into counting fewer steps might seem innocuous, but the implications are scary when one considers the ubiquity of accelerometers in our modern digital lives. Apple, for instance, has included one in every iPhone since the device’s unveiling in 2007. As noted, they are also important instruments in cars, boats, planes, etc., and increasingly in medical devices. (Note to the interested reader: a Stanford team found that they could turn gyroscopes in your devices – which tell your device which way is “up” or “north” – into microphones, so this is not a boutique concern!)

There is clear value in this work, and this is especially true for two reasons. One: in contrast to areas such as food safety or physical construction, government regulation here is still in its infancy, as is potential tort or criminal product liability for egregious flaws. Two: as illustrated by the hardcoded passwords that facilitated the growth of the Mirai botnet of connected devices, many vendors have little incentive to spend the extra money to build products where security is a top priority.

Unfortunately, through a series of qualitative interviews with a broad swath of researchers in the community, CDT has heard repeatedly that an array of federal laws present challenges to valuable security research. To take the example of the musical virus above, depending on how the speaker and accelerometer are manipulated, one could conceivably run afoul of the Digital Millennium Copyright Act (the “DMCA”), which bars researchers from circumventing access controls, even on devices that they own.

Similarly, researchers engaged in activity like automated scanning of public-facing sites and services on the internet sit in a legal gray zone under the widely criticized Computer Fraud and Abuse Act (“CFAA”), which, among other things, makes it a federal crime to access any connected device “without authorization” – a term the statute leaves undefined and therefore open to broad interpretation.

Likewise, researchers who collect exploits for penetration testing suites – virtual hacking toolboxes that security professionals use to identify vulnerabilities in systems they’re inspecting – face uncertainty from export control regulations. The United States, for instance, proposed rules to implement the international Wassenaar Arrangement two years ago and was forced to retreat after broad outcry over their application to white hat hacking tools (tools good guys use) like these “pen testing” packages.

The second story mentioned above, however, demonstrates the need for government intervention to protect cybersecurity (though it also highlights one of the potential pitfalls of overly aggressive law enforcement action). The Times ran a long feature on Evgeniy Bogachev, the alleged ringleader behind the “Zeus” malware, a variant of which – called Gameover Zeus – the FBI claims infected more than a million computers and resulted in more than $100 million in financial losses.

Particularly concerning in the Times story are reports that Russian intelligence “piggybacked” on Bogachev’s botnet to conduct cyber-espionage (Bogachev was included in the round of sanctions that President Obama imposed on Russian intelligence agents and cyber-criminals following allegations of Russian interference in the 2016 election).

The Bogachev case serves to show how difficult the legal questions around computer crime can be. Clearly, the existence of Gameover Zeus, and the rash of ransomware attacks around the country over the past several years, highlight the need for a law enforcement response, as well as the importance of having a computer trespass and/or damage law. However, even the response to Gameover Zeus, in which the FBI and international authorities succeeded in disrupting the command-and-control communications from the botnet master, was not without controversy. CDT and others have cautioned that botnet “takedowns” must be carefully orchestrated to protect innocent victims from collateral harm.

In any event, the two stories highlighted serve to neatly bookend many of the issues CDT raises in our white paper: “The Cyber: Hard Questions in the World of Computer Security Research.”  We encourage readers to explore these and other issues in more depth.
