Private Sector Hack-Backs and the Law of Unintended Consequences

*Mark Raymond (University of Oklahoma), Greg Nojeim (Center for Democracy & Technology) and Alan Brill (Kroll)

Congress is considering legislation to authorize companies to use countermeasures against cyber attacks. However, the legislation could undermine cybersecurity by authorizing victims to “hack back” and cause harm to a third party. Because cyber attribution is at best imperfect, countermeasures may be misdirected to innocent third parties. Attackers are likely to realize this and adapt by routing attacks through sensitive third-party systems to elude responsibility and heighten the costs of employing countermeasures.

Pending legislation also risks creating uncertainty about what countermeasures would be authorized and whether tort liability would apply to firms engaging in such conduct. It does not account for civil and criminal liability for U.S. firms under the domestic laws of other countries. Moreover, if foreign governments emulate the approach the U.S. is contemplating on countermeasures, U.S. companies and citizens will suffer. At a minimum, Congress should ensure that the countermeasures it authorizes operate on the system on which they are deployed, do not gain unauthorized access to other systems, and do not cause harm to others’ networks, data, or connected devices.

Current Law

Current law gives companies substantial authority to deploy cybersecurity countermeasures on their own networks in order to protect them against malware, and it criminalizes computer attacks on others, including hack-backs.

The Wiretap Act provides that it is lawful for a provider of electronic communication service to intercept, disclose, or use communications passing over its network while engaged in any activity that is a necessary incident to the protection of its rights and property. 18 USC 2511(2)(a)(i). This includes the authority to use devices and procedures to intercept or redirect communications in order to protect the provider’s network and the data transiting it.

The federal anti-hacking law, the Computer Fraud and Abuse Act (“CFAA”), subjects to criminal and civil liability anyone who intentionally accesses another person’s computer without authorization and, as a result of such conduct, recklessly causes damage. 18 USC 1030(a)(5)(B). If the damage caused exceeds $5,000 or affects 10 or more computers, the perpetrator faces a hefty fine and up to 5 years in prison. Merely accessing another’s computer without authorization is also outlawed.

Proposed Changes

Pending cybersecurity bills would remove potential liability for violating the CFAA while operating very broadly defined countermeasures, even if the countermeasure causes harm to another’s network, to data stored on another’s computer, or to a device connected to another’s computer. Three pending bills now in conference include similar provisions authorizing the use and operation of countermeasures “notwithstanding any law”: the Protecting Cyber Networks Act (H.R. 1560, or PCNA), the National Cybersecurity Protection Advancement Act (H.R. 1731, or NCPAA), and the Cybersecurity Information Sharing Act (CISA, S. 754).

For example, under the Senate bill, CISA, a countermeasure is any action, device, technique, procedure, or other measure applied to one’s own information system (or the system of a consenting party), or to information on such a system, that detects, prevents, or mitigates a known or suspected cybersecurity threat. Under CISA, countermeasures cannot include a measure that “destroys, renders unusable, or substantially harms an information system or data on an information system” other than one’s own or that of a consenting party. A helpful amendment to CISA bars countermeasures that grant unauthorized access to another’s network or data, but there is no corresponding provision in either of the other bills. A helpful provision of the PCNA would require that countermeasures be operated on one’s own system, but the other bills do not require this.

As a result, depending on the final language in the bill, countermeasures deployed for legitimate reasons on one network that damage data on another network, or damage the network itself, could become lawful so long as the damage to data or to a network is not “substantial.” Countermeasures that slow or impede access to data on another network would also become lawful, so long as they don’t render the network completely “unusable.” Countermeasures deployed on one network that damage or destroy a physical device attached to another network, but do not cause substantial harm to information or to an information system, could also become lawful. Finally, countermeasures that initiate new actions, processes, or procedures on another’s information system would also become lawful, including a countermeasure that turns on a computer’s camera or audio capability.

Problems these Changes Would Create

There are a number of significant problems with these provisions of the pending bills that we do not believe can be overcome.

Opening the use of countermeasures to non-governmental actors means those countermeasures will be deployed on the basis of a limited ability to conduct attribution. The vast majority of private actors lack access to the sophisticated attribution tools and information available to government. This means private actors are more likely to improperly attribute attacks and cause collateral damage. Firms will need to consider the potential liability that could be created, and the potential disruption to their businesses if they become the unwitting targets of countermeasures deployed by others.

Bad actors will also have clear incentives to run attacks through key systems, choosing networks that, if slowed down, harmed, or rendered partially inaccessible by a countermeasure, would cause embarrassment or disaster: government systems, medical facilities including pediatric and acute care hospitals, and critical infrastructure.

Perversely, the incentive to run attacks through key systems increases as countermeasures become (or are perceived to become) more effective; attackers will see this as a cheap way to avoid the intended deterrent effect of countermeasures. Due to the nature of botnets and their prevalence in cybercrime, such outcomes are highly likely even with absolutely no adaptation by bad actors.

To the extent the proposed laws authorize counter-attack activities directed against supposed bad actors in the United States, and given that foreign bad actors can run their operations through U.S. computers that have been compromised without the knowledge of their owners, one has to ask whether authorizing such attacks is in the public interest.

Because attribution is an uncertain science, there is always a risk that some cybersecurity countermeasures will “victimize the victim.” When a cyber attack is routed through computers at a hospital, a countermeasure that slows the hospital’s network or damages medical equipment that relies on the network could have devastating consequences. Yet the statutory language Congress is considering might permit such actions. Such a countermeasure might not cause “substantial harm” to “an information system,” but it could cause substantial harm to a patient. It could also cause significant financial and other costs. We should be asking whether a countermeasure that causes any harm to a third party should be authorized. But there has been no public debate in Congress about this.

Perhaps that’s because the next question would be, “Who pays?” Who pays for harm caused by a countermeasure that damages a third party? Congress is granting authority to operate countermeasures “notwithstanding any law,” and it is clear this formulation will trump contrary statutes that would impose liability, such as the CFAA. But whether this language trumps tort liability as well is not clear at all. Should the hospital pay? If that seems unfair and unwise (as it does to us), maybe the company operating the countermeasure (or the company on whose behalf the countermeasure was deployed) should be automatically liable. But the hospital likely won’t know who the responsible party is. Should the company launching the countermeasure be required to provide notice? To whom should the notice be directed?

Our point is that the Congressional debate about this troublesome countermeasures language is frighteningly sparse. The proponents seem not to have accounted for the possibility of off-network harm, for how to avoid such harm, or for ensuring that those who cause it pay for it. Insulating such behavior from prosecution, and possibly from tort liability, only invites abuse.

U.S. businesses also have to realize that these collateral damage and false flag situations could easily cross borders. In this context, it is worth noting that any authorization of countermeasures “notwithstanding any law” trumps only U.S. law. Firms utilizing such countermeasures that cause damage abroad may inadvertently expose themselves to civil and criminal liability in other jurisdictions under other countries’ laws.

Finally, authorizing private sector countermeasures in the United States could lead to emulation on the part of other countries. The United States is asking other countries to accept a situation in which their firms and even their governments may have their networks probed and attacked by American firms, either as a result of mistaken attribution or because third parties have run attacks through those networks. If the U.S. adopts a policy of authorizing countermeasures that can cause harm to others, it must accept that other countries are likely to do the same. American companies, as well as all levels of government, would need to be prepared for the likelihood that foreign firms might mistakenly apply countermeasures against networks in the United States, and be authorized to do so by the laws of their home countries.

Illustrative Scenario

Given the abstract and somewhat technical nature of these risks, we believe it may be helpful to show more concretely what they may look like in practice. Imagine the following scenario:

Company A’s defenses are breached in a cyber attack, and the hackers obtain approximately 500,000 records containing sensitive proprietary information in the course of the intrusion. The company’s IT department determines that the information was compressed with an open-source program to reduce its overall volume and then transferred using the File Transfer Protocol (FTP) to an IP address assigned to a company based in Asia.
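
For readers less familiar with what such an exfiltration trail looks like, the sketch below is a purely hypothetical illustration; it is not drawn from the bills or from any real incident. It is a short Python script, using a made-up firewall log format, sample IP addresses from documentation ranges, and invented names such as suspicious_ftp_transfers, showing how an IT team might spot a large outbound FTP transfer and extract the destination IP address. The larger point is that this IP address is often the only lead a victim has, and, as the scenario goes on to show, it says little about who actually controls the machine at the other end.

```python
# Hypothetical illustration only: the scenario above does not specify any
# tooling. This sketch assumes a simple, invented firewall log format and
# shows how an IT team might pull its single forensic lead, a destination
# IP address, out of large outbound FTP transfers.

import re

# Made-up log lines: timestamp, action, protocol, src -> dst, bytes sent
SAMPLE_LOG = """\
2015-10-14T02:11:07Z ALLOW TCP 10.2.4.17:49152 -> 203.0.113.45:21 bytes=1840
2015-10-14T02:11:09Z ALLOW TCP 10.2.4.17:49153 -> 203.0.113.45:20 bytes=612000000
2015-10-14T02:13:55Z ALLOW TCP 10.2.4.88:51022 -> 198.51.100.7:443 bytes=9120
"""

FTP_PORTS = {20, 21}                      # FTP data and control ports
LARGE_TRANSFER_BYTES = 100 * 1024 * 1024  # flag anything over ~100 MB

LINE_RE = re.compile(
    r"(?P<ts>\S+) ALLOW TCP (?P<src>\S+) -> "
    r"(?P<dst_ip>[\d.]+):(?P<dst_port>\d+) bytes=(?P<nbytes>\d+)"
)

def suspicious_ftp_transfers(log_text):
    """Yield (timestamp, destination IP, bytes) for large outbound FTP flows."""
    for line in log_text.splitlines():
        m = LINE_RE.match(line)
        if not m:
            continue
        if int(m["dst_port"]) in FTP_PORTS and int(m["nbytes"]) >= LARGE_TRANSFER_BYTES:
            yield m["ts"], m["dst_ip"], int(m["nbytes"])

if __name__ == "__main__":
    for ts, dst_ip, nbytes in suspicious_ftp_transfers(SAMPLE_LOG):
        # The destination IP says nothing about who controls that machine;
        # it could belong to a compromised third party, as in the hospital
        # scenario that follows.
        print(f"{ts}: ~{nbytes / 1e6:.0f} MB sent via FTP to {dst_ip}")
```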

The IT department believes that it has adequate information to support a good-faith belief that the system at that IP address was directly involved in the data theft. Company management wishes to “take back our stuff,” to quote Company A’s CEO. She authorizes the IT department to take steps to enter the target’s system, find Company A’s stolen data, and render it useless. The CEO also tells them not to do any more damage to the target than is necessary to carry out her instructions. The IT department quickly engages a “white-hat hacker” who has formed a company to handle hack-back projects.

Unfortunately, it turns out that the target is actually a children’s research hospital owned by the charitable foundation of a large Asia-based conglomerate. The parent company obtains and manages all of the conglomerate’s IP addresses, so the hospital, located not in Asia but just south of the Canadian border on Michigan’s Upper Peninsula, has an IP address assigned to the Asian company. The hospital carries out basic and clinical research and provides no-cost clinical care to children suffering from several related but ultimately terminal diseases.

Unbeknownst to the hospital and to the Asia-based corporation, hackers from Eastern Europe had obtained an employee’s log-in credentials through social engineering. Once in the system, they gained control of other servers and installed software that created a hidden staging system. That system allowed the attackers to conduct hacking attacks against other parties that appeared to originate from the hospital’s IP address, to receive stolen information, and to encrypt it and forward it to addresses they designated. The hospital was a convenient “cutout” to shield the actual identity and location of the hackers.

Company A, working with its hack-back consultant, carries out an attack against the target. The following things happen at the hospital:

  • The hospital IT department sees the attack activity and, recognizing that the hospital is under attack, immediately implements its incident response program. The hospital notifies its insurance broker, who notifies the carrier of the hospital’s cyber-crime policy. The hospital and insurer authorize the hiring of pre-approved legal consultants, incident response specialists, and computer forensic personnel.
  • The hospital files a report with the local police and the state Attorney General. The police assign a detective from their cyber-crime unit. The Attorney General assigns an Assistant Attorney General to work with the police in carrying out a criminal investigation under the state’s cyber-crime laws. The hospital’s General Counsel researches whether the hospital must also notify the Department of Health and Human Services pursuant to the current medical data privacy laws (HIPAA and HITECH).
  • The Company A hacking team successfully breaches the hospital’s network. In attempting to locate the stolen data, it scans the network. Several devices in the research labs, which cannot be scanned without being affected, are thrown offline, delaying ongoing research.
  • The Company A hack-back explorations also cause several other devices connected to the hospital’s clinical environment to shut down.
  • The hack-back team finds a compressed, encrypted file in a directory named “Data_to_be_Transmitted” and believes, in good faith, that it is their client’s stolen data. They take steps not only to delete the file but to overwrite it so that it cannot be recovered forensically. Unfortunately, the hackers had already wiped the stolen data from the hospital’s system, and what was erased was a file of raw research data that was to be sent to a research institution in Italy that partnered with the hospital on basic research into the genetic causes of the diseases.

At this point, the hospital and its insurer have probably committed more than $100,000 in immediate investigative, legal, and personnel costs. They have also suffered outages of both research and clinical devices; while the devices were not physically damaged, the outages disrupted research projects and clinical treatments, which were suspended until the integrity of the affected machines could be assured.

Assuming that Company A launched the attack to mitigate a suspected cybersecurity threat and without an intention to cause damage beyond that involved in “taking back” control of its stolen data, the proposed law would appear to authorize this conduct under U.S. law. Nonetheless, given that the hack-back traffic may well have passed through the servers of the parent company in Asia, might Company A be held responsible under the laws of countries through which the attacks were routed? And should the hospital be prohibited from seeking redress in tort for the unintended damage of Company A’s attack? What about the costs incurred by the police and prosecutors who expended resources to pursue a criminal investigation into the hacking of the hospital?

Add to this the fact that Company A made the decision to hack back without even consulting legal counsel. Management’s knowledge of the hack-back law was based on stories in the technology press. They believed that once they had what they considered sufficient evidence to form a good-faith belief about the source of the attack, they had the right to strike back. Given the ease of creating new Internet startups, and the increasing reliance on the Internet by firms of all kinds that may have little experience with the underlying technology, there is good reason to question the wisdom of allowing victims of attacks to make decisions about when and how to employ countermeasures. To do so would place victims in the difficult position of having to blindly trust providers of such services, providers that have an inherent conflict of interest and that may themselves have an imperfect understanding of the law’s provisions.

As complex as this scenario already is, it does not deal fully with the transnational risks created by the approach to countermeasures contemplated in the pending legislation. For example, imagine instead that the hospital had been located in Canada rather than in Michigan’s upper peninsula. Or imagine, alternatively, that the hospital is located in the United States but that Company A is a Canadian company authorized by hypothetical Canadian legislation that emulated this approach to countermeasures. Either situation might well lead to complex multinational litigation, or even to political disputes between countries.

Conclusion

Cyber attacks against U.S. public and private sector organizations are a serious problem. Legislation to improve information sharing and law enforcement capabilities in the cyber arena is needed, but authorizing do-it-yourself justice when proof is hard to come by is dangerous. The applicable law may turn out to be the law of unintended consequences.

Novel, complex issues place a premium on caution. We urge legislators to consider the possible consequences of the approach to countermeasures contemplated in the bills currently before Congress. They should ensure that countermeasures operate on the system on which they are deployed, do not gain unauthorized access to other systems, and do not cause harm to others’ networks, data, or connected devices.

*Mark Raymond (@mraymondonir) is the Wick Cary Assistant Professor of International Security at the University of Oklahoma. He writes and teaches extensively on cybersecurity and Internet governance, and is the co-author (with Laura DeNardis) of “Multistakeholderism: Anatomy of an Inchoate Global Institution.”

Greg Nojeim is the Director of the Freedom, Security and Technology Project at the Center for Democracy & Technology. He leads CDT’s cybersecurity work and is the author of “Cybersecurity and Freedom on the Internet,” published in the Journal of National Security Law & Policy.

Alan Brill is a Senior Managing Director of the global cybersecurity firm Kroll Cyber Security. He founded Kroll’s high-tech investigations practice, which investigates computer intrusions for customers around the world, and he speaks and writes extensively on computer attack forensics.