When IoT Kills: Preparing for Digital Products Liability
Increasingly, objects in our environment are computerized and networked, bringing both the promise and the peril of the internet to our everyday lives. We are starting to see serious harm resulting from errors, attacks, and misdesign of these systems. On the evening of 18 March, an Uber self-driving car struck and killed a pedestrian in Tempe, Arizona. Tempe Police say the car was in self-driving mode, with a safety driver behind the wheel, at the time of the incident. The Governor of Arizona has since suspended Uber’s testing of self-driving cars in the state, the National Transportation Safety Board (NTSB) has begun investigating the incident, and Uber has settled the matter with the victim’s family. It is an unfortunate occasion to reflect on who will be held liable for the harm caused by the failure of products with autonomous capabilities.
Today we are releasing a paper that examines issues in product liability for Internet of Things (IoT) devices, marking the start of a research agenda in this area. We expect that the digital technology industry is about to undergo a process of change akin to what the automobile industry experienced in the 1960s and 70s. Then, as now, insufficient security measures, dangerous design, and the bolting-on of security features after the design stage were widely accepted industry practice. Those practices had to change as the perils of unsafe cars became obvious – as is increasingly the case today with IoT devices. We summarize the paper’s discussion in the remainder of this post.
Internet connectivity, software, and autonomous capabilities are increasingly integrated into all manner of devices and objects. This Internet of Things ranges from fitness trackers to household appliances, automobiles to critical infrastructure and beyond. The fundamental idea of the IoT is to create ‘smart’ objects that offer greater convenience and efficiency.
However, the benefits of these technological advances also come with risks. It may be more convenient to have a ‘smart’ kettle, but what if such a kettle has buggy software that inadvertently turns it on (or all similar kettles at once!) and starts a fire in the kitchen? What happens if it fails because a factory-set default password allows it to be hacked remotely, again starting a fire? How would we know the cause of the failure – who is responsible for it – and ultimately who is liable for the damages caused?
The answers to these questions are complex and highly context dependent. For instance, some might argue that remote patching capabilities make a device ‘more secure’. Yet patching also introduces new risks, particularly if the software supply chain is compromised. What is an acceptable balance of one risk against another in such situations? If there really are only two kinds of companies – “those who have been hacked and those who don’t know that they’ve been hacked” – it seems foreseeable that security measures will be circumvented by malicious third parties. Does this mean that device compromise is inevitable, and if so, that the devices involved are unsafe by design? Finding answers to these and many other questions will be critical over the coming years if this wave of technological change is to deliver the maximum benefit possible without exposing society to unnecessary dangers.
The IoT inherits long-standing cyber security risks. For decades, industry has ‘shipped now, patched later’ to fix bugs in software. Anti-virus software has often had to be purchased alongside a new computer, then kept up to date, to remain responsive to new threats. When these measures failed in the past, the outcome for users was typically inconvenience and lost time.
Failures of IoT devices, however, carry a higher probability of physical injury, property damage, or death – especially when these are so-called “cyber-physical systems” that use software and networking to control real-world physical objects, machines, and devices. This raises the possibility of applying a body of law that has not, until now, been widely applied to digital technologies: strict products liability.
Strict products liability arises when harm is caused by or threatened by unreasonably dangerous products. One of its purposes is to ensure that the costs of harm to a person – or property of that person or a third party – due to a product are borne by the producer of the product.
Strict products liability cases will place an intense focus on various hitherto under-examined elements of the cybersecurity of digital technologies. These cases are likely to examine, among other questions, whether there was a design or manufacturing defect in the IoT product in question (including its software), whether that defect caused the injury or property damage, whether the producer incurred adequate cost to identify bugs and implement security measures relative to the damage caused by device failure, and to what extent the producer could have foreseen the incident (including malicious hacking).
Answering these questions will not be straightforward. Digital technologies can be hijacked by malicious third parties, involve complex and thus difficult-to-parse codebases, and possess interdependencies that can produce unpredictable outcomes. The introduction of autonomous capabilities also introduces little-understood risks associated with adversarial perturbations – small modifications to images or objects, which may be imperceptible to the human eye, that lead machine learning and artificially intelligent systems to misclassify them. Government agencies sometimes purchase or develop knowledge of software vulnerabilities, then may lose control of that information, resulting in large-scale attacks when those flaws are maliciously weaponized. The many stakeholders implicated will wrestle with various other technical, legal, and economic issues, as well as contextual elements, as determinations are made as to who pays when smart devices do stupid things.
Questions such as these are already on the agendas of policymakers worldwide. The European Commission is considering whether to revise its Product Liability Directive to respond to the challenges created by IoT, robotics, and autonomous capabilities. The Japanese government’s Council on Investments has created draft guidelines governing autonomous cars – a concrete step toward a legal framework and toward leading the creation of international rules in this space.
If policymakers in the United States – at both the federal and state level – wish their country’s companies to remain at the leading edge of technological innovation, these issues must be considered and addressed. The good news is that some discussions are already taking place (e.g. the Consumer Product Safety Commission will soon hold a hearing on IoT risks), and some guidance for individuals on ways to reduce the risks they face has been developed and released.
As we suggest in the concluding section of our paper, a sea change in software development practices will be required to identify and remove defects. A minimum set of agreed-upon security practices for IoT products will be needed, and these practices will have to be adapted to suit a wide range of contexts. Safety standards for autonomous systems will have to be developed, grounded in a firmer understanding of the risks of such systems than we possess today. Finally, some difficult questions will have to be answered about the appropriateness of open- versus closed-source software in certain contexts. If these questions cannot be answered adequately, and the costs of these ‘smart’ devices fall disproportionately on those least able to avoid or bear them, we may have to rethink whether making devices ‘smart’ is such a smart idea after all.