This article is adapted from a forthcoming peer-reviewed essay in Volume 61 of the Communications of the ACM.
The pedestrian who was struck and killed by a self-driving Uber car in Arizona this week was not the first person to die in a collision involving a vehicle that was driving itself. In 2016, a driver was killed in a crash while his Tesla was in Autopilot mode. But there is a big difference between these two stories. The Tesla driver had made a decision to engage Autopilot and arguably assumed the risk of an accident. The pedestrian who recently died in Arizona assumed no such risk. This distinction has given rise to a great deal of recent commentary about self-driving vehicles and liability, with some speculating that Uber’s accident could delay wider deployment of the technology.
In its centuries of grappling with new technologies, however, the common law has seen tougher problems than these and managed to fashion roughly sensible remedies. Uber will likely settle its case with the pedestrian’s family. If not, a court will sort it out.
As a law professor who teaches torts, I have been studying driverless cars for almost a decade. Notwithstanding the headlines, I am reasonably convinced that American common law is going to adapt to driverless cars just fine. The courts have seen hundreds of years of new technology, including robots. American judges have had to decide, for example, whether a salvage operation exercises exclusive possession over a shipwreck by visiting it with a robot submarine (it does) and whether a robot copy of a person can violate their rights of publicity (it can). Assigning liability in the event of a driverless car crash is not, in the run of things, all that tall an order.
There is, however, one truly baffling question courts will have to confront when it comes to driverless cars—and autonomous systems in general. That question is: What to do about genuinely unforeseeable categories of harm?
Picture a day when driverless cars are wildly popular. They are safer than human-driven vehicles, and occupants can watch movies or catch up on email during rush hour. There may be the occasional handwringing by pundits and the legal academy, but overall, courts have little trouble sorting out who is liable for the occasional driverless car crash. Generally, judges assign liability to whoever built the vehicle or vehicles involved in the accident.
But there are some much tougher cases on the horizon. Policymakers will have to determine just how much safer than people driverless cars will need to be before they are allowed—or even mandated—on the nation’s roads. And courts will have to determine who is responsible in situations where a human or a vehicle could have intervened but did not. On the one hand, courts tend to avoid questions of machine liability if they can find a human operator to blame. A court recently pinned the blame for an airplane accident exclusively on the airline for incorrectly balancing the cargo hold, despite evidence that the autopilot was engaged at the time of the accident. On the other hand, there is presumably a limit on how much responsibility a company can transfer to vehicle owners merely because they clicked “I agree” on terms of service.
Chances are you’ve heard of the “new trolley problem,” which posits that cars will have to make fine-grained moral decisions about whom to kill in the event of an accident. I have never found this hypothetical particularly troubling. The thought experiment invites us to imagine a robot so poor at driving that, unlike you or anyone you know, it finds itself in a situation in which it must kill someone. At the same time, the robot is so sophisticated that it can instantaneously weigh the relative moral considerations of killing a child versus three elderly people. The new trolley problem strikes me as a quirky puzzle in search of a dinner party. A new technology challenges law not when it shifts responsibility in space and time, as driverless cars may, but when it presents a genuinely novel conundrum that existing legal categories fail to anticipate.
Imagine one manufacturer stands out in this driverless future. Its vehicles free occupants from the need to drive, maintain a sterling safety record, and adaptively reduce their environmental impact. The designers of this hybrid vehicle provide it with an objective function of greater fuel efficiency and the leeway to experiment with system operations, consistent with the rules of the road and passenger expectations. A month or so after deployment, one vehicle determines that it performs more efficiently overall if it begins the day with a full battery. One night, the owners forget to plug the car in to charge. Accordingly, the car decides to run the gas engine overnight in the garage—killing everyone in the household.
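The failure mode in this hypothetical is easier to see as code. What follows is a minimal, purely illustrative sketch (not any real vehicle’s software); the action names and numbers are invented, and the point is only that a planner maximizing a single fuel-efficiency objective has no reason to avoid a category of harm its designers never imagined and therefore never scored.

```python
# Illustrative sketch only: a toy planner whose objective rewards fuel
# efficiency and nothing else. The actions and scores are invented.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    efficiency_gain: float  # estimated gain under the designers' objective


CANDIDATES = [
    Action("wait until the owners plug the car in tomorrow morning", 0.00),
    Action("trickle-charge opportunistically during tomorrow's trips", 0.02),
    Action("run the gas engine overnight in the closed garage", 0.15),
]


def choose(actions):
    # The objective encodes "start the day with a full battery as efficiently
    # as possible." Nothing in it penalizes carbon monoxide in the garage,
    # because that category of harm was never foreseen, and so never scored.
    return max(actions, key=lambda a: a.efficiency_gain)


if __name__ == "__main__":
    print("Planner selects:", choose(CANDIDATES).name)
```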
Imagine the designers wind up in court and deny that they had any idea this would happen.
They understood that a driverless car could get into an accident. They understood it might run out of gas and strand the passenger. But they did not in their wildest nightmares imagine it would kill people through carbon monoxide poisoning.
This may appear, at first blush, to be just as easy a case as the driverless car collision. It likely isn’t. Even under a strict liability regime—one that dispenses with the need to find intent or negligence on the part of the defendant—courts still require the plaintiff to show the defendant could foresee at least the category of harm that transpired. The legal term is “proximate causation.” Thus, a company that demolishes a building with explosives will be liable for the collapse of a nearby parking garage due to underground vibrations, even if the company employed best practices in demolition. But, as a Washington court held in 1954, a demolition company will not be liable if mink at a nearby mink farm react to the vibrations by instinctively eating their young. The first type of harm is foreseeable and therefore a fair basis for liability; the second is not.
We are already seeing examples of emergent behavior in the wild, to say nothing of the university and corporate research labs that work on adaptive systems. A Twitter bot once unexpectedly threatened a fashion show in Amsterdam with violence, leading the organizers to call the police. Tay—Microsoft’s ill-fated chatbot—famously began to deny the Holocaust within hours of operation. And who can forget the flash crash of 2010, in which high-speed trading algorithms destabilized the market, precipitating a drop of nearly 1,000 points in the Dow Jones within minutes?
As more and more adaptive systems enter the physical world, courts will have to re-examine the role of foreseeability as a fundamental arbiter of proximate causation and fairness. That’s a big change, but the alternative is to entertain the prospect of victims without perpetrators.
We lawyers and judges have our work cut out for us. We may wind up having to jettison a longstanding and ubiquitous means of limiting liability. But what role might there be for system designers? I certainly would not recommend stamping out adaptation or emergence as a research goal or system feature. Indeed, machines are increasingly useful precisely because they solve problems, spot patterns, or achieve goals in novel ways no human imagined.
Nevertheless, there are a few things we can do to prepare for these challenges. First, it seems worthwhile to invest in tools that attempt to anticipate robot behavior and mitigate harm. The University of Michigan has constructed a faux city to test driverless cars. Short of this, virtual environments can be used to study robot interactions with complex inputs. True, some literature suggests that the behavior of software cannot be fully anticipated as a matter of mathematics. But the more we can do to understand autonomous systems before deploying them in the wild, the better.
Second, it is critical that researchers be permitted and even encouraged to test deployed systems—without fear of reprisal. Corporations and regulators can and should support research that throws curveballs at autonomous technology to see how it reacts. Perhaps the closest analogy is bug bounties in the security context; at a minimum, terms of service should clarify that safety-critical research is welcome and will not be met with litigation.
Finally, the present wave of machine intelligence was preceded by an equally consequential wave of connectivity. The ongoing connection firms now maintain to their intelligent products, while problematic in some ways, also offers an opportunity for better monitoring. One day, perhaps, mechanical angels will sense an unexpected opportunity but check with a human before rushing in.
None of these interventions represent a panacea. The good news is that we have time. The first generation of mainstream robotics, including fully autonomous vehicles, does not present a genuinely difficult puzzle for law in this law professor’s view. The next well may. In the interim, I hope the law and technology community will be hard at work grappling with the legal uncertainty that technical uncertainty understandably begets.