Future Tense

What the Fatal Uber Crash Doesn’t Tell Us About Self-Driving Cars

This sad accident won’t set any useful precedents.

Autonomous vehicles are supposed to be safer than human drivers. Photo: titi-kako/iStock

On Sunday night, a self-driving Uber struck and killed a pedestrian in Arizona.
This appears to be the first pedestrian fatality involving a self-driving car.
The facts of the case remain unclear, and interpretation of the events will likely change as the local authorities and the National Transportation Safety Board investigate. And while a tremendous amount of ink has already been spilled about the looming ethical challenges of autonomous vehicles—both in academia and public fora—this case ultimately might not help us settle the burning issues that surround this new technology. Here’s what we know.

The pedestrian, Elaine Herzberg, was struck by Uber’s Volvo XC90 SUV as she attempted to cross a street in Tempe, Arizona. While the Tempe police have made no conclusions about who is at fault, video footage of the crash shows Herzberg attempting to cross the street with a bike, outside of a protected crosswalk, at about 10 p.m. The Uber vehicle had a “safety driver”—a human in the driver’s seat—who is supposed to take control in the event of an emergency. It appears the driver was looking down and realized what was happening only when she heard the sound of the impact. Tempe Police Chief Sylvia Moir stated, “It’s very clear it would have been difficult to avoid this collision in any kind of mode.” Nevertheless, Uber has suspended its testing of self-driving cars throughout North America—in Arizona, Pittsburgh, San Francisco, and Toronto.

So what does this mean for the testing and rollout of self-driving vehicles? Will states like Arizona, which has served as the anarchic frontier of self-driving regulation since the governor issued an executive order on the testing of self-driving cars, apply the brakes and tighten regulations? Is self-driving technology, even in beta mode, not yet ready for prime time? Many discussing the Uber crash have reiterated familiar worries about the safety of self-driving vehicles. That’s an important topic, one that needs debate. But an even greater risk is that, in considering these questions, we might lose sight of one of the major projected benefits of self-driving cars: that they are ultimately expected to save the lives of tens of thousands of drivers, passengers, and pedestrians.

Perhaps a better question than “Are self-driving cars safe?” is “Should we blame an autonomous vehicle more than we would a human driver in a similar case?” Autonomous vehicles are supposed to be safer than human drivers—this has been sold to us as their principal benefit. Given that, isn’t it only fair to hold them to higher standards than humans? If so, we might think this crash is actually worse than if a human driver had caused it.

For example, imagine you are walking along a sidewalk when you see a stranger clutch his chest and keel over. You stoop down to perform CPR. Now, in this situation, if you are a trained physician, a bystander would reasonably demand more of you, precisely because of your greater capacities. If you fail to revive this person, you have done something that is less easily excused than if someone else with no medical training had failed in the same task. What it is reasonable to blame someone for is proportionate to what we could have expected of her, and what we can expect of someone is proportionate to her capacities.

The manufacturers of autonomous vehicles may find themselves in an analogous situation: Because they have trumpeted the safety benefits of autonomous vehicles, they have dramatically raised the public’s expectations. While autonomous vehicles can be expected to greatly lower the number of traffic fatalities, we might also view the harms and deaths they will inevitably cause as worse than those caused by human drivers. How much worse will be a matter of debate. But we are stuck in a Catch-22 for the time being, and the public seems to want it both ways: Autonomous vehicles are to be embraced as superior to human drivers despite the deaths they may cause, and yet each particular self-driving car crash is to be judged as worse than a human-caused one.

However, the Uber fatality probably cannot help us answer the most urgent and compelling ethical question that has haunted discussions of autonomous vehicles: who is responsible for the harms or deaths these vehicles cause. Similar worries confound the development of all systems that act so autonomously that they take a human out of the loop for decisions that could cause harm.

In legal circles, they say that “bad facts make bad law”: A precedent built on facts that are not favorable to begin with is not going to be useful. This particular crash provides an imperfect test case. Judging from early reports, with the caveat that the investigation is still ongoing, it seems that the pedestrian was at fault and that it would have been nearly impossible for any driver—human or machine—to prevent this accident. (Some things are simply ruled out by the laws of physics.)

Finally, there was a human in the car monitoring its activity, so the blame would not fall solely on the machine itself or on the creators of its algorithms. Manufacturers, the public, and regulators still await what would be the true test case of the ethics of autonomous vehicles: an empty car in full autonomous mode striking a pedestrian who is in no way at fault, especially if a human driver could have been expected to avoid causing that harm. The slim solace that this tragic accident brings is that it presents an opportunity to have a full and open debate about the appropriate standards for blame in such accidents. Doing so can help shed some light on just how good is good enough when it comes to bringing autonomous vehicles onto the road. Unfortunately, that may be exactly what the Uber crash cannot tell us.