It was bound to happen. With autonomous vehicles on the streets across the country, one of them—through computer error, supervisor carelessness, or a pedestrian’s mistake—was going to hit someone.
On Monday morning, a self-driving Uber with a supervising driver struck and killed 49-year-old Elaine Herzberg as she tried to cross eight lanes of traffic in Tempe, Arizona.
Herzberg’s death was the first recorded pedestrian fatality caused by a self-driving car.
Much of the debate going forward will circle around the particulars of the accident—the design of the intersection near which Herzberg was killed, the ability of Uber’s autonomous vehicle technology to detect pedestrians at night, the actions of the supervising engineer, and the fact that the victim was reportedly crossing the street outside the crosswalk.
More broadly, observers will be tempted to use the death as a data point in evaluating the safety of AVs, which is both one of the most promising long-term benefits of the technology and a present-day talking point of companies like Google’s Waymo.
On the surface, after today, the results do not look good: Waymo claims to have driven more than 5 million miles, including 3.5 million on public roads. In September, Uber claimed to have done more than 1 million miles. The industry’s other players are behind them, so it seems certain that AVs as a whole had clocked fewer than 10 million public miles before recording a pedestrian fatality. Nationally, the rate is closer to one pedestrian fatality every 480 million miles traveled. If you include all auto deaths, the 2015 rate was one every 88 million miles. Matching either benchmark would require a great deal more safe driving from AVs.
But there are a few reasons that back-of-the-napkin comparison doesn’t make a lot of sense. For one thing, when you’re evaluating something that happens once every 88 million miles, 5 million to 10 million miles is not a big sample. (Even the 130 million miles Tesla’s Autopilot had logged before its first fatality, which the company cited as evidence of a “statistically significant improvement in safety,” is not a huge sample.) You’d need hundreds of millions more miles—or a few more fatalities—to establish an accurate sense of how often this is likely to happen.
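To make the sample-size point concrete, here is a rough, illustrative sketch in Python (not part of the original reporting). Treating fatalities as rare Poisson events, it computes an exact confidence interval for the fatality rate implied by one death in roughly 10 million AV miles and compares it with the one-per-88-million-mile human benchmark. The 10 million–mile figure is just the article’s rough industry total, not an official statistic.

```python
# Rough sketch: how wide is the uncertainty on a fatality rate estimated
# from a single event in roughly 10 million miles? Uses an exact (Garwood)
# Poisson confidence interval via the chi-squared distribution.
from scipy.stats import chi2

observed_fatalities = 1
av_miles = 10_000_000          # rough combined AV fleet mileage (assumption)
human_rate = 1 / 88_000_000    # 2015 U.S. rate: one death per 88 million miles

# Exact 95% confidence interval for the expected number of events,
# given one observed event.
lower = chi2.ppf(0.025, 2 * observed_fatalities) / 2
upper = chi2.ppf(0.975, 2 * (observed_fatalities + 1)) / 2

# Convert counts-per-fleet into deaths per mile.
ci_low, ci_high = lower / av_miles, upper / av_miles
print(f"AV rate 95% CI: 1 per {1/ci_high:,.0f} to 1 per {1/ci_low:,.0f} miles")
print(f"Human benchmark: 1 per {1/human_rate:,.0f} miles")
# The interval runs from roughly one death per 1.8 million miles to one per
# 395 million miles, easily containing the human benchmark -- the data
# can't distinguish the fleets in either direction.
```

The point of the sketch is only that a single observed event leaves the estimated rate uncertain by more than two orders of magnitude, which is why the side-by-side comparison below is hopeless.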
And the calculation makes even less sense than that, because most of those miles have been driven with an engineer ready to take over. How often have human drivers taken the wheel? How many of those interventions reflect serious mistakes by the computer driving the car? What if human intervention makes a situation worse? All of that introduces an uncomfortable amount of noise into the data we do have.
Finally, fatality rates vary widely by age and especially by gender. Male drivers between the ages of 16 and 25 tend to be about five times as likely to die behind the wheel as women between the ages of 46 and 55. (Men are much, much worse drivers across the board.) Rates also vary geographically: pedestrian mortality is more than four times higher in Florida than in Vermont. So the safety improvement we can expect from autonomous vehicles depends in part on where and how they are deployed, which drivers they are taking off the road, and at which times of day they are driving.
The gist is that we have little to no idea how the current fleet of AVs compares, safety-wise, to our existing (and fairly lousy, all things considered) crop of human drivers. If the technology were considerably less safe, we would probably have seen a few more crashes like the one that happened today in Tempe. But given the small sample size and the complicating factors, it’s hopeless to try to make a side-by-side statistical comparison.
What is true is that autonomous vehicles are likely to be better and safer than human drivers in the long term, and the sooner we make the transition away from human drivers, the more lives we will save on the road. The uncomfortable question we have to ask now is what kind of short-term cost we are willing to bear to help self-driving technology make that leap.