The death of a Tesla Model S driver in May 2016, which occurred while the car was operating on Autopilot, was a jarring resurrection of the debate over the safety of autonomous vehicles. It was, at the time, the only known fatality involving an autonomous vehicle—for comparison’s sake, there were 40,200 car-related fatalities in 2016 in the U.S. alone. Nevertheless, the public and policymakers alike have made it clear that they demand an extremely high threshold of tested safety before self-driving cars can be let loose en masse. The head of the National Highway Traffic Safety Administration believes autonomous cars need to be twice as safe as human drivers before they are allowed on the road.
That level of testing would take a long time: maybe 15 more years, or maybe 50 more. Is that high safety standard really necessary? Or is it holding us back? A new report by the RAND Corp. argues that instead of waiting for near-perfect driving, we should start putting autonomous vehicles on the road as soon as they are even just a little bit safer than humans. After all, doing so would already lower the number of lives lost in car accidents—even if some of those self-driven vehicles still crash, and even kill.
The report’s authors developed three basic models for the future safety of autonomous vehicles: one in which those cars are 10 percent safer than human drivers, one in which they are 75 percent safer, and another in which they are 90 percent safer. Each model was run through different scenarios to evaluate how the technology could be advanced and how much our current rate of motor-vehicle accidents might decrease.
The authors found that widespread adoption of autonomous cars that are even 10 percent safer than average human drivers would likely save as many as 3,000 lives a year. That means widespread adoption of self-driving cars could save hundreds of thousands of lives in just three decades’ time.
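The arithmetic behind that kind of estimate can be sketched in a few lines. This is a deliberately simplified back-of-envelope calculation, not RAND’s actual model: the round 40,000-death baseline and the assumption of instant, universal adoption are illustrative simplifications, which is why it yields an upper-bound figure somewhat above the report’s 3,000-a-year estimate.

```python
# Back-of-envelope sketch of the lives-saved logic (not the RAND model).
# Assumed baseline: roughly 40,000 U.S. road deaths per year, in line with
# the 2016 figure cited above, and instant universal adoption.
BASELINE_DEATHS_PER_YEAR = 40_000

def lives_saved(safety_improvement: float, years: int = 1) -> int:
    """Estimate lives saved if all driving were done by vehicles whose
    fatal-crash rate is (1 - safety_improvement) times the human rate."""
    return round(BASELINE_DEATHS_PER_YEAR * safety_improvement * years)

print(lives_saved(0.10))      # 10 percent safer, one year: 4000
print(lives_saved(0.10, 30))  # 10 percent safer, three decades: 120000
```

Even at the lowest safety tier the model considers, the cumulative total over thirty years lands in the hundreds of thousands once adoption is widespread, which is the report’s core point.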
It’s not just that those cars would be less prone to errors than humans are. Autonomous cars will likely also be able to communicate with one another, ensuring that they could coordinate their movements on the road to avoid accidents. Another factor harkens back to the old adage, “practice makes perfect”: As self-driving cars log more time on the road, they can use that experience to learn how to drive more safely. And unlike with humans, the experience of one vehicle can be shared with practically every other autonomous vehicle on the road.
That is, if we’re willing to take the leap. The main obstacle impeding this future starts with humans. People have to learn how to be OK with the notion that autonomous vehicles will be involved in at least some portion of accidents and that machine error might even be the root cause of some of those collisions. But while it’s easy for humans to understand why a fellow human being can make a mistake that results in a crash on the road, it’s a lot more difficult to sympathize with the mistakes of an artificial system made of wires and sensors—especially when it only really has one job to do.
Most technologies come with risk, and often, the only way for people to get over those fears is to let them try things out firsthand. Commercial air travel was probably unthinkable for many people in the mid-20th century, for example, but nowadays it’s a normal, boring part of life. One way to jump-start that adoption could be in ride sharing: Google’s parent company, Alphabet, is already taking a leap by making some of its Waymo-operated cars in Phoenix fully autonomous (meaning, even without someone in the driver’s seat ready to take over). The plan is to test out a few hundred of these cars over the next few months through a car-sharing service that early test riders opt into, like an automated Uber or Lyft. The massive expansion of ride-sharing apps might be an easy way to give cautious consumers a taste for the driverless mode of transportation.
Of course, our reluctance to adopt this technology manifests in multiple other ways, and this might actually be the least problematic of them. Apart from the technological kinks engineers are still troubleshooting, self-driving cars are also widely misunderstood by the people in charge of regulating them.