No, Driverless Cars Won’t “Choose” to Kill You

A Range Rover Evoque equipped with Valeo self-parking technology backs into a parking spot during a driverless car demo.

Photo by ROBYN BECK/AFP/Getty Images

Though still in the testing phase, driverless cars are quickly moving from closed-off highways to bustling city streets. Anticipating the inevitable adoption of the futuristic technology by the nation’s commuters, many states and jurisdictions across the country are proactively enacting regulations governing autonomous vehicles on public roads. But a recent discussion among technophiles goes much further into the future, asking whether driverless cars will “choose” to kill their occupants in the event of an accident.

Patrick Lin started the debate last week on Wired, asking whether (as-yet-unspecified) crash-optimization algorithms designed to ensure passenger safety in an emergency would cause driverless cars to intentionally hit larger vehicles better able to absorb an impact. Lin, an associate professor at California Polytechnic State University, is careful to acknowledge that his hypotheticals are merely a thought experiment intended to raise awareness of ethical issues surrounding autonomous vehicles. But then Popular Science and Gizmodo latched onto the concept, and the discussion quickly devolved into one ridiculous question: Should a driverless car be authorized to kill you?
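To see why the thought experiment sounds so alarming, it helps to spell out what a “crash optimization” decision would look like at its crudest: rank the unavoidable outcomes and pick the cheapest one. The sketch below is purely illustrative; the function name, the options, and the harm scores are all invented for this example and don’t reflect how any real vehicle software works.

```python
# Toy illustration of Lin's "crash-optimization" dilemma: given a set of
# unavoidable impact options, pick the one with the lowest estimated harm.
# Every name and number here is hypothetical.

def choose_impact(options):
    """Return the option whose estimated harm score is lowest."""
    return min(options, key=lambda o: o["estimated_harm"])

options = [
    {"target": "large SUV",   "estimated_harm": 3},  # absorbs impact well
    {"target": "compact car", "estimated_harm": 7},
    {"target": "motorcycle",  "estimated_harm": 9},
]

print(choose_impact(options)["target"])  # prints "large SUV"
```

The unsettling part is exactly what the toy version makes obvious: whatever ends up in that harm score is a value judgment, which is Lin’s point about ethics being baked into engineering choices.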

Arguments over whether driverless cars should “choose” the least deadly collision (sacrificing the vehicle operator’s life to spare a school bus full of children, for example) are fun/terrifying to think about. It’s like someone asked Christopher Nolan to direct Speed 3: Driverless Doom. (Only one Google car can survive!)

But these arguments lose sight of reality in favor of drama. Washington, D.C., for example, has already passed legislation deeming the operator of an autonomous vehicle the “driver” for all legal purposes, including traffic violations. The first driverless-car models will be able to operate in either autonomous or manual mode, giving motorists extra control in tricky situations. Simply alerting the motorist to a danger before switching to manual control should be enough to allay fears of an Audi’s Choice scenario.

And don’t forget: Autonomous technology is already out there on the roads. Many vehicles on the market today have automatic braking systems, which detect objects in front of the car and apply the brakes to avoid collisions. A family of five in an automatic-braking minivan may be relieved to avoid smashing into the trash cans in front of them, but decidedly less enthusiastic when they are subsequently rear-ended by the garbage truck speeding behind them. Where is the panic?
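The braking behavior described above boils down to a simple idea: if an object ahead will be hit within some safety margin, hit the brakes. Here is a minimal sketch of that logic, with the caveat that the threshold, the function names, and the single-sensor setup are all simplifying assumptions; production systems fuse radar and camera data and are far more sophisticated.

```python
# Simplified sketch of automatic-emergency-braking logic: brake when the
# time-to-collision with a detected object drops below a threshold.
# The 1.5-second threshold is an invented example value.

def time_to_collision(distance_m, closing_speed_mps):
    """Seconds until impact at the current closing speed (inf if not closing)."""
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps

def should_brake(distance_m, closing_speed_mps, threshold_s=1.5):
    """True if the object ahead would be hit within the safety threshold."""
    return time_to_collision(distance_m, closing_speed_mps) < threshold_s

print(should_brake(10.0, 12.0))  # ~0.83 s to impact -> True, brake now
print(should_brake(50.0, 5.0))   # 10 s to impact -> False, no action
```

Note what this sketch can’t do: it only watches what’s ahead, which is exactly why the minivan in the example above stops for the trash cans but can’t do anything about the garbage truck behind it.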

The fact is, autonomous and semi-autonomous systems help make our roads safer. (A recent study of automatic braking systems found equipped vehicles 27 percent less likely to be involved in low-speed crashes.) Driverless cars are no exception. In all likelihood, having a range of sensors capable of detecting any number of things a driver can’t even see will help stop accidents before they happen. People will certainly continue to die in automobile accidents long after driverless cars become available to the public, but developers of autonomous systems will also continue to find ways to avoid these tragedies. And no, your car won’t “choose” to kill you. Here’s hoping it reroutes you to a hospital if you choke on your breakfast burrito, though. Drive on, driverless cars. Drive on.