Self-Driving Cars Need Good Maps

The crash of a Tesla while it was using Autopilot hints at the geographic limitations of autonomous vehicles.


Last week, writing about Arizona’s early and perhaps reckless embrace of driverless car companies, I wondered about the consequences of the awkward courtship between cities and the companies that need to test their technologies in real-world environments. It seemed like the first-mover advantage was bound to accrue to the company, not the place. A fraught trial period would yield a developed, highly mobile technology that could be quickly exported to any other American city.

One obvious obstacle to this hypothesis is weather. Most self-driving cars rely on lidar, a radar-like sensing technology that uses laser pulses instead of radio waves, and it has an important weakness: It sometimes mistakes a snowflake for a more solid obstacle and gets thrown off by snowdrifts that temporarily change the shape of the world. That’s one reason self-driving car rivals Uber and Waymo have been doing cold-city testing in Pittsburgh and Detroit, respectively. It’s also why there could be a significant gap between the implementation of year-round automation in California and the Sun Belt and its arrival in Boston and Minneapolis.

But a recent crash of a Tesla raises another possibility. What if the success of autonomous vehicles will rely not just on the power of their sensors and computer systems, but also on the sophistication of the maps they consult as they make their way through the world?

On March 23, a Tesla Model X with Autopilot engaged crashed into a damaged highway barrier in Mountain View, California, killing the driver. The accident took place in a corridor that Tesla’s software should know well. The company says owners have driven that stretch of road using Autopilot 85,000 times since 2015, including about 200 trips a day right now, with no incidents.

But while Tesla’s fleet does “learn” from its environment (more on that in a minute), it still relies on cameras, radar, and ultrasonic sensors to avoid obstacles. Tesla also has GPS and maps, but there’s no indication they are detailed enough to know where the white line between lanes becomes a deadly rib of concrete.

That makes Tesla, whose CEO Elon Musk has claimed Autopilot functions as training wheels toward the grown-up automation at work in those Uber and Waymo cars, something of an outlier in the autonomous vehicle race. Its rivals rely on some combination of sensors and maps, and while the former do the exciting work—avoiding everything that moves, and many things that don’t—they can only do so much without cartographers laying the groundwork.

That’s why companies from Waymo to Uber to General Motors have all been racing to create hyperdetailed maps of auto infrastructure. Ex-Tesla engineer Andrew Kouri, whose company lvl5 (as in Level 5, or full automation) is working on independent mapping software, told Jalopnik better maps might have prevented the fatal 2016 Florida crash in which a Tesla driver using Autopilot hit the side of a truck in an intersection. The truck was ultimately found to be at fault, but Tesla posited at the time that the Model S Autopilot (designed to be used only on grade-separated highways) might have mistaken the truck for a bridge or another overhead structure—an error, Kouri said, that could have been overcome had a map told the car there was no bridge there.

In Tesla’s case, the automaker is using “fleet learning” to help overcome the kinds of errors that simple radar navigation might make. In a 2016 blog post, Tesla gave the example of an overhead sign positioned at a rise in the road, giving the illusion of a collision course. “If several cars drive safely past a given radar object, whether Autopilot is turned on or off, then that object is added to the geocoded whitelist,” the company explained. In that way, familiarity with a given environment makes Autopilot better. But Tesla’s maps are more for navigation than for obstacle avoidance, which is one reason you can use the software on any highway you want, even if no Tesla has driven there before.
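Tesla hasn’t published how that whitelist works under the hood. But the basic idea the blog post describes, tagging a stationary radar return with a location and learning to ignore it once enough cars have passed it safely, is simple enough to sketch. The names, thresholds, and coordinates below are invented for illustration; this is a toy version of the concept, not Tesla’s code.

```python
# Hypothetical sketch of a "geocoded whitelist" for stationary radar returns.
# Nothing here reflects Tesla's actual implementation; names, thresholds,
# and coordinates are invented for illustration.

from collections import defaultdict

def geocode(lat, lon, precision=4):
    """Round coordinates so nearby detections of the same fixed object
    (say, an overhead sign) collapse into one map cell (~10 m here)."""
    return (round(lat, precision), round(lon, precision))

class GeocodedWhitelist:
    def __init__(self, safe_pass_threshold=5):
        self.safe_passes = defaultdict(int)   # map cell -> count of safe pass-bys
        self.threshold = safe_pass_threshold

    def record_safe_pass(self, lat, lon):
        """A car drove past this stationary radar object without incident."""
        self.safe_passes[geocode(lat, lon)] += 1

    def is_whitelisted(self, lat, lon):
        """Should the planner ignore a radar return at this location?"""
        return self.safe_passes[geocode(lat, lon)] >= self.threshold

# Usage: an overhead sign at a rise in the road keeps producing radar
# returns that look like an obstacle; after enough safe pass-bys, returns
# at that location are ignored.
wl = GeocodedWhitelist()
for _ in range(5):
    wl.record_safe_pass(37.4043, -122.0748)   # hypothetical coordinates
print(wl.is_whitelisted(37.4043, -122.0748))  # True
```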

For companies experimenting with higher levels of autonomy, there can be no uncertainty about the contours of the surrounding environment. In 2017, Waymo clocked more than 350,000 miles of autonomous vehicle travel in California with only 63 instances of “disengagement,” or human takeover. But that testing occurred in 15 mostly small California cities. Why not cruise the whole state? In part, Waymo wrote in 2016, because “before we drive in a new city or new part of town, we build a detailed picture of what’s around us using the sensors on our self-driving car.” As far back as two years ago, the feedback between sensors and maps allowed Waymo cars to know their positions on the road to within 10 centimeters. It’s a Borgesian endeavor: “A map for self-driving cars has a lot more detail than conventional maps (e.g. the height of a curb, width of an intersection, and the exact location of a traffic light or stop sign), so we’ve had to develop a whole new way of mapping the world,” the company wrote. That’s expensive and time-consuming, even for the world leader in digital maps.
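Waymo hasn’t published its map format, but the features it name-checks, curb heights, intersection widths, the exact positions of traffic lights and stop signs, hint at how much richer each record is than the road graph behind a navigation app. Here is a purely hypothetical sketch of what one entry might hold; the schema and every value in it are invented.

```python
# Purely illustrative sketch of one "HD map" record, based on the kinds of
# features Waymo says it captures (curb height, intersection width, exact
# signal positions). The schema and the values are invented.

from dataclasses import dataclass, field

@dataclass
class TrafficSignal:
    lat: float
    lon: float
    height_m: float          # mounting height above the road surface

@dataclass
class HDMapIntersection:
    intersection_id: str
    width_m: float           # curb-to-curb width
    curb_height_m: float     # useful for matching against live lidar returns
    stop_sign_positions: list = field(default_factory=list)
    signals: list = field(default_factory=list)

# A navigation map stores little more than "these two roads cross here";
# an HD map pins every fixed feature down to centimeters so the car can
# localize itself by comparing what its sensors see against the record.
example = HDMapIntersection(
    intersection_id="castro_and_central",            # hypothetical
    width_m=24.5,
    curb_height_m=0.15,
    stop_sign_positions=[(37.3946, -122.0810)],
    signals=[TrafficSignal(37.3947, -122.0809, height_m=5.2)],
)
print(example.curb_height_m)  # 0.15
```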

In short, the challenge of cartography may be a limit on the geographic reach of driverless cars. Waymo plans to unveil a 100-square-mile (roughly 10-by-10-mile) taxi service area in Phoenix by the end of this year, which seems like a massive territory for driverless vehicles—until you remember the average commute in Phoenix is 11.4 miles and would barely be contained in that zone.
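A quick back-of-the-envelope check shows how tight that fit is: an 11.4-mile trip is longer than the 10-mile side of such a square and squeezes in only if it runs close to the roughly 14.1-mile diagonal, and real road routes are longer than straight lines.

```python
# Back-of-the-envelope: does an average 11.4-mile Phoenix commute fit
# inside a 10-by-10-mile square? Only along something close to the diagonal.
import math

side = 10.0                  # miles, width of the service area
avg_commute = 11.4           # miles, average Phoenix commute

diagonal = math.hypot(side, side)         # ~14.1 miles
print(f"diagonal is about {diagonal:.1f} miles")
print(avg_commute > side)                 # True: longer than any side
print(avg_commute < diagonal)             # True: fits only near the diagonal
```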

Of course, there are skeptics of this cartographic arms race who believe the cars may get good enough at reading the terrain before the companies successfully put it all in a world-size 3-D map. Speaking to CNN last year, analyst Rebecca Lindland described this scenario as a “conundrum” in which the depth of maps and skills of cars evolve in tandem and might ultimately be ready at the same time.

But in many cases, that vision rests on extensive deployment of vehicle-to-vehicle and infrastructure-to-vehicle communication—another big upfront investment in a particular place. At least for the moment, maps appear to be key. And this means that a city like Phoenix may also have a first-mover advantage, as one of the few major cities to have undergone the meticulous surveying necessary to support the current industry leader’s AV tech. Waymo CEO John Krafcik already has his eye on a second destination for Waymo taxis, but if the rollout continues at this pace, Phoenix might spend years in a little bubble of vehicle autonomy. A lot of the positive impacts of autonomous driving can’t be realized at that scale, especially in a place without contained travel patterns. Is Phoenix’s early lead enough to change the way people there get around?

In any case, the Tesla crash reminds us that Autopilot doesn’t see that much more than you do. “This is not a self-driving vehicle. Far from it,” John Snyder, of the generally bullish Autoblog, wrote on Monday about the unreleased Model 3 Autopilot. “It seems misleading—or even dangerous—to call it a semi-autonomous system considering the vague amount of automation such a descriptor provides.”

Which means there is a map a Tesla needs to fall back on to do its job properly, and it hasn’t been coded into your car. It’s the one your eyes are creating every second inside your head as you sit behind the wheel, ready to take control.