Apparently self-driving cars are actually happening? I read that Ford patented a movie screen that projects onto the inside of windshields. And apparently experts are convinced that people are going to have tons of sex in driverless vehicles? Is this stuff for real?
A patent isn’t a promise and “experts” sometimes aren’t, but there’s certainly something going on here. Powerful, well-resourced companies are actively trying to automate our cars. In 2016, Apple increased its research and development budget by almost 30 percent, leading some to speculate that it was stepping into the automotive market. While other analysts disagree, Apple recently invested $1 billion into the Chinese ride-hailing app company Didi, indicating a clear interest in automobiles, even if it’s not developing a car of its own.
Meanwhile, Tesla—which is more definitively invested in automated vehicles—has been actively hiring away engineers from Apple to build up its technological infrastructure. Older car companies are getting in on the game as well, most dramatically via General Motors’ purchase of the self-driving startup Cruise Automation. One way or another, we’re seeing a convergence between consumer computing technology and the automobile industry. Self-driving vehicles are the flashpoint of that convergence.
But why? Do we really need to automate everything?
The affirmative answer is that it’s about safety. Whereas humans aren’t actually that great at driving, automated vehicles are terrific at collision avoidance, pace matching, and so on. Long-distance trucking, in particular, is a dangerous profession, with driver exhaustion causing a disquieting number of wrecks. To that end, you see companies like Otto, which aims to retrofit trucks with self-driving systems. While drivers would still need to handle their big rigs in more confined quarters, and they’d need to be at the wheel at all times, Otto’s technology would take over on highways, hopefully making our roads a lot safer.
Will it, though? Won’t all of this automation just make us lazier?
It might! Worrying over our self-driving future in The Glass Cage, Nicholas Carr looks to the model of commercial airplanes, which have been almost flying themselves for decades, arguably degrading the skills of pilots in the process. “There’s growing evidence that recent expansions in the scope of automation also put cognitive skills at risk,” he writes. In our relentless march toward universal automation, he argues, we risk collapsing into “automation complacency,” a condition that “takes hold when a computer lulls us into a false sense of security.” If catastrophe occurs, we may not be equipped to respond to it, accustomed as we are to everything going smoothly.
A similar concern occurred to Slate’s Will Oremus when he had the opportunity to try out Tesla’s autopilot mode. As he notes—
Hold up. Tesla already has a self-driving car?
Of a sort. Though it still requires a great deal of human input, the Model S can brake, change lanes, and steer itself under certain circumstances. As with Otto’s planned long-haul trucking solution, Tesla’s system still always requires a human behind the wheel, making it more like a helpmate than a fully automated driver.
OK, sorry. So what’s the problem again?
As Oremus notes, partial automation may convince people to embrace risky behaviors, like texting or drunk driving. Until we remove humans from the loop altogether, then, these technologies may do more to persuade us that we’re safe than to actually make us safer. There probably would be a lot fewer accidents in a world where all the cars really were driverless, and some companies are coming up with crazy solutions to minimize risk—like an adhesive that would glue pedestrians to the hoods of vehicles after a collision in an attempt to prevent further injuries. But we’re still a long way from the point where everyone has to give up their driver’s licenses, and we may be introducing an entirely new class of risks in the meantime.
But everything will be fine in the long term?
Sure, we’d probably be safer if our cars were fully automated. It would also mean less wear and tear on roads and might help decrease the environmental impact of our car culture.
In the long term, we may be looking at an end to private car ownership. There’s a reason that companies like Uber and Lyft are investing in self-driving technologies: They seemingly want to remove humans from the equation altogether, deploying fleets of robot taxis to replace the relatively inefficient human contractors they’re working with now. If they succeed, especially if a particular company or conglomerate wins out, we might end up giving over our transportation infrastructure to private interests—and sacrificing a great deal of individual autonomy in the process. (They might cost local governments a lot of money, too.)
Is that really realistic? I mean, are truly driverless systems even possible?
Not only are they technically possible; they’ve been around for more than a decade. Way back in 2005, a handful of cars completed the Defense Advanced Research Projects Agency’s Grand Challenge, making their way along a 128-mile course through difficult desert terrain. What’s more, they did it without humans behind the wheel. More recently, Google’s self-driving cars have been cruising around Mountain View, California, racking up thousands of hours of road time in the process. And though there’s always been someone behind the wheel, until recently, all accidents they’d been involved in were attributable to human—rather than machine—error.
How is this even possible?
These vehicles bring together a host of complex systems. They’re loaded with cameras that scan for obstructions and laser turrets that create 3-D models of the surrounding terrain. They’re plugged into GPS systems, and some of them may even connect into transportation infrastructure in the near future, giving them information about changing traffic signals and other data points.
But the most important component of any modern self-driving car has to be the machine learning algorithms that bring all its systems together. It’s difficult to anticipate how a vehicle will perform under various circumstances—and hence difficult to train it in advance. Machine learning allows a computer to build on prior experiences, making it easier for a self-driving car to perform under unexpected conditions. As Burkhard Bilger explains in a New Yorker article on the state of the self-driving car, sometimes designers still have to intervene manually—teaching a vehicle how to respond to stop signs, for example—but machine learning allows for a different kind of flexibility.
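To make the idea of building on prior experience concrete, here is a deliberately toy sketch in Python. It has nothing to do with any real self-driving stack (the feature names, thresholds, and situations are all invented for illustration): the “car” simply remembers labeled past situations and handles a new one by recalling the action from the most similar example—the simplest form of learning from experience.

```python
# Toy illustration only: a "car" that learns from prior experience by
# recalling the action taken in the most similar remembered situation.
# This is a nearest-neighbor lookup, not any real autonomous-driving system.

def nearest_label(memory, situation):
    """Return the action recorded for the most similar past situation.

    `memory` is a list of (features, action) pairs; `situation` is a
    feature tuple, here (distance_to_obstacle_m, own_speed_mps).
    """
    def squared_distance(a, b):
        # Compare situations by summed squared difference of their features.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _, action = min(memory, key=lambda pair: squared_distance(pair[0], situation))
    return action

# A few hypothetical "prior experiences": (obstacle distance in m, speed in m/s).
experience = [
    ((2.0, 10.0), "brake hard"),
    ((15.0, 10.0), "slow down"),
    ((80.0, 25.0), "maintain speed"),
]

print(nearest_label(experience, (3.0, 12.0)))   # a close obstacle at speed
print(nearest_label(experience, (70.0, 20.0)))  # a mostly clear road ahead
```

The point of the sketch is that nothing here was hand-coded as a rule: the behavior for a new situation falls out of accumulated examples, and adding more experiences changes the system’s responses without rewriting its logic—a property real machine-learning systems share, at vastly greater scale and sophistication.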
Are there limitations?
Machine learning can only do so much at this point. As Alexis Madrigal noted in the Atlantic in 2014, Google’s self-driving cars are great at driving around Mountain View because they effectively have extremely sophisticated digital models of the area built into their software. Madrigal refers to the process not as cartography but as “crawling,” a laborious effort to make the world “legible and useful to computers.”
Plop one of Google’s cars down almost anywhere else in the world, and it’s not going to perform nearly as well, since it won’t have that virtual double of the real environments that it’s passing through. According to Madrigal, Google had only mapped 2,000 miles of road in 2014, a mere fraction of the roughly 4 million miles in the United States alone. As Oremus has argued, that may mean that we’ll see self-driving cars roll out more widely in places like Dubai, where it’s easier to control for situational uncertainty—and maybe easier to map the territory as a whole.
It sounds like we’re still going to be dealing with driverless cars, one way or another. How long do we have?
If you’re just talking about the time until we see some driverless vehicles on the road, probably not that long. Anthony Foxx, secretary of the U.S. Department of Transportation, went on the record in 2015 with the claim that “we’re going to see [fully autonomous cars] within five years,” though he allowed that this “just means market availability.” A more comprehensive timeline assembled by Recode suggests that by 2030, “Automakers will stop manufacturing cars that don’t have at least some highly autonomous features.” It goes on to predict that by the middle of the 21st century, we’ll witness total fleet turnover, at which point virtually all vehicles on the road will be at least partially autonomous. If that’s true, it’s possible that driving your own car will rapidly come to be seen as a dangerous affectation, like smoking.
Of course, before we get there, the industry will have to overcome a number of regulatory hurdles. The Obama administration has proposed putting $4 billion toward facilitating the development of self-driving vehicles. Ultimately, that effort might lead to clearer national standards. For now, though, self-driving vehicles are regulated differently in different jurisdictions, much like drones. Until that changes, we’re unlikely to see really revolutionary commercial developments.
I heard a rumor that you don’t know how to drive. Are you eager for the robocars to take over?
This article is part of the driverless cars installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down.
Future Tense is a collaboration among Arizona State University, New America, and Slate. To get the latest from Futurography in your inbox, sign up for the weekly Future Tense newsletter.