You’re driving down a busy suburban street when a bicyclist suddenly raises his arm and weaves out in front of you. You tap the brakes, only for the bicyclist to change his mind and settle back into his bike lane. Then, just as you’re speeding up again, he weaves in front of you again. Are you irritated yet?
Google’s self-driving car isn’t. It simply slows down again and waits politely for the cyclist to make up his mind. It will do this as many times as it takes to be sure that it can pass the cyclist without endangering anyone.
The self-driving car has been safely cruising California highways for years now. On Monday, Google released a new video (see below) highlighting its increasing aptitude for navigating city streets—or, at least, moderately busy suburban streets. While highways entail the peril of high speeds, city driving introduces far more variables to the equation for the machine-learning algorithms that govern Google’s autonomous cars. Accordingly, Google’s drivers have been logging thousands of miles in the company’s hometown of Mountain View, testing the car’s ability to deal with unpredictable obstacles ranging from jaywalkers to construction cones to cars backing into and out of parking spots.
“We’ve improved our software so it can detect hundreds of distinct objects simultaneously—pedestrians, buses, a stop sign held by a crossing guard, or a cyclist making gestures that indicate a possible turn,” writes Chris Urmson, director of Google’s self-driving car project, in a blog post. “A self-driving vehicle can pay attention to all of these things in a way that a human physically can’t—and it never gets tired or distracted.”
It also never gets impatient. That sounds obvious, but it could be a godsend for bicyclists in particular if and when self-driving cars begin to replace human drivers on America’s roads. In the video below, you can see how the car’s computers process various types of actors and obstacles as the car makes its way through intersections and across railroad tracks.
In all cases, the car errs on the side of caution—something we human drivers could stand to do a little more often. And its decisions are rigorously data-driven. For instance, Google’s car will wait for a split second when a light turns green, because research shows that red-light runners are most likely to come flying through the intersection in the first moments after the signal changes.
As far as it has come, the self-driving car technology remains very much a work in progress. This becomes evident when you read a detailed firsthand account in The Atlantic Cities today by Eric Jaffe, who recently took a ride in the back seat of a self-driving Lexus RX 450h with Google’s Dmitri Dolgov. While the overall experience is so smooth that Jaffe has to stop himself from heaping praise upon the car, there are two instances when Dolgov feels compelled to take the wheel back from the machines. Here’s Jaffe describing the first of those two human interventions:
We are in the left lane on Mountain View’s West Middlefield Road when some road work appears up ahead. A dozen or so orange cones guide traffic to the right. The self-driving car slows down and announces the obstruction — “lane blocked” — but seems confused what to do next. It won’t merge right, even though no cars are coming up behind us. After a few false starts, Brian Torcellini takes the wheel and steers around the cones before reengaging auto mode.
“It detected the cones and it tried to go around them, but it wasn’t confident,” says Dmitri Dolgov, typing at the laptop. “The car is capable of a lot of things, but unless it’s absolutely sure that it can handle some situation well, it will err on the conservative side.”
The second tense moment in Jaffe’s ride came when a utility truck suddenly cut off the self-driving car from the left. Dolgov reacted before the car’s computers did, stomping the brakes himself. A few days later, Google emailed Jaffe to say that a simulation showed the self-driving car would have done the same thing if Dolgov hadn’t moved first.
That’s comforting, although only to a certain extent. The hardest part about adjusting to self-driving cars, if we ever do, will be trusting them to do the right thing in every possible circumstance. The first fatal accident caused by “computer error”—and even the car’s backers admit there may be one eventually—is likely to cause an uproar that could set the project back years.
But here’s the thing: Even if self-driving cars never achieve perfection, it’s looking increasingly likely that they’ll prove far safer than your average cellphone-checking, selfie-snapping, road-raging, falling-asleep-at-the-wheel human driver ever was. In fact, given that Google’s cars have now logged nearly 700,000 miles without causing a single accident, it seems that they already are.