There are few smartphone annoyances worse than your device not doing what you expect it to. You tap the shutter button, but the camera fires a split second too late. You’re in the middle of an important meeting, but your phone is blowing up with notifications. In its new Pixel 3 smartphone, which debuted Tuesday, Google has added a number of subtle A.I.-powered features that make the device react not just more quickly but in the way you always thought your phone should work.
The camera is perhaps the most dramatic place you can see Google’s A.I. at work. Reviewers have consistently lauded the Pixel for having one of the best smartphone cameras available, but Google introduced several new features that further anticipate your needs. The first is Top Shot, a tool that takes a burst of photos and selects the best image from that series. It appears that, like Apple’s Live Photos, it starts capturing shots shortly before you hit the shutter, as well as shortly after. (TechCrunch called Top Shot “Live Photo but useful.”) It then uses machine learning to analyze the images, screen out blurry shots or ones with blinking eyes or weird faces, and surface the best of the bunch. It does what you’d hope your camera would do: capture a great photo even if you hit the shutter a split second too late.
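The core of that pipeline, pick the sharpest frame from a burst, can be illustrated with a toy sketch. Google’s actual Top Shot model is proprietary and far more sophisticated (it also detects blinks and expressions); the simple sharpness metric below, the variance of a discrete Laplacian, is only an assumed stand-in for illustration.

```python
# Toy sketch of burst selection: score each frame by a simple sharpness
# metric (variance of a 4-neighbour Laplacian) and keep the sharpest one.
# Frames here are plain 2D lists of brightness values, 0-255.

def laplacian_variance(frame):
    """Sharpness proxy: variance of the discrete Laplacian over the frame."""
    h, w = len(frame), len(frame[0])
    values = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (frame[y - 1][x] + frame[y + 1][x] +
                   frame[y][x - 1] + frame[y][x + 1] - 4 * frame[y][x])
            values.append(lap)
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def pick_top_shot(burst):
    """Return the frame with the highest sharpness score."""
    return max(burst, key=laplacian_variance)

# A high-contrast checkerboard frame vs. a flat (blurred-out) one:
sharp = [[(x + y) % 2 * 255 for x in range(8)] for y in range(8)]
blurry = [[128 for x in range(8)] for y in range(8)]
assert pick_top_shot([blurry, sharp]) is sharp
```

A featureless frame has zero Laplacian variance, so any frame with real edge detail beats it; real systems combine sharpness with face and blink detection before ranking.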
Another camera feature called Motion Auto Focus uses object recognition to keep a specific person, animal, or moving object in focus as it zooms around the frame. It aims to eliminate blurry action shots or the need to shoot a dozen burst photos. And then there’s Night Sight, which uses machine learning to determine the right colors for a scene based on the content in the image. The result: photos shot in darker hours that are significantly brighter than otherwise—the kind of photos you’d hope your phone would be able to take, rather than the noisy, pixelated result you often get in those conditions.
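One widely documented ingredient of low-light photography like this is merging many short, noisy exposures, so that random sensor noise averages out (its standard deviation shrinks roughly with the square root of the frame count). The sketch below shows only that averaging step on made-up pixel data; Night Sight’s real pipeline also aligns frames and, as described above, applies a learned model for color, none of which is shown here.

```python
import random

def merge_frames(frames):
    """Average equal-length rows of pixel values element-wise."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

random.seed(0)
true_scene = [40, 60, 80, 100]  # dim "ground truth" pixel values

# Simulate 64 short exposures, each corrupted by Gaussian sensor noise.
noisy_frames = [[p + random.gauss(0, 10) for p in true_scene]
                for _ in range(64)]

merged = merge_frames(noisy_frames)
# Each merged pixel lands much closer to the true value than any single
# frame: the noise std drops from 10 to roughly 10 / sqrt(64) = 1.25.
```

Brightening the merged result then yields a usable photo instead of the noisy, pixelated one a single dark exposure would produce.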
Google has also honed its Duplex technology, which uses an incredibly lifelike-sounding A.I. to make or answer calls on your behalf. In the Pixel 3’s call screening feature, it leverages Duplex so that you never have to answer a robocall or telemarketer call again. After tapping a button on the device’s screen, the A.I. takes over, answering on your behalf and asking why the person is calling. For transparency, the phone transcribes the entire conversation onto your screen, should you be curious about what’s happening. It sounds like having a secretary field annoying phone calls on your behalf, without the guilt of asking a real person to do it.
Another new feature called Flip to Shhh lets you silence notifications by flipping your phone face down onto a desk or tabletop rather than diving into your phone’s settings to enable Do Not Disturb mode. Google also made some clever changes for when the device is on its wireless charging dock: Voice-based communication is now the default, and notifications are now easier to see or dismiss from a distance. When it’s charging on this base and not in use, the phone also uses photos from Google Photos to transform into a photo frame (much like the newly announced Google Home Hub).
Individually, each of these features brings the device up to par with, or beyond, other leading smartphones. Taken together, they demonstrate that Google is trying to make its smartphone better anticipate and accommodate user needs through A.I. and machine learning in subtle, methodical ways. None of these A.I.-based updates is jarring or screams “privacy violation.” Google is taking its time getting consumers used to the idea that their smartphone is quietly controlling some experiences for them. The company likely has a grand vision for mobile devices that’s far more reliant on A.I., but it’s learned from its missteps with products like Google Glass. With the Pixel 3, it’s demonstrating that when that vision succeeds, it will make your phone a little bit smarter and easier to use.