Artificial intelligence is creeping into our smartphones in small, subtle ways. Google’s Pixel 3, announced Tuesday, can answer robocalls on your behalf thanks to Google’s Duplex technology and Google Assistant. Meanwhile, Android P, the latest version of Google’s mobile operating system, can learn from how you interact with alerts and suggest silencing notifications from particular apps, reducing the number of unnecessary intrusions your phone makes into your daily life. But there’s another new phone in the pipeline that takes these kinds of developments further. By pairing them with more robust voice control, it may help fill in the picture of how we’ll talk to the next generation of smartphones—and what they’ll learn about us in order to talk back.
Essential, the company formed by former Android chief Andy Rubin, is developing a smartphone that “will try to mimic the user and automatically respond to messages on their behalf,” Bloomberg reports. The device will shun the extra-large edge-to-edge displays currently trending in smartphones in favor of a smaller display and a focus on voice. The eventual goal, based on statements Rubin made in a 2017 interview, is a phone that’s “a virtual version of you.” The device might not be ideal for everyone, but it would suit those who aim to spend less time glued to their screens—a problem both Apple and Google have begun trying to solve with behavior-monitoring software—as well as those who want a smaller-screened device to complement a larger smartphone or phablet. Essential’s phone could be the first in a new breed of smartphones designed specifically around voice interactions and A.I., rather than apps, photos, and video.
Right now, phones can’t match devices like Amazon’s Echo or Google’s Home when it comes to voice control. The Essential device points to a future where that’s changed. “Voice is going to be a huge aspect of the home screen,” Greg Nudelman, a mobile-experience strategist and CEO of the UX design company Design Caffeine, told me. When digital assistants and voice control first debuted on connected devices, they were more gimmick than useful tool. Their knowledge bases and feature sets were extremely limited, but in the seven years since Apple introduced Siri, the four years since Amazon first showed off Alexa, and the two years since Google Assistant debuted, these virtual assistants’ capabilities have expanded dramatically to encompass smart-home device control, interactions with third-party apps and services, and information about a far wider variety of subjects and events. Voice assistants still have issues: They don’t always understand queries correctly, their functionality has distinct limits, and it’s not always socially acceptable to speak a command to your phone in public. (We’ve also seen, as in the case of Google’s Duplex A.I., that these systems can cross the line into creepy territory.) But paired with A.I. and something called “contextual awareness,” a phone like Essential’s could advance the technology significantly.
The smartphone home screen we’re currently familiar with—a grid of app icons, perhaps augmented by a handful of widgets, as on Android phones—isn’t necessarily the one we’ll have in a few years. Instead, the experience could center around contextual awareness, the concept of your phone being cognizant of your activities and whereabouts in a useful way. “As a result of voice being the primary mechanism for more and more functions, the phone will become less app-centric and more subject-centric, centered on key relationships in your life—people, objects, groups, organizations,” Nudelman said.
The large home screen dominated by apps could be replaced either by one focused on relationships and people or by one centered around notifications that need attention. We’re already seeing the beginnings of this: Both Android and iOS have notification centers that are proving increasingly important to the overall smartphone experience. Notifications are grouped more logically than in the past, now include more context (such as thumbnails of photos that have been sent), and offer shortcuts to quickly act on alerts, like the ability to reply directly to a text. These features bypass the need to visit a specific app to take action on a notification.
And with contextual awareness, a device could more easily learn how to respond on a user’s behalf, as Essential’s phone will reportedly do. If it senses you’re driving, it could reply to a message saying that you’ll get back to the sender once you arrive at your destination—ditto if you’re working out, or if it knows you’re in a meeting for the next hour. Knowing your location, your likely activity, or what’s next on your schedule, the device could also tailor notifications, apps, and other experiences so that they’re specifically relevant to your current situation, sparing you tasks or alerts that are better suited to another time.
Of course, such an experience would have drawbacks. With a smaller screen, watching HD video streams may be tough. It’s also unclear whether this phone might provide a more limited, curated Android experience than users are accustomed to—device owners might have to use Essential-built apps, rather than third-party alternatives, for the experience to work, for example. (At the very least, its software will surely support only a limited number of third-party apps, as we’ve seen with the early stages of other digital assistants.) And then there’s the question of whether the phone’s A.I. will even work as promised.
A prototype of Essential’s phone may be ready by the end of the year, according to Bloomberg, in time for a quiet debut with the company’s partners at CES in January. It’s certainly an ambitious project: While Google’s Pixel 3 may use more A.I. than any phone yet, it does so with conscientious subtlety. Essential would be throwing that restraint out the window and offering consumers a radically different smartphone experience. Its biggest bet is whether consumers will be willing to take the leap too.