Which Smart Speaker Should You Trust Most—and Least?

How to decide on a virtual assistant if you care about privacy.

Google Home and Amazon Echo devices wearing disguises.
Photo illustration by Slate

If virtual assistants haven’t yet taken over the entire tech world, they’ve at least conquered CES, the annual tech expo that’s going on right now in Las Vegas. Alexa, Google Assistant, and various also-rans are de rigueur among this year’s wares at the conference, animating everything from lawnmowers to dishwashers to a $7,000 smart toilet.

Setting the stage for the event were dueling announcements last week from Amazon and Google about the reach of their respective A.I.s. Amazon disclosed that it had sold 100 million Alexa devices. Days later, Google reported that its Assistant would soon be on more than a billion devices, although that’s cheating a bit, since the vast majority of those are not smart speakers or appliances but Android phones.

Fudgy figures aside, it’s clear that both have already infiltrated millions of lives, and CES 2019 foreshadows a future in which the majority of home appliances respond to either “Alexa,” “Hey Google,” or both.

Yet this A.I. takeover of the home comes at a time when there’s a countercurrent in the industry and society toward greater awareness of online privacy risks. Whether a CES-like world of talking lightbulbs and listening ovens comes to pass will depend on tech companies’ ability to persuade people that these digital cohabitants aren’t spying on them—or, at least, that the intelligence the devices gather won’t somehow come back to haunt their owners.

As increasingly privacy-conscious buyers navigate this noisy new world, they are more and more likely to ask: Which virtual assistant, if any, can I trust? If I bring Alexa, Google, Siri, Cortana, or Bixby into my home—which one will turn out to be the snitch?

The answer is not yet obvious, although we have some early clues.

Both Google and Amazon have assured the public and the media countless times that their A.I. products, like virtual assistants and smart speakers, are not spying on us. Both promise that they only record and store what we say in our direct interactions with the devices, which are triggered by their respective wake words: “OK Google,” “Alexa,” etc. The rest of the time, they’re listening, but only for those wake words—and they’re not storing anything they hear or sending it to a remote server. On most smart speakers, you can pause even that by hitting their mute buttons. The information they do record is stored in your Google or Alexa account, where you can view it and delete it if you want. In theory, it’s accessible only to you.
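To make the model the companies describe concrete, here is a minimal conceptual sketch in Python of wake-word gating. This is an illustration of the claimed behavior, not any vendor's actual code: audio is checked locally, nothing is stored or uploaded until a wake word is heard, and the mute switch suspends even the local listening. The class and its names are hypothetical.

```python
from collections import deque

WAKE_WORDS = {"alexa", "ok google"}  # illustrative wake words

class SmartSpeaker:
    """Toy model of the wake-word gating the companies describe."""

    def __init__(self):
        self.muted = False
        self.recording = False
        self.uploaded = []               # what actually leaves the device
        self._buffer = deque(maxlen=2)   # short local buffer, never sent

    def hear(self, phrase: str):
        if self.muted:
            return                       # mute disables even local listening
        if self.recording:
            self.uploaded.append(phrase)  # only post-wake-word speech is stored
            self.recording = False        # interaction ends
        elif phrase.lower() in WAKE_WORDS:
            self.recording = True         # wake word opens the microphone
        else:
            self._buffer.append(phrase)   # discarded locally

speaker = SmartSpeaker()
speaker.hear("what's for dinner?")   # ignored, stays on the device
speaker.hear("alexa")                # wake word detected
speaker.hear("set a timer")          # this request is recorded and sent
print(speaker.uploaded)              # -> ['set a timer']
```

The privacy incidents described below are, in terms of this sketch, cases where the gate misfired: the device entered the `recording` state when no one intended it to.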

Amazon and Google would like you to think it’s that simple: Their assistants only store what you say when you’re talking to them directly, and no one has access to that information except you. Yet both have had snafus in which those constraints failed to hold.

In Google’s case, some early versions of the Home Mini smart speaker were found to be activated almost constantly by “phantom” touches of their top button, with the result that they were listening throughout the day when they weren’t supposed to be. (Google fixed the problem by disabling the top-button activation feature.) No one’s information was compromised, as far as we know, but it was an alarming reminder that it’s hard to know for sure that an always-on assistant only listens to what it’s supposed to hear.

As for Amazon, Alexa has suffered multiple privacy embarrassments in the past year, some more worrying than others. First came the “creepy laugh,” in which Alexa devices apparently mistook other commands for “Alexa, laugh” and emitted spontaneous peals of mirth. Nobody’s privacy was at stake, but it illustrated the imperfection of relying on wake words to protect users. More disconcerting was an instance in which one Alexa device in a home filled with smart appliances recorded a family’s private conversation and sent it to a contact, seemingly on its own. It turned out that Alexa had misheard a word in the conversation as “Alexa,” then misheard a subsequent word as “send message,” and finally misinterpreted yet another snippet of the family’s ongoing conversation as the name of someone in their contact list.

Just last month, an Alexa user was mistakenly granted access to a stranger’s archive of recordings, which Amazon chalked up to human error. That’s believable, though it isn’t much of an excuse, since human error will always be a risk factor whenever a company collects sensitive data that isn’t encrypted. And on Thursday, the Intercept reported, based on anonymous sources, that Amazon’s Alexa-compatible Ring security cameras have been giving Ring employees and others outside the company live access to the feeds from some customers’ cameras, including cameras inside their homes.

Taken together, these bugs indicate that voice-powered devices are not perfectly safe and that even the most reputable tech companies are prone to mistakes that will occasionally compromise the information that their personal assistants collect. As these assistants make their way onto more and more third-party gadgets, the risks go up. It probably doesn’t help that Google and Amazon are openly racing to see who can sign the most deals with the most hardware partners. While both companies have incentives to steer clear of privacy scandals, the land grabs on display at CES make clear that the rivals are also prioritizing speed and scale.

One silver lining is that, glitches aside, some cybersecurity experts say smart speakers in particular are actually much harder to hack remotely than, say, a website or personal computer. Jake Williams, the founder of Rendition Infosec and a former NSA hacker, pointed out to CNBC that smart speakers are designed to accept inputs only from two sources: a voice in the room and the company that makes them. Users can’t use smart speakers to browse the open web, click malicious links, or download unauthorized third-party software, which limits the “attack surface” for would-be hackers. Then again, if you’re paranoid, you might note that this wouldn’t necessarily prevent manufacturers from granting secret or backdoor access to an organization like, well, the NSA—or handing it over in response to a law enforcement subpoena.

And it’s worth keeping in mind that your “attack surface” expands with each listening device you add to your home. A hacker might not be able to break into your Amazon Echo by injecting code, but a burglar could conceivably disable your Alexa-powered door locks by shouting the right command. Both Google and Amazon offer voice recognition features that could help thwart this, but they’re far from perfect.

The privacy-conscious might also sensibly shy away from anything with a video camera, such as Amazon’s Echo Look and Echo Show, or Google Assistant–powered smart displays with cameras. (Google’s own Home Hub smart display was wisely built without a camera, almost certainly for privacy reasons.) It’s not so much that cameras are easier to hack but that the data they record can be so sensitive.

On the other hand, most of us have cameras on our phones and laptops, too. Which illustrates a broader point: The privacy risks of smart speakers and virtual assistants might be more obvious than those of other computing devices and services that we’ve already largely accepted as a society, such as Gmail or iPhones. But they’re not necessarily any more dangerous. For the most part, they present the same risks, just in a newfangled form.

A more mundane set of privacy considerations involves what Google, Amazon, or other virtual assistant–makers might do with your data themselves. Google’s advertising business is powered by users’ online behavior and preferences, and its Assistant represents another way for it to gather that information. That might be reason enough for some to choose a different voice platform, although again, it’s not clear that the information Assistant gathers is any more invasive than what Google already gets from users of Android, Gmail, or Google Maps. Amazon’s business model hasn’t historically revolved around targeted ads, and the company maintains it has no plans to put ads on Alexa. But the company has been moving aggressively into the advertising business, and it’s widely assumed to be at least exploring Alexa-based ads.

What about the alternatives, then? One obvious company to avoid might be Facebook, whose Portal video-calling device was released at a time when its reputation for protecting users’ privacy is in shambles. Facebook did give the Portal’s camera a “privacy shutter,” which might reassure some. But it has already had to walk back its claims that the device would not collect personal data.

Microsoft’s Cortana and Samsung’s Bixby may have their merits, but neither has emerged as a serious competitor to Google Assistant or Alexa as a platform. Samsung announced at CES that its smart TVs will now work with Google Assistant and Alexa as well as Bixby, while Cortana began collaborating with Alexa last year—a signal that neither of those voice assistants is poised to go head-to-head with the big two at this point.

That leaves Apple, which famously shuns CES yet couldn’t resist getting in a privacy dig at its rivals this year. The company plastered a giant ad on the side of a 13-story building saying, “What happens on your iPhone, stays on your iPhone.” It’s a clever play on the Las Vegas slogan and an unsubtle jab at how Google and Amazon store your virtual assistant data on their servers. Apple’s own recently released smart speaker, the HomePod, anonymizes your requests to Apple’s servers, so that they don’t stay tied to your account. It also has high-end sound quality.

That sounds great until you realize that Siri, which powers HomePod, is significantly less intelligent than Alexa or the Google Assistant. Siri struggles to understand what you’re saying or to answer questions about the world, limiting its functionality to basics like playing music, setting timers, turning on the lights, or sending a text message. And the HomePod in particular comes with a significant privacy hole of its own: Because it lacks voice recognition and doesn’t support multiple accounts, anyone who gains access to your house can use it to pose as you—by, say, sending a text message from your account. Oh yeah, and it costs $350, more than three times the price of an Echo or Google Home.

HomePod might still be your best bet if what you really crave is a voice-activated speaker, privacy is your top concern, and cash is not an obstacle. But Siri isn’t really in Google’s or Alexa’s class as a voice platform, and you won’t find it built into the third-party smart devices that are being hawked at CES. So where does that leave the rest of us?

For now, if you really want to talk to stuff in your house, you’ll have to choose between Google, a company that’s well-known for building personal profiles on its users, and Amazon, a company that isn’t known for that but hasn’t exactly ruled it out. That might make Amazon sound like the obvious choice, but there are counterpoints: Because collecting sensitive personal data has always been core to Google’s business, it has had much longer to grapple with the challenges of how to secure it. In an interview with Slate’s If Then podcast last year, Amazon’s Alexa chief Al Lindsay seemed to dismiss the idea that privacy presented a significant challenge.

A glimmer of hope at CES 2019 came in the form of a new open-source virtual assistant called Mycroft that promises never to collect your personal data. CNET notes that it’s still in early development, and it may never challenge Google Assistant or Alexa in general intelligence. But at least the device shows that some in the tech world are starting to regard privacy as a core feature of virtual assistants, rather than an afterthought.