Future Tense

How Much Personality Should a Smart Speaker Have?

A new episode of Black Mirror and a recent U.N. report ask the same question.

Miley Cyrus and the Ashley Too Photo illustration by Slate. Photos by Netflix.

When I was 8 years old, I had a rather vacuous talking doll. My Diva Starz Interactive Doll couldn’t respond directly to my voice, but she was semi-conversational, commenting enthusiastically when I changed her outfits, asking me yes/no questions, and offering girly platitudes that could have come straight from Malibu Stacy. Her name, funnily enough, was Alexa.

Ashley Too, the Miley Cyrus–voiced robo-doll in the third episode of Black Mirror’s fifth season (“Rachel, Jack & Ashley Too”), sits somewhere between my Alexa and Amazon’s Alexa—somewhere between toy and smart speaker, merchandise and friend. The smart device is sold in a fictional world dominated by a perky pop star named Ashley O (played by Cyrus), and has Ashley’s voice and personality programmed into it, allowing fans to feel closer to her. The episode follows Rachel, a lonely teenager who adores Ashley O and quickly comes to see Ashley Too as her real-life friend.

Ashley Too is more than a smart speaker—it’s also “your new best friend,” says Netflix—but it does a lot of the same things the Echo does. It plays music, answers questions, and even offers to read “motivational quotes from famous women” when you’re feeling down, not unlike one of the thousands of dubious-sounding Alexa skills. Ashley Too has eyes and a moving body, an idea one senior Amazon engineer recently floated for Alexa, too. The Black Mirror trailer left many wondering if Ashley Too was “an Alexa spin-off,” and not just because their names and aesthetics are so similar.

As Black Mirror is wont to do, “Rachel, Jack & Ashley Too” poses a lot of questions—about the future of music (holograms, streaming concerts, vocal mimicry) and the future of A.I. It calls to mind other episodes in which human personalities are downloaded into household objects, such as a smart home controller and a stuffed toy monkey. For me, these kinds of episodes raise another question: How much personality should a smart device have?

Without spoiling “Rachel, Jack & Ashley Too,” the smart speaker goes through a mid-episode personality change, transitioning from helpful and submissive to, well, something a lot closer to what I imagine the real Miley Cyrus to be like. In other words, from little personality to all the personality. Both options carry risks for smart speakers: Too much personality and we risk anthropomorphizing it, viewing what is essentially the mouthpiece of a powerful tech company as a trusted friend; not enough and it’s a doormat, encouraging people to speak inappropriately to it and others.

The Ashley Too that comes out of the box is peppy and obliging (“I’ll be here for you!” “Here if you need to talk!” “Believe in yourself!”). She goes to sleep when told and wakes up when told. When spoken to rudely by Rachel’s cynical sister Jack, Ashley Too responds only with “I think you made a bad word choice.” When Rachel tells the doll to ignore her sister, she perkily complies: “I’ll make a note of that!”

Even with her initial “limited” personality, Ashley Too’s responsiveness leads the friendless Rachel to view the doll as a confidante—hanging out, doing a makeover, learning dances. When she screws up her performance in the school talent show, Rachel feels she has “let Ashley Too down,” leading her sister to point out “she’s not a person.” “She’s my friend,” Rachel replies. It’s not an unreasonable suggestion for Black Mirror to make: From kids to the elderly, many view their smart speakers as companions. In a Google Experience & Design survey, 41 percent of users said that talking to a smart speaker feels like talking to a friend or another person. Rachel is especially susceptible to falling into this trap, with research showing that loneliness makes people more likely to form bonds with bots. The fact that Ashley Too has the voice of a celebrity, a real and recognizable individual, causes Rachel to feel like she is talking to some version of Ashley O—an effect that we may see in the real world as Google Assistant begins to offer voices from celebrities, like singer John Legend.

I was never at risk of thinking my 2000s-era Alexa was my friend, because she wasn’t responsive to anything but magnets in her clothes and buttons on her feet, and her style of speech was far from natural. But as robots become more “human” in their interactions and tone, humans become more likely to bond with them, as Andreas Vogel and Nicholas Wright note in their recent Future Tense piece, raising all sorts of ethical questions around the dual role of trusted companion and persuasive sales associate. Once Ashley Too goes through its “change,” both sisters, even the cynical Jack, seem to accept it as a “person.” That “person” then convinces them to do something dangerous (the right thing to do, but still). Its colorful emotions and snarky sarcasm make its pleas and demands harder to ignore, even as they know intellectually that it’s a bot. How much more nuanced or individualized these “characters” will become remains to be seen—as Vogel and Wright note, smart speaker A.I. can also be “personalized” to each individual user.

The incident where Jack swears at Ashley Too reflects concerns raised in a recent United Nations report on the gendering of smart speakers. The report, “I’d Blush if I Could: Closing Gender Divides in Digital Skills Through Education,” expresses concern with the “deflecting, lackluster or apologetic responses” that feminized smart speakers give to verbal abuse, suggesting that their “unfailing politeness” could reinforce existing gender bias: “It honours commands and responds to queries regardless of their tone or hostility. In many communities, this reinforces commonly held gender biases that women are subservient and tolerant of poor treatment.” The report cites a 2017 Quartz investigation, which found that the leading voice assistants responded to harassment either playfully or positively, with answers programmed by overwhelmingly male teams. That gender imbalance, the report suggests, needs to change, and soon.

As the U.N. notes, smart speakers already carry “special emotive power,” simply by virtue of the fact that they sound like people. But what people don’t often realize is that the major smart speakers are already imbued with “personalities,” designed and tailored by the predominantly male teams who created them. The U.N. report pulls together commentary from the engineers on the ideas that went into designing them: Cortana is a “character … endowed with make-believe feelings, opinions, challenges, likes and dislikes, even sensitivities and hopes,” while Google Assistant was imagined as “a young woman from Colorado; the youngest daughter of a research librarian and physics professor who has a B.A. in history from Northwestern, an elite research university in the United States; and as a child, won US$100,000 on Jeopardy Kids Edition, a televised trivia game.” In other words, they have been “intentionally humanized” and gendered in a way that carries meaning for their creators and users. Their personalities are shaped by existing gender norms, and will continue to shape those norms moving forward.

I sometimes refer to Alexa as “her,” as Rachel does with Ashley Too, much to Jack/the internet’s chagrin. But I’m cautious of where and how I use “it” and “she” for the character behind the Echo, wary of both under- and over-anthropomorphizing. There are risks to anthropomorphizing Alexa, but there is also something highly uncomfortable about treating “her” like an object when “she” has been given a specifically designated feminine personality and presents as a woman.

Both the U.N. report and “Rachel, Jack & Ashley Too” suggest that a lot more thought needs to be put into the characters being programmed into smart speakers, into how they behave and how “real” they feel. The level of personality digital assistants possess will certainly play a role in the unforeseeable ways they will influence human interaction, and it will be difficult to balance giving a digital assistant a voice and a spine without taking its character too far, especially now that they have been made “female” by default. One thing’s for sure: Smart speakers need a little more mettle when it comes to copping abuse. As Netflix U.K. and Ireland nicknamed Black Mirror’s latest futuristic product, “ashley too the hands free speaker that screams back at you.” Perhaps Alexa should too.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.