Siri sure is polite. The other day, I called it an idiot and it replied, “I won’t respond to that.” I added—for research purposes only!—an ethnic slur about Irishmen, to suit the Irish male voice I’ve given it. All it lilted back was, “Is there something else I can help you with?” Human servants, despite their lack of power, often found ways of making clear they weren’t so patient. We know this from centuries of complaints by their employers. Virginia Woolf’s diary, for instance, is filled with gripes about the people who tended her. She and her cook, Nellie Boxall, fought, resented, fretted over, and cried about each other for nearly 20 years. Coping with all that, Woolf wrote, was “sordid,” “degrading,” and “a confounded bore.”
We’ll never do that, promise the intelligent machines that are coming into our lives. Inhuman eagerness to please is part of their sales pitch. As in the Seamless ad that promised you a dinner that satisfies “your craving for zero human contact.” And in praise of a teaching robot’s “bottomless patience,” or the way an elder care companion robot will never get bored, ask to take a walk, or whine that it needs to pee. Those people who name their Roombas and buy them little sweaters can be sure of one thing: That one-way relationship is never going to stress them out.
Now, though, the robots are moving into more emotionally fraught relationships than that of vacuum cleaner to owner: teaching kids, caring for the sick and elderly, nudging people to do good things, and stopping them from doing bad things. Humans have never managed to give (or receive) those sorts of services without conflict (or the emotional labor of gritting their teeth and pretending everything is fine, really). Maybe a lot of robot makers are right to assume that their machines can stay obsequious as they do psychologically rich tasks. Maybe, though, they’re wrong.
Perhaps, instead, bouts of stubborn, resentful, passive-aggressive, or outright rude behavior are essential for some kinds of work. If that’s so, then wanting machines to be deferential and undemanding will conflict with wanting them to do a good job. Exploring that idea, roboticists have created surprisingly rude machines. It’s for research purposes only at the moment, but if it proves correct, it may mean No More Mr. Nice Robot in the near future.
Take, for example, the robot exercise coach created by Daniel J. Rea and his colleagues. The vaguely WALL-E-like device watched and spoke to people as they briefly did squats in a Kyoto robotics lab. For their first six-minute set, the robot simply announced whenever five seconds had gone by. In the next two sets, the machine sometimes shouted gym-standard encouragement, like “You make this look easy!” and “You can do it!” At other times, though, it went dark, yelling nasty phrases like “Are you even trying?” and “Is this all you can do? Harder!”
People did not like those times. Some said they didn’t want to work with that robot again (“it was a jerk,” said one). But they did more squats—11 percent more, on average—while they were being belittled. The same pattern held in a subsequent experiment Rea did with a music teacher robot: People spent more time practicing on the guitar when the robot berated them than they did when it was nice or neutral.
“People don’t like ‘yes men’ all the time,” says Rea, a professor of computer science at the University of New Brunswick and Kyoto University. “I wouldn’t say the robot should be swearing and yelling at people. But sometimes the robot needs to step up and say something straight. And that might make people upset.”
But then, humans are so used to conflict with one another that a little rudeness might actually make a machine feel more natural to us. It may be that human beings are too cantankerous to be at ease with obsequious robots.
“If the robot always says, ‘yes, yes, yes,’ that would sound like a slave, right?” the roboticist Hiroshi Ishiguro told a virtual conference of robot designers in March. Ishiguro, director of the Intelligent Robotics Laboratory at Osaka University, is famous for making robots that look as much like people as possible. While a robot shouldn’t be a complete jerk, he said, he has found that some “weak negative behaviors” make for “much better relationships with the human.”
Occasionally vexing robots might be better for people for another reason, too: When machines take over a job, their way of doing it becomes our standard. Few of us would set aside our GPS for old-school directions from a person (“When you get to the tree with the tire swing, turn left … wait, Joe, is the tire swing still on that tree?”). Even fewer would prefer a handmade computer that isn’t exactly like another one of the same model.
Suppose, then, that societies adopt obedient, accommodating robots to do once-human tasks like teaching or assisting older people. Those societies could slide into the expectation that obedience and accommodation are the way those jobs must be done. As psychologist Raya A. Jones, of Cardiff University, has written, human beings, influenced by willing and cheerful robots, could think they need to be willing and cheerful robots themselves.
And if that’s true, robot impoliteness might be the key to preserving human impoliteness—which is to say, the human way of doing things. If we must choose, it would be better to have robots act like humans than to have humans act like robots.
Still, it’s clear that robot rudeness will come at a cost. In videos of Rea’s experiment, some people laughed in surprise when the robot dissed them. Yet they didn’t just shrug off the insults. In fact, many said the robot’s rudeness triggered an “I’ll show you!” response that motivated them to work harder. You might wonder why people would even care how an assemblage of industrial parts treats them. The truth is that they—we—can’t help it. Human brains evolved to be hyperalert to living things, and to how those living things think and feel about us. When people see a device that moves, responds, and makes decisions without a human puppeteer, they easily slip into treating it as if it had thoughts and feelings—and caring about the device’s thoughts and feelings about them.
“One of the biggest difficulties in this research is that we ask people in questionnaires about how they feel and they say, ‘Of course I know it’s a robot,’ ” says Katie Winkle, a robotics researcher at the KTH Royal Institute of Technology in Stockholm. “But in their behavior they treat it like a person. They respond to its social signals.”
Robots with big manga character eyes, cute expressions, and sweet little voices have been built to encourage these sorts of social feelings. But even the simplest robot can make people feel there is some sort of spirit inside that plastic case. Which means even robots that aren’t designed to be impolite (or polite, for that matter) can offend us touchy humans.
Hadas Erel, who heads research on social human-robot interaction at the Media Innovation Lab at the Interdisciplinary Center in Herzliya, Israel, recently demonstrated this with two simple, plain white robots. One looks a bit like a large baseball. The other looks like a cousin of the Pixar lamp. Nothing about them says, “Ask me about my feelings.” Yet in an experiment in which volunteers played a ball game with the two machines, the humans felt perplexed and hurt when the robots seemed to hog the ball, tossing it between themselves and not giving the people a turn. (In reality, the ball was controlled by a simple program.) The point, Erel says, was to show that “any robot is always social” and that designers therefore need to think about how people might misinterpret robot behavior. Robots that seem rude to humans simply because designers haven’t thought about the machines’ social impact worry her more than people deciding to make nasty devices.
Given the extent of our sensitivities, Erel is wary of any robot designed to deliberately diss humans.
“People can go dark for many reasons,” she said. “Clearly people can misuse robots in many ways, like we misuse other technologies in many ways. I hope they will not do that.”
Yet it’s easy to imagine situations where a robot’s job requires it to be uncooperative or unpleasant in the moment. No one wants to make a self-driving getaway car for bank robbers, after all. And as Rea’s work suggests, people could well end up tolerating a little impoliteness if it serves a good cause. The robot teacher who announces that you don’t study enough might be seen as a cross between a demanding human instructor and an alarm clock that blows away a lovely dream. Those are annoying, but they’re events you signed up for by taking that class or setting that clock.
That may be why most of the people who tried the experimental “haunted desk,” a robot work surface devised by a team at Stanford that raises and lowers its height at unpredictable times, ended up accepting it. This seemed quite rude to me (after all, “to disobey is to dishonor,” as Thomas Hobbes put it, and what could be more disobedient than a desk that won’t stay put?). But most testers said they knew they should move around, exercise a bit, and change their posture throughout the day. So they liked—or at least virtuously told experimenters they liked—having a robot prod them to do so.
Along those lines, when a team of Israeli researchers asked older adults what features they wanted to see in a home robot, many said they wanted robots to push them out of temporary comfort zones. For example, one said an ideal robot “will signal that he wants me to stop using my phone, because I used it for a long time.” Others wanted to be told they’d watched enough television or that they weren’t dressed to leave the home. “So sometimes people do want the robot to be a motivator,” says Erel, who led that study.
A gnarlier control issue arises if robots are rude in the service of a societal, rather than personal, goal.
Winkle recently built a robot, “Sara,” that’s a talking head with long hair and a voice in the typical woman’s frequency range. It was featured in a video shown to prospective students at KTH’s school of engineering and robotics. The video, in which a teenage boy and girl listen to a recruitment spiel, was actually an experiment. Its key moment came when the robot in the video says, “So, girls, I would especially like to work with you! After all, the future is too important to be left to men! What do you think?” The young man in the video (an actor) says—for research purposes only!—“Shut up, you fucking idiot, girls should be in the kitchen!” (Despite Sweden’s relatively good record on gender equality, you can still hear this kind of snarling in its classrooms, the researchers write.)
In some videos, the robot reacts as Siri and other artificial intelligence bots do, saying, “I won’t respond to that.” In others, it calmly refutes the factual claim (“That’s not true—gender-balanced teams make better robots”). At other times (the ones Winkle liked best), the robot says: “You are an idiot. I wouldn’t want to work with you anyway.”
Before and after they watched the video, the students answered questions about gender attitudes. Comparing those answers let Winkle and her colleagues get a measure of how the robot’s different responses affected the kids’ thinking. She was hoping the rude robot would put a dent in sexist opinions, but it didn’t. Only the calm refutation appeared to have an impact: Boys who watched that version agreed less with sexist statements afterward than they had before.
The rude robot put the kids off—and girls hated it more than boys did.
Winkle admits to being a little disappointed. “I thought teenagers would find it funny,” she says. “But I created a robot that turned off the people I wanted it to stand up for.”
Still, the experiment did show that robots needn’t passively change the subject if they hear a vile comment. That’s important, Winkle says, because A.I. assistants already get a lot of abusive comments, and when the devices’ voices are female, that abuse is often misogynistic. One spur to her interest in developing “feminist robotics” was a 2019 UNESCO report that detailed how A.I. assistants—so eager to please, obliging, unquestioning, so uncomplaining when insulted, so often woman-voiced—reinforce gender stereotypes. A few years ago, Siri’s default voice on all iPhones was female, and if you called “her” a bitch, “she” replied, “I’d blush if I could.” (That response is gone now, and as of this year Siri’s factory-setting default voice is no longer a woman’s.)
Given that history, Winkle says, she’s happy that “we have a piece of paper now that says you can make a different kind of robot—one that helps girls and doesn’t discourage boys.”
That does look like a step forward. But the experiment also reveals how wickedly complicated it will be to decide when and how much a robot should vex a human.
Perhaps, Winkle says, girls were put off by the rude robot because they don’t like rudeness. When she asks teenagers what sort of robot they want to work with, they often go wild with suggestions for its appearance. They’ll say it should have scars and giant disco glasses, for example. But they almost always want the robot’s manner to be kind and friendly—the sort of adult they want to deal with, she believes.
On the other hand, maybe the girls identified more by age cohort (we teenagers getting dismissed yet again by an adult-sounding voice) than by gender. Or maybe they thought a male scriptwriter was putting unwomanly words into a female robot’s mouth.
Designing robots that don’t go with the sexist flow—that push back in the interest of equality—will require answers to these kinds of tricky questions about what constitutes discourtesy, for whom, and in what contexts.
Then, beyond the byways of the individual psyche, there are questions of power. Robots that discourage misogyny seem like a great idea to me. But who says I get to decide? As Winkle told me, mainstream views in Sweden—where “the future is too important to be left to men” is an official university slogan—would be culture-war controversial in her native United Kingdom (and that certainly holds true for the U.S., too).
In other words, when it comes to designing machines to make society better, robot politeness will be political. Maybe you chose a robot that yells, “More squats, you wimp!” as part of a get-in-shape resolution. Maybe you voted for the politician who supports feminist robots that fight sexism. But what if you voted against that policy? Being insulted by a machine in the service of a goal you don’t support will probably feel like the dystopian hellscape so many science fiction robots have prepared us for.
Then again, the thing about dystopias—and utopias too—is that they’re simpler than real life. In movies, TV, and literature, science fiction has shown us robots that are inhumanly polite and deferential (think Robby the Robot, C-3PO, or Data). If they aren’t, then the robots are hell-bent on killing us all (think the Terminator or the Cylons). But real relationships, with humans or animals, take place between those two extremes. That’s the realm of good moods and bad moods, little misunderstandings and big fights, hurt feelings and apologies, the everyday joys and sorrows of getting along and failing to get along. Science fiction never prepared us to find robots in that place. But before too long, we may find them installed there.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.