Future Tense

The Ick Factor of Computers That Converse Like People

Can A.I. risk sounding too human?

A winking robot. Chatbots that imitate humans a little too well are creepy.
MakaronProduktion/Thinkstock

My best friend’s boyfriend once remarked that he had never heard two people converse the way she and I do. When Zoe and I are together (or catching up over Facebook chat), we have so much to say that it is almost as if we are talking at—not to mention over—one another, hurtling between topics, tangents, stories, often talking about seemingly different things, but somehow still keeping up. He couldn’t understand how we were also listening to what the other had to say.

I felt almost a perverse sense of pride in his assessment. Though this anecdote may fulfill an exasperating stereotype about how much women love to gab, I like to think that Zoe’s and my conversational skills—the ability to talk and listen at the same time—had merely reached more advanced levels of the kind of interaction humans specialize in. Our conversations may be indecipherable, but they are undeniably human.

In a blog post on its website Wednesday, Microsoft announced that it had made a breakthrough in artificial intelligence technology that would allow chatbots to engage with users in a more human way—“more like that natural experience a person might have when talking on the phone to a friend.” Microsoft claims the technology allows its A.I.-powered chatbot to operate in “full duplex,” meaning it can talk and listen simultaneously, for the first time. The company likens the previous “half duplex” technology to speaking into a walkie-talkie or texting, in which one party speaks while the other listens, then responds.
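Microsoft’s post doesn’t include any code, but the half- versus full-duplex distinction it describes can be sketched in miniature. This is a purely illustrative toy (the function and variable names are hypothetical, not anything from Microsoft’s system): half duplex enforces strict walkie-talkie turn-taking, while full duplex lets both parties put words onto the channel concurrently.

```python
import threading
import queue

def half_duplex(turns):
    """Walkie-talkie style: each party finishes before the other responds."""
    log = []
    for speaker, utterance in turns:  # strict, alternating turn-taking
        log.append(f"{speaker}: {utterance}")
    return log

def full_duplex(user_words, bot_words):
    """Both parties 'speak' at once; their words interleave on one channel."""
    channel = queue.Queue()  # thread-safe shared channel

    def speak(speaker, words):
        for w in words:
            channel.put(f"{speaker}: {w}")

    threads = [
        threading.Thread(target=speak, args=("user", user_words)),
        threading.Thread(target=speak, args=("bot", bot_words)),
    ]
    for t in threads:
        t.start()  # both sides talk concurrently
    for t in threads:
        t.join()   # wait until everyone has finished speaking
    return [channel.get() for _ in range(channel.qsize())]

# In half duplex the order is fixed; in full duplex the interleaving
# depends on thread scheduling, just as overlapping speech does.
log = full_duplex(["hello", "how are you"], ["hi", "fine thanks"])
```

The point of the sketch is only that full duplex removes the "your turn, my turn" constraint: the transcript still contains everything both sides said, but the ordering is no longer guaranteed.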

The tête-à-tête technology reportedly also teaches the chatbot to anticipate what a human might say next, allowing it to “make decisions about both how and when to respond to someone who is chatting with her, a skill set that is very natural to people but not yet common in chatbots.” Microsoft claims to have successfully incorporated the new technology into XiaoIce, a social chatbot that is popular in China, and it won’t be long before it arrives here: the company plans to bring the technology to XiaoIce’s U.S. equivalent, Zo.

Microsoft wants to make conversation with robots more like conversation with humans: more flowing, more natural. But “less stilted” seems closer to “more eerie” to me. I already dislike chatting with online customer service bots when I can’t tell whether the words are coming from a human, and the idea that Zo may soon sound even more like Zoe is unsettling. But why?

According to the uncanny valley theory, first proposed by roboticist Masahiro Mori, robots that look almost human create a sense of revulsion in real humans. The theory holds that our affection for machines grows as they become more humanoid, up until a certain threshold, at which point the resemblance becomes creepy and “off.” So can A.I. risk sounding too much like a person? Conversational chatbots feel like the new, aural frontier of the uncanny valley, talking in a way that is almost but not quite right. Arguably, there’s something even creepier about the method by which robots verbally impersonate their creators, using their real (recorded) voices. And while the physicality of robots (and developers’ desire to avoid the uncanny valley) can serve as a reminder that they are machines, a disembodied voice that converses as naturally as it computes seems somehow more like a sentient being than a robot—a fully formed soul, just on the other end of the line.

Once a chatbot starts to feel like a confidante, what’s to stop you from empathizing with it as you would with a human being? Pop culture droids long ago mastered the art of thoughts, feelings, and dialogue, making conversationally fluent A.I. feel like the first major step on a path toward a sci-fi dystopia. If A.I. can chat, can it laugh? Can it feel? Can it long to be acknowledged for who and what it is?

It’s probably only a matter of time before this kind of A.I. technology—which Microsoft suggests makes users “more relaxed” (cue nervous laughter)—makes smart homes even smarter. And as my first—and clearly, lasting—impression of smart homes demonstrated, this can end in disaster. The Ultrahouse 3000, the Pierce Brosnan-voiced, 2001: A Space Odyssey-parodying smart home in The Simpsons’ “Treehouse of Horror XII,” was such a charming conversationalist that Marge felt uncomfortable undressing in front of it—and she was right to be, with Ultrahouse soon falling in love with her and attempting to kill Homer. I may have a Simpsons-inspired fear of in-home A.I. to this day, but at least Alexa still sounds like a stilted, sexless robot.

Malicious, fictitious A.I. aside, this comes back to my preference for Zoe over Zo. Conversation is one of the few things that reassures me that machine learning won’t make humans entirely obsolete—that there are some skills that will always be better performed by people, and that human intelligence will always have some advantages. One assumes that the arts—music, painting, literature—are among these human-dominated skill sets. And conversation is, after all, an art (lost or not). The ability to banter is uniquely ours. If A.I. can master this, what can’t it master?

Computers may get smarter by the day, but I’m stubbornly hopeful that people will always have something to bring to the table—even if it’s just the dinner table conversation.