Facebook Thinks It Has Found the Secret to Making Bots Less Dumb

The two universally acknowledged truths about bots.

Screenshot / Google.com

If there’s one thing we’ve learned about bots over the years, it’s that they aren’t too bright. From Eliza to Tay, the best-known chat bots generally rely on a distinctive personality to cover for their inability to understand what you’re saying. Meanwhile, business-oriented bots such as Microsoft’s Clippy, Slackbot, and Poncho tend to be inflexible, because they’re hard-coded with preset responses to specific queries. And just when you think you’ve found a bot that’s really impressive—say, Facebook’s M, or x.ai’s Amy Ingram—it turns out that there are humans behind the curtain stepping in to solve the problems the computer can’t.

This year began with a fresh wave of bot hype, which quickly petered out when users found that the new generation of artificial conversationalists was only marginally more useful than the last one. Yet there is still reason to believe that the bots of tomorrow will be smarter than today’s—and, more importantly, that they’ll be able to learn and improve over time.

New research from FAIR, Facebook’s artificial intelligence research arm, might help to point the way. Last year the team introduced a new type of machine-learning model for language understanding, called “memory networks.” The idea was to combine machine learning algorithms—specifically, neural networks—with a sort of working memory, letting bots store and retrieve information in a way that’s relevant to a given conversation. Facebook demonstrated the technology by feeding its software a series of sentences that convey key plot points from Lord of the Rings, then asking it questions such as, “Where is Bilbo now?” (The system’s reply: Grey-havens.)
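To make the mechanism concrete, here is a toy sketch of the idea in Python. The real memory networks learn to retrieve relevant memories with neural attention; this illustration substitutes simple string matching and invented facts, so it captures only the store-then-retrieve pattern, not FAIR’s actual model.

```python
# Toy illustration of the "memory" idea: store facts as they arrive,
# then answer a question by retrieving the most relevant memory
# (here: the most recent fact mentioning the entity). Real memory
# networks score memories with learned neural attention instead of
# the keyword matching used below.

memories = []  # ordered list of facts the bot has "read"

def store(fact: str) -> None:
    memories.append(fact)

def where_is(entity: str) -> str:
    # Scan from newest to oldest for the entity's latest location.
    for fact in reversed(memories):
        if entity in fact and " to " in fact:
            return fact.rsplit(" to ", 1)[1].rstrip(".")
    return "unknown"

store("Bilbo travelled to the cave.")
store("Gollum dropped the ring there.")
store("Bilbo took the ring.")
store("Bilbo went back to the Shire.")
store("Bilbo sailed to Grey-havens.")

print(where_is("Bilbo"))  # Grey-havens
```

Because the scan runs newest-first, the answer updates as the story unfolds: asked after the second fact, the same question would have returned “the cave.”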

This month, the team published a preprint on arXiv that generalizes the memory-networks approach so that it can better interpret unstructured data sources and published documents, such as Wikipedia pages, rather than just specifically designed “knowledge bases” that store information one fact at a time. That’s important because knowledge bases tightly constrain the information that’s available to a bot, as well as the type of questions you can ask. (Try asking Poncho about something other than the weather.) If Facebook’s algorithms can start to interpret natural language data sources such as Wikipedia in a way that makes sense in a given conversational setting, it opens the potential for bots that can answer all kinds of questions on a vast range of topics. FAIR calls the new approach “key-value memory networks.”
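The “key-value” split can be sketched in a few lines: each memory slot pairs a key, which is matched against the question, with a value, which is the candidate answer. The slots and the bag-of-words scoring below are invented for illustration; the actual model builds slots from raw text automatically and scores keys with learned embeddings.

```python
# Sketch of the key-value memory idea: match the question against
# keys, return the value of the best-matching slot. For a document
# like a Wikipedia page, a key might be a sentence window and the
# value an entity it mentions.

def tokenize(text: str) -> set:
    return {w.strip("?.,") for w in text.lower().split()}

# (key, value) slots, one per sentence/entity -- hand-built here.
kv_memory = [
    ("The Lord of the Rings was written by J. R. R. Tolkien", "J. R. R. Tolkien"),
    ("The Lord of the Rings was first published in 1954", "1954"),
    ("The story is set in Middle-earth", "Middle-earth"),
]

def answer(question: str) -> str:
    q = tokenize(question)
    # Pick the value whose key shares the most words with the question.
    best_key, best_value = max(kv_memory, key=lambda kv: len(q & tokenize(kv[0])))
    return best_value

print(answer("When was The Lord of the Rings first published?"))  # 1954
```

Separating keys from values is what lets the same machinery read both curated knowledge bases (key = subject and relation, value = object) and raw documents (key = passage, value = entity), which is the gap the paper aims to close.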

Poncho is good at one thing.

Screenshot / Facebook Messenger for iOS

So far, Facebook’s system can’t answer questions as accurately when reading from a document as it can when working from a structured knowledge base. But Facebook says its method significantly closes the accuracy gap between the two. And the memory-networks approach allows a bot to store not only the relevant source data, but the questions you’ve already asked it and the responses it has given. That way, when you ask a follow-up, it knows not to repeat the same information, or to ask you for information you’ve already given.

Facebook is already using memory networks in M, the do-it-all virtual assistant that lives inside the Messenger app (provided you’re among the handpicked group of beta testers with access to it). They come in handy when, for example, you ask M to make a restaurant reservation.

Rather than simply launching into a predefined list of questions—“What time?” “What kind of food?” “How many people?”—it can extract and store the relevant information over the course of a more natural series of questions and answers. So if you say, “I’m looking for a Mexican restaurant for five people tomorrow night,” it doesn’t have to ask you, “What kind of food?” or “How many people?” And if you suddenly get distracted and ask it, “Who is the president of the United States?” it can quickly reply, “Barack Obama,” then remind you that you still need to tell it what time you’d like to have dinner tomorrow night.

Facebook isn’t the only company that’s working to combine machine-learning algorithms with contextual memory. Google’s artificial intelligence lab, DeepMind, has developed a system that it calls the Neural Turing Machine. In an impressive demonstration, the Neural Turing Machine taught itself a copy-and-paste algorithm by observing a series of inputs and outputs.

Facebook Chief Technology Officer Mike Schroepfer has called memory “the missing component of A.I.” And FAIR research scientist Antoine Bordes, who co-authored the papers on memory networks, told me he believes it could hold the key to finally building bots that interact naturally, in human language. “The way people use language is very difficult for machines, because the machine lacks a lot of the context,” Bordes said. “They don’t know that much about the world, and they don’t know that much about you.” But—at last—they’re learning.