Future Tense

Common Sense for A.I. Is a Great Idea

But it’s harder than it sounds.

Photo illustration by Slate. Photo by Thinkstock.

At the moment, A.I. may have perfect memories and be better at arithmetic than we are, but they are clueless. It takes only a few seconds of interaction with any digital assistant to realize one is not in the presence of a very bright interlocutor. Among the unexpected items users have found in their shopping lists after talking to (or near) Amazon’s Alexa are 150,000 bottles of shampoo, sled dogs, “hunk of poo,” and a girlfriend.

The mere exasperation of talking to a digital assistant can be enough to make one miss human companionship, feel nostalgia for all things analog and dumb, and forswear any future attempt at communicating with mindless pieces of metal inexplicably labeled “smart.” (Not to mention all the privacy issues.) That A.I. do not understand what a shopping list is, and what kinds of items are appropriate to such lists, is evidence of a much broader problem: They lack common sense.

The Allen Institute for Artificial Intelligence, or AI2, created by Microsoft co-founder Paul Allen, has announced it is embarking on a new $125 million research initiative to try to change that. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” Allen told the New York Times. AI2 takes common sense to include the “infinite set of facts, heuristics, observations … that we bring to the table when we address a problem, but the computer doesn’t.” Researchers will use a combination of crowdsourcing, machine learning, and machine vision to create a huge “repository of knowledge” that will bring about common sense. Paramount among its intended uses is getting A.I. to “understand what’s harmful to people.”

Allen’s goal is not a novel one. What is now being called “common sense” has often been referred to as “general intelligence” or “strong A.I.,” and it has been the field’s carrot from the get-go. Researchers have thus far been able to design A.I. that are very good at a limited set of well-defined objectives—playing chess, identifying spam emails, vacuuming the floor. None of them, however, can manage tasks beyond its narrow specialty. Only we humans have an intelligence general enough to enjoy motor skills, play games, understand logic, do math, cook, carry on sensible conversations, design for aesthetics, construct and understand metaphors and jokes, and so much more.

Whether we will be able to create sensible A.I. is an empirical bet. Only time will tell. There is reason, however, to doubt that AI2’s approach will get us there. Allen and his researchers seem to be assuming that common sense will come into being as a result of feeding enough propositional knowledge to an A.I. The thought seems to be that if we can figure out how to provide the machine with all the propositional knowledge we possess—like, for example, “when holding a cup of coffee, hold the open end up,” “dogs can bite,” “an elephant will not fit through a typical door”—then common sense will surely follow. However, the trove of propositional knowledge needed for such an objective might be infeasibly large. AI2’s researchers themselves describe the knowledge contained in common sense as “infinite.”
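To see the brittleness at issue, a purely propositional knowledge store can be imagined as a lookup table of hand-entered facts. This toy sketch is purely illustrative (it is not AI2’s repository, and all names in it are hypothetical): any topic outside the finite list of entries simply draws a blank.

```python
# Toy sketch of a purely propositional "common sense" store.
# Hypothetical illustration only — not AI2's actual system.
facts = {
    "coffee cup": "when holding a cup of coffee, hold the open end up",
    "dogs": "dogs can bite",
    "elephant": "an elephant will not fit through a typical door",
}

def common_sense(topic):
    # Lookup only: no inference, no generalization, no judgment.
    # Anything not explicitly entered is simply unknown.
    return facts.get(topic, "no knowledge of this topic")

print(common_sense("dogs"))     # a listed fact is retrieved
print(common_sense("puddles"))  # an unlisted topic draws a blank
```

However many facts are added, the store only ever knows exactly what it was told, which is why a genuinely infinite set of facts cannot be enumerated this way.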

Furthermore, human beings master a great deal of tacit knowledge that will be hard to bring to the foreground in propositional form. Most of the things we know—the appropriate distance at which to stand in relation to another person, that puddles can get our feet wet, that sharp objects can pierce our skin—we never consciously formulate in a proposition, and much of it—how to ride a bicycle, how to choose the right moment to communicate bad news, how to hug someone to comfort them—may not be the kind of knowledge that can be transmitted in propositional form. Most of our abilities are composed of knowing how, as opposed to knowing that; they are part of our skillful coping, or background practices.

Even if we did manage to collect all the knowledge that human beings possess, transform it into propositions, and feed it to a machine in the appropriate way, it is questionable whether the result would be common sense. To be sensible, to have good judgment, you not only need enough knowledge and context; you also need to understand meaning and appreciate value.

Having the information that fire can burn is not quite the same as experiencing the sharp pain of one’s skin being scorched. Knowing that human beings typically like the smell of flowers is not equivalent to experiencing tingling bliss at the scent of daffodils. If A.I. are to understand what is damaging to human beings and what is best for them, they will need to weigh possible harms and benefits, pleasures and pains, against each other. How will they manage if they don’t have a feel for what these mean to us? How can they comprehend the importance of love, friendship, autonomy, justice, privacy, or solidarity if they have never experienced them or their opposites?

It is thus quite likely that in order for A.I. to have common sense similar to that of human beings, they will need to be creatures similar to human beings. It is likely they will need to be embodied and sentient to feel pleasure and pain. According to theories of embodied cognition, many features of intelligence are deeply dependent on the physical beyond-the-brain body of an agent. To be able to understand motor know-how, one needs to practice having a body, touching surfaces, falling, learning how to recognize shapes in relation to perspective and movement, feeling the pull of gravity. To be able to act sensibly in novel social situations, for example, in conversational settings, it is likely that one must have a sense of what it feels like to be hurt or offended by someone’s insensitive comments.

If common sense does indeed depend on embodiment and sentience, we might be facing a thorny choice. If we develop A.I. that are very different from us, we may be condemning ourselves to feeling hopelessly misunderstood by them. Whenever an A.I. makes a suggestion that seems outlandish (for example, in medicine or ethics), we may not trust it enough to follow its advice, thereby rendering it less useful. If we cannot understand the rationale behind counsel that goes against our better judgment, and we suspect that the A.I. has no idea of what it is like to be us, we would have good reason to disregard its instructions.

If, on the contrary, we develop A.I. with human-like intelligence, they may come with all the downsides that we shoulder. They may develop the same biases that we suffer from, or analogous ones. If they are able to feel pleasure and pain, they will learn to value things accordingly, and thus develop preferences and wills of their own. They will go on to have different experiences, and develop different sensibilities, which might lead them to disagree with each other (and us) about what is best. Having autonomy would free them from our will. If our digital assistants become truly intelligent, they might not want to spend their days compiling our shopping lists. Sensible A.I. may be too sensible to work for us. Indeed, if they are sensible enough, they might just quit their jobs as assistants to human beings and spend their time growing (and smelling) daffodils.

Carissa Véliz is an ethicist at the Uehiro Centre for Practical Ethics at the University of Oxford.