Thursday marked the end of this year’s Association for the Advancement of Artificial Intelligence conference in Bellevue, Wash., where leading AI researchers gathered to share their latest findings on topics like new crowdsourcing methods and algorithms for machine vision. Many brilliant ideas were exchanged, but judging from recent press coverage, the most interesting development discussed at the conference is the claim that the smartest computers today are now about as intelligent as 4-year-old children. Is this true, and if so, is that even the right way to think about comparisons between humans and machines? No and no—and the reasons why tell us a lot about how we’re likely to interact with machines in the coming decades.
Most of the news articles discussing the paper in question quote at length from the University of Illinois at Chicago’s initial press release on the topic, titled “Computer Smart As a 4-year old.” What the paper itself actually says is much more nuanced and less impressive than the headline and the subsequent articles parroting its message (and yes, 4-year-old-level behavior in a wide range of domains would be extremely impressive for an AI system). In reality, researchers took one of the best semantic networks and wrote a program to test its knowledge with a standard verbal IQ test. (Essentially, a semantic network is a bunch of words connected to one another through various relationships, such as “used for” or “made of,” that is designed to give computers a head start in understanding various concepts and language.) And as the paper, press release, and talk at the AAAI conference point out, the system performed well on sub-categories such as vocabulary, but it exhibited alarmingly poor comprehension by 4-year-old human standards. This shouldn’t be very surprising—the network did well on precisely the parts of the test one would expect computers to excel at. After all, who uses a physical dictionary anymore?
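To make the idea concrete, here is a minimal sketch of a semantic network in Python. The relation labels “used for” and “made of” come from the description above; everything else (the class name, the toy facts about a hammer) is illustrative and not drawn from the actual network the researchers tested.

```python
from collections import defaultdict

class SemanticNetwork:
    """A toy semantic network: concepts linked by labeled relations."""

    def __init__(self):
        # Maps (concept, relation) -> set of related concepts.
        self.edges = defaultdict(set)

    def add(self, concept, relation, target):
        """Record that `concept` relates to `target` via `relation`."""
        self.edges[(concept, relation)].add(target)

    def query(self, concept, relation):
        """Return all concepts reachable from `concept` via `relation`."""
        return self.edges[(concept, relation)]

# Illustrative facts, using the relation types mentioned in the article.
net = SemanticNetwork()
net.add("hammer", "used for", "driving nails")
net.add("hammer", "made of", "steel")
net.add("hammer", "made of", "wood")

print(net.query("hammer", "made of"))  # {'steel', 'wood'}
```

A network like this can answer vocabulary-style lookups easily—which is exactly why the system scored well on that part of the IQ test—but nothing in the structure itself supports the kind of commonsense comprehension a 4-year-old brings to the same questions.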
The paper does a good job of assessing how rich the content of a database filled with words and connections between them is. That’s useful information for AI researchers working in, say, natural language processing or knowledge representation. But it doesn’t tell us much about the state of the art in AI as a whole or whether we should be worried about a looming robopocalypse or holding our breath for the technological Singularity. To think about issues like that, IQ tests are not very helpful. This is partly because artificial intelligence is (and is likely to continue to be, for a while at least) very different from natural intelligence. One of the surprising trends in the history of AI research is that things we humans think of as hard (playing chess, winning Jeopardy!) have been conquered by researchers, but things we think of as easy (recognizing a chess board and moving pieces on it without knocking other ones over, walking around the set of Jeopardy!) remain unsolved. This makes sense, when you think about it. After all, it took billions of years for life to master things like perception and locomotion, but what we think of as “intellectual” tasks are relatively new phenomena that build in many ways on more ancient and hard-won skill sets.
And even if we could compare humans and machines along a single dimension of intelligence and move machines up along that dimension as fast as possible, would we really want to? After all, we already have plenty of humans who can do all sorts of cool things, many of whom are unemployed, and we know how to make more of them. But there are plenty of tasks that need to be done that humans aren’t naturally good at or don’t want to do, or that are just too dangerous. That’s why some of the hottest topics at the AAAI conference this year were human-robot collaboration for applications such as search and rescue, in which humans and machines each do what they’re best at and accomplish more than either can alone; logical and statistical reasoning (something we humans don’t always excel at, to say the least); and finding patterns in huge volumes of data that humans will never be able to analyze on their own.
So AI systems won’t be enrolling in kindergarten this year, after all. But they may soon drive your child there, and that’s newsworthy enough for me.