Jeopardy, Schmeopardy

Why IBM’s next target should be a machine that plays poker.

Ken Jennings and Brad Rutter compete on Jeopardy! against Watson

Watson, the Jeopardy!-playing supercomputer, did give the right response. In a way.

The clue, in the category “Rhyme Time,” was “A hit below the belt.” Though this was just a practice round, held years before the public man-vs.-machine challenge that airs this week, Watson was dealing with authentic game-show material. Back in 1992, when human contestant Marty Brophy saw that $200 stumper in a broadcast episode, he correctly replied, “What is low blow?” The state-of-the-art AI, by contrast, scanned its elephantine database of documents and came up with something else: “What is wang bang?”

Watson’s wang-bang days are now in the past, to the point that the machine competes with the best human players in the history of the game. IBM is rolling out the red carpet for its new star-child with a big publicity campaign, but for all the hype, the company insists Jeopardy! is just a convenient exhibition. It has a much more important goal: teaching a machine to understand language written for humans, not computers. This is one of the holy grails of artificial-intelligence research, and a technology that would revolutionize any industry plagued by the fact that computers are still miserable at understanding what’s known as “natural language.”

The Watson project is a case where a relatively simple human game can teach a computer the skills and character it needs to succeed elsewhere in life. This is not true of every game that computers can play well. No human can beat a machine that’s programmed to play checkers perfectly, but the existence of masterful checkers software doesn’t solve any classic problems in artificial intelligence. There may be some applications for the decision-making algorithms these programs use, but nothing close to the promise of Watson’s post-game-show career.

Quite simply, the development of computer programs that can beat champion human players of checkers—or even chess—hasn’t really changed the world outside of the competitive gaming circuit. This didn’t always appear to be the case. Decades before IBM’s Deep Blue showed up and defeated chess grandmaster Garry Kasparov, we imagined that such an accomplishment would require a machine that could think creatively and exploit an opponent’s particular tendencies and habits. But the emergence of massive processing power seems to have obviated the need for major innovations in AI. It’s been 14 years since that famous chess match in New York City. As Kasparov wrote last year in the New York Review of Books,

Instead of a computer that thought and played chess like a human, with human creativity and intuition, [the AI crowd] got one that played like a machine, systematically evaluating 200 million possible moves on the chess board per second and winning with brute number-crunching force.

One Watson researcher I spoke with disputed this, saying the strategic element of the Deep Blue program was as important as its computational brawn. (One of that machine’s original developers now works on the Watson team.) But it’s safe to say that the algorithms that were finally able to defeat Kasparov did not revolutionize the industry. Chess simply wasn’t the right challenge for the computer scientists. In fact, there are many other games at which computers are blisteringly incompetent, and whose mastery would herald tremendous breakthroughs in artificial intelligence. One of those games is poker.

It may be surprising to learn that it’s much easier to build a computer that can win at Jeopardy! than one that cleans up at the poker table in real-world situations. The quiz show, after all, can draw from any subject on God’s green earth, while card games are based on 52 discrete units that interact with presumably calculable probabilities. Good Texas Hold ’Em players can estimate the odds of completing this or that hand based on the cards they see in their hands and on the table. So why can’t a computer kick some ass at the casino?
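Those odds really are plain combinatorics. Here’s a minimal sketch of the standard counting argument (a generic textbook calculation, not any particular program’s method): the chance of missing every “out” is the number of ways to deal only unhelpful cards, divided by the number of ways to deal any cards at all.

```python
from math import comb

def hit_probability(outs: int, unseen: int, cards_to_come: int) -> float:
    """Probability that at least one of `outs` helpful cards appears among
    the next `cards_to_come` cards dealt from `unseen` unknown cards."""
    miss_all = comb(unseen - outs, cards_to_come) / comb(unseen, cards_to_come)
    return 1 - miss_all

# Flush draw after the flop: 9 outs among 47 unseen cards,
# with the turn and river still to come.
print(round(hit_probability(outs=9, unseen=47, cards_to_come=2), 3))  # 0.35
```

The same function covers any draw: with an open-ended straight draw (8 outs), the chance of getting there by the river comes out to about 31.5 percent.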

For two-player, limited-bet Hold ’Em, computers are quite good. In 2008, a program called “Polaris” edged out a team of professional poker players with two wins, one loss, and a draw. Computers are easier to beat when you play “no-limit” Hold ’Em—unlimited bets complicate the algorithms and change the optimal strategy—but researchers are confident this problem will be solved in due time. These types of two-player games are all fairly predictable, in their way. But when you add a third player to the game, all hell breaks loose.

“For two-player games, there still is fundamentally a right answer” as to what move to make, says Michael Bowling, the leader of the Computer Poker Research Group at the University of Alberta. “It’s a fair game—I’m guaranteeing, in the long run, that I don’t lose money. Every time my opponent makes a mistake, it’s only to my advantage.” That is to say, it’s possible to develop an optimal strategy such that there’s always a best move that can be made. One needn’t worry about the fact that a computer can’t read its opponent’s body language or demeanor. A two-player game of poker is essentially a math problem.
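The “right answer” Bowling describes is game theory’s minimax strategy, and for toy games it even has a closed form. A sketch of the standard formula for a two-by-two zero-sum game, illustrated on matching pennies rather than poker (the function name and example are mine, not Bowling’s):

```python
def solve_2x2_zero_sum(A):
    """Optimal mixed strategy for the row player of a 2x2 zero-sum game
    with payoff matrix A (row player's winnings), assuming no saddle point.
    Returns (p, value): play row 0 with probability p; `value` is the
    long-run average payoff that strategy guarantees."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom          # mix that makes the opponent indifferent
    value = (a * d - b * c) / denom
    return p, value

# Matching pennies: each player picks heads or tails; row wins $1 on a match.
p, v = solve_2x2_zero_sum([[1, -1], [-1, 1]])
print(p, v)  # 0.5 0.0 -- mix 50/50 and you cannot lose in expectation
```

The guaranteed value here is zero, which is exactly Bowling’s point about fairness: play the equilibrium mix and, in the long run, you don’t lose money no matter what the opponent does.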

Adding a third player is the equivalent of going from a 2-D world to a 3-D world: The one-on-one matchup unfolds into a trio of one-on-two relationships. The concept of a “right move” for any given situation goes out the window, and the delicate strategic equilibrium is ruined. The strategies of the three players become hopelessly intertwined—and now hinge on both statistical assessments and psychological ones. Bowling described one experiment in which three computers—or “bots”—had reached a virtual stalemate in a simplified version of three-player Hold ’Em. To test this delicate balance, his team switched one of the machine players over to a simple (and generally inadvisable) “always raise” strategy. To their surprise, the always-raise bot didn’t lose much ground. The bot to its left, however, cleaned up—and the one to its right had to mortgage the farm.

That’s why poker is such a useful problem: To develop an excellent multiplayer bot, programmers would have to model people as well as probabilities. “To be able to handle these ring games, we’re going to have to work in behavior,” Bowling says. This is outside the realm of traditional game theory, and outside the sort of brute-force calculations and strategizing that made up Deep Blue’s DNA. Even if a computer couldn’t read body language, it would glean a lot of information from more explicit patterns of human behavior. UCLA computer scientist Leonard Kleinrock, whose most famous student is poker pro Chris Ferguson, says the timing and speed with which someone bets, for example, could be exploited by a machine as a different sort of tell. (Perhaps someone bets instantaneously when bluffing but takes forever with a borderline hand.) So could things like how often an opponent is caught bluffing, which hands he likes to pursue most often, and how his betting patterns change over time.

So far, most of the cross-disciplinary work is being done with economists, who have long incorporated game theory into their own models. There’s interest in poker from both directions. One economist I spoke with, Roger McCain, immediately suggested AI researchers focus on three-card draw, a more strategic game than Hold ‘Em, and begin from the perspective of human frailty. “We know people don’t always choose the best strategy,” he said. Other models approach the game as though the opponent were an extremely savvy player. Indeed, the very process of sussing out the competition—in Jeopardy! terms, is he a Ken Jennings or a Cliff Clavin?—could yield some important computing innovations down the line. As with so many of these emerging technologies, the most obvious applications would be in finance: Since both poker and investing are about managing risk with money on the line, it’s easy to imagine how a model for playing poker would serve as a powerful tool for playing the markets. In a broader sense, understanding the nature of gamblers may help economists to understand rapid, and even irrational, shifts in behavior—say, when people move their money, sell their houses, buy government bonds, or do anything else to try to stay a step ahead.

Building such a program will be a major undertaking, and we won’t get anywhere by locking a bunch of economists and computer scientists in a room for a month. For one thing, many in the AI crowd consider existing economic models of behavior to be unrealistic, as they expect people to act strategically after considering thousands of options. “Basically, [computer science] criticizes economic theories because they require a lot of computational power from the players,” says Constantinos Daskalakis, a game theorist at MIT. There is a more promising line of research in economics known as “bounded rationality,” which takes into account that humans have only so much processing power, but that field will need to develop further before it can be of much use to the programmers.

The irony is that, unlike checkers or chess, a poker program that studies behavior begins to resemble the very thing that Deep Blue did not: a machine that plays games like a human. Without a doubt, the first computer to win the National Heads-Up Poker Championship will rely on its share of brute-force computations. It will also understand a thing or two about personality—who its opponents are, the sort of mistakes they’re likely to make, and the moment when they’re at their weakest. I can see it now: Watson on Poker After Dark, announcing to his human opponents in that deadpan voice of his: “All in.”
