Play Slate’s Lean/Lock and test your skills as a political pundit.
Every election creates winners and losers. Some people get lucky. Others win with skill. Still others prevail by sheer force of will. I’m talking, of course, about political prognosticators.
With the midterm election less than two weeks away, everyone is picking a number—the number of seats in the House and Senate they think Republicans will pick up. (The GOP needs to gain 39 seats in the House to form a majority, and 10 in the Senate.) The boldest declaration so far has come from Dick Morris, who predicted that Republicans will pick up more than 74 seats, the current historic record, and said that number “could go as high as 100.” Bill Kristol and Mark Halperin both said in August that if current trends continued, Republicans could gain about 60 seats. (Halperin was careful to say that his statement was not a “prediction.”) At the low end of the spectrum, one group of political scientists has predicted a Republican pickup of as few as 22 House seats. The two most influential Washington prognosticators, the Cook Political Report and the Rothenberg Political Report, put the Republican surge somewhere between 40 and 50 seats, while University of Virginia professor and longtime handicapper Larry Sabato bets the GOP will pick up 47 seats in the House, and 8 or 9 in the Senate.
Not everyone can be right. In fact, there’s a decent chance that some people will end up eating their hats. If past elections are any indicator, there are plenty of ways political predictions can go awry.
The earliest predictive disasters occurred because of bad polling. In 1936, the Literary Digest, which had correctly predicted the outcome of the five previous elections, gave its nod to Republican presidential candidate Alfred Landon. According to its poll of two million voters, Landon would beat Franklin D. Roosevelt with 57 percent of the vote. He did not. The problem was the survey’s sample, which consisted of relatively well-to-do Digest readers—i.e., Republican voters.
Hubris led to another historically disastrous prediction. George Gallup and other pollsters were so sure that Thomas Dewey would defeat Harry Truman in 1948 that they stopped polling weeks before the election. The consequences were infamous. Another flaw is giving anecdotal evidence more weight than it deserves. “The press traveling with McGovern in 1972 was absolutely convinced that a massive upset was gonna happen because he was drawing huge crowds,” says Sabato. McGovern ended up losing every state but Massachusetts. David Broder’s declaration in 1983 that “[w]hat we are witnessing this January is not the midpoint of the Reagan presidency, but its phase-out”—an ill-fated prophecy—owed partly to the Republicans’ recent defeat in the midterms, partly to the president’s sagging poll numbers, and partly to Broder’s vague perception that “power is moving away from Reagan in the ongoing work of government.”
Even the best prognosticators run into problems when calling local races, due to the dearth of surveys. In 2006, Carol Shea-Porter wasn’t seen as competitive in her campaign to win a seat in the U.S. House from New Hampshire. Pollsters and handicappers forgot about her as a result—until she won in November. Same with Dave Loebsack’s race against 15-term House incumbent Jim Leach in Iowa in 2006. “We weren’t carrying them on our competitive races list,” says Nathan Gonzales of the Rothenberg Political Report. Then there was the collective misfire in the days before Hillary Clinton won the New Hampshire primary in 2008. “I think we all got that one wrong,” says Jennifer Duffy of the Cook Political Report. “I remember doing an interview I hope to God didn’t air. I predicted Obama’s victory because that’s what the data I had said.” The explanation: Turnout among women was a lot higher than expected.
Prognosticators control for error by looking at more than just public polls. They also analyze turnout models, conduct interviews with candidates, and communicate with campaigns about their internal polls. There’s a chance subjective information could lead predictors astray. But it’s a chance they’re willing to take. “I’d rather have more info than less information,” says Gonzales. Sometimes, forecasters just go with their gut. In 2006, Rothenberg moved the Virginia Senate race from “toss-up” to “lean Democratic.” “I had no numbers to justify that, I just kind of took a flyer,” he says. He knew it was a big year for Democrats and “all the close ones tend to fall one way.” In 2008, Rothenberg again predicted toss-ups would break disproportionately for Democrats. But this time they broke evenly.
The other trick for avoiding error is, when in doubt, to declare a race a toss-up. “We can take one of two paths,” says Gonzales. “Any time we hear about an incumbent that might be vulnerable, we can put them on our list [of competitive races] to avoid that surprise. But we don’t want to put everything on just to cover our rear end.” As a compromise, the Rothenberg report created a new category halfway between “toss-up” and “lean” called the “toss-up/tilt.” “We’d always like to put a pinky on the scale,” says Stuart Rothenberg. From Sabato’s perspective, that’s the easy way out. He calls every race. “That’s the fun of it,” he says. “Why not take a guess? Who cares if you’re wrong? It doesn’t make any difference.”
The beauty of punditry is that there are no consequences. “For our subscribers and our readers that understand what we do, there isn’t a long-term backlash,” says Rothenberg. Some people will accuse them of carrying water for one party or another. But that’s probably because the party is doing well. “Four years ago, we were attacked as liberals because everything we said looked like good news for Democrats,” says Gonzales. “Now we’re being attacked as conservatives because it’s good news for Republicans.” In both cases—always with the disclaimer “if current trends continue”—they were right.