Why Americans Will Never Turn Against Polling

Failures inspire distrust of pollsters and calls for more shoe-leather reporting. But by the next election, we always come running back.

President Harry S. Truman laughing as he holds an early edition of the Chicago Tribune for Nov. 4, 1948, incorrectly declaring Thomas Dewey the winner. It wasn’t the last time the polls would get things wrong. UPI

Across the land, people who care about election results have once again commenced moaning about the inaccuracy of the preelection polling that led Democrats to believe Joe Biden would have a much easier path to the presidency—and maybe even the benefit of a Democrat-controlled Senate. “The Polling Crisis Is a Catastrophe for American Democracy,” ran a headline in the Atlantic; “The Polling Industry Can’t Sweep Its Failure Under the Rug,” warned the Washington Post; on this website, we went with “The Problem Isn’t That The Polls Were Wrong. It’s That They Were Useless.”

Communications scholar W. Joseph Campbell’s Lost in a Gallup: Polling Failure in U.S. Presidential Elections, which came out earlier this year, shows how this dynamic of poll failure, poll-blaming, and soul-searching is a familiar one in American life. For years, as Campbell documents, pollsters have gotten things wrong, journalists have called for less reliance on polling in political reporting, and people shocked by election results have sworn that next time they won’t have any faith in polls—only to do it all over again, two and four years later. Will this time be any different? I asked Campbell for his thoughts. Our conversation has been edited and condensed for clarity.

Rebecca Onion: I just saw someone tweet something like, Fire pollsters, hire local news reporters! And since I had just been looking through your book, I thought, “Oh boy, it’s happening all over again!”

W. Joseph Campbell: Ha! Yes, but it always happens with a twist. It’s never the same. Somebody said to me recently that polling failures are like Tolstoy’s unhappy families—all unhappy in their own way. And with polling failures, polling surprises, and presidential elections, no two are exactly the same. There’s never been a replay, really, of the 1948 “Dewey Defeats Truman” election, which was an epic polling failure.

You think that one was the worst? What makes you say so?

I think it was the worst because of the shock of that election outcome. The pollsters were saying it was a done deal; the press was buying into that narrative, and the pundits were telling everyone it was a sure thing that Thomas Dewey would be elected president.

But of course, that election was 72 years ago, so it’s not surprising that it’s faded away a bit in popular consciousness, though everybody remembers the photograph of Truman holding up the copy of the Chicago Tribune’s famous front page. That photo really tells you a lot about the fallibility of polls, how politicians can run against them, and how journalists can err when they buy into a polling narrative.

I’d like to talk about trust and distrust of polls over the years. Your book has a lot of examples that show that this kind of discourse around polling that we are reentering after the 2020 results has deep roots. How do you see that dynamic evolving over time?

Public suspicions about polling run deep, but at the same time memories are short. At the dawn of modern opinion research, in 1936, pollsters like George Gallup and Elmo Roper began polling using quasi-scientific techniques. They were improving on a technique that the Literary Digest magazine had used successfully since 1924, which was to send out mailed postcards to millions of voters. Literary Digest called elections correctly in 1924, 1928, and 1932—basically by luck. But in 1936 they did the presidential election poll again, sent out 10 million postcard ballots, received 2.3 million in return, tallied them all up, and said it looked like Alf Landon was going to win the presidential election and unseat the incumbent Franklin Roosevelt by a comfortable margin. That poll was 19.9 percentage points off.

I mention the story to make the point that the very emergence of public opinion research came after a polling failure. The roots of popular suspicions about the accuracy and reliability of polls go back a long way.

I think there’s a tendency to want to believe polls because they have a certain degree of precision attached to them. Those cold, hard numbers look like they must be accurate. I think the default is to treat polls as if they are accurate, and this extends not only to the public at large, but to journalists. The precision in the numbers is attractive and very appealing, because journalists deal with ambiguity and imprecision all the time. Even if polling numbers come with some degree of caveat, there’s that appeal there.

Another little story I’ve seen emerging around the polling in 2020 is the idea that polls used to be more accurate—that there was some kind of golden age of polling—and that after 2016 and 2020, polling is newly broken. But it strikes me, looking at your book, that this might be sort of a false memory.

Since the dawn of modern opinion research in the mid-’30s, almost every presidential election has had some kind of polling controversy, big or small, attached to it. It’s a very rare election that doesn’t.

In 1948, of course, we had the “Dewey Defeats Truman” polling failure. That was followed four years later by a landslide election that the pollsters completely missed, in 1952. And they had been very, very wary about getting it wrong, because of 1948. Gallup, Roper, and Archibald Crossley, the three principal national pollsters of the time, had been so cautious about the race between Dwight Eisenhower and Adlai Stevenson. They thought it was a very close race that might go either way; they had Eisenhower slightly ahead, but with the caveat that Stevenson seemed to be coming on strong near the end, and he might pull ahead, if enough undecided voters broke for him. It didn’t happen that way! It was not a close election at all. It was a landslide [for Eisenhower].

So, back-to-back elections, ’48 and ’52, and two different types of polling failure. The pollsters were beside themselves and the press was relentless in criticizing them for these back-to-back mistakes. These weren’t the same kinds of errors—one was an error of arrogance and the other an error of timidity, in a sense, but it was a shock.

Then, of course, the 1960 campaign, the Kennedy-Nixon race, was a very close election. George Gallup called the election almost spot-on—within a fraction of a percentage point. Elmo Roper also came very close, but his final poll pointed to the wrong winner. He thought Nixon was going to pull ahead. That was a minor one—not a prominent polling failure. And not an outcome that’s widely remembered, or that we often recall. But it’s there, as part of the polling history.

1980 was another landslide that pollsters missed. This was akin to 1952 but not exactly. It was when Reagan defeated Carter, ousting him as a one-term president. And again, the pollsters were thinking, This is a very close election, too close to call. And in the end Reagan won in a landslide. Almost 10 percentage points.

And then there’ve been other cases where exit polls have gone awry, or have led people to think things were going to happen that didn’t. 2004 was a well-known case of exit polling failure—exit polls that year were pointing to John Kerry as the likely winner of the presidency, and news organizations were making plans for their coverage for the next day, around these exit polls. One of Kerry’s campaign aides, a guy named Bob Shrum, even turned to Kerry about 7 p.m. on election night and looked him in the face and said Mr. President. George Bush, in his memoir, talks about brooding in the White House, wondering how the election went so badly, believing these exit polls.

After what seems like another polling failure in 2020, I’m seeing all kinds of self-examination among people wondering, Can we depend on polls for a picture of the electorate? Are polls over? It’s like a supercharged emotional reaction to polling failure, which of course Hillary Clinton supporters also experienced after 2016. I’m wondering how people in the past might have reacted to these polling failures. Dewey’s supporters in 1948, for example.

Of course, it was a different situation—Dewey was a two-time losing presidential candidate, in 1944 and in 1948. When he ran in 1948 and lost, people were like, Oh my God! He lost again? And so after that election, he didn’t contest the results—it wasn’t the kind of election you would have contested; not really close—and he announced right afterwards, after conceding, that he wasn’t going to run again. And so there was this sense that Dewey is out of the picture.

So that’s a difference and that’s an important difference. I think there was some Dewey weariness that set in, and I also think Republicans may have had themselves to blame in part for the outcome of that election; the turnout that year was extraordinarily low. That may have been due in part to Republican voters thinking it was in the bag for Dewey.

Was that because of the positive polling?

Nobody knows for sure. But yes, if you look at the turnout numbers in that election and then previous and subsequent elections … it’s a low point, which says something. Just hard to know for sure.

He also decided to run a glide-path campaign—not to stir up any controversy at all, not disrupt, run above the fray, do nothing that might cause him problems and damage his presumed lead.

So did people at the time say, Man, you really shouldn’t have trusted the polls?

Yes, that was identified as one of the factors as to why he ran this run-out-the-clock kind of campaign. It was seen almost immediately as a mistake. His biographer tells a story of him looking at some footage of a Truman rally during the campaign and thinking there might be an enthusiasm gap, but deciding to trust the polls, and going on as he had been doing—that was a call that didn’t redound to his benefit!

I noticed that in some of the publicity for your book, you note that despite the fact that it’s a history of polling failure (and of people who’ve criticized polling), your book is not an anti-polling argument. How’d you put it? “This is not a poll-bashing book.”

Right! George Gallup, one of the pioneers of opinion research, probably the best-known single pollster of all time, said something like, This is the best technique that has ever been devised in a democracy for evaluating public opinion. Of course, he was not a disinterested critic of polling by any means, and we might take his comment with a grain of salt. But at the same time, it’s kind of right, because what other systematic method of evaluating public opinion has really been developed?

Every time there’s a major polling failure it seems like the news media says, We have got to go out and talk to the people. We have to do more shoe-leather journalism—and that’s fine. But it also becomes very impressionistic. Are you talking to the right people? How do you know that the insights you get from the person on the street are better than a systematic evaluation of public opinion? So it falters. Shoe-leather journalism sounds appealing, and it’s a good technique to get reporters out from behind screens, talking to people, and all that. It’s an effective supplemental tool, but I don’t think it’s the answer. I’ve studied too many of these cases of polling failure and seen how every time shoe-leather journalism is identified as the remedy. It’s not a panacea; it really isn’t.

What do you think will happen to polling after 2020? I feel like everyone today is saying “We can’t trust the polls!”—but I can’t imagine we won’t be writing about them again in 2022, or 2024.

Right. A Republican pollster named Frank Luntz said a week or so ago, on Twitter and on Fox News, If we get it wrong again in 2020, my industry is done for. [Luntz said it again on Tuesday night.] But it’s pretty clear that’s not going to happen. A polling surprise like we had in 2020 is not going to uproot or destroy the industry of opinion research. It’s just too profoundly attached to American life, with deep roots. It’s a multibillion-dollar industry that includes market research and consumer preference research and public policy research; there’s a lot of polling that goes on that has little if anything to do with election polls. I can’t see how one polling surprise is going to destroy it. If that were the case, it would have happened in 1952, after back-to-back polling errors, missing Dewey/Truman and then Eisenhower/Stevenson. And the industry lived on.