How the Polls Got the Election Wrong, According to One Pollster Who Got It Right

Voting at a polling place in Dennis Wilkening’s shed in Richland, Iowa, on Nov. 3. Mario Tama/Getty Images

Subscribe to What Next on Apple Podcasts for the full episode.

In the week before Election Day, many dismissed Ann Selzer’s poll showing President Donald Trump up 7 points in Iowa, when other polls were showing a closer race. But Selzer, a veteran Iowa pollster, was right about Trump—and about another trend that played out across the country: the durability of Republican candidates for Congress. She correctly predicted that Sen. Joni Ernst would keep her seat and that a Democratic House incumbent, Abby Finkenauer, would lose to her Republican challenger. On Thursday’s episode of What Next, I talked to Selzer about how polls work, what they’re good for, and whether we should pay attention to them at all. This transcript of our conversation has been condensed and edited for clarity.

Mary Harris: In your polls, you use random digit dialing for landline and cell, you ask people how they’re going to vote, and you adjust the data to make it reflect the population of the place you’re polling—and that’s it. You say that you try not to bake in any assumptions, especially about past behavior. Do you think that’s where some pollsters get tripped up?

Ann Selzer: So one of the thoughts I’ve had about Florida in particular is that there might have been pollsters who took a look at their numbers and thought, This isn’t the outcome I was expecting, as they delved into some of the crosstabs, perhaps with the Latino community, perhaps with the African American community, and they may have done some additional adjusting. We would never do that. We would not presume to know better what’s going to happen in the future than what our data reveal to us. I’m not there trying to figure out, well, do I expect the turnout among non-college-educated white men to be up or down compared to last year? That’s a judgment call. I worked with a colleague who kept saying to me, “You’ve got to figure out the size and shape of the electorate.” And I go, “I don’t know what you’re talking about.” You know, there’s just a complete disconnect between him thinking he can outthink the future and my position of saying, “I don’t think I can do that. I think I’m in the best position to see the future when my data shows it to me.”

But I’ll tell you, my approach is probably the most simplistic of any of these polling firms’. And so there’s perhaps a commercial advantage to having a complicated scheme that makes people think it’s all science-y and therefore it’s better and it is secret and it’s protected by patents and trademarks and all of that. Mine looks like a second grader dreamed it up, by comparison. [Laughs.] And all I can say is that has worked for me.
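To make the weighting step concrete: a minimal sketch of simple cell weighting, the kind of population adjustment described above. The age groups, shares, and sample are hypothetical, invented purely for illustration; this is not Selzer’s actual procedure or data.

```python
# Simple cell weighting: each respondent's group gets
# weight = (population share of group) / (sample share of group),
# so underrepresented groups count for more and overrepresented for less.
from collections import Counter

respondents = ["18-34", "18-34", "35-64", "35-64", "35-64", "65+"]  # ages of 6 sampled voters
population_share = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}      # census-style targets

sample_counts = Counter(respondents)
n = len(respondents)
weights = {g: population_share[g] / (sample_counts[g] / n) for g in population_share}

# 65+ voters are underrepresented in this toy sample (1 of 6 vs. 25% of the
# population), so they receive a weight above 1; the other groups fall below 1.
print(weights)
```

The point of keeping it this simple is the one Selzer makes: no assumptions about future turnout enter the calculation, only the observed sample and fixed population targets.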

There are things that polls can’t do. We have to stop polling at some point, because the point of a poll is for people to use the data. And for me, my clients are typically going to write the stories about it. So I think people get a misguided idea of what good a poll is if it’s not perfect, if it’s not proven to be perfect. A poll is a device to help reporters, in my case, tell stories of what’s happening. And if you didn’t have polls, how would an assignment editor know how to deploy resources? If you don’t know who’s leading, if you don’t know whether a third-party candidate is getting traction and going to take away some of the vote, how do you know how to report on the race? So I don’t aim for perfection. It’s nice when I get close to it. But there are other things that a poll can help you do.

When you look at your polls this year, it looks like in Iowa there was this kind of last-minute Republican surge, and I’m wondering if you can talk about how you found that in the data and what you thought of it when you saw it.

It’s a data-driven speculation, let’s put it that way. Our September poll had Theresa Greenfield leading Joni Ernst by a few points. … That was the Senate race. And then for the presidency, a dead heat. This is before early voting started. But as states started their early voting, in those initial days you would see people standing in line for hours so that they could cast their vote early, and then that kind of dwindled down. So I wondered if the Democratic playbook was to really put the effort into getting that early vote out in strength, and whether the peak of the arc of their final push came a little early, and whether the Republicans, who knew that their supporters were more likely to vote on Election Day, had an arc that peaked a little later.

There was more energy, just more electricity happening on the Republican side, and more sedate activity on the Democratic side. And that got me thinking: Well, where was the peak of the Democratic arc? And because it was early, and because of the feeling that their support was banked in early voting, there just didn’t seem to be much of a final push—except on the Republican side. The president came here a couple of times. The vice president came here. So I think that’s just interesting, and it perhaps is unique to this year because of the pandemic and the decisions that people are making about being out in public at all.

I wonder how much you think the criticism of polling has been strengthened by the fact that we had this long, drawn-out process of getting to where we are now, with Biden as president-elect, and that it just took so much time. There was all this space for us to think about polling and pick it apart.

[Laughs] I think when there are misses in polling that are widespread, it’s just a matter of time before people start complaining about it. I don’t fault the media for picking at it. But I do want to say that the idea that the polling industry needs a reckoning—that’s a sentence that doesn’t make much sense to me. It’s not as if the polling community acts as one, or that we all do things the same way, or that we’re a commodity kind of product that you can switch in and out and get the same result. There’s a lot of diversity, a lot of different approaches. And it’s a commercial enterprise, so where there is demand, there will be supply.

I will tell you that if you’re thinking about an aggregate—that is, an average of polling, whether that’s a weighted average or a clean average—it gives the illusion that it’s going to be more accurate.

You’re talking about forecasting, like what FiveThirtyEight does, where they sort of combine everything and come out with something like a cumulative score.

Right. And what FiveThirtyEight does that others might not do is that they factor in the pollsters’ history of being accurate. So it’s not just a straight average, it’s a handicapped average. And again, I hate to bring this back to me, but if you averaged my poll in, the average for Iowa would look like it was going to be a close race, even with my one outlier poll, because that influence is going to be depressed. I suspect there would be some polls in Ohio that got it right and some polls in Florida that got it right, but when all of the polls are looked at together, that can mask something that’s actually happening out there.
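Selzer’s point about an outlier being averaged down can be sketched in a few lines. The margins and weights below are hypothetical, invented for illustration; they are not actual 2020 Iowa polls or FiveThirtyEight’s actual ratings.

```python
# Hypothetical Iowa polls: (candidate's margin in points, weight from pollster rating).
# Three polls show a near-tie; one outlier shows the candidate up 7.
polls = [(+1.0, 1.0), (0.0, 1.0), (-1.0, 1.0), (+7.0, 1.0)]

def weighted_average(polls):
    """Weighted mean of poll margins; weights might reflect a pollster's track record."""
    total_weight = sum(w for _, w in polls)
    return sum(margin * w for margin, w in polls) / total_weight

# With equal weights, the +7 outlier is pulled down to a modest lead.
print(round(weighted_average(polls), 2))  # 1.75

# Even if the aggregator rates the outlier's pollster highly (weight 2.0),
# the aggregate still reads as a fairly close race rather than +7.
polls[3] = (+7.0, 2.0)
print(round(weighted_average(polls), 2))  # 2.8
```

This is the masking effect she describes: however the weights are set, averaging blends a correct outlier into the consensus, so the aggregate can hide what one poll actually detected.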

I wonder if you think that’s a problem, where each polling group is doing their own thing, but because there’s not consistency, you might be comparing apples to oranges in terms of what they’re doing and what they’re showing.

Well, I don’t think it’s a problem. I think it’s the reality. People are talking about polling as though we’re a public utility and everyone should be doing everything the exact same way. And I don’t envision that that’s ever going to happen. It doesn’t happen in political consulting. It doesn’t happen in the way people choose to advertise. Everybody has their own way.

But you clearly have a method where you don’t do this adjusting for what you think is going to happen after the fact. When you do your polling, do you think it’s a mistake that more people don’t follow that straight-ahead method?

A lot of times they have come up with a data-driven approach that has worked for them in the past, and they can show you the regression equations which lead them to think this is what’s going to happen and how they should define a likely voter. So I’m sure that they would push back and say, “Hey, it was 2020. What made you think anything was going to work this year?” We publish our methodology. It’s available to anybody who thinks maybe we should try something different.

Some have pointed out that if you’re exposed to a forecasting prediction, it can change the way you vote, decrease turnout, and confuse people. Do you worry about that in the work that you do, that giving people some kind of prediction can become part of the story?

I’ve never seen data that would say what the impact of that might be. That is, if the person you’re inclined to support is behind, are you more inclined to show up, or are you more inclined to not? And if you’re supporting the leading candidate, again, what impact does that have or would be expected to have consistently from voter to voter to voter?

At a higher level, [to] the idea that people think polls are evil because they might influence the outcome of an election, I say, well, is political advertising in our campaigns evil? Are campaigns evil? You’ve got a lot of people spending a lot of money trying to influence the outcome of the election. So polls are a piece of it, but I don’t think they need to be taken out and shot.
