As the 2018 midterms draw nearer and nearer, I am becoming glued to political analyses and polling. With growing distance between political messaging and facts and a news cycle that focuses as much on outrage and emotion as analysis, data journalism hits a sweet spot for me: It feeds my deep-seated neurotic indignation and confusion about Trump-era politics and my hunger for clear, dispassionate reality. And as much as the return of the New York Times needle might be borderline traumatic, I can’t help but look.
However, just like we should never go food shopping when we’re hungry, so too should we be wary of consuming data journalism without the proper bearings. I catch myself treating these analyses not merely as venues to broaden my understanding of a particular issue but as omniscient oracles foretelling the future in empirical tea leaves. My rational self—in my day job, I teach how our musical expectations and reactions can be framed in terms of probabilities—scolds me for forgetting that these models aren’t perfect and reminds me that there are actually some basic reasons to be skeptical about election modeling.
The first bit of skepticism I try to keep in the back of my head is that there’s no real way to test these models. The week of the 2016 election, FiveThirtyEight had odds on Trump’s win hovering around 1 in 3. This meant that if the election were run 100 times, Trump would win about 33 of them. These were higher odds than most commentators gave Trump—famously, the New York Times’ model had Trump’s chances dip to only 7 percent at one point. And so, when Trump won, there was a certain amount of “Nate-Silver-did-it-again” celebration. But there’s no way to test whether Silver’s model was, in fact, better than that of the Times.
I played a lot of poker in college, so I like to think about these sorts of problems in terms of cards. Let’s say you’re playing poker and the player to your left whispers to you that the deck has been stacked with 18 aces. You do some quick math in your head—there are 52 cards in a deck, and 18 divided by 52 is about one-third. But as you calculate these odds, the player to your right leans over and says, “The deck’s not stacked! There are only four aces in that deck.” If the first player is right, then there’s a 35 percent chance you’ll be dealt an ace. If the second player is right, there’s about an 8 percent chance you’ll be dealt an ace.
And then, you get dealt an ace. Who was right? There’s no way to know! When you look down at that ace, you don’t know whether you’re looking at one of four aces in the deck or one of 18 aces. In fact, you wouldn’t be able to test who is right until you’d been dealt a bunch of hands and could see whether aces make up 35 percent or 8 percent of the cards you receive. Trump’s victory is essentially that single ace: one outcome can’t tell you whose model of the deck was right.
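If you want to see that intuition play out, here’s a toy sketch in Python, using the numbers from the example above (18 aces versus 4 aces in a 52-card deck); it’s my own illustration, not anything from a real forecasting model:

```python
import random

def deal_aces(num_aces, deck_size=52, hands=1):
    """Deal `hands` cards (reshuffling each time) and count how many are aces."""
    return sum(random.random() < num_aces / deck_size for _ in range(hands))

random.seed(0)

# One hand tells you almost nothing: either deck can easily produce an ace.
print("stacked deck (18 aces), 1 hand:", deal_aces(18, hands=1))
print("normal deck  (4 aces),  1 hand:", deal_aces(4, hands=1))

# Only over many hands do the frequencies separate toward 35 percent vs. 8 percent.
for num_aces in (18, 4):
    hits = deal_aces(num_aces, hands=10_000)
    print(f"{num_aces} aces -> ace rate over 10,000 hands: {hits / 10_000:.1%}")
```

The exact numbers don’t matter; the point is that a single deal, like a single election, can’t arbitrate between the two claims.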
To get a bit more nerdy, let’s talk about the way uncertainty is represented in these models. The novelty of a FiveThirtyEight-style system is that it converts polling, historical data, prior assumptions, and voting trends into a single predictive model that simulates how people will act on Election Day. These models then run that simulation many times and count how many runs produce each outcome. We can imagine a single run of the model representing one possible outcome: It essentially pops out a card that says, for instance, “Republicans won this Senate seat” or “Democrats won the House.” After you run the model over and over, you’ll have more cards associated with more likely outcomes and fewer associated with less likely outcomes.
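A stripped-down version of that run-it-many-times logic might look like the sketch below. The per-seat probabilities are numbers I invented purely for illustration, and real models layer on correlated errors, turnout adjustments, and much more:

```python
import random
from collections import Counter

# Made-up probabilities that the Democrat wins each of 10 hypothetical seats.
seat_probs = [0.55, 0.48, 0.61, 0.52, 0.45, 0.70, 0.38, 0.50, 0.65, 0.42]

def simulate_once(probs):
    """One 'card': a single simulated Election Day."""
    dem_seats = sum(random.random() < p for p in probs)
    return "Dems win the chamber" if dem_seats > len(probs) / 2 else "GOP wins the chamber"

random.seed(1)
runs = 20_000
deck = Counter(simulate_once(seat_probs) for _ in range(runs))

# More likely outcomes end up with more cards in the deck.
for outcome, count in deck.most_common():
    print(f"{outcome}: {count / runs:.1%} of simulations")
```

Each simulated Election Day is one card; tally the deck and you get the headline probabilities.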
In poker, when you’re uncertain about which card will be dealt next, your uncertainty comes from the fact that there are a bunch of possible outcomes—a bunch of possible cards—randomly distributed in the deck. If a quarter of the cards in your deck are spades, there’s a 1 in 4 chance that some randomly plucked card is a spade. These odds reflect your uncertainty about the suit of a random card.
We can also use this kind of uncertainty to talk about how likely a quarterback is to complete a pass. Let’s say a quarterback has recently completed half of his passes. The odds are therefore 1 in 2 that his next pass will be completed. But we can also look at the variation around those odds. Let’s imagine this quarterback is talented but inconsistent: In half of his games, he completes 30 percent of his throws, but in the other half, he completes 70 percent. Given a particular throw, then, you’re not sure if that quarterback is having a good day or a bad day. This kind of uncertainty is fundamentally different from raw odds, in that it depends on more than just chance, and it is the sort of uncertainty statisticians try to capture when they talk about margins of error and confidence intervals.
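To make that distinction concrete, here’s a small Python sketch of the streaky quarterback. The 30 percent and 70 percent days come straight from the example above; the 30 throws per game is an assumption of mine, just to have something to simulate:

```python
import random
import statistics

GOOD_DAY, BAD_DAY = 0.70, 0.30  # completion rates from the example above

def game_completion_rate(throws=30):
    """First pick which quarterback showed up (a good day or a bad day),
    then let plain chance decide each individual throw."""
    rate = random.choice([GOOD_DAY, BAD_DAY])
    completions = sum(random.random() < rate for _ in range(throws))
    return completions / throws

random.seed(2)
games = [game_completion_rate() for _ in range(10_000)]

# The average lands near 50 percent, but the game-to-game spread is much wider
# than a steady 50 percent passer would show -- that spread is the second kind
# of uncertainty, the kind margins of error try to capture.
print(f"mean completion rate: {statistics.mean(games):.2f}")
print(f"spread (std dev) across games: {statistics.stdev(games):.2f}")
```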
Voting is more like a quarterback’s throws than a deck of cards. A voting population is likely to vote a certain way—these are basic odds. But all sorts of variation exists in voting and polling, from voters changing their minds, to weather patterns that depress turnout, to voters lying to pollsters, to how a model samples likely voters.
Simulation models package all these different types of uncertainty together by converting each outcome into a single card in a deck. When we try to interpret the results of this kind of modeling, we need to remember that the underlying cards are as variable and messy as the most finicky quarterback. These predictive models, then, shift the messiest parts of a messy process to the background.
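One way to see that flattening is to notice that two very different processes can produce the same headline number. In the toy sketch below, the 65/35 split standing in for a shared polling miss is my own invented figure:

```python
import random

def pure_chance():
    """All the uncertainty is coin-flip randomness among voters."""
    return random.random() < 0.50

def shared_polling_miss():
    """Most of the uncertainty is systematic: the candidate is really at
    65 percent or 35 percent, and we just don't know which."""
    true_odds = random.choice([0.65, 0.35])
    return random.random() < true_odds

random.seed(3)
for name, model in [("pure chance", pure_chance), ("shared polling miss", shared_polling_miss)]:
    wins = sum(model() for _ in range(100_000))
    print(f"{name}: headline win probability ~ {wins / 100_000:.1%}")

# Both print roughly 50 percent. Once the outcomes are flattened into cards,
# the two decks look identical -- the messiness has been shifted to the background.
```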
Another thing I find myself worrying about as a reader of spiffy and high-tech digital journalism is being unduly wooed by the complexity and elegance of the data’s presentation to the exclusion of the data’s accuracy. Researchers from Colorado State did a study in which participants read multiple versions of a fake psychology article that made unfounded claims. Some versions of the article had standard graphs, while others had impressive 3D pictures of brain scans. Participants were more likely to believe the article’s outrageous claims when they were accompanied by fancy brain renderings.
Data journalists are masters at elegantly representing data. Interactive and intuitive charts make complicated analyses accessible and provide a gentle on-ramp for a reader wishing to engage with difficult statistics. That’s great in many ways, but it’s important to check our tendency to equate elegance with credibility. We need to remember that we’re inclined to believe something simply because it looks cool. As we read these kinds of analyses, we should appreciate their representations as accessible and elegant, but we shouldn’t check our skepticism at the epistemological door.
Very similar phenomena are afoot in podcasting. Something particularly conspicuous is how often we hear the word “right” at the end of our favorite data journalists’ spoken sentences. This little verbal tic softens whatever idea you just asserted—it flatters listeners by letting them know that you think they understood what you just said, and it nudges them toward your side of the conversation by making it seem like they already agree with you. I use it, and you probably do too. It’s a pretty ubiquitous part of contemporary conversation, right?
Podcasters who talk about data journalism and predictive election analytics use “right” all the time. Almost like clockwork, after commentators present an opinion or explanation with even a little bit of complication or controversy, they punctuate their lines with this verbal crutch. But it’s important to be aware of the difference between facts and logic on the one hand and rhetoric and stylistic ploys on the other, no matter how subtle or subconscious.
Finally, the most fundamental sin I commit is to consume data not for my own education or enrichment but to satisfy my own worries and neuroses. Several months ago, Colin McEnroe (host of his eponymous show on Connecticut Public Radio) did a call-in hour asking his listeners how they deal with the constant disconcerting din of the Trump presidency. McEnroe cited his own obsession with politics-based analyses as a neurotic manifestation of his own political stress. Many of us are in the same boat: We obsessively read polls, models, and other data-based commentary because we just don’t understand what is going on in Trump-era politics. We go to websites and podcasts like sailors to ports in an irrational storm. These rational ports make us feel like the world is reasonable and explicable. But storm-tossed sailors make for a port not because it is the right port but because it is there.
Acting with these motivations—acting in your feelings—isn’t the best way to solve any problem rationally. Emotion, insecurity, and desperation are surefire ways to lose a hand of cards, and similarly, we need to check our motivations as data consumers. Are we reading an article because we want to be educated, or because we desperately want to feel like there is order in our political world? We should become nervous when we are drawn to some set of facts not because we are interested in their truth but because we want to feel better about ourselves. When we are motivated by selfish concerns, our search for facts becomes less about their veracity than about how those facts make us feel. And that, indeed, is a dangerous neighborhood in which to encounter facts.
And so, as I begin to neurotically and compulsively check the forecasts of my favorite data pundits, my inner rational self will keep several things in mind—that election models aren’t perfect, that it’s important to be skeptical of elegant representations of data, and that I should be aware of my own motivations when I read political data analyses. Thomas Jefferson argued that American democracy depends on a well-informed electorate; a famous gambler wrote that poker can teach us important lessons about democracy. Combining these two impulses can be a productive way to navigate the complexities of data journalism in 2018.