If there’s one skill almost everyone agrees schools should be teaching, it’s critical thinking, although what, exactly, critical thinking consists of is conveniently left undefined. For the longest time, I preferred to believe that it meant learning to be skeptical about words, specifically the arguments, exhortations, and beguilements foisted upon the public by politicians, advertisers, corporations, and the dodgier elements of the press. As a former English major, I figured I had this one nailed; if there was anything I mastered in college, it was the ability to find the hidden and sometimes manipulative meanings in language.
What I, in my complacency, chose to ignore is just how much of the persuasion now aimed at the average citizen comes in the form of numbers, specifically numbers that tell us about the future, about how likely something is to happen (or not happen) based on how much it happened (or didn’t) in the past. These numbers sing to us the siren song of cause and effect, humanity’s favorite tune. Why do we like it so much? Because knowing what causes events and conditions is the first step toward controlling them, and we human beings are all about controlling our environments. That’s how we ended up ruling this planet, and it’s how some of us hope to save it.
Whether vaccines cause autism, whether the complexity of life bespeaks an intelligent designer, whether we should invest in a stock or stop drinking red wine or blame our genes for our depression or use earbuds instead of holding our cellphones up to our heads—all these are questions whose answers rely on understanding statistics and probability. Sometimes it’s easy to deploy numerical common sense. When we’re told a “study shows” something—say, that taking vitamin D provides no health benefits to obese teenagers—a glance at the sample size, in this case, a mere 19 individuals, should give us pause. (Although not as much as it would in a study purporting to show that vitamin D does help obese teens; the argument for taking any supplement bears a greater burden of proof than the argument against it.)
Other situations call for a bit more savvy. How representative is that sample, and how well-designed is the study? When choosing which drug to take for a newly diagnosed chronic condition, do we pick one still under patent because its maker claims it reduces a side effect by 50 percent over the generic? What if it turns out the side effect is incredibly rare, affecting only 2 people in 100,000, and that the much more expensive new drug merely reduced that number to 1?
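The arithmetic behind that marketing claim is worth making explicit. A minimal sketch, using the essay’s hypothetical figures (2 in 100,000 versus 1 in 100,000), shows how a true “50 percent reduction” in relative risk can correspond to a vanishingly small absolute benefit:

```python
# Relative vs. absolute risk reduction for the hypothetical drug
# comparison above (the rates are the essay's illustration, not real data).
generic_rate = 2 / 100_000   # side-effect rate on the generic
branded_rate = 1 / 100_000   # side-effect rate on the new, patented drug

relative_reduction = (generic_rate - branded_rate) / generic_rate
absolute_reduction = generic_rate - branded_rate

print(f"Relative risk reduction: {relative_reduction:.0%}")    # 50%
print(f"Absolute risk reduction: {absolute_reduction:.6f}")    # 0.000010

# Number needed to treat: how many patients must switch to the
# expensive drug to spare a single person the side effect.
print(f"Number needed to treat: {1 / absolute_reduction:.0f}")  # 100000
```

The “50 percent” is true but nearly meaningless here: a hundred thousand people would have to pay for the branded drug to prevent one case of the side effect.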
When numbers get really big, a strange and counterintuitive mathematical leap occurs. Most of us understand that if you flip an ordinary coin hundreds of times, roughly half of the results will be heads and the other half tails. The more flips you accrue, the more evenly the results are distributed between the two. Yet it’s entirely possible for your first 10, or even your first 20, flips to come up all heads. We tend to think that means the 11th or 21st flip is more likely to be tails, but there we’re wrong. A flipped coin always has a 50-50 chance of turning up heads no matter how many times in a row it has already turned up heads. The odds on each individual flip aren’t affected by any other flip in the series, yet somehow the whole series will even out if we extend it long enough. In the case of singular, nearly impossible events, our intuitions are equally wrong. The universe is a very, very large sample, and as a result we can count on preposterous things happening on a regular basis. “What are the odds?” we marvel to our friends, little realizing that the odds, though terrible, don’t rule anything out.
As David J. Hand argues in his excellent primer, The Improbability Principle: Why Coincidences, Miracles and Rare Events Occur Every Day, bizarre incidents like a national lottery selecting exactly the same six numbers twice within the course of four days—as once happened in Bulgaria—are inevitable. It’s just that you can’t predict which virtually impossible event will actually happen.
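The lottery coincidence yields to the same birthday-problem arithmetic. A sketch, assuming a 6-of-49 lottery format purely for illustration (the actual Bulgarian format is not specified here): any two particular draws matching is a one-in-millions event, but across thousands of draws the chance that some pair matches somewhere grows steadily.

```python
from math import comb

# Number of possible tickets in an assumed 6-of-49 lottery.
combos = comb(49, 6)  # 13,983,816

def prob_some_repeat(n_draws, n_outcomes):
    """Chance that at least two of n_draws land on the same combination,
    computed birthday-problem style from the chance they are all distinct."""
    p_all_distinct = 1.0
    for i in range(n_draws):
        p_all_distinct *= (n_outcomes - i) / n_outcomes
    return 1 - p_all_distinct

# Two specific draws matching: roughly one in fourteen million.
print(prob_some_repeat(2, combos))

# But over 5,000 draws (a few decades of lotteries worldwide),
# a repeat somewhere becomes more likely than not.
print(prob_some_repeat(5000, combos))
```

This is Hand’s point in miniature: the repeat was bound to happen to some lottery eventually; what no one could say in advance was which one.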
Probability gives us headaches because our minds just aren’t suited to grappling with chance at this level, with unimaginably big numbers or stretches of time. We automatically look for patterns and start speculating about the causes behind them. Ask us what a random distribution looks like (say, the number of cancer cases in the populated areas of a given county), and we’ll picture an even spattering of hits over the designated map, even though chance pretty much guarantees that some cases will end up close to each other. We immediately assume such “clusters” indicate that something on the ground must be behind the illnesses, but that’s not necessarily true. Give it enough years (that is, allow enough cancer cases to accumulate), however, and the pattern will either begin to even out across the map or rise to the level of evidence for a lawsuit against the local paint factory.
This is why everyone, even mathphobic humanities majors, needs to take a class in statistics. I wish I had, although thanks to excellent writers like Hand and Jordan Ellenberg, I’ve been doing my best to catch up. We all need to learn why the term statistically significant may very well not mean actually significant—at least not in any way that matters to the making of public policy or to deciding whether to undergo a medical treatment. We all need to understand regression to the mean and how it’s been used by quacks to peddle bogus cures, as well as how selection bias has distorted the results of everything from extrasensory perception experiments to which research gets published in scientific journals and ends up reported on in your daily paper. Statistics and the science of probability represent the ultimate in critical thinking, because they teach us how to criticize the ways we habitually think.
What classes did we miss? Send your recommendations of up to 200 words to email@example.com, and we’ll publish the best.