For reasons that are somewhat obscure to me, my son, for his 11th birthday, wanted to go to the races. So on a beautiful May Saturday evening we headed off to the Charles Town races in West Virginia. Upon arrival we commandeered a bench along the rail right at the finish line. Picking horses is not the sort of applied reasoning that you learn in fifth-grade science or math, so I proceeded to give my kid a short tutorial on how to assess the horses for each race based on the past performance charts that were printed in the program for that evening’s races. By evening’s end, I had also come to realize what we really mean when we talk about “science literacy.”
The past performance charts are a marvel of condensed information. For each race, we were presented with incredible detail about every contesting horse, from its pedigree to the jockey’s record to the performance specifics of recent races: the purse and track conditions, post position, position at different points in the race with split times, and position at the finish, of course, but also an assessment of the finish that is often a model of poetic density—“faltered under pressure,” or “fast on the outside but too late,” or “drove through the pack.” From all this information, one can make informed comparisons about all of the horses in each race—and informed bets on the outcome.
Using the race charts to place smart bets requires sophisticated knowledge of how the innumerable variables that influence a horse’s past performances translate into a plausible prediction of that horse’s present chances against a slate of other horses (each of which must be similarly assessed).
Now, the stereotypical view of your average racetrack bettor is not entirely unfair. Few bobos or yuppies or Ivy degrees here. Lots of cigarettes and hats with heavy equipment logos and VFW patches, lots of RVs and American cars in the parking lot. A man, perhaps in his 60s, sat down next to me with a groan, declared that his foot was killing him, and launched into a discussion of his medical woes—diabetes, hypertension, and mental health issues. “My psychiatrist almost killed me by giving me a drug that caused my blood pressure to spike. Over 700. My wife tried to call me to tell me to go to the hospital, but I was out with a friend bringing in firewood. If I had been on a tractor or watching TV I probably would have died. I fired all my doctors the next day.” This guy had a sophisticated sense of the limits of applying scientific knowledge to a complex system—the human body—when expertise is divided up into narrow specialties.
Later, a woman exhaled a lungful of smoke into my face. I must have involuntarily recoiled because she apologized and explained with some embarrassment that “all smokers aren’t rude.” She went on to say, “I’m just an addict. I’ve tried to stop, but that just makes me get even fatter.” She was poignantly well aware of the lethal dilemma that her culture, her habits, and her physiology had created.
Moreover, both people were able to decipher the racing charts using that combination of factual knowledge, tacit experience, and informed intuition that characterizes the expertise of a real-world practitioner. They probably didn’t take classes in statistics or risk analysis, and if they had, it wouldn’t have been of much help to them. In this context, my Ph.D. in geology is a silly irrelevance. I am barely literate in the horse-racing domain, able to more or less break even after an evening’s modest betting by being boringly conservative, but almost never able to come up with an analytical insight that allows me to turn favorable odds into a big win.
The late Herb Simon, an inestimably wise Nobel-winning decision scientist, noted decades ago that the skills necessary for a two-wage-earner family to manage the logistical complexities of child-rearing and home management pose challenges that would tax any organizational theorist. One could say the same for a single mother navigating the bureaucracies to make sure she gets food stamps and doesn’t get her electricity turned off as she tries to hold down a low-paying job.
I raise these points to challenge the idea of “science literacy.” We have this belief that unless a person knows that the Earth rotates around the sun and that birds evolved from dinosaurs, she or he won’t be able to exercise responsible citizenship or participate effectively in modern society. Scientists are fond of claiming that literacy in their particular area of expertise (such as climate change or genomics) is necessary so “the public can make informed judgments on public policy issues.”
Yet the idea that we can say anything useful at all about a person’s competence in the world based on their rudimentary familiarity with any particular information or type of knowledge is ridiculous. Not only is such information totally disembodied from experience and thus no more than an abstraction (and an arbitrary one at that), but it also fails to live up to what science ultimately promises: to enhance one’s ability to understand and act effectively in a world of one’s knowing. This lack of contextualized meaning contrasts with knowledge that really does underlie and inform action—knowledge of the racing charts, or of the potentially dangerous interactions of the particular drugs that doctors from two different specialties are prescribing, or of the dilemmas of addiction.
A more sophisticated version of science literacy that focuses not on arbitrary facts but on method or process doesn’t help much, either. The canonical methods of science as taught in the classroom are powerful because they remove the phenomenon being studied from the context of the real world and isolate it in the controlled setting of the laboratory experiment. This idealized process has little if any applicability to solving the problems that people face on a daily basis, where uncertainty and indeterminacy are the rule, and effective action is based on experience and learning and accrued judgment. Textbook versions of scientific methods cannot, for example, equip a nonexpert to make an informed judgment about the validity or plausibility of technical claims made by experts.
Why, then, do conventional notions of “science literacy” persist as a defining type of citizen virtue? The racetrack teaches us that the issue is not one of knowledge or competence or critical engagement with one’s world, but of acculturation and conformity. To be scientifically literate is to be conversant with an arbitrary set of cultural shibboleths (about great men like Newton and great equations like F = ma) that are necessary for legitimacy and inclusion in an increasingly stratified and competitive society; it is to be willing to accept the authority of science even though one lacks the specialized knowledge required to test this authority. It is not about “thinking scientifically,” which is probably better learned at the racetrack than in most classrooms.
This article arises from Future Tense, a joint partnership of Slate, the New America Foundation, and Arizona State University.