Remember March of 2020, before masks? Back then, as we became aware that the coronavirus was circulating around the country at an alarming clip, packed up our offices, and pulled our kids out of in-person school, the nation’s top experts urged us not to bother covering our noses and mouths.
Among the complex reasons for the hesitation was a simple one: distrust of the public. “I worry that if people put on masks, then they’ll think, OK, I’m protected, and they won’t wash their hands as vigorously or be careful not to touch their faces,” one expert told Slate’s What Next very early in the pandemic. The White House Coronavirus Task Force, the U.K. scientific council SAGE, and the World Health Organization cited similar concerns at the time, too. Masks would only provide a false sense of reassurance, reversing any public health gains they might offer. Of course, they were wrong—by summer 2020, we were wearing masks and also adhering to other safety measures.
Huge numbers of people put time, effort, and money into masking up—and in doing so, saved lives. But these efforts didn’t stop public health authorities from raising similar concerns about public behavior again and again. When vaccines first arrived on the scene in late 2020, public health officials and doctors urged us to get the shot as soon as we were eligible, and then, worrying about a “false sense of security,” preemptively warned us about returning to normal activities—to the point where “just because you’re vaccinated doesn’t mean you can … ” became a popular joke setup. Now, with the Biden administration pledging a billion-dollar investment into rapid at-home testing, some worry that the proliferation of the swabs, which can present false negatives or be misused, will cause an increase in cases—that people will feel too free to use them as an excuse to drop all precautions.
Throughout the pandemic, each time a public safety measure arrives on the scene, some experts fret that the masses will simply use the newfound sense of security as license to behave recklessly, canceling out or even reversing any benefits of the safety measure. The concept many medical experts can’t seem to loosen their grip on is known as “risk compensation.” It’s an idea that comes from the study of road safety and posits that people adjust their behavior in response to perceived risk: the safer you feel, the more risks you’ll take. Risk compensation makes intuitive sense and can be true to an extent. If you’re driving on a precarious cliff-side road without guardrails, you’d probably drive more cautiously. But some proponents of the idea make a stronger claim: that guardrails cause so much reckless driving that any potential safety benefits of guardrails are offset or even reversed. Under this reasoning, a road with guardrails would cause more accidents than a road without guardrails. Guardrails aren’t helpful; they’re counterproductive.
This paradoxical idea has been trotted out by health experts to caution against not just pandemic safety measures such as masks, but everything from child-safety caps on medication (which, the worry goes, could lead parents to leave pill bottles lying around carelessly) to diet soda (what if people chug the stuff and it makes the obesity epidemic worse?).
But whenever risk compensation has been subjected to empirical scrutiny, the results are usually ambiguous, or the hypothesis fails spectacularly. And when risk compensation does play a part in behavior, it tends to do so in small and specific ways—hardly cause for the alarm and fervor with which it is often applied, especially during the pandemic. It might be tempting to dismiss any single deployment of risk compensation language by medical authorities as an unfortunate messaging misstep. Yet a closer look reveals there’s a reason why this zombie idea won’t die: It’s baked into the culture of institutional medicine and American political thought. And it’s going to come for us again, and again, in the future.
How individuals change their behavior in response to perceived risk has been of interest to psychologists, safety regulators, and economists for decades. In the 1940s, as experts debated safety measures to reduce the soaring number of traffic accidents, some were concerned that designing safer roads or cars would merely cause riskier driving. The hypothesis was bandied about but never rigorously tested. But in 1975, University of Chicago economist Sam Peltzman elevated what might have remained armchair speculation to a powerful argument against safety regulations. Writing in the Journal of Political Economy, Peltzman hypothesized that 1960s-era federally mandated vehicle regulations such as seat belts were actually making the roads less safe because they encouraged so much reckless and careless driving. In his thinking, any safety advantage of the new regulations was being offset. He analyzed traffic accident data before and after the regulations and found that not only did the regulations fail to decrease fatal accidents, but traffic-related fatalities increased after regulatory action. That is, the safety measures “may come at the expense of more pedestrian deaths,” he concluded. Although seat belts were here to stay, Peltzman’s findings gave serious quantitative ammunition to the anti-regulatory enthusiasm of the 1970s.
Subsequent analyses of Peltzman’s work, however, found it riddled with errors. Other researchers showed his model couldn’t predict traffic fatality rates before regulation. As one critic wrote in 1977, Peltzman failed to perform even “rudimentary checks on the validity of his model.” Decades of traffic data now leave little doubt that, overall, safety regulations have indeed reduced traffic-related fatalities. These days you would, with good reason, not even consider getting behind the wheel of a car that did not have working seat belts.
And yet, this seductive idea—that safety regulations would decrease safety—began to appear in arguments against pretty much any safety intervention. Take motorcycle helmet laws. When Peltzman’s article was published in 1975, California was the only state without a mandatory helmet law. Motorcycle associations, which opposed such mandates as an infringement of personal liberty, mounted a lobbying campaign, one well timed to benefit from risk compensation’s entry into the zeitgeist. Twenty-eight states repealed their helmet laws, with one prominent advocate claiming that motorcycle helmets actually increased the likelihood of neck injuries. It was a tragic national experiment: As a result of the repeals, motorcycle deaths soared. The same pattern holds for risk compensation with ski and bicycle helmets. There’s a study here and there that suggests going bareheaded might cause you to act a little safer. For example, in a lab study, participants wearing a helmet were more likely than those in a baseball cap to overinflate an animated balloon. But a little behavioral slippage doesn’t add up to “helmets backfire.” Sober looks at the literature have arrived at a consistent conclusion: Helmets save lives.
So why does this concept stick around? It may be because it aligns with an extremely effective bit of political rhetoric. In his 1991 book, The Rhetoric of Reaction, political economist Albert O. Hirschman analyzed common rhetorical tropes used throughout history to defend the status quo. Hirschman dubbed one of these common tropes “the perversity thesis.” The perversity thesis states that well-intentioned rules and regulations ultimately exacerbate the problems they were designed to solve. We hear this sort of argument most prominently in arguments against the welfare state. (“We tried to remove the barriers of escape from poverty and inadvertently built a trap,” wrote Charles Murray in Losing Ground.) As a political tactic, such rhetoric makes for an effective appeal to the status quo, because why change anything if everything backfires? Give poor people money, the argument goes, and they’ll simply spend it on useless goods, making their predicament worse.
In American economics departments like Peltzman’s, perversity arguments dovetailed nicely with laissez-faire economics, and it became nearly axiomatic that any effort to restrain the invisible hand, no matter how worthy, was prone to achieving the precise opposite of its intention. The risk compensation hypothesis fits neatly into this worldview. For free-marketeers, the risk compensation hypothesis (or the “Peltzman effect,” as it was later dubbed) provides the perfect a priori argument to shut down discussion. If any safety measure, by definition, is offset by risk compensation, then why consider safety regulations at all?
Safety measures are not, of course, inherently beneficial. The effectiveness of a precaution that is going to be widely mandated should be studied—human behavior is complex and unpredictable. There are certainly examples of safety measures that don’t quite live up to the hype; anti-lock brakes, for instance, may have had no overall effect on fatal crashes (though it’s difficult to say risk compensation has anything to do with that). At best, risk compensation is something that happens at the level of the individual but rarely, if ever, fully offsets the social benefits of an effective safety regulation. At its worst, risk compensation is just knee-jerk libertarianism masquerading as fundamental insight into human nature.
That “insight” didn’t just remain in the sphere of consumer safety. At the same time, similar risk compensation arguments were also proliferating in the medical establishment to provide cover for those who opposed medical interventions on moral grounds. When oral contraceptives were first approved by the FDA in 1960, critics warned that “the foundations of contemporary sexual morality may be threatened” by the ensuing promiscuity. What’s more, some experts said, since women—especially poor women—couldn’t be trusted to adhere to daily pill-taking, the pill might not even reduce unwanted pregnancies.
Doctors and medical experts have raised analogous concerns for syphilis treatment, the morning-after pill, PrEP for HIV prevention, and more recently, HPV vaccination. In 2005, Reginald Finger, a onetime medical adviser to Focus on the Family and member of the Centers for Disease Control and Prevention’s vaccine advisory committee, said that “there are people who sense that [the HPV vaccine] could cause people to feel like sexual behaviors are safer if they are vaccinated and may lead to more sexual behavior because they feel safe.” Subsequent work showed that the HPV vaccine did not increase sexual activity or the risk of contracting sexually transmitted infections. Each intervention aroused fear of risk compensation, and yet, in each of these cases, empirical evidence failed to support the fear.
But the paternalistic, morally charged attitude toward sexual health measures has slowly metastasized into a generalized distrust of the public’s ability to incorporate new protective tools without throwing all caution to the wind. Risk compensation has been brought up to question a wide range of public health interventions, including diet soda, low-tar cigarettes, child-safety caps on medication, hypertension treatments, and needle-exchange programs. In each case, the reasoning is that the intervention could backfire because the masses are just too dumb or too undisciplined to act in what the medical community perceives to be in their own best interests.
Tracing the uses of the risk compensation argument reveals a deep connection between the anti-regulatory rhetoric of conservatives and the moral tsk-tsking of prominent voices in the medical establishment over time, from the CDC to the surgeon general. Both arguments rely on a simplistic notion of personal responsibility. For some conservatives, if the social goal is to have fewer traffic fatalities, then we should simply educate people to regulate their own driving. For some in the medical community, if the social goal is a healthier population, then we should just educate people to make better choices. It’s easy to see the appeal of this position for the medical establishment: It shifts the onus of health from practitioners to patients.
We’re seeing the same cultural dynamics play out during the COVID-19 pandemic—even as the people making the arguments are different, and are making them for different reasons. Yes, the infamous hesitation of the CDC and WHO to recommend masks at the outbreak of the pandemic had many causes (including discounting aerosol scientists’ work suggesting that SARS-CoV-2 was transmitted through the air, and protecting the supply chain for health care workers). But a key cause is very simple: The authorities didn’t trust the public. They didn’t trust the public not to use masks as an excuse to leave their house willy-nilly; they didn’t trust the public not to use masks to ditch other protective measures such as hand-washing or physical distancing. In a confusing and fast-moving environment filled with new information, it was all too easy to lean on this flawed model of human psychology. But it probably cost lives as mask recommendations were delayed for precious weeks in the spring of 2020. When empiricism did weigh in, it became clear that masks reduced symptomatic infections.
The question—for driver safety, sexual activity, or public health—isn’t whether some individuals change their behavior in response to perceived risk. It’s whether, at the population level, an intervention makes the world a safer and better place. With masks, the answer is clear: Masks reduce spread of the coronavirus. Interestingly, some of those initial fears did bear out a little bit: One study suggests that on balance mask wearers may stand slightly closer to others than they would barefaced, and another showed that they spend a little more time out of the house. But when it comes to the big picture, these behavioral tweaks don’t matter. We know that, overall, wearing a mask makes you far less likely to spread infection. For policy decisions, we don’t need to understand the subtleties of individual human psychology. We just need to know if the intervention helps all of us lead safer, better lives.
When it comes to rapid tests, experts fearful of risk compensation may be missing this bigger point. Sure, the tests may encourage riskier behavior when it comes to COVID. Sure, people may use the tests incorrectly sometimes. And it’s true that a false negative on a rapid test just before a wedding or before school could result in spread that wouldn’t have happened if everyone had just stayed home. It is well worth considering how to reduce those instances, by educating the public on how to use the swabs and making it easy for us all to access high-quality test kits. But it is too much to ask that the tests eliminate risk. Life is not just about staying safe by avoiding everything. It’s about balancing COVID risk with the very real downsides of staying inside all day. People need to work and to socialize, and kids need to go to school. In a sense, the point of these measures is to allow for a small amount of risk compensation. Masks, vaccines, and rapid tests let two things be true at once: Individuals can take more risks to do what they like, and society stays safer.
If you think of certain public health tools as ways to enable risk-taking, it becomes clear that the language of risk compensation—particularly without evidence to back up the fears—isn’t helpful, and may even generate mistrust. For months, public health authorities have implored politicians and the public alike to “follow the science.” When it comes to risk compensation, these experts would do well to heed their own advice.
Update, Nov. 8, 2021: An example citing water wings was removed from this article to avoid any confusion about them being an appropriate substitute for an approved life preserver.