A version of this piece originally appeared in Issues in Science and Technology.
Hardly a day has passed in the past year without some report in the media that the nation’s highest office has offered up unfounded stories, claims without evidence, even outright lies. As the charges against the executive branch pile up, the White House counters that institutions long seen as standing above partisan wrangling can no longer be trusted. The FBI is only the latest to feel the heat of presidential pushback. The CIA, the Congressional Budget Office, and the federal judiciary have all been targeted. In this topsy-turvy world, it hardly seems surprising that the administrator of the Environmental Protection Agency rejects two decades of scientific findings that human activities are warming the planet. It is almost more newsworthy when the White House withdraws a climate skeptic’s nomination for a top environmental post because the Senate seems reluctant to confirm her.
But how can a modern, technologically advanced nation fulfill its mandate to protect its citizens if it undermines its own capacity to produce sound facts and persuasive reasons? Is the commitment to truth and trust in the public sphere irreparably damaged, or can steps be taken to restore it?
It is tempting to turn the clock back to January 2009, when President Barack Obama gave an answer that seemed both easy and overdue: “restore science to its rightful place” as humanity’s most rigorous and reliable path to truth. But if today’s questions are not easy, they are also not new.
The current attack on public facts looks unprecedented, but moral panics about the reliability of public knowledge did not originate in this century. What has shifted is the politics of concern, the focus of the panic, the actors who are troubled, and the language for describing the breakdown. Setting the present chaos of “alternative facts” and “post-truth politics” within a longer history may help us find a way from empty hand-wringing toward a more constructive response.
Democratic states earned their legitimacy in part by showing that they knew how to ensure public welfare—by securing frontiers, improving public health, guarding against economic misery, and creating opportunities for social mobility. For this they needed science and expertise. As industries multiplied, corporations grew, and governments expanded their regulatory oversight, it became less and less plausible that government could function without expert knowledge. But just as power is continually challenged and forced to justify itself in democratic politics, what power knows has also come under constant questioning. In the United States, in particular, actors from across the political spectrum pay lip service to the importance of science for policy; yet, scientific claims relevant to significant public policies seldom pass unchallenged. That long history of attack and counterattack may well have weakened the nation’s moral authority to produce “serviceable truths”—that is, reliable statements about the condition of the world, with enough buy-in from both science and society to serve as a basis for our collective decisions.
The roots of conflict reach back at least to the New Deal, when both regulation and centralized public expertise rapidly grew. In that period, federal efforts to protect the economy against another Great Depression, together with progressive ideals of informed and reasoned government, led to an enormous expansion of regulatory agencies and their policy-relevant expertise. The United States was not alone in experiencing the turn to government by experts.
In Europe, Max Weber, the first and greatest theoretician of bureaucracy, observed that the authority of detached and objective experts had displaced unaccountable monarchical power. But the evolution of expert-state relations took specific turns in the United States, consistent with this nation’s pluralistic politics, adversarial administrative process, and suspicion of centralized authority.
The growth of the U.S. administrative state drew calls for more openness and accountability in its ways of knowing. Business and industry worried that the government’s claims of superior expertise, together with a state monopoly on information, would hurt their interests, and so they sought legal access to the expert claims of executive agencies. Their demands led to passage of the Administrative Procedure Act of 1946, to remedy what the Senate Judiciary Committee saw as “an important and far-reaching defect in the field of administrative law”: “a simple lack of adequate public information concerning its substance and procedure.” Designed to make the administrative process more transparent, the law also created—through its provision for judicial review—a powerful instrument for contesting public facts. Political interests of every stripe enthusiastically turned to the courts to question the government’s expert findings. A pattern developed that many have noted: U.S. politics played out not only in the realm of law, as a fascinated Alexis de Tocqueville observed in 1831, but also in recurrent, rancorous disputes over scientific claims.
New regulatory laws in the 1970s increased the private sector’s disenchantment with public fact-making, drawing repeated charges that public authorities were using “bad” and even “junk” science. This was the period in which an electorate newly sensitized to health, safety, and environmental hazards demanded, and received, protection from formerly unseen and unknown threats: radiation, airborne toxic pollution, chemicals in food and water, untested drugs, workplace hazards, and leaking landfills. A barrage of federal legislation sought to protect a postindustrial, postmaterial society against the all-too-material hazards of older, dirtier industrial processes. These laws changed the American social contract for science, asking businesses to provide expensive information as a precondition for bringing their products to market, and also authorizing regulatory agencies to fill gaps in relevant science. Most importantly, agencies gained power to interpret technical information for policy purposes with the aid of a growing “fifth branch” of scientific advisers. Convened to help agencies carry out their statutory mandates, these advisory bodies often found themselves on the front line of political conflict, whether for over-reading the evidence in support of regulation or, less often, for giving too much latitude to industry’s antiregulatory claims.
From the late 1970s onward, U.S. industries routinely charged that federal agencies and their expert advisers were allowing politics to contaminate science, and with Ronald Reagan’s election in 1980, they found a willing ally in the White House. In the early years of the Reagan administration, charges of “bad science” led to demands for a single, central agency to carry out risk assessments for all federal regulatory agencies, as well as a call for peer review of the government’s scientific findings by scientists not too closely associated with regulatory agencies. An important report from the National Research Council in 1983, “Risk Assessment in the Federal Government: Managing the Process,” rejected the demand for such a central body but did affirm that risk assessment should be seen as a “science.” Decades of research since then have shown that risk assessment not only does, but must, blend accepted and plausible facts with judgment based on public values and purposes. Nonetheless, the label “scientific risk assessment” endures, and regulators are urged to keep that science separate from “risk management,” the process that translates scientific findings into social policy.
The science label proved to be a lightning rod for an increasingly partisan politics. Agency decision-makers found themselves vulnerable to claims that their risk assessments had deviated from a baseline of imagined scientific purity. Peer review, the tried-and-true method by which science maintains its objectivity, drew special attention as more political actors recognized that review offers room for flexible judgment. In the administration of George W. Bush, the Office of Management and Budget attempted to gain control over appointing regulatory peer reviewers but was held back by opposition from leading scientific bodies. Meanwhile, opposition Democrats excoriated the Bush administration for waging what the science journalist Chris Mooney colorfully labeled The Republican War on Science.
Even before the George W. Bush administration, the uproar surrounding public knowledge-making reached another crescendo over the use of science in the courts. By the 1990s, prominent scientists and legal analysts had teamed up with industry to decry the courts’ alleged receptivity to junk science. They lobbied to introduce more “independent” expertise (that is, experts nominated by the courts rather than selected by the parties) into a process traditionally dominated by adversarial interests. The Supreme Court took note and in 1993 ruled, in Daubert v. Merrell Dow Pharmaceuticals Inc., that judges should play a more assertive part in prescreening expert testimony. Daubert stopped short of demanding peer review and publication as prerequisites for admitting scientific testimony. But, going against social science findings on this issue, the decision reaffirmed the notion that the reliability of expert testimony can be judged in accordance with objective scientific criteria. Daubert in this sense undercut judicial sensitivity to the contexts in which evidence is generated—or not generated—although lack of evidence often operates as yet another burden for economically and socially disadvantaged plaintiffs.
Through these decades of controversy over expert knowledge, challengers have continually invoked the label of science, with its connotations of facts and truth, to legitimize as well as delegitimize public action. Correspondingly, U.S. policy has been slow to adopt the “precautionary principle,” a cornerstone of European regulatory policy in situations where decisions must be made without complete certainty about the facts. A European Union communication from 2000 explains that “the precautionary principle is neither a politicisation of science or the acceptance of zero-risk but … it provides a basis for action when science is unable to give a clear answer.”
The important point here is not whether the precautionary principle always translates into good policy, nor whether European policymakers are sincere or consistent in applying it, nor even whether Europe’s precautionary approach produces more or less stringent regulation than the United States’ risk-based choices. Rather, the relevant point is the very recognition that when making decisions under uncertainty, there can be a position between “politicization” of science and “zero risk”—a position usefully filled by the notion of precaution. Indeed, the European Union’s statement of the precautionary principle nicely parallels the idea of “serviceable truth,” which I defined in my 1990 book The Fifth Branch as “a state of knowledge that satisfies tests of scientific acceptability and supports reasoned decision-making, but also assures those exposed to risk that their interests have not been sacrificed on the altar of an impossible scientific certainty.” Based on a detailed study of peer review in the EPA and the Food and Drug Administration, I concluded that regulators should ground their decisions in such balanced judgments when science pure and simple does not offer precise guidance.
Let us fast-forward, then, to the “post-truth” present. The shoe is now on the other foot. Liberals, left-leaning intellectuals, and Democrats, rather than conservatives, corporations, and Republicans, are complaining that politics is distorting science and propagating, in presidential counselor Kellyanne Conway’s unforgettable phrase, “alternative facts.” How did “truth” become the property of the political left when once it seemed the rhetorical weapon of the political right, and how are today’s cries of outrage at the government’s denial of science, expertise, and facts different from the charges of earlier decades?
Surprisingly, it is liberals who seem now to have lost sight of the social context of science for policy. The great gains made by science and technology in recent decades have bred complacency, a faith that they can provide the right answers to big social problems. Climate change, with its urgent messages for humankind, is the most prominent example, but scientists insist equally on the importance of facts in any number of situations where science provides support for increased state intervention, for example, the expanded use of nuclear power, vaccination against childhood disease, and genetic modification of plants. In time, we are told, even gene editing of future humans will become risk-free, just as autonomous vehicles will carry human riders safely along city streets. Lost from view is the fact that people bring nonscientific values and concerns to each and every one of these debates: whose definition of risk or benefit frames public policy, whose knowledge counts, and who gains or loses from the solutions that science advocates.
To address the current flight from reason—and indeed to restore confidence that “facts” and “truth” can be reclaimed in the public sphere—we need less crude and divisive labels than good/bad, true/false, or science/anti-science. Such oversimplifications, we have seen, only deepen political polarization and may hand unfair advantage to those who hold the political megaphones of the moment. We need a language more sensitive to a central finding from the history, sociology, and politics of knowledge: that truth in the public domain is not simply out there, ready to be pulled into service like the magician’s rabbit from a hat. On the contrary, in democratic societies, public truths are precious collective achievements, arrived at just as good laws are: through the slow sifting of alternative interpretations, based on careful observation and argument and painstaking deliberation among trustworthy experts.
In good processes of public fact-making, judgment cannot be set aside, nor facts entirely disentangled from values. The durability of public facts, accepted by citizens as “self-evident” truths, depends not on nature alone but on the procedural values of fairness, transparency, criticism, and appeal in the fact-finding process. These virtues, as the sociologist Robert K. Merton noted as long ago as 1942, are built into the ethos of science. How else, after all, did modern Western societies reject earlier justifications for class, race, gender, religious, or ethnic inequality than by letting in the scientific findings of the underrepresented? It is when governing institutions bypass the virtues of openness and critique that public truthfulness suffers. That is when we get what the comedian Stephen Colbert called “truthiness,” the shallow pretense of truth, or what the Israeli political scientist Yaron Ezrahi calls “out-formations,” baseless claims replacing reliable, institutionally certified information. That short-circuiting of democratic process is what happened when the governments of Tony Blair and George W. Bush falsely claimed to have evidence of weapons of mass destruction in Iraq. A cavalier disregard for process, over and above blatant lying, may in the end deal the harshest blows to the credibility of the present administration.
Public truths cannot be dictated by an all-knowing science or an all-powerful state. Science and democracy, at their best, are modest enterprises because both distrust their own authority. Each gains by letting its doubts hang out. This does not mean that the search for facts in science or politics must be dismissed as unattainable. It does mean that we must ask and insist on good answers to questions about the procedures and practices that underlie authoritative claims. The following questions then seem indispensable:
· Who claims to know?
· In answer to whose questions?
· On what authority?
· With what evidence?
· Subject to what oversight or opportunity for criticism?
· With what openings for countervailing views to express themselves?
· And with what mechanisms of closure in cases of disagreement?
If those questions can be raised and discussed, even if not resolved to everyone’s satisfaction, then factual disagreements fade into the background and confidence builds that ours is indeed a government of knowledge and reason. If some are still not satisfied, they can return some other day with more persuasive data and with the hope that the wheel of knowledge will turn in harmony with the arc of justice. In the end, what assures a polity that knowledge is justly coupled to power is not the claim that science knows best but the conviction that science itself has been subjected to the rules of good government.