Until late in the 19th century, the chief aim of medical research was not treatment but merely understanding the ways in which the processes of disease affected previously normal organs. Progress from this point was at first somewhat slow. But since the introduction of antibiotics in the 1940s, extraordinary strides in therapeutics have been made—and an almost equally dazzling array of diagnostic methods has appeared. Today’s bedside doctor is confronted with sets of choices that magnify the importance and even the meaning of the physician’s judgment, which the profession has recognized since Hippocrates to be its most important asset.
The complexity has been compounded by further advances. Pasteur’s 19th-century germ theory led to the notion that each disease has a single cause, but it was later recognized that people become sick because of a cascade of multiple influences. To treat a patient’s illness properly, therefore, a doctor should ideally take into account every contributing factor no matter how seemingly minor, including those that fall into the psychosocial realm. In addition, the advent of molecular biology in the 1950s brought with it the understanding that intracellular biochemical mechanisms underlie all cellular pathologies and that these pathologies could be treated on a molecular level. And the unraveling of so many of the mysteries of genetics has further added to the bewildering array of facts and statistics with which a 21st-century physician must grapple. For half a century, the laboratory, not the bedside, has been the place where new knowledge about the nature of disease and its treatment is developed.
How, then, does one choose the best course of treatment for each individual patient? Traditionally, doctors have approached patient care with a mixture of book-learning and what has been called clinical sensitivity, a process in which the information in textbooks and journals is filtered through the healer’s experience and his or her sense of a patient’s singularity. This is essentially an intuitive process, one the physician cannot fully explain but in which he or she is nevertheless likely to express great confidence. Still, it can hardly be called scientific. And this approach is nowadays limited by the sheer volume of knowledge required to sustain it.
It is against this background that the field called clinical epidemiology has grown up since the late 1960s, to study the ways in which biomedical information is actually implemented one-on-one at the bedside. Some of the field’s proponents have forged a sword they call “evidence-based medicine,” defined by one of its founders, David Sackett, as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients,” with evidence understood as the documented information to be found in peer-reviewed journal articles.
Needless to say, the entire notion that there can be such a thing as an epidemiological or mass approach to individual patient care has been difficult for some observers to accept. And as for evidence-based medicine! In Evidence-Based Medicine and the Search for a Science of Clinical Care, Jeanne Daly, a medical sociologist who is co-editor of the Australian and New Zealand Journal of Public Health, presents with admirable balance both the good reasons to be wary of such a coolly distanced and data-drenched approach to patient care and the good reasons to welcome it. But though I admire her fairness, I remain skeptical of the concept’s ultimate implications. To be sure, evidence-based medicine offers a method to put diagnosis and therapy on as scientific a basis as possible, removing physician fallibility as well as guesswork. But it fails to account for the extent to which doctors’ choices are affected by the multiple complicating factors inherent in any illness, or, for that matter, by simultaneous illnesses, patients’ biological differences, or the differing ways in which disease can interact with proposed therapies.
That a patient-management plan can be shown by surveys of the medical literature to be more biologically effective than a placebo or some other treatment in all but 5.13 percent of 8,597 randomly chosen Swedish men aged 41 to 80 does not necessarily mean that it is the best way to treat the grizzled old New England Yankee (or better yet, his wife) in my examining room. Caleb Yank has three or four other diseases in addition to the one I am treating, and he takes (in his own inconsistent way) some seven separate medications; he is like very few of those Swedes. It does not help me much to add to the mix 4,329 other Yankees, 3,581 Brits, and 8,974 Canadians, as a method called meta-analysis does by pooling the results of many surveys. Except that if I choose the evidence-based approach and fail, I can subsequently defend myself at a hospital conference or in a courtroom by pointing out that what I did is supported by the literature—even though clinical sensitivity and, yes, intuition should have told me that what is right for so many others is wrong for this man.
Proponents of evidence-based medicine would point out that I have been using their method incorrectly. For doesn’t Sackett go on to say that practicing evidence-based medicine means integrating it with my best judgment? But in today’s science-driven, oversight-obsessed, insurance-scrutinized, and litigation-whipped medicine (and tomorrow’s will be more so if evidence-based medicine becomes the norm), how many clinicians are likely to “integrate” their individual clinical expertise into what are apparently unimpeachable statistics if that means bucking the authority of 10 or a dozen Internet-digested journal articles? Vanishingly few, I promise. The printed page or the Internet, searched by increasing numbers of patients, will increasingly trump hard-earned intuition.
Another problem with evidence-based medicine is the nature of just what it is that the experts call evidence. More than one critic has pointed out that the articles published and the data presented in peer-reviewed journals have been screened by the sometimes biased eyes of academic editors and referees, who are likely to have prejudged opinions about what is worthy of publication. John R. Hampton of the University Hospital in Nottingham, England, goes so far as to call the new approach “opinion-based medicine.” It is a strength of Daly’s book that she does not shrink from discussing these kinds of objections. But even her nuanced and fair-minded presentation leaves me unconvinced of the approach’s validity in the real world of patient care.