The Book Club

How Can We Stop Doctors From Making Deadly Mistakes?


We last spoke over tea in your office at Harvard Medical School a year or two ago, and had a great discussion about the controversial euthanasia of newborns with birth defects in the Netherlands. (At the time, I was a fellow in pediatric cardiology across the street from you.) As I mentioned then, I’ve admired your work for some time, and it’s a great pleasure to participate in this Book Club with you now.

In your new book, How Doctors Think, an examination of decision-making processes used in medicine, you recount a memorable story from your first night of internship in Boston: While speaking with you, a 66-year-old man suddenly ruptured his aortic valve—and you realized, horrified, that you didn’t know what had happened. You panicked and drew a blank; everything you’d learned flew out of your “empty head.” Luckily, a senior cardiologist was visiting friends in that area of the hospital and diagnosed the problem in seconds with just a stethoscope. The patient was rushed to the operating room—and you were left feeling like a failure.

The problem that night wasn’t a lack of supplies, fancy technology, or adequate education of the staff. As you write, many diagnostic errors in medicine are caused by “cognitive errors,” or, if we are less charitable about it, by sloppy thinking. Under certain conditions, a doctor’s mind either stops working or relies on unexamined assumptions, whether in an acute emergency (like the cardiac failure of your patient) or in an outpatient office where a demanding person returns year after year with vague complaints (as in the case of an anorexic woman who turns out to have a deadly food allergy her doctors repeatedly fail to diagnose).

You argue that doctors sometimes jump to erroneous diagnostic conclusions, a tendency exacerbated by professional arrogance, lack of time, and (as with spinal fusion surgery, which you explain is often unnecessary) economic and political pressure. That’s why it’s often intensely frustrating to be a patient, as you found when a famous sports-medicine specialist treated your wrist pain with casual neglect followed by outright incompetence. Using numerous fascinating examples—and personal anecdotes—you explain how some doctors avoid mental traps by finding a way to keep thinking. These expert clinicians often save lives written off as unsalvageable and serve as examples for all of us physicians. So, in many ways the book isn’t so much about How Doctors Think; it is about How Doctors Should Think.

To be sure, we all want smart doctors. And How Doctors Think certainly brings up a pressing issue in medical care: the ways that doctors get carried away by their own sense of expertise and fail to communicate fully with their patients. But reading your book, I felt I detected (if I can needle you in a friendly manner about this) a sneaking affection for a perhaps old-fashioned view of medical care, epitomized by the lone specialist who distills clinical data into a satisfying diagnosis when all others have failed. In several chapters, these heroes save the day for hapless patients who’ve been poorly served by the medical system.

In offering up this diagnosis of medical mishaps, you prescribe a highly individualistic treatment, which is better education and better role models for doctors. That’s analogous to concluding that the main problem with American education is that we need teachers who are better trained—which is partly true but misses the bigger problems in the system.

I bring this up because, as of course you know, doctors rarely work in isolation today; they are part of teams that must work together efficiently. And that should shape how we think about decision-making and responsibility in health care. A child having heart surgery, for example, interacts with a group of anesthesiologists, cardiac surgeons, echocardiographers, and ICU physicians, and also dozens of nurses, radiology technicians, and other hospital staff. Suppose the day after surgery, the child has a sudden cardiac arrest from excess fluid around the heart that was not diagnosed soon enough. One wonders whom to blame: Should the surgeon have done a better job? Did the nurse miss subtle changes in the child’s vital signs? Or did the ICU doctor miss the problem on an X-ray?

These missed diagnoses can be framed either as a failure of personal responsibility (where an individual doctor made a “cognitive error”) or a so-called systems failure (where no single person erred, but the overall process could be improved), a concept popularized by Dr. Donald Berwick of the Institute for Healthcare Improvement. As a paean to doctors who think outside the box, How Doctors Think is largely devoted to the former viewpoint—which is certainly worth exploring—but touches only briefly on the latter. In the wake of the child’s cardiac arrest after surgery, a hospital could have an involved discussion about clinical problem solving, which might improve the performance of those involved. Or, we could just mandate that all children get an ultrasound of the heart the morning after cardiac surgery, which would help prevent this from ever happening again. I certainly agree that doctors should be educated more thoughtfully, but my money is on the second strategy.

A growing body of recent medical literature suggests that process control and standardization of care, not greater physician independence and education, saves lives. In Boston, for example, standardizing asthma care and follow-up for children led to reductions in hospitalizations. Today, all babies under 3 months of age with fever get blood, urine, and spinal-fluid tests to look for infection, since years ago we learned that clinicians simply can’t tell when babies are really sick. In Blink, your colleague Malcolm Gladwell from The New Yorker describes a simple, standardized clinical formula now being used at Chicago hospitals to identify patients experiencing hard-to-identify heart attacks. The formula improved detection rates by 70 percent.

To return to the first night of your internship, it’s possible that you should have realized the man had a ruptured heart valve. Maybe you didn’t “think” right. But there’s another, possibly more constructive, way to define the problem from a systems perspective: The man should have been in a closely monitored bed (where early changes in his heart rate or blood pressure might have tipped somebody off sooner), had access to in-house staff cardiologists (instead of relying on the fortuitous presence of a visitor), and perhaps received better long-term control of his high blood pressure in the years before the event (through closer contact and treatment from his primary care doctor).

So, I’d like to start our discussion by asking: Do you perhaps oversell the ability of heroic individuals to resuscitate our medical system? Shouldn’t we instead focus our limited resources on improving the procedures and structure of medical care?