If a study came out claiming that the average American had a pulse rate of 10 beats per minute, most people would probably put a finger to their wrist for half a minute, ask a couple of friends to do the same, and quickly decide that this was an absurd claim. Yet in his piece headlined “Number Crunching: Taking another look at the Lancet’s Iraq study,” Slate columnist Fred Kaplan showed that he was not even capable of measuring the pulse in Iraq.
Our estimate of 600,000 Iraqi deaths is far greater than most attempts to measure mortality in Iraq. Last December, President Bush stated that 30,000 “more or less” had died. The president’s estimate roughly matched the estimates of Iraq Body Count, which derives its total by monitoring newspaper reports of violent deaths. Today, IBC estimates there have been 45,000 to 50,000 violent deaths. The Brookings Institution has a somewhat higher estimate, suggesting the number of deaths to be about 65,000. Both systems rely on information that can be gleaned either from press reports or from Iraqi governmental institutions. Our basic approach of interviewing randomly picked clusters of households is the standard way of measuring mortality in times of war. It was how U.S. government researchers estimated death rates during the war in Kosovo and recently in Afghanistan, and it is how the United Nations estimates death rates in dozens of nations annually.
What is curious about the press coverage of this report is that no one has attempted to corroborate the findings on the ground in Iraq. At least 130,000 people would die from natural causes in Iraq each year, even if it were one of the healthiest nations in the world. That would mean roughly 500,000 natural deaths since March of 2003, war or no war. Our report implies that over that period, a preponderance of deaths in Iraq were from violence. If President Bush, IBC, and the Brookings Institution are correct, roughly one in 10 deaths during the occupation was from violence. If we are correct, there are now three times as many deaths across Iraq each week as there were in 2002. If IBC and Brookings are correct, there are roughly 10 percent more. A visit to graveyards in Iraq or a look at the morgue data from a few locations would quickly show which crude estimate was in the right ballpark—and overwhelming evidence and testimonials from Iraq suggest that it is the Lancet estimate.
Surveillance by monitoring media reports or bodies in the morgue captures only a small percentage of the deaths occurring, particularly when the Iraqi health-information system and infrastructure are practically nonexistent. In July, for example, the Ministry of Health reported exactly zero violent deaths in Anbar Province, in spite of the contradictory evidence we saw on our televisions. Is that a surveillance network on which our understanding of what is going on in Iraq can depend? For most of the people commenting on our report, the answer appears to be yes.
This brings us to Fred Kaplan’s column regarding our study. In 2004, Kaplan worked to discredit our first Lancet report on Iraqi deaths by focusing on the imprecision in one of its three findings and ignoring the other two. Kaplan’s latest article focused on two baseless criticisms of our 2006 study. First, he claimed that our measured baseline rate, the rate of natural deaths for the year before the invasion, was too low. We had estimated the rate to be 5.5 deaths per thousand per year. Kaplan claims that the rate was really 10, according to U.N. figures. He wrote, “[I]f Iraq’s pre-invasion rate really was 5.5 per 1,000, it was lower than almost every country in the Middle East, and many countries in Europe.” This is just wrong! Had Kaplan checked the U.N. death-rate figures, he would have found that most Middle Eastern nations really do have lower death rates than most European countries, and in fact have death rates lower than 5.5. Jordan’s death rate is 4.2, Iran’s 5.3, and Syria’s 3.5. The reason for the lower rates is simple: Most Middle Eastern nations have much younger populations than most Western nations, and the elderly die at a greater rate than the young and account for a disproportionate share of deaths. Even if it were true that our prewar death rate for Iraq was too low, Kaplan fails to note that such an error would almost certainly make our post-invasion estimate too low, not too high.
Kaplan’s second criticism of our study was his claim that our sampling suffered from “main street bias.” That is to say, we systematically missed the back streets and focused on main streets, where he assumes more deaths have occurred. This claim is not credible, for a number of reasons. Our study team worked very hard to ensure that our sample households were selected at random. We set up rigorous guidelines and methods so that any street block within a chosen village had an equal chance of being selected. Once we started, we went to the next nearest 39 doorways in a chain that typically spanned two to three blocks, so the first-picked block usually did not provide most of the houses within a given cluster. It is also important to note that most violent deaths probably happened outside the home, making the location of a house on its street irrelevant. Finally, in a world with thousands of epidemiologists, hundreds of whom have conducted epidemiological surveys in conflict settings, it is interesting that Kaplan’s source for this presumed flaw was the speculation of two physicists and an economist.
If Kim Jong-il were to state that he felt there had been, more or less, 200 American deaths during the 9/11 attacks, we would be appalled. To have our leaders claiming that there have been only 50,000 or 60,000 violent deaths is equally appalling to those in the Middle East who know better. The media could be a force for correcting this fanning of international tensions, but not if they squander their opportunities, saying what they feel or hope for instead of going to the field and reporting what is.
Fred Kaplan replies:
What a bizarre rebuttal. Burnham and Roberts chastise me for not traipsing through Iraq’s morgues and graveyards, an adventure that they claim would “quickly show” which estimate of Iraqi deaths “is in the right ballpark.” But a) neither they nor their team conducted this sort of research, b) no mortal could these days, and c) it’s not at all clear what such a survey would reveal. They themselves say that “bodies in the morgue captures [sic] only a small percentage of the deaths occurring.” Yet they also cite “overwhelming evidence and testimonials from Iraq [that] suggest” their numbers—and not any other organization’s—are correct. Two questions: What are these “testimonials”? And how many such testimonials do you need to infer that 650,000 Iraqis have died, as opposed to, say, 250,000 or 50,000?
They liken the task of finding the average death rate to that of finding the average pulse rate. But humans have very similar circulatory systems. Neighborhoods, on the other hand, have very different patterns of bombings, shellings, strafings, and the like.
“Cluster sampling” is indeed a standard technique for estimating mortality in conditions where precise counts are impractical. But sampling—of any sort—means little unless the sample reflects the overall population; and to ensure that it does, the surveyors must follow highly rigorous procedures for choosing the sample at random.
It is in this regard that the Lancet study’s methods and claims are highly questionable. Rather than rehearse the arguments once more, I urge readers to take another look at my original column (especially the hyperlinks). Burnham and Roberts chide me for relying on physicists and economists instead of epidemiologists. First, the issues at hand concern statistical methods, for which my sources have impeccable credentials. Second, the authors conveniently ignore my citation of Professor Beth Osborne Daponte, an eminent demographer at Yale University who has conducted precisely these sorts of studies. They may also be unaware that the Oxford team includes Professor Gesine Reinert, who is experienced in this field as well.
The second part of their rebuttal on the matter of “main street bias”—that people might have been killed away from home, so it doesn’t matter if the houses surveyed were disproportionately close to violence—might be valid for many countries. But in Iraq, unemployment is high, transportation is limited (in part due to scarcity of gasoline), and in many cities, Sunnis don’t go into Shiite neighborhoods, and vice versa. This controversy would be easier to settle if Burnham and Roberts were clearer—and less contradictory—about just how their main streets were chosen.
Then there is the matter of Iraq’s prewar mortality rate, a crucial issue that they willfully misconstrue. The aim of their study was to measure the number of “excess deaths” since the war began—i.e., the number of Iraqis who have died since the war started minus the number who would have died in the same period had there been no war. A surrogate for this latter figure is the number who died in the same time span before the war started. They claim this figure is 5.5 Iraqis per 1,000 of population per year. I noted that the United Nations puts Iraq’s prewar mortality rate at 10 per 1,000. I should add here that Jon Pederson of the United Nations Development Programme thinks even that may be too low. The Lancet study estimates that, on average, 13.3 Iraqis per 1,000 have died, of all causes, since the war started. If their prewar assumption is right, the excess deaths—7.8 per 1,000 (13.3 minus 5.5)—are high. If the U.N. assumption is right, the excess deaths—3.3 per 1,000 (13.3 minus 10)—are less high.
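[The arithmetic behind the competing totals can be laid out in a few lines. This is a sketch, not a reproduction of either side’s actual calculation: the population figure (roughly 26 million) and the elapsed time (about 3.3 years, March 2003 to mid-2006) are assumptions supplied for illustration, not numbers from the study or the column.—Eds.]

```python
# Sketch of the excess-deaths arithmetic debated above.
# ASSUMPTIONS (not from the study or the column):
#   - Iraqi population of roughly 26 million
#   - about 3.3 years elapsed, March 2003 to mid-2006
POPULATION = 26_000_000
YEARS = 3.3
POSTWAR_RATE = 13.3  # Lancet estimate: deaths per 1,000 per year, all causes

def excess_deaths(baseline_rate, postwar_rate=POSTWAR_RATE,
                  population=POPULATION, years=YEARS):
    """Total excess deaths implied by a given prewar baseline rate
    (rates are per 1,000 of population per year)."""
    excess_per_1000_per_year = postwar_rate - baseline_rate
    return excess_per_1000_per_year / 1000 * population * years

# Lancet baseline (5.5/1,000) vs. the U.N. figure Kaplan cites (10/1,000):
print(round(excess_deaths(5.5)))   # on the order of the Lancet total
print(round(excess_deaths(10.0)))  # less than half that
```

The same measured postwar rate thus yields totals differing by a factor of more than two, depending only on which prewar baseline one accepts—which is why the baseline dispute matters so much.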
Pederson tells me, in an e-mail, “I simply do not believe that it is possible to obtain accurate mortality estimate[s] with the methods they are using as far back in time as they purport to do.” He adds (and I agree here wholeheartedly), “That being said—a lot of people have died in Iraq.”