Every year, in the name of medical progress, scientists breed and nurture hundreds of millions of mice, rats, and other subordinate mammals. Then they expose the critters to substances that could become the next Zocors, Prozacs, and Avastins. Since the alternative is to experiment on people, most everyone other than hardcore animal lovers accepts animal testing. Periodically, however, a spectacular failure raises new questions about the enterprise—not for ethical reasons, but scientific ones.
In March, London clinicians injected six volunteers with tiny doses of TGN1412, an experimental therapy for rheumatoid arthritis and multiple sclerosis that had previously been given, with no obvious ill effects, to mice, rats, rabbits, and monkeys. Within minutes, the human test subjects were writhing on the floor in agony. The compound was designed to dampen the immune response, but it supercharged theirs, unleashing a cascade of chemicals that sent all six to the hospital. Several of the men suffered permanent organ damage, and one man's head swelled up so horribly that British tabloids referred to the case as the "elephant man trial."
Animal rights activists in Britain pounced, declaring animal experimentation useless in the development of human drugs. A group called Uncaged declared that it was immoral "to subject animals to painful, distressing and lethal experiments when the results are not applicable to humans." This is fundamentally dishonest, of course: there would be little medical progress without animal experimentation. Examples range from Frederick Banting and Charles Best's diabetic dogs, which proved the existence of insulin in the 1920s, to the mice that confirmed the value of anti-angiogenesis drugs, which block the growth of blood vessels that feed tumors. Still, it is true that animal tests, even on multiple species, do not always predict the toxicity of pharmaceuticals or industrial chemicals in humans. This doesn't make animal testing any less crucial to the development and testing of drugs. But in an era in which drug development is growing increasingly sophisticated, it may point to the need for new designs in animal testing.
Over the years, toxicologists have developed rules of thumb for which animals will best model the toxic symptoms they expect in man. An early example was the canary in the coal mine. As the Bureau of Mines' George A. Burrell noted in 1912, in the presence of excess carbon monoxide, "a bird sways noticeably on its perch before falling, and its fall is a better indication of danger than is the squatting, extended posture that some poisoned mice assume." Dogs, it turns out (beagles in particular), are man's best test animal, in that the same compounds frequently sicken dogs and their masters (though dogs tend to vomit more than we do).
But just how often do animal tests predict side effects in humans? Surprisingly, although it is central to the legitimacy of animal testing, only a dozen or so scholars over the past 30 years have explored this question. The results, such as they are, have been somewhat discouraging. One of the scientists, Ralph Heywood, stated in 1989 that “there is no reliable way of predicting what type of toxicity will develop in different species to the same compound.” The concordance between man and animal toxicity tests, he said, assessing three decades of studies on the subject, was somewhere below 25 percent. “Toxicology,” concluded Heywood, “is a science without a scientific underpinning.”
In 1999, the Health and Environmental Science Institute, a Washington, D.C.-based group that brings together business, academic, and government experts to assess risks in public health, began a thorough examination. Working with confidential data provided by 12 pharmaceutical companies on 150 compounds that had produced a variety of toxic effects in people, an institute-hosted workshop found that only 43 percent of the drugs produced similar problems in rodents, and 63 percent did so in nonrodents. These are not reassuring numbers. (Though they would look better if the institute's review had included the 90 percent of drug candidates that are screened out by animal toxicity tests, and thus never even given to humans.)
Industry, academic, and government scientists agree that science is in need of better animal models for testing drug safety. “Put simply, the inability to predict the human toxicity of drugs is what’s breaking the promise of genomics to drug development,” says Paul Watkins, a North Carolina physician who is advising the institute. The high-tech biology era has seen the discovery of thousands of new targets for pharmaceuticals, but the number of drug failures remains as high as ever. It’s painful for the drug industry when $500 million goes toward developing a drug that then must be scrapped because of side effects that only surface in human trials. And it’s bad for the public as well when a product like Rezulin, Warner-Lambert’s diabetes drug, is withdrawn from the market for causing liver disease and deaths after 800,000 patients have taken it.
An equal source of human suffering may be the dozens of promising drugs that get shelved when they cause problems in animals that may not be relevant for humans. Studies of the comparative biology of humans and animals have established that some problems in animals aren't worrisome for humans. For example, during preclinical, high-dosage tests of Viagra, the drug constipated mice, swelled rat livers, and gave beagle dogs "beagle pain syndrome," which included arching of the back and stiffness in the neck. Pfizer's scientists determined, correctly, that these side effects had no relevance to humans.
Since drugs often fail by causing side effects in small groups of vulnerable people who take them (think Vioxx), scientists try to breed and use rodents with particular problems—the "hypertensive rat"—to screen out drugs that would harm those vulnerable populations. But such methods don't stop government regulators from insisting on tried-and-true animal-testing schemes, because doing the same experiment on each new drug means the result can be measured against a historical record. Drug companies have grown restive under certain requirements, such as the two-year rat test for carcinogenicity, which, it is generally agreed, isn't reliable. Edward Calabrese, a University of Massachusetts toxicologist, once wrote that "it seems almost incredible that the rat is the model so heavily relied upon when predicting human responses to toxic carcinogenic agents" given the "profound differences between the values of the human and the rat" in many bodily processes.
The hope has been that thousands of new lines of transgenic mice—with genes knocked out, inserted, or imported from the human genome—will prove the perfect test animals. But that's not likely. Tinkering with a few genes doesn't make mice perfect stand-ins for people. In 2003, for example, Elan Pharmaceuticals had to stop trials of an Alzheimer's vaccine that had cured the disease in "Alzheimer's mice," after the substance caused brain inflammation in human test subjects.
And then there's the monoclonal antibody TGN1412, an artificial antibody designed to bind to certain T-cell receptors, thereby cutting off autoimmune attacks. Instead of dousing the immune response, TGN1412 seems to have bound to cells in a way that unleashed a chemical chain reaction. Animal tests are particularly tricky for monoclonal antibodies—a hot area of development for cancer and autoimmune disease—because these drugs target very complex, specific human proteins. According to a recent FDA-authored review article, only chimpanzees and humans provide realistic models for testing many monoclonal antibodies. And even our fellow primates have divergent immune systems. They can be infected with HIV-like viruses, for example, without getting sick. Besides, endangered chimps hardly make ideal test animals. Different as they are, they seem too much like us to be guinea pigs.