Last week, the Food and Drug Administration ignored the advice of its own expert advisory committee and approved the first new treatment for Alzheimer’s in 18 years. Called Aduhelm, it carries a substantial risk of painful brain swelling and bleeding, requires monthly infusions, and comes with an eye-popping list price of $56,000 per year. These caveats might be fine if the drug, which is manufactured by Biogen, miraculously restored the memories lost by the 6 million Americans with Alzheimer’s—or at least measurably improved the lives of patients in some meaningful way. But according to even the FDA’s own statisticians, the clinical data fail to show the new drug can slow Alzheimer’s devastating cognitive decline.
The FDA’s surprise approval has ignited a firestorm within the medical community. People are justifiably angry about the exorbitant cost of a risky drug that may offer little, if any, benefit. Aduhelm is, for now, a foregone conclusion; Biogen is slated to ship the drug starting next week. But a closer look at the FDA’s approval process reveals a deeper scientific issue at stake: what constitutes adequate evidence for desperately needed treatments? How the Aduhelm saga played out could have far-reaching implications not just for Alzheimer’s patients, but for anyone taking a drug approved by the FDA in the future.
To understand why the FDA approved the drug, and why this approval is so problematic beyond Aduhelm itself, it’s helpful to understand a bit about how clinical studies are designed. Some drugs are approved based on real-world outcomes, such as whether they prevent death. But others are approved based on so-called surrogate outcomes, such as whether they, say, suppress an abnormal heartbeat. If the surrogate outcome (abnormal heartbeat) is meaningfully associated with the real-world outcome (death), it follows that a drug suppressing the abnormal heartbeat should help prevent death. Aduhelm doesn’t seem to show much benefit to patients according to a clinical dementia rating scale (a real-world outcome, approximately—some researchers say it’s not quite as firm as an outcome like “death”). But the drug does do one thing well—it removes some of the amyloid plaques that build up in the brains of Alzheimer’s patients (a surrogate outcome). It’s seemingly on the basis of this surrogate outcome that the FDA saved Aduhelm from joining a long list of failed Alzheimer’s treatments.
Approving drugs based on the surrogate-outcome approach, in principle, offers huge advantages. It’s easier, cheaper, and faster to design a clinical study that measures a drug’s effect on abnormal heartbeats than to wait for enough people to suffer a heart attack and die. Drug approvals based on surrogate outcomes have indeed proved to be lifesaving in the past. In 1992, in the midst of the HIV crisis, the FDA created a special “Accelerated Approval” program to fast-track urgently needed treatments based on surrogate outcomes alone. (The only catch was companies had to perform post-approval confirmatory studies to assess real-world outcomes like sickness and death, or the FDA could withdraw the drug.) The first wave of treatments approved in this program were HIV drugs, brought to market based on improving surrogate outcomes such as levels of a type of white blood cell. These drugs were later proved to prevent sickness and death from AIDS, which wasn’t surprising because scientists had established a tight linkage between white blood cell count and disease progression. The surrogate-outcome approach was considered a success.
But HIV drugs are among the only unqualified successes for surrogate approvals. Joseph Ross, a professor of medicine and public health at Yale University, studies the FDA regulatory process and notes that apart from the HIV drugs, “in almost every other instance, surrogates have proven to be far more fallible.” Several diabetes drugs have been approved because they lower hemoglobin A1c, a measure of average blood sugar levels. That might seem clearly useful. However, these drugs have wildly different effects on diabetic complications and overall mortality. At least one drug is thought to increase heart attacks. Cancer drugs with astronomical price tags might reduce tumor size but fail to prolong life or improve its quality. The list goes on: Blockbuster drugs lower cholesterol but fail to prevent cardiovascular disease. Osteoporosis drugs improve bone density but fail to decrease fractures. And in some cases, these drugs have damaging side effects only discovered years later.
In fact, the writing has long been on the wall that the surrogate-outcome approach can be a bit dicey. Take the heartbeat example above—which is actually real. In the mid-’80s, two class IC antiarrhythmics were brought to market largely based on the evidence that they suppressed abnormal heartbeats. Yet no one had ever done a clinical study to confirm the drugs actually prevented death. When such a study was conducted years later, it had to be stopped midway through: It turned out these drugs actually increased the chance of death. Although the human toll is impossible to know, the surrogate-outcome approach in this case may have caused thousands of unnecessary deaths. The link between abnormal heartbeats and death was plausible in theory, but it played out quite differently in the real world. There’s a clear lesson here about using surrogate outcomes to evaluate a drug: Without strong evidence of a link between the surrogate outcome and the clinical outcome, the FDA should be wary of greenlighting treatments based on surrogate outcomes alone. For HIV drugs, this link was already established, and the drugs panned out. For the heart drugs, this link was less established, and the drugs did not pan out.
Aduhelm now joins this group of drugs that fiddle with the body’s physiology but potentially leave patients worse off and much poorer for it. Recall that with this drug, about 30–40 percent of study participants experienced either brain swelling or bleeding. Not to mention that the $56,000 annual price tag—much of which Medicare is required to cover—could be a fiscal catastrophe for American health care. No one disputes that amyloid plaques appear in the brains of Alzheimer’s patients, and no one doubts Aduhelm can clear amyloid plaques (by a respectable 30 percent). But how valid a surrogate outcome are amyloid plaques for the real-world outcomes Alzheimer’s patients care about? Given the pent-up demand, the worrisome side effects, and the steep price, you’d hope the amyloid hypothesis is a slam-dunk. Unfortunately, it’s not even close.
What amyloid plaques mean—and in turn, what Aduhelm’s actual effect on people might be—is still a hotly debated scientific question. Do they cause Alzheimer’s? Do they worsen it? Are they incidental? Are they protective? Scientists have made reasonable cases one way or another, but it’s very far from settled that amyloid plaques are a valid surrogate outcome for Alzheimer’s. What’s worse, the amyloid hypothesis doesn’t have a great track record with regard to therapeutics. Several drugs targeting amyloid plaques have been developed, and every one fizzled during clinical trials because not a single one had any effect on dementia. And yet, when approving Aduhelm, the FDA relied almost entirely on the amyloid hypothesis.
What makes the FDA’s decision so baffling is that surrogates are used when researchers can’t, or don’t, collect data that better reflects the real-world outcome of interest—in this case, slowing or preventing cognitive decline due to Alzheimer’s. But that’s exactly the data that Biogen submitted. The two clinical studies—which looked at cognitive decline, something that patients and doctors definitely care about—showed almost no benefit. One study was a total dud, and the other raised a score on a cognitive scale by a minuscule amount. Although that improvement reached statistical significance, it is almost certainly not going to make a difference in the lives of patients. “The effect sizes are trivial from a clinical point of view,” said Chiadi Onyike, a professor of psychiatry and behavioral sciences at Johns Hopkins and a member of the 11-person committee of outside experts tapped by the FDA to review the data. “They are meaningless—no different from the ebb and flow a patient might show from week to week.” Not a single member of the FDA’s outside advisory committee recommended approval.
The fallout has already been widespread. So far, three members of the FDA’s advisory committee have resigned, one of them calling the process a “sham.” Doctors who helped run Biogen’s clinical studies are speaking out, and others are penning editorials saying they won’t be prescribing Aduhelm until they see evidence of effectiveness. But no one should hold their breath. When the FDA greenlit Aduhelm for use, it told Biogen it had nine years to run the confirmatory studies necessary to prove Aduhelm’s effectiveness. Nine years of people taking a drug that existing data suggests might not do anything meaningful. With Aduhelm poised to become one of the biggest blockbuster drugs in history—analysts estimate annual revenues could peak at $10 billion—Biogen probably isn’t in a hurry. It might not even have to collect that extra data at all (for its part, Biogen said in an email to Slate, “We are working diligently to initiate the confirmatory trial”). Ross, the Yale FDA regulatory expert, looked at FDA approvals from 2005–12 and found that post-market confirmatory studies—ones that truly verified the clinical value of a surrogate outcome—only took place about 10 percent of the time. Despite this dismal compliance rate, according to Ross, the FDA has never fined a company for failing to do a confirmatory study and rarely uses its power to withdraw a drug later shown to be clinically ineffective. In an email to Slate, the FDA did not offer comment on whether it would use its power to withdraw Aduhelm should the drug ultimately prove clinically ineffective, but said it “will carefully monitor trial progress and support efforts to complete this trial in the shortest possible timeline.”
What specifically caused the FDA to approve Aduhelm based on such a shaky outcome and over the protests of its own committee is anyone’s guess, even in the context of the FDA’s frequent reliance on surrogate outcomes—the basis for nearly 45 percent of all drug approvals, according to the agency’s own analysis. (In a press release acknowledging the contention around the decision, the FDA explained that it ultimately decided “the benefits of Aduhelm for patients with Alzheimer’s disease outweighed the risks of the therapy.”) Aduhelm may well fuel a trend of leaning too heavily on surrogate outcomes: Because the Aduhelm example is so egregious, it establishes a far-reaching precedent that some believe could undermine the regulatory process. “Presumably,” says Ross, “[companies] could look at what just happened with this product and say, ‘Hey, you have to treat me the same way.’ ” In other words, when a company fails to show a drug actually works, why not try for back-door approval based on an unproven idea of how the drug is supposed to work?
This might seem grim, and it is. But there’s a lesson you can take as a patient: Just because a number goes up or down at the doctor’s office—whether it’s cholesterol, blood pressure, or even weight—that doesn’t necessarily mean you’re better or worse off for it. Because of the FDA’s widespread endorsement of surrogate metrics, it’s been all too easy for patients—and doctors—to mistake these metrics for health itself. With a false sense of understanding comes a false sense of hope, a false promise of control. That’s the true tragedy of the Aduhelm approval.