Retracting a paper is supposed to be a kiss of death to a career in science, right? Not if you think that winning a Nobel Prize is a mark of achievement, which pretty much everyone does.
Just ask Michael Rosbash, who shared the 2017 Nobel Prize in physiology or medicine for his work on circadian rhythms, aka the body’s internal clock. Rosbash, of Brandeis University and the Howard Hughes Medical Institute, retracted a paper in 2016 because the results couldn’t be replicated. The researcher who couldn’t replicate them? Michael Young, who shared the 2017 Nobel with Rosbash.
This wasn’t a first. Harvard’s Jack Szostak retracted a paper in 2009. Months later, he got that early morning call from the Nobel committee for his work. And he hasn’t been afraid to correct the record since, either. In 2016, Szostak and his colleagues published a paper in Nature Chemistry that offered potentially breakthrough clues for how RNA might have preceded DNA as the key chemical of life on Earth—a possibility that has captivated and frustrated biologists for half a century. But when Tivoli Olsen, a researcher in Szostak’s lab, repeated the experiments last year, she couldn’t get the same results. The scientists had made a mistake interpreting their initial data. Once that realization settled in, they retracted the paper—a turn of events Szostak described as “definitely embarrassing.”
The simplistic message might be: Want to win a Nobel Prize? Try retracting a paper. That logic is obviously ridiculous. It confuses correlation with causation in a way that—wait for it—should be retracted. The vast majority of those who’ve won Nobel Prizes have not retracted any papers, and the vast majority of retractions were not by those who’ve won Nobels. The advice is as tongue-in-cheek as Nobelist Richard Roberts’ “Ten Simple Rules to Win A Nobel Prize,” which include “Be Sure to Pick Your Family Carefully” (meaning, yes, your biological family) and “Always Be Nice to Swedish Scientists.”
But what isn’t absurd is the idea that admitting mistakes shouldn’t be an indelible mark of Cain that kills your career. Quite the opposite. A growing body of evidence points to this encouraging conclusion: Scientists who acknowledge honest errors and retract their flawed findings send a signal to their colleagues and peers that their future studies are worthy of trust. In turn, those colleagues are no less likely to cite that later work—an essential form of endorsement in science. (We should also note that when it’s clear a retraction is for misconduct, researchers see a significant dip in citations, which is a reminder that scientists still look down on such behavior.)
Still, although the noble actions of the aforementioned Nobel winners are encouraging, they’re not likely to trigger a flood of nostra culpa from scientists. And despite the hints of a trust dividend for transparency, researchers still have few incentives to be open about their errors. But that, too, might be changing—in part thanks to the reproducibility crisis rippling through science. One recent analysis of 100 published psychology studies famously found that less than 40 percent of primary findings held up to repeat experiments. (Rates seem similar across many fields, though psychology has been the focus of media coverage of the issue.)
In 2016, a pair of scientists in Texas and France—since joined by a third colleague in Germany—launched the Loss-of-Confidence Project. This effort encourages researchers in psychology to notify the field when they have reason to doubt their own findings by submitting a form expressing the reason for the doubt (statistical flaws, for example, or a problem with methodology).
As the creators of the Loss-of-Confidence Project rightly point out, the authors of the original studies are in the best position—presuming they’ve tried to build on their work—to say if the findings are robust, or if they warrant concern. “However, except for few notable exceptions … researchers do not share this type of information: It is anything but common to publicly declare that one has lost confidence in one’s own previous findings,” they write.
But the project goes beyond simply encouraging researchers to admit their mistakes. Indeed, here’s where their idea is particularly clever: The end product of the whole form process is a publishable paper detailing the soul-searching effort. That’s a juicy carrot; after all, publications are the coin of the realm in science—which is a big reason that retracting them can be so traumatic.
Of course, coming up with a solution for the “publish or perish” culture is by no means easy. A few retraction-Nobel pairs are almost certainly not enough to do the trick, and the Loss-of-Confidence Project likely isn’t either, as smart as it is. It might even be the topic of a future Nobel Prize in economics—which, someone will no doubt point out in the comments, is not technically a Nobel, but instead the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. There, we got the ball rolling with a retraction. The rest is up to you, economists.