British journalists are having their dirty laundry washed in public at the moment. Their prime minister, David Cameron, has commissioned the Leveson inquiry to investigate the role of the press and police in the recent national scandal of tabloid journalists hacking cellphone messages. What has that got to do with science, though? Everything!
The celebrities lining up to give evidence at the hearings in London have been making the headlines, but the wider goal of the inquiry is to investigate press standards and explore how inaccurate reporting can damage the public interest.
I am not in favor of treating science as a special case, but I think it can be argued that some science stories are of such great public interest that the highest standards of journalism must apply.
The Science Media Centre, of which I am the chief executive, was set up in the United Kingdom in 2002 to help scientists engage more effectively in the media storms around issues like the measles, mumps, and rubella (MMR) vaccine and GM crops. The immediate aim was to persuade them to learn the rules of the media game rather than forever shout from the sidelines. Now that so many scientists have taken to the pitch, however, it’s refreshing to get the opportunity provided by the Leveson inquiry to step back and reflect on the ways that newspapers could cover science better.
When the press gets it wrong on science, the results can be devastating. The furor over MMR, which started in 1998 after a rogue doctor claimed a link between the vaccine and autism, is the best-known example of how poor reporting can cause harm. Vaccination rates dropped to 80 percent, and cases of measles in England and Wales rose from 56 in 1998 to 1,370 in 2008.
The media were not solely responsible for the MMR scare, but some of the news values that caused the problem are alive and well: the appetite for a great scare story; the desire to overstate a claim made by one expert in a single small study; the reluctance to put one alarming piece of research into its wider, more revealing context; journalistic “balance”—which creates the impression of a significant divide in scientific opinion where there is none; the love of the maverick; and so on.
It’s my view that if you put the best scientists, science communicators, and science journalists in a room, it wouldn’t take long for them to agree on the basics of good medical science reporting.
A checklist would look something like the following:

- Every story on new research should include the sample size and highlight where it may be too small to draw general conclusions.
- Any increase in risk should be reported in absolute terms as well as percentages: a “50 percent increase” in risk or a “doubling” of risk could merely mean an increase from 1 in 1,000 to 1.5 or 2 in 1,000.
- A story about medical research should provide a realistic time frame for the work’s translation into a treatment or cure, and emphasize what stage the findings are at: a small study in mice is just the beginning; a huge clinical trial involving thousands of people is more significant.
- Stories about shocking findings should include the wider context: the first study to find something unusual is inevitably very preliminary; the 50th study to show the same thing may be justifiably alarming.
- Articles should mention where the story has come from: a conference lecture, an interview with a scientist, or a study in a peer-reviewed journal, for example.
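The relative-versus-absolute distinction in that checklist can be made concrete with a small sketch. The figures below are the article's own hypothetical 1-in-1,000 baseline; the function name is illustrative, not from any reporting guideline:

```python
def absolute_from_relative(baseline_risk, relative_increase):
    """Convert a relative risk increase (e.g. 0.5 for a headline
    '50 percent increase') into the new absolute risk."""
    return baseline_risk * (1 + relative_increase)

baseline = 1 / 1000  # 1 in 1,000

# A "50 percent increase" in risk: 1 in 1,000 becomes 1.5 in 1,000.
half_more = absolute_from_relative(baseline, 0.5)

# A "doubling" of risk: 1 in 1,000 becomes 2 in 1,000.
doubled = absolute_from_relative(baseline, 1.0)

print(f"50% increase: {half_more * 1000:.1f} in 1,000")
print(f"doubling:     {doubled * 1000:.1f} in 1,000")
```

The point the checklist makes is visible in the output: the same change can be described as an alarming "50 percent increase" or as a shift of half a person per thousand, and responsible reporting gives readers both framings.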
Another concern is the sometimes misguided application of “balance” in science reporting. An obsession with including both sides of a story has often obscured the fact that the weight of scientific evidence lies firmly on one side—witness some coverage of climate change and GM crops.
Previous attempts at drafting guidelines for science reporting failed because they came from the scientific community, looking like tablets of stone handed down from a priesthood of scientists. But these days many science reporters agree that basic guidelines would protect them from the vagaries of their news editors’ preferences. The Science Media Centre also suggests making sure that newspapers include science in the training package for all reporters, editors, and copy editors.
The Leveson inquiry invited examples of prominent stories that have turned out to be false. Sadly, science coverage is littered with these. Nine years ago, front-page headlines claimed that the first human clone had been born. The claims came from maverick scientists operating outside the mainstream, and in one case from a U.S. sect called the Raelians. Of course the first human clone had not been born; there was no evidence the claim was true.
If the press were to hold back from reporting extraordinary claims until they found extraordinary evidence, we would have a very different media landscape for science. Gone would be spurious stories about finding “the cure for” or “the cause of” our most common diseases. And we would never have had a massive scare over a safe vaccine based on a single small study that was not replicated anywhere else in the world.
I am not proposing that the media ignore big stories—after all, it’s only a matter of time until someone does clone a human. But the Science Media Centre is proposing that Leveson call on the press to treat such stories with extra caution and demand strong evidence before printing them. Caution may simply mean putting these stories inside the paper rather than on the front page, ensuring that the voices of top scientists casting doubt on the findings are included, and following up stories with equally significant coverage if claims are refuted.
One suggestion is that all journalists using the word cure or breakthrough should agree to publish a long-term follow-up—a “batting average”—of how many “breakthroughs” actually panned out. Unrealistic perhaps, but Leveson has given us our chance to dream.
This article originally appeared in New Scientist.