Let Industry Fund Science

The methods, not who did or paid for the research, should be enough to tell us whether we can trust the evidence.

A study came out that conflicted with nutritional dogma: it showed that eating breakfast actually causes weight gain. Immediately, people who disliked eating breakfast felt vindicated for listening to their own hunger instead of general dietary recommendations. Some professionals, on the other hand, dismissed the results as "just one study" that should not override the totality of evidence; after all, challenging established advice just "adds to confusion." Others simply remarked that nutrition was flip-flopping again. Headlines popped up declaring the idea that breakfast helps control weight a now-busted myth, with the helpful context that everything you were ever told was wrong. Personal beliefs, calls to action or inaction, and criticisms of dietary guidance abounded.

This is just one real example of the discourse that surrounds nutrition science—often full of belief, but with little discussion of the actual science.

Part of the problem is that nutrition is uniquely prone to personal attachments. After all, everyone eats. And then, as with any other field, there are professional pressures and expectations. Someone who studies a particular food likely has preconceptions about what he or she expects the food to do to health. A professor attempting to get tenure is under pressure to come up with fundable and noteworthy results. Researchers who built their careers on demonizing or lauding particular foods have a vested interest in substantiating their views.

All of these human factors can influence how science is conducted, reported, or discussed. They shape the window through which questions are asked and data are interpreted. For instance, to a passionate obesity researcher, a statistically significant change in weight, no matter how small, may seem important. But whether the data warrant, for instance, shifts in policy depends on values such as autonomy, informed choice, and public health. Discussing what to do with evidence is not the same as discussing the evidence itself; the evidence itself can only tell us what to expect, not what to do. The breakfast study cannot tell someone to skip breakfast; it only shows that in the conditions of the study, on average, people who were assigned to skip breakfast lost more weight than those fed oat porridge or frosted cornflakes. This observation does not address what would happen if the breakfasts were composed of different foods (though additional studies certainly could) or whether all people should skip breakfast to lose weight.

The purpose of applying scientific principles and methods to nutrition research is to help determine the way the world is, the way the world works, and what will happen if we change something in this world. Applying rigorous scientific principles is designed to prevent, as best as possible, human foibles from sneaking into the evidence. Unfortunately, a commonly applied but erroneous approach to separating biases from evidence is to devalue or outright reject evidence based on whom or where it comes from, regardless of the evidence itself. This approach falls into at least two classical argumentative fallacies that undermine its validity. Ad hominem arguments attempt to discredit the science based on characteristics of the scientists: "Clearly if he eats a Paleo diet, his research on wheat is suspect." Genetic fallacies, on the other hand, judge science based on how it originated: "The research is surely tainted because it was funded by industry." In neither case is the evidence itself actually addressed. Evidence is neither supported nor discredited by how infallible or vile someone thinks certain researchers, their institutions, or their funders are. (Evidence, of course, can be discredited, but only by addressing the evidence itself.)

Of all potential biases, research funding seems to get the most attention. This was put on full display once again last week when JAMA Internal Medicine published a narrative case study about a two-part, 50-year-old narrative review. The review addressed dietary fats, carbohydrates, and cardiovascular diseases, and it suggested cardiovascular disease risk had more to do with dietary fat consumption than carbohydrate consumption. And it was commissioned by the Sugar Research Foundation (now the Sugar Association). The facts presented in the case outline that a trade association 50 years ago was interested in maintaining the image of its product and funded research it believed would exonerate its product. Broadly speaking: nothing new.

Headlines saw it differently, spinning it as a shocking conspiracy of how the sugar industry "distorted health science for more than 50 years," "manipulated heart studies," and "shifted blame to fat." But the case study itself pointed out that there was "no direct evidence that the sugar industry wrote or changed" the review and that "the evidence that the industry shaped the review's conclusions is circumstantial." Still others pointed out that a single narrative review was unlikely to sway academic thought for 50 years. The evidence that the nutrition evidence base was compromised by the review is weak.

Of course, the dramatization of this case study was exacerbated by the authors themselves, who recommended in part that “policymaking committees should consider giving less weight to food industry–funded studies.” Such a recommendation does not discuss the merits of the science to be down-weighted but rather assigns too much importance to a single characteristic of the origin of the evidence—a fallacy that is equal parts ad hominem and genetic. The recommendation is also poorly defined, and it opens the door to more opportunities for human bias. How much less weight should the studies receive? And based on what criteria? In fact, the scientific community has tried to use subjective weighting methods in the past and found them to be unreliable because decisions are either made fully at the whim of the evaluator or fail to capture the errors we’d like to eliminate.

The light version of this fallacy is to demand increases in transparency of the fiscal relationships a study or researcher has. But that, too, is fraught with issues. For example, some physicians were asked to rate the rigor of research, their confidence in results, and the likelihood of prescribing medications from a series of abstracts. The abstracts were randomly attributed to industry funding, government funding, or no funding source mentioned. The evidence was the same; only the apparent funding was different. And yet, the studies attributed to industry funding were rated as less rigorous, the physicians had lower confidence in the results, and they were less likely to prescribe the medicine. Transparency, in this case, induced bias.

Down-weighting or ignoring data from people or sources we dislike without empirical reasons to mistrust the data is to willingly position ourselves in a world with less information in the thin hope that the remaining information will somehow be better—but with no such guarantees.

And attempting to evaluate evidence based on factors beyond the data and methods negatively affects our ability to objectively understand and advance nutrition science.

So in the face of human nature and innumerable potential biases, what are we to do to improve our evidence base?

There are already substantial efforts underway to counteract the documented influences of human nature on scientific quality and reliability, including influences from both financial and nonfinancial biases. None of them require delving into logical fallacies.

“Blinded” or “masked” research methods help prevent participants and researchers from unwittingly or deliberately interjecting their preconceptions into the research. In clinical trials, this often means that neither the people giving the treatments nor the subjects receiving them know who is getting what.

Trial registration is another such effort: declaring up front what is going to be studied prevents serendipitous results from usurping the original purpose of the study.

Narrative reviews, such as the critiqued 50-year-old review, have since given way to systematic reviews. Narratively reviewing literature is a process rife with the opportunity for humans to interject their opinions and beliefs, while systematic reviews can be thought of as reviewing the scientific literature in a scientific way. Decisions on what will be included in the review can still be influenced by human nature, so scientists have gone one step further, asking researchers to register what they plan to review before they do so.

Of course, these efforts have not solved all of the problems. No matter how unbiased we humans think we are, we are still human and likely to err. Blinding treatments in nutrition studies is notoriously difficult. The COMPare Trials effort has tracked numerous examples where outcomes were switched from the time of trial registration to trial publication. And we have noted many quantitative errors in the literature that need correction.

These efforts provide an objective outlet for accountability. I was taught that in science, only three things matter: the data, the methods used to acquire the data, and the logic used to connect the data to the conclusions. If a trial switches its outcomes or data are misanalyzed, the motives are immaterial to science as an enterprise. Whether the error resulted from researchers personally liking the food, an industry funding the study, or a lapse in training does not change the end result: The scientist did not uphold his or her commitment to objective, reliable evidence. No conspiracy theory, innuendo, or indignation is required to identify or resolve the errors.

Human foibles are real, and keeping their influences out of our interpretation of evidence is a noble, if perhaps impossible, goal. But dwelling on potential sources of bias instead of deeply investigating the science perverts our ability as humans to understand the world as objectively as possible. We all have biases and preconceptions when it comes to nutrition, informed by tradition, reason, scientific literature, and personal experiences. Each step we take to remove human nature from the scientific process brings us closer to an objective truth.

Disclosure: The author has received travel expenses from Academy of Nutrition and Dietetics, Alberta Milk, American Heart Association, Danisco, DC Metro Academy of Nutrition and Dietetics, Federation of American Societies for Experimental Biology, and International Life Sciences Institute; speaking fees from Academy of Nutrition and Dietetics, Alberta Milk, American Society for Nutrition, Birmingham District Dietetic Association, International Food Information Council, International Food Information Council Foundation, and Rippe Lifestyle Institute, Inc.; monetary awards from Alabama Public Health Association, American Society for Nutrition, Science Unbound Foundation, and University of Alabama at Birmingham Nutrition Obesity Research Center; and grants through his institution from NIH/NIGMS-NIA-NIND, and UAB NORC. He has been involved in research for which his institution or colleagues have received: unrestricted gifts from National Restaurant Association and grants from Coca-Cola Foundation, National Dairy Council, NIH/NIDDK, and NIH/OD.