Cornell University marketing professor Brian Wansink is famous for surprising findings about food—e.g., that people eat more popcorn when it comes in bigger tubs, or that the characters on cereal boxes are drawn with eyeballs looking down, as if to make eye contact with children in the supermarket aisle. Recently, though, the scientist has been in the news not for his findings but for how he made them. “Think of all the different ways you can cut the data,” Wansink wrote in July 2013, to a woman who was about to join his lab. He’d given this new student numbers from a field study he conducted at an all-you-can-eat pizza restaurant near Binghamton, New York; now he wanted her to analyze different subsets of the data in as many ways as possible, in the hopes of pulling out some publication-worthy findings. “Work hard, squeeze some blood out of this rock, and we’ll see you soon,” he said.
This email, among others, appeared on Sunday in a damning new report on Wansink’s famous Food and Brand Lab from BuzzFeed’s Stephanie M. Lee. The story, which builds on Lee’s months of careful coverage, reveals the best-selling author, media darling, and former U.S. Department of Agriculture official as perhaps the most egregious—or at least the most cartoonish—villain of the replication crisis in psychology, someone who seems to have embraced questionable research practices with astonishing enthusiasm. “There’s been a lot of data torturing with this cool data set,” Wansink wrote to one collaborator. The data “needs some tweeking” to reach statistical significance, he wrote to another. One email says that Wansink’s colleague had run 400 different tests behind the scenes to try to find a positive result. It’s been 15 months since Wansink first revealed his fondness for these sketchy methods in a public blog post; since then, data critics have uncovered flaws in dozens of his papers. As of this week, Wansink’s lab has made six retractions. More could be on the way.
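A quick back-of-the-envelope calculation shows why running 400 different tests behind the scenes all but guarantees a publishable-looking result, even on pure noise. (This sketch is an illustration of the general statistics, not a reconstruction of Wansink’s analysis.)

```python
# Chance of at least one false positive when repeatedly testing pure noise.
# At the conventional cutoff, each test has a 5% false-positive rate.
import math

alpha = 0.05    # conventional significance threshold
n_tests = 400   # the number of tests mentioned in the emails

# Probability that at least one of the tests comes up "significant" by chance
p_at_least_one = 1 - (1 - alpha) ** n_tests
print(f"P(at least one 'significant' result): {p_at_least_one:.6f}")  # effectively 1

# How many tests before a chance "finding" becomes more likely than not?
n_for_even_odds = math.ceil(math.log(0.5) / math.log(1 - alpha))
print(f"Tests needed for >50% odds of a false positive: {n_for_even_odds}")
```

At 400 tests, the odds of squeezing out at least one spurious “significant” result are, for all practical purposes, 100 percent; even odds arrive after just 14.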
As the Food and Brand Lab’s reputation continues to melt down, some psychologists have wondered how to take the news: Were Wansink’s sketchy practices—his blatant “p-hacking” of results—unusual, or does his method represent a woeful norm? That question may be worth parsing, but I’ve begun to think it overlooks the underlying story. Wansink’s fall is both singular and parabolic: It can be taken as a moral teaching on what happens when an expert gorges on his good intentions and fattens off his expertise.
Brian Wansink’s mission as an academic, celebrity, and civil servant has always been the same: appropriate the marketing tools of Big Food propaganda, and use them as a fix for public health. His lab would show how Sesame Street characters could coax a child into eating apples and argue that vegetables should be rebranded as “X-Ray Vision Carrots,” “Super Salad,” or “Tiny Tasty Tree Tops.” In these and other projects, he aimed to be a white-hat hacker of consumer drives, seeding better habits in our brains. But in the end, it looks as though this scholar of the science of marketing lost track of the difference between science and marketing. In the end, the “Sherlock Holmes of Eating” was overtaken by his own high-flying brand.
Like any decent CEO, Wansink likes to tell the story of his start. In Mindless Eating, his 2006 book for Bantam, he describes how he got started on his “mission.” It all began when he was a boy in Iowa, he says, and heard his farmer Uncle Lester wonder what might encourage people to eat more corn. Wansink went on to study business as an undergrad, and communications for his master’s, then found himself working as a marketing consultant for Better Homes & Gardens. When he learned how the magazine analyzed potential readers—testing different covers to determine which would sell the best—he was stunned, and he was hooked. It seemed to him the power of this method could be redirected as a force for good. Within six months, he was trying to enroll in Stanford’s Graduate School of Business, for a Ph.D. in marketing. On his application, he wrote: “My goal is to get people to eat more fruits and vegetables.”
At the center of this project would be Wansink’s fascination with advertisers’ dirty tricks. He made himself a student of Vance Packard’s 1957 book The Hidden Persuaders, a critical exposé of subliminal messaging and other forms of media manipulation, and modeled himself as a Packard of the pantry—an expert on unconscious factors that determine what and when we eat. If Packard was a social critic, though, Wansink meant to be an activist: He’d first identify the hidden persuaders that make us fat, then he’d rewire them to make us thin.
In the years that followed, Wansink’s lab would draw lots of lessons from the tactics of the food industry. If supersizing portions makes us overeat, then Wansink tried to show us how we might fight back by downsizing our consumption norms. (One line of his research suggests we should all be using smaller plates.) In 2010, shortly after he finished up his stint at the USDA, revising the nation’s Dietary Guidelines, Wansink co-founded the “Smarter Lunchrooms Movement” with major funding from the government; its mission was to fiddle with the hidden persuaders one might find in school cafeterias, using evidence from “economics, marketing, and psychology” to “nudge students to voluntarily elect the healthiest food.” The program would rely on research from the Food and Brand Lab, suggesting, for example, that fruits and vegetables were more attractive when rebranded with kid-friendly names (the aforementioned “X-Ray Vision Carrots,” for example) or decorated with Sesame Street stickers.
Wansink thought he’d figured out how to market healthy food to kids. But as an expert on the ways of business, he also knew that his ideas wouldn’t sell themselves. If his research were to make a difference—if it would be “impactful,” to use one of Wansink’s favorite words—he’d have to figure out how to market his ideas on marketing. He’d have to find an “Elmo” sticker to put onto his data, so people paid attention. You can see this hunger for attention in the emails published Sunday: “We want this to go virally big time,” he wrote in one; “let’s think of renaming the paper to something more shameless,” he wrote in another.
This was not a secret focus. Wansink has been very clear about the need for clever marketing of his and others’ research findings. When he and a scientific partner, Koert van Ittersum, edited the first issue of a brand-new scientific journal in 2016, they invited authors of accepted manuscripts to a workshop retreat in Ithaca, New York.* As the pair explained this process in an editorial for that journal, the workshop was supposed to teach each author how to clarify their paper’s “positioning” so it would stand out “in a crowded research area,” or else to “craft a repositioned ‘takeaway,’ so that it resonates with consumers.” Participants then crafted, rehearsed, and filmed “party pitches” of those takeaways. You can still watch their awkward science-promo videos on Wansink’s website: “Can a McDonald’s Happy Meal make you want to eat less?” one workshop member asked; “Think about the last time you ate. Were you even hungry?” began another acolyte.
These promotional campaigns weren’t meant to sell the research to the public, though—or not only to the public. When Wansink and van Ittersum said the papers in their issue of the journal should “resonate with consumers,” they were referring just as much to the readers of that journal, which is to say their fellow scientists. They aimed to market science to their peers, and their product sales would be measured in citations.
The impulse to treat everything—even scientific publishing—according to the logic of big business shows up in the BuzzFeed emails: “Too much inventory; not enough shipments,” Wansink writes of one former student who had failed to get tenure, in part because she had too many unfinished papers lying around. “As Steve Jobs said, ‘Geniuses ship.’ ” He put some similar advice into another paper with van Ittersum on how “boundary researchers” like them—i.e., those doing risky, cutting-edge research—might elbow out competitors in the market of ideas. That article describes various “transforming behaviors” that would help scientists “take their ideas to a new level of influence” and orient themselves toward “surprising impact” that maximizes their chances of citation and promotion. These might as well have been transposed directly from the work that Wansink did for Better Homes & Gardens in the 1980s: If you want your paper to get downloads, he says, you’ll need to make it “notable and quotable,” and you’ll need to write a proper cover line.
There was never that much room for subtlety in this marketing of research findings. In a 2006 interview with the late Seth Roberts, Wansink described his realization, while he was still a Stanford student, that research counts for more when it “gets buzz.” “I’m a big believer in cool data,” he told Roberts. “The design goal is: How far can we possibly push it so that it makes a vivid point? Most academics push it just far enough to get it published. I try to push it beyond that to make it much more vivid.”
The science businessman had decided early on that he’d look for data that’s a cinch to sell. With “cool” results in hand, he’d have a better chance of moving research inventory into journals, and that would lead to greater sales—bigger grants—which would in turn allow for further reinvestment in his research. In the end, the tools of marketing would help him to pursue his big idea, that the tools of marketing could help to make us eat more healthy food.
Wansink even did his own laboratory research on the marketing of laboratory research, trying to identify hidden persuaders in his own métier. For a 2014 paper that he wrote with Aner Tal, “Blinded With Science,” Wansink tested whether janky graphs, figures, and formulas could increase belief in scientific claims. It wasn’t just that people put more faith in stories told with “science-signaling,” this paper said; it also looked like graphs and formulas worked best when they were marketed to people who believed in science. “The research demonstrates how easily companies can create a scientific appearance,” Wansink and Tal conclude. “The fact that elements associated with science can so easily enhance persuasion urges caution in the communication of purportedly scientific claims,” they add, before suggesting that we all adopt “a more critical eye when it comes to assessing claims that are given a scientific veneer.”
In what can only be described as a staggering irony, this paper itself seems to contain some janky findings, which have, of course, been packaged in a way that makes them resonate with a science journal’s science-minded readers. For instance, two of the study’s three major findings of significance have p-values of .04 and .07, respectively, hovering near the standard cutoff for statistical significance. (In other contexts, this pattern has been taken as a sign of publication bias and p-hacking.) There are other signs of problems, too. James Heathers, one of the data detectives who has identified mistakes in Wansink’s other work, notes an error in the very first line of the “Blinded With Science” paper’s Methods section: It describes the first study as having tested 61 participants, of whom 51.7 percent were male. As Heathers pointed out to me, that percentage doesn’t work. (It would make sense if Wansink and Tal tested 60 participants, instead of 61—and 31 of those were men.) There are some other red flags in that paper, too, Heathers says.
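The arithmetic behind Heathers’s observation is easy to check: a reported percentage implies an integer head count, so for a given sample size only certain percentages are possible. Here is a minimal sketch of that consistency check (the function name is mine, not drawn from Heathers’s published tools):

```python
# Check whether a reported percentage is achievable for a given sample size.
def achievable_percentages(n, decimals=1):
    """All percentages (rounded to `decimals` places) that an integer
    count out of n participants can produce."""
    return {round(100 * k / n, decimals) for k in range(n + 1)}

# The paper reports 51.7 percent male among 61 participants:
# no whole number of men out of 61 rounds to 51.7 percent.
print(51.7 in achievable_percentages(61))  # False

# But with 60 participants, 31 men gives 31/60 = 51.66...%, which rounds to 51.7.
print(51.7 in achievable_percentages(60))  # True
```

The same kind of granularity check, applied to reported means rather than percentages, underlies the “GRIM” test that Heathers and Nick Brown have used to flag inconsistencies in published papers.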
Writing 61 for 60 might not sound like a huge deal. But Heathers and several colleagues—notably Nick Brown, Jordan Anaya, and Tim van der Zee—have turned up a cache of such mistakes in Wansink’s output. Some are rather small and could be explained by things like rounding errors. Others seem somewhat more preposterous. The X-Ray Vision Carrots study, for example, was said to have been conducted on kids between the ages of 8 and 11; in fact, it was done on kids in preschool. Heathers noticed that one of Wansink’s studies of potato chips seems to have a shifting sample size (as well as chips that somehow weigh the same as strawberries). In reviewing Wansink’s famous study of a bottomless soup bowl, he concluded that its reported numbers made no sense. Meanwhile, Brown has found (among other things) that three different survey studies from the Wansink lab somehow ended up with the exact same number of respondents, and in the study of Elmo-branded apples, one graph seemed completely wrong.
Could these errors have been introduced in the rush to ship out findings from the factory floor? Did they emerge somewhere in the process of extraction, while results were being mined from data quarries and blood was getting squeezed from rocks? We may never get specific answers to these questions. But here’s one thing we know: For Wansink, the marketing was science and vice versa. He thought he’d figured out a way to get kids to eat more fruits and vegetables. Then he treated his results as fruits and vegetables, and figured out a way to feed them to the rest of us.
Correction, March 1, 2018: This piece originally misstated that Wansink and van Ittersum launched this journal on their own. In fact, they were the guest editors for its inaugural issue.