Future Tense

News Coverage Says a Study Claimed Fake News on Facebook Didn’t Affect the Election

But the news coverage is wrong.


In January, it seemed like we got some good news about Facebook for once, when three highly regarded political scientists released the results of a study of fake news on Facebook during the 2016 election. The media coverage of the study was extensive, and it made it seem like all that hand-wringing over fake news—those crazy stories dreamed up by Macedonian teenagers and Russian troll farm workers—was an overreaction. A piece in the New York Times claimed the study showed that fake news stories had only “little impact” on voters, because the “false stories were a small fraction of the participants’ overall news diet.” Quartz concluded the study said that “Just a small group of Americans consume fake news.” Perhaps unsurprisingly, the conservative website the Daily Caller concluded that the study showed fake news had a “considerably less significant impact on voters leading up to the 2016 election than many commentators would have you believe.”

Though the news coverage largely presented the study as if it concluded that fake news didn’t have much impact on the election, that is not the case. The study had an important limitation: It looked only at Facebook users who actually clicked on one of the fake news links littering their news feeds during the election.

For their research, Brendan Nyhan from Dartmouth, Jason Reifler of the University of Exeter, and Andrew Guess of Princeton looked at actual visitors to fake news websites during the last presidential election and concluded that these sites mostly influenced visitors who were already of a certain persuasion. They also found that few undecided voters actively engaged with the same websites. That’s an interesting finding, but it “significantly undershoot[s] the true exposure to this material,” says Michael Suman, research director at the USC Annenberg School of Communication Center for the Digital Future. “I would argue that most exposure to the content of fake news stories is probably incidental, a headline, references in all kinds of places, online discussions.” Suman points to the classic media studies concept of the two-step flow model, which he says “showed how news affected voters, but mostly secondhand, through opinion leaders who passed on to others what they got from the media.”

Colin Doty teaches courses on misinformation at UCLA and California Lutheran University and is currently writing a book about the phenomenon. He agrees that the study leaves out a substantial number of people who may be exposed to fake news: “It implies that people who didn’t click on the stories were not persuaded by them. The study doesn’t really know that, because people might also share the stories without reading them.”

Doty is quick to point out, however, that the authors aren’t at fault here: “I think the press is overemphasizing the impact, which the paper doesn’t actually address,” he says. “It talks about who clicked on the stories.” The study’s authors never claimed that it showed anything about impact, either: “Our study is very clear that we did not measure how much fake news affected an individual’s opinions about the election or whether fake news affected the outcome of the election,” professors Nyhan, Guess, and Reifler told Slate in an email. They also acknowledge that they only revealed the tip of the misinformation iceberg: “It’s important to emphasize that our estimates are likely a lower bound for total fake news consumption.”

If that’s the “lower bound for total fake news consumption,” then actual exposure could be far larger than what the media gleaned from the study. But it’s impossible to study the questions Doty and Suman raise in more detail: Facebook simply isn’t willing to part with the data, say the three professors behind the study. Emphasizing the importance of studying indirect exposure to fake news, they make it clear that they “specifically state in the paper that we do not capture other types of exposure. Unless Facebook chooses to share its data with us or other researchers, we cannot observe what happens in the news feed of a proprietary social media platform.”

Still, the news coverage of the study downplayed the influence of fake news. The New York Times claimed, for instance, that it “paled in influence beside mainstream news coverage.” But some researchers disagree about that, too. The Oxford Internet Institute in the U.K. is home to the Computational Propaganda Project, a group of researchers who study, among other things, fake news campaigns in elections. They found that in Michigan, a state Trump won in 2016 by fewer than 12,000 votes, fake news stories were shared as frequently as real news in the last days of the election: Stories from news outlets made up two-thirds of all shared content, and fully one of those thirds consisted of fake news. Nyhan, Guess, and Reifler generally praise the Computational Propaganda Project’s work, but they also point out to Slate that the Michigan study was based on Twitter users. Facebook, the authors say, is a more reliable platform to study because it has a much larger reach.

But there’s another, perhaps bigger, limitation of the study: It focuses on people who use Facebook on a computer, even though Facebook says that worldwide, about 93 percent of its daily active users in December 2016 accessed the site from a mobile device. (That percentage is probably slightly lower in the U.S., though.)

“Relying solely on click-thru rates and desktop web browsing data to measure the influence of fake news avoids different ways people engage in mobile social reading,” says Amelia Acker, professor of information studies at the University of Texas. It isn’t just that there are far more mobile users than desktop users; where you consume Facebook also changes the experience. “Motivations for reading an article, broadcasting, or reposting stories to your social network will be different across contexts, whether you are sitting at a laptop or looking at your phone while waiting in line,” she says. “So, what qualifies as ‘exposure’ to fake news will also be different.” You may be less likely to check the validity of a news story before sharing it on your smartphone than you would on a computer, for instance.

The study’s authors agree that the lack of mobile data is a limitation. They note that an appendix includes the mobile data they do have, showing that “the patterns we observe are similar in mobile browsers, but these data are more limited in both panelist coverage and the lack of page-level information.”

Nyhan, Guess, and Reifler have now submitted the study to an academic journal, where it is undergoing peer review. In other words, the version of the paper that received all the media attention was not the final product but more of a call for comments. Sharing unpublished research that has yet to undergo review is a process Nyhan believes in for academic reasons: “The feedback makes the work better,” he wrote on Twitter last year.

With a seemingly appropriate amount of foresight, his tweet continued: “But the media has to be careful. Essential to consult other experts, explain limitations, and put studies in context of literatures.” It’s a warning those covering his study would have been wise to heed.