Future Tense

Lies Travel Faster Than Truth on Twitter—and Now We Know Who to Blame

It isn’t bots, according to a major new study. It’s us.

Don’t trust those birdies.

It’s hard to remember now, but there was a time when some intelligent observers of social media believed that Twitter was a “truth machine”—a system whose capacity for rapidly debunking falsehoods outweighed its propensity for spreading them. Whatever may have remained of that comforting sentiment can probably now be safely laid to rest.

A major new study published in the journal Science finds that false rumors on Twitter spread much more rapidly, on average, than those that turn out to be true. Interestingly, the study also finds that bots aren’t to blame for that discrepancy. People are.

The paper, authored by scholars at the MIT Media Lab, analyzed an enormous data set of 126,000 rumors that spread on Twitter between 2006 and 2017, generating tweets from more than 3 million different accounts. Specifically, the researchers looked at claims that were subsequently evaluated by major fact-checking organizations and found to be either true, false, or some combination of the two. They found that false rumors traveled “farther, faster, deeper, and more broadly than the truth in all categories of information,” but especially in politics. On average, it took true claims about six times as long as false claims to reach 1,500 people, with false political claims traveling even faster than false claims about other topics, such as science, business, and natural disasters.
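
To make “deeper” and “more broadly” concrete: the study treats each rumor as a retweet cascade, a tree rooted at the original tweet, where depth is the longest chain of retweets and breadth is the widest single “generation” of retweeters. Here’s a minimal sketch of those two measurements in Python; the tiny tree is invented for illustration and isn’t from the study’s data:

```python
# Toy sketch of two cascade metrics behind "deeper" and "broader":
# model a rumor's spread as a retweet tree rooted at the original
# tweet, then measure depth (longest retweet chain) and breadth
# (widest single generation). The example tree is invented.
from collections import defaultdict, deque

def cascade_depth_and_breadth(edges):
    """edges: (parent, child) retweet pairs, rooted at 'origin'."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)
    depth, breadth = 0, 1
    level = deque(["origin"])              # generation 0: the original tweet
    while level:
        breadth = max(breadth, len(level)) # widest generation seen so far
        nxt = deque()
        for node in level:
            nxt.extend(children[node])
        if nxt:
            depth += 1                     # one more retweet "hop"
        level = nxt
    return depth, breadth

# origin -> a, b; a -> c, d; c -> e  (a chain three hops long)
edges = [("origin", "a"), ("origin", "b"),
         ("a", "c"), ("a", "d"), ("c", "e")]
print(cascade_depth_and_breadth(edges))    # (3, 2)
```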

That wasn’t because the people tweeting false news had more followers. On the contrary, the researchers found that the people spreading lies generally had smaller followings than those spreading the truth. The lies won the race anyway, suggesting there’s something inherent in Twitter falsehoods that makes them more prone to spread than truths.

This is fascinating and important research, even if its findings won’t exactly shock most people familiar with the workings of social media. The study appears to be the most careful and comprehensive of its kind, and the researchers report that their results stood up to all manner of robustness tests. It suggests that if companies such as Twitter and Facebook care about combating falsehoods, they’re going to be fighting an uphill battle not just against bad-faith actors intentionally disseminating lies, but also against human nature and the dynamics of their own platforms.

“Human” is a key word there, because the study makes it clear that automated accounts aren’t the ones driving the discrepancy. Sure, bots retweeted false news stories—but they did so at a rate commensurate with their amplification of true news stories. The authors found that lies spread faster than truths regardless of whether bots were included in the sample or systematically excluded from it.

One thing the study did not answer definitively is why lies spread so much faster than truth on Twitter. But the authors did offer up at least one prime suspect: novelty. Using a complex scoring system to measure the novelty of information in each rumor, they found that false news stories were more novel than true ones—and also that novel information is more likely to be retweeted. Taken together, those findings strongly suggest that novelty plays at least some role in the speed at which falsehoods proliferate on Twitter. That accords with previous studies that suggest people are more likely to find new information valuable, and to share it on social media. (It’s also sort of obvious, right?)
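
The intuition behind that novelty measure is simple enough to sketch. Assuming each tweet has already been reduced to a topic distribution (the paper uses a topic model for this), a tweet’s novelty can be scored as the information-theoretic distance between its topics and the topics a user recently saw. The code below is a toy illustration under those assumptions, not the authors’ actual pipeline; the function and the numbers are hypothetical:

```python
# Toy novelty score: compare a tweet's topic distribution to the
# average topics of tweets the user was recently exposed to.
# Illustrative only -- not the study's code or exact method.
import numpy as np
from scipy.stats import entropy  # with two args, computes KL divergence

def novelty_score(tweet_topics: np.ndarray, seen_topics: np.ndarray) -> float:
    """KL divergence of the tweet's topic mix from the average topic
    mix of the user's recent exposure. Higher means more novel."""
    background = seen_topics.mean(axis=0)  # average of recent exposure
    eps = 1e-12                            # avoid zeros in the divergence
    return float(entropy(tweet_topics + eps, background + eps))

# Recent exposure history: two tweets dominated by the first topic.
seen = np.array([[0.7, 0.2, 0.1],
                 [0.6, 0.3, 0.1]])
familiar = np.array([0.65, 0.25, 0.10])    # matches what the user has seen
novel = np.array([0.05, 0.10, 0.85])       # concentrated on a rarely seen topic
print(novelty_score(familiar, seen))       # near zero: low novelty
print(novelty_score(novel, seen))          # large: high novelty
```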

It might be tempting to conclude here that the real culprit in online misinformation is simply human nature. After all, that old saw about lies outpacing the truth is a lot older than Twitter. It would be a mistake, however, to assert that this study proves that.

What it shows, provided we accept its conclusions, is that the truth-falsehood gap arises from some combination of human nature and the nature of Twitter. There’s no guarantee that these findings would apply to some other platform. And there is plenty of reason (by now) to suspect that Twitter in particular lends itself to the rapid spread of falsehoods: It’s explicitly built to facilitate the rapid, public sharing of information, and does little to incentivize people to independently verify the information they’re retweeting. In fact, the study’s findings suggest you’re more likely to be rewarded on Twitter with piles of retweets for spreading lies than you are for spreading truths. (I wrote this week about the centrality of these metric rewards to the Twitter experience—and perhaps also to the platform’s biggest problems.)

The MIT Media Lab study’s comprehensiveness doesn’t mean it’s infallible, of course, and critics may find flaws in it in the coming days. One question that stood out for me on first read is whether the authors properly controlled for the possibility that false rumors are easier to trace, since they’re likely to emanate from a single source (or, at least, a smaller number of sources). True news stories, in contrast, can often be independently verified and framed in different terms by numerous original sources, potentially making them harder to collate.

Soroush Vosoughi, a postdoctoral associate at the MIT Media Lab and the study’s lead author, told me that pulling together all of the tweets around a particular rumor was in fact the researchers’ toughest challenge. Although they used sophisticated natural language processing software to do that, Vosoughi said they couldn’t be sure they captured every relevant tweet. “It’s possible that the automation biased false news over true news,” he said. “But even if there was a bias, I don’t think it was big enough to cause such a huge difference” as the study found.

Vosoughi volunteered one more critique of the study, which is that it only looked at contested claims. Obviously, there are plenty of presumably true news stories that never get vetted by professional fact-checkers—those weren’t included in the study. That makes sense, because the sample of uncontested true news stories is essentially limitless. Still, it leaves open the possibility of some selection bias: Presumably, only the most viral and convincing false news stories rise to the level of debunking by professional fact-checking sites, whereas a lot of the most viral and convincing true news stories do not require third-party fact-checks. (To address this, the researchers did run at least one test involving stories not previously fact-checked, and found “nearly identical” patterns.)

All that said, the study appears well-constructed and persuasive, and the question now is: What can Twitter do about it? It’s no longer safe to assume that social media platforms constitute a level playing field on which truth can reliably conquer falsehood. If Twitter and others want to fight fake news, hoaxes, and misinformation, they’re going to have to make some big changes—bigger than the spotty, inadequate human-moderation approach they’ve largely relied on until now. For what it’s worth, Twitter CEO Jack Dorsey seems to be listening: He tweeted Thursday that Twitter “supported this research” and will use its findings to improve its platform. It’s another example of Twitter showing the right amount of curiosity about what ails it—and still having no real prospect of a cure.

Will Oremus is Slate’s senior technology writer. Email him at will.oremus@slate.com.