Future Tense

Misinformation Telephone

How people and platforms spread stories during a global health crisis.

Photo: A red corded telephone. Miryam León/Unsplash.

“I don’t know … but I think this … epidemic is a form of population control.”

“Two weeks ago hardly anyone had heard about the virus. Now the … vaccine is almost ready. How big do you want the red flag to be?”

“Pharma has captured the governments of the world.”

“Our friends are being targeted & attacked & the media is lying about what’s actually going on.”

All of those sentiments, which came from popular social media posts, sound familiar to anyone who’s been following the news recently. But they aren’t about the coronavirus, and they aren’t even from this year. The first is from the 2014 Ebola outbreak, the second from a meme popular during 2015’s Zika outbreak, and the last two from the 2019 measles outbreak in Samoa. Online misinformation and hoaxes have become a kind of secondary infection that erupts during outbreaks.

In outbreak scenarios, there is always misinformation about the cause and progression of the disease: It was engineered in a lab, it was released by an entity that patented it, the government screwed up and released a bioweapon. There are conspiracies about treatments and vaccines: Pharma profiteers are using the panic to force vaccines on the public, the elites are getting clean vaccines, it’s all part of the depopulation agenda. There are the grifters peddling fake cures: vitamin C, colloidal silver, hemp oil, all available for purchase on the website of the person posting the meme, of course. There are partisan media figures politicizing the outbreak. Although the specifics change, these narratives recur.

Usually, the false narratives and conspiracies stay largely confined to particular affected communities or geographic regions. Zika, with its highly visible birth defects, held the world’s attention for a few weeks, but as a primarily mosquito-borne and sexually transmitted illness, it remained largely geographically contained. That regional concentration was reflected in social media posts about the epidemic: On Facebook, much of the misinformation and hoaxing that spread was limited to Brazilian communities and, to a lesser extent, the United States.

This time, the disease itself—and the attention paid to it—are on an entirely different scale than Zika, measles, and prior outbreaks in the era of social media.

The early death toll and speed of spread were deeply troubling, and the stories of affected people that trickled out (despite China’s controlled social media environment) compelled the world to follow along. The increasing toll of the pandemic is reflected in the social media conversation. Narratives are going global; “cure” hoaxes and information “from a friend of a friend who is a doctor” are spreading like wildfire, getting translated into other languages, and hopping from online community to online community around the world. It’s a global game of misinformation telephone: By the end, no one has any idea what the source was, or even what was originally said.

Social media is a continuously evolving environment in which new apps and features crop up, some serving specific countries or regions. Each platform has its own norms and behaviors, which means misinformation has to be tackled across a wide range of distinct environments. TikTok, for example, hadn’t yet been widely adopted during prior outbreaks; in the coronavirus era, it has become a place for its young user base to share information. Shelter-in-place memes have become part of the culture on the app. The World Health Organization’s content is also prominently featured in #coronavirus searches as TikTok’s trust and safety team works to keep bad information from going viral.

Other, older platforms can provide lessons from outbreaks past. When Zika hit, 50 percent of Brazil’s population was on WhatsApp. As it turned out, having a large portion of the population in one digital gathering place had both pros and cons that we can learn from today: Doctors across Brazil used WhatsApp to share information in medical chat groups, discussing the odd clusters of symptoms they were seeing in the early days of the outbreak. Public health organizations pushed PSAs into groups, and pregnant women started support channels.

For all of their flaws and pockets of misinformation, these are valuable communication channels, and they offer a significant opportunity for authoritative sources to reach the public where they are. The challenge for the platforms is to enable that reach, and those community support functions, while protecting the communities themselves from being overrun with nonsense and grift. They have to elevate authoritative content and voices while still allowing people to discuss their experiences.

Until the 2019 Brooklyn and Samoa measles outbreaks, tech companies had not really accepted the responsibility to surface authoritative health information and down-rank misinformation. Researchers, including me, had spent years suggesting that platforms needed to address quack-cure and anti-vaxxer groups. Health conspiracies began to fall under “fake news” fact-checking programs in 2017, but viral false content from organic posts on Facebook groups and pages was still largely treated as a free-expression issue. Because the harms were rarely immediate, the platforms didn’t get involved; the downstream impact on individual or public health wasn’t fully considered. One notable exception to this laissez-faire approach was Google search, which in 2013 had instituted a policy called “Your Money or Your Life,” acknowledging that search results for topics with significant personal impact should be held to a higher standard of care and not determined solely by popularity or other gameable metrics.

But as preventable measles cases spread in Brooklyn last year, media and elected officials alike began to look at the role that social platform information was playing in vaccine hesitancy and the resultant outbreaks of disease. They found a complex picture: Social dynamics played a role, though these included both online and offline factors. Rep. Adam Schiff wrote letters asking YouTube, Facebook, and Amazon to account for the steps they were taking to ensure that conspiracies spread on their platforms weren’t negatively affecting public health writ large.* The companies released new policies: Inaccurate content could remain on a platform, but the platforms would no longer serve ads against it or recommend groups or pages that shared it. False health information was down-ranked and deprecated.

Those policies have recently been applied to the coronavirus as well. Any time a user includes the word coronavirus in a search, Twitter displays a banner linking to the CDC; Pinterest, which limits results for queries where it can’t ensure scientifically reputable content, is returning only material from prominent health organizations; YouTube is returning results from authoritative sources and actively working to delete cure-hoax content. Reddit is quarantining conspiratorial communities. And on Facebook, which struggles with peer-to-peer misinformation in highly conspiratorial communities, Mark Zuckerberg has put up a number of posts and detailed Facebook’s evolution in recognizing the responsibility that platforms bear in addressing health misinformation.

These measures are a marked improvement over outbreaks past, but grift and misinformation are still proliferating. And there are fewer human moderators to do the work because companies have closed their offices, leaving us even more dependent on A.I. The general challenge of managing misinformation in a crisis is compounded by the sheer speed and scale of this disease’s spread. Every country is, or is likely to be, affected by the coronavirus. A crisis of authority in certain countries makes the problem worse. In the U.S., for example, the coronavirus is already politicized; depending on which media environment you trust, you’re seeing very different things. Amplifying authoritative sources leads to allegations of censoring the other side’s point of view, even if that point of view is factually inaccurate and not in any way backed by sound science.

Social media platforms are under pressure to ensure that sensationalism and misinformation don’t exacerbate an epidemic, as they should be. They are the gatekeepers of good information during this crisis. The problem is that, much like the disease, misinformation spreads wherever people congregate. It’s in groups, on pages, in subreddits, and in Discord servers. But it’s also in iMessage and texts, which are being translated and forwarded in this game of digital telephone. It’s spreading over email, which has seen a remarkable return of the kind of chain-letter forwards that used to go viral in the late 1990s. The people who share this wrong information generally do so with good intentions: They want to warn their communities and friends. They want to share the latest news and stories. And so it falls to each of us to thoroughly check anything we come across before spreading it. Think of it like washing your hands: Do it to protect yourself, and others.

Correction, March 20, 2020: This article originally misstated that Twitter was one of the tech companies that received letters from Rep. Adam Schiff about health misinformation on their platforms. Twitter did not receive a letter.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
