The coronavirus pandemic is spurring social media companies to implement some of the most aggressive misinformation policies yet. On Thursday, Facebook announced that it will start notifying users if they have liked, reacted to, or commented on harmful misinformation about COVID-19 that the company has since removed. Examples of such posts include claims that social distancing is ineffective or that drinking bleach will cure the disease. These notifications will also connect users to COVID-19 myths debunked by the World Health Organization. The goal is essentially corrective: alert users to fake news they’ve encountered, then steer them toward the truth.
Facebook plans to unveil this new feature in the coming weeks, but it’s still testing how the notifications will look, a Facebook spokesperson told Axios. The company has also added a new “Get the Facts” section to its COVID-19 Information Center at the top of the news feed that features fact-checked articles that disprove misinformation about the disease.
This is the latest step in Facebook’s efforts to counter coronavirus misinformation. In the past month, the company has grown its fact-checking program, which works with a network of over 60 fact-checking organizations around the world. Facebook attached warnings to about 40 million coronavirus-related posts in March alone, and the company said it’s removed hundreds of thousands of posts with false claims “that could lead to imminent physical harm” since the start of the pandemic. So far, over 350 million people on Facebook and Instagram (which is owned by Facebook) have clicked through pop-ups and the COVID-19 Information Center to learn more about the disease.
Facebook is not the only social media platform to expand its policies amid the pandemic. Last week, WhatsApp imposed strict limits on forwarding messages to slow the spread of misinformation, though, unlike its parent company Facebook, it can’t flag or remove specific messages, since its end-to-end encryption prevents moderators from seeing what users send. On March 18, Twitter created a policy to combat COVID-19 misinformation that includes removing tweets with “content that increases the chance that someone contracts or transmits the virus.” But a study by the Reuters Institute at Oxford University found last week that almost 60 percent of false claims about COVID-19 remain on the platform without a warning label. YouTube has been removing thousands of COVID-19 videos, including some from Brazilian President Jair Bolsonaro’s channel, for spreading medical misinformation.
Yet some of these plans—including Facebook’s misinformation notification—may not be as foolproof as social media companies hope. Facebook said that users who encounter warning labels don’t go on to view the original content 95 percent of the time. But misinformation warnings and notifications don’t always have the intended effect: They can lead to what researchers have called the “implied-truth effect,” where the selective labeling of false information makes all unlabeled content seem legitimate. “This is a huge problem, because fact-checking is hard and slow, whereas making up fake stories is fast and easy,” David Rand, an associate professor of management science and brain and cognitive sciences at MIT, told Intelligencer.
In Facebook’s case, 24 percent of the platform’s false or misleading content had remained up without a warning as of last Tuesday, according to the Reuters Institute study. While Facebook’s latest policy will help to fix the damage done by posts that have now been removed, it doesn’t address the harmful content that still exists, unlabeled, on its platform. According to a recent study on false news labels, which Rand co-authored, that might require labeling verified content “true” to eliminate ambiguity, or employing more fact-checkers to assess every single piece of COVID-19-related content on the platform. But the problem will be difficult to address fully: After all, social media platforms were designed to maximize engagement and traffic, not accuracy. As the WHO’s director general put it, we’re in the middle of an “infodemic,” in which information—false or not—“spreads faster and more easily than this virus.”
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.