On Tuesday, Twitter added a warning, marked with an exclamation point symbol and the text “Get the facts about mail-in voting,” below President Donald Trump’s tweets about mail-in ballot fraud. When clicked, the warning led to tweets and other information about the reliability and security of mail-in voting under the header “Trump makes unsubstantiated claim that mail-in ballots will lead to voter fraud.” Twitter justified the move by saying the tweets violated the platform’s civic integrity policy because they “could confuse voters about what they need to do to receive a ballot and participate in the election process.” Trump responded by threatening the social media industry over what he called its censorship of conservatives.
In this microcosm, we see some of the major problems social media companies will face until Nov. 3 as they attempt to moderate election misinformation. Threats and interference won’t come just from ne’er-do-wells or foreign powers, but from the sitting president and his supporters. In many ways, these companies are (unenviably) responsible for upholding the integrity of the election while the president undermines it and political parties and institutions fail to hold him accountable. That is why platform policies, and how they are enforced, matter.
There are three important things to know about how social media platforms will deal with election misinformation. First, false information is generally allowed unless it falls into one of the specific categories of content, such as democratic processes, in which platforms are willing to enforce the truth. That’s good news: Social media platforms have said they will do something. Second, social media platforms’ policies differ significantly, as does their enforcement, so what’s banned on one may be fair game on another. And finally, social media companies do not want to enforce truth without scientific or institutional consensus—a worrisome requirement in these unprecedented times.
False and misleading information is generally acceptable under social media companies’ community standards. Twitter doesn’t remove tweets claiming that Joe Scarborough is a murderer (a claim the president has also made on Facebook and Twitter) or that the Earth is flat, even though those things aren’t true. That’s because social media platforms don’t want to be arbiters of truth (as Mark Zuckerberg said in a Fox News interview Thursday morning).
Rather—based on the policy analysis of Facebook and Instagram, Reddit, Snapchat, Twitter, and YouTube I conducted over the past several months—the platforms are taking on the role of enforcers of truth in at most four categories of content: health, manipulated media, tragic events, and civic processes. These are limited areas with institutional and scientific consensus where falsehoods can cause demonstrable harm. For instance, platforms have been willing to enforce the truth about vaccines’ efficacy, as there is scientific consensus that vaccines work and that not using them causes outbreaks of disease. The same was true for COVID-19. Civic processes, like voting, are treated similarly: There are established facts, agreed upon by authoritative institutions, about such things as how and when to vote and who is on the ballot. When it comes to election misinformation, all five of these companies are supposedly willing to step in and remove or flag false information about voting.
But where these companies draw their lines around what exactly is prohibited matters, and their rules vary significantly. For instance, Reddit and Snapchat don’t explicitly call out election misinformation, but both have pointed the press to specific policies they would apply. Reddit focuses on impersonation, prohibiting content that pretends to be from someone (or something) it’s not or that is falsely attributed to a source. Snapchat, meanwhile, has an overarching rule against “deliberately spreading false information that causes harm.” Most important for many readers, Facebook, Twitter, and YouTube draw strict lines around false or misleading information about the process of voting, like the time or place of an election or the qualifications to vote. Twitter placed the fact check on Trump’s tweets under this policy, interpreting the posts as false or misleading information about election processes. Facebook did not. (While Facebook has a third-party fact-checking program, politicians are exempt from it, so the program doesn’t apply to Trump’s posts.)
Social media companies’ reliance on scientific and institutional consensus to decide what to enforce is not as foolproof as it sounds. COVID-19 highlighted this in multiple ways. Health authorities and government administrations that should have agreed put out contradictory information. Since social media platforms want to rely on institutional authorities to determine what is fact and what is fiction, the independence and reliability of those institutions are paramount. And when it came to COVID-19, that independence and reliability were tested. Reuters reported that the hydroxychloroquine guidance the Centers for Disease Control and Prevention issued was highly unusual (based on anecdotes, not clinical studies) and crafted at the request of the White House’s coronavirus task force. Because of the guidance’s language, social media platforms did not consider the promotion of hydroxychloroquine dangerous health misinformation. After all, it did not violate the guidance from the CDC. This is probably why Trump’s tweets and Facebook posts promoting hydroxychloroquine were not removed, flagged, or labeled as false or misleading in any way.
In other words, the institutional guidance that Facebook, Reddit, Snapchat, Twitter, and YouTube were following was potentially manipulated by the very source of the misinformation they needed to correct.
To complicate matters further, platforms’ policies come with unspoken caveats. As I’ve written previously, social media platforms’ policies aren’t static. They are malleable, shaped by the actions of major public figures. Sometimes, platforms would rather change their rules than sanction an influential account. Sometimes, the specific limits their policies set prove open to interpretation, such as what counts as a banned political advertisement on Twitter.
Right now, the policies ban the worst of election misinformation, but their enforcement will be unreliable and insufficient. Sure, every platform will likely remove hundreds of posts of the “vote by text,” “election postponed,” “Republicans vote on Tuesday and Democrats on Wednesday,” and “ICE is at the polls” variety. But as with misinformation about COVID-19, some of the greatest threats will likely come from established media outlets and the president. These are the posts that companies will be most reluctant to take down or flag, even when they seemingly violate the policies.
At the moment, our political leaders and government institutions can’t be relied on to uphold democratic processes. That’s not social media companies’ fault, and it’s not a problem they can fully solve. But it leaves them with a choice: step in to uphold democratic principles and keep the public informed about the election, or not.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.