On Thursday, Facebook announced a set of U.S. election season changes to political advertising and content moderation. While Mark Zuckerberg argued that these changes are intended to do Facebook’s part “to protect our democracy,” we’re concerned that they’ll have the opposite effect by suppressing important political speech and neutering get-out-the-vote campaigns at the most important moment in the election.
To start, Facebook will ban new political advertising beginning Oct. 27, a week before the U.S. election on Nov. 3. Because the ban is for a fixed period and applies only to new advertising, it’s far more limited than the restrictions Twitter announced in 2019, which banned most political ads. Still, as we pointed out when Twitter announced its policy, bans are likely to muzzle important political speech and disproportionately burden challenger campaigns while benefiting more powerful incumbents with large organic reach on social media platforms. (We should also note that one of us, Matt, previously worked at Facebook as director of public policy.)
Ostensibly, the intent of this policy is to force political advertisers to disclose all of their ads in advance so that third parties, such as fact-checkers and rivals for political office, have a chance to scrutinize them. That’s a good goal. Transparency helps inform the public about election dynamics and holds advertisers accountable for deceptive practices.
But in practice, the blackout period will likely suppress important political speech. Facebook will cut off new ads precisely when voter mobilization is at its height, hampering get-out-the-vote campaigns, which rely on social media advertising to deliver time-sensitive information and reminders to voters in key states in the days before an election, such as “Early vote ends tomorrow.” These appeals are especially important for mobilizing harder-to-reach and less politically engaged voters, such as young people.
What’s more, a blackout will prevent campaigns from using paid speech on Facebook to respond to late-breaking events during that final week.
For instance, during the 2016 election, FBI Director James Comey announced on Oct. 28 that the bureau was reviewing a newly discovered trove of emails related to its investigation of Hillary Clinton’s private email server, then stated on Nov. 6 that the new emails had not changed the FBI’s earlier conclusion. (The election was held Nov. 8.) Under Facebook’s new policy, campaigns could run ads based on the October letter, but couldn’t run new ads about the announcement that effectively cleared Clinton.
In other words, what this blackout will likely mean in practice is a de facto ban on campaigns responding to late-breaking events on Facebook. (At least campaigns will still be able to respond on other platforms such as Google and TikTok and through news coverage.) It will also effectively curtail counterspeech during the final week before the election.
Campaigns and consultancies will dump massive numbers of ads on Facebook immediately before the deadline so they have maximum flexibility to run them during the blackout on new ads. Such a flood of new ads will be hard to scrutinize, which will diminish the value of making them transparent. Worse, the blackout will prevent rival campaigns from creating new ads to respond to at-the-deadline attacks or false claims.
Who does all this likely reward? Candidates with large followings on Facebook who can spread their speech and counterspeech organically. And these changes likely benefit far-right and fake news content that spreads organically through engagement.
In addition to the blackout period, Facebook announced that it will make its Voting Information Center more prominent at the top of users’ news feeds and that it will take additional steps to counter election disinformation, false claims of victory, and premature declarations of election outcomes. While making accurate and reliable voter information available at the top of the news feed is a laudable move, its effect may be limited if voters have to click through to it on their own. As an expert working group recently recommended, a better solution is to push reliable information into users’ feeds so it is woven into the content they see as they scroll.
Facebook’s new policies also add scrutiny to false claims about polling conditions and expand its removal of voter suppression content to cover implicit as well as explicit misrepresentations about voting. These changes are promising, but critical details remain uncertain. How will Facebook define “explicit” or “implicit” attempts to suppress the vote? What will be grounds for removing content versus leaving it up with a label? How long will these policies stay in effect? Months, if the election is contested?

To date, labeling has been particularly confusing. When Facebook labels content, it is simply not clear to users whether the company has determined that speech to be false, in need of greater context, or merely notable enough to warrant further attention. For example, when Trump posted on Facebook encouraging people to vote both by mail and in person, the social network’s response was to add a label saying that voting by mail is secure. But that label doesn’t explicitly say that Trump was encouraging something illegal or spreading false information.
The company’s ongoing vacillations also make it hard to feel confident that today’s policies will be in place tomorrow. Only a few months ago, Facebook refused to remove or label Donald Trump’s posts on voting and racial violence, arguing that political speech by a sitting president shouldn’t be mediated by a tech platform. Now, Facebook is taking much more aggressive action against the president’s speech, removing claims it views as harmful and labeling others. There are benefits to each approach, but swinging from one to the other in a matter of months makes it difficult to understand the company’s principles and leaves users unsure what to expect from the platform. These recent shifts are particularly odd given that they come less than a year after Zuckerberg gave a major policy speech attempting to outline a clear philosophy of free speech for the platform.
The reality is that we don’t have the data we need to evaluate Facebook’s decision-making, so we cannot know whether any of these changes in policy or enforcement were the right ones to make, or whether they are enough to stamp out voting disinformation while also preserving speech that’s a vital component of free and fair elections. Facebook and others are supporting new research in this area, but it’s important to get more accurate data on the costs and benefits of paid online speech before deciding to cut it off.
Better decision-making will likely come from better data. For instance, most academic research shows that political advertising on social media is generally used to mobilize and that persuasion is difficult, meaning that a blackout period may have little impact on whether people see misinformation, but might have a significant detrimental effect on efforts to get more people to vote.
As we said last year, several alternative approaches are more promising. Companies could add product features that make it easier to engage in counterspeech, such as by enabling rival campaigns to publish ads to the same audience. Companies could also focus on ensuring that advertisers don’t violate their existing policies by surreptitiously using targeting to undermine voting integrity. Finally, they could ensure that they don’t profit financially from paid election speech by committing all political advertising revenue to nonprofits working in election integrity or to their own election integrity products.
In the end, decisions about electoral speech should be made by governments, not private companies. Facebook’s power to set the terms of political speech reveals a failure of the U.S. to establish a regulatory framework that will secure free and fair elections—a cornerstone of democracy. While we appreciate that Mark Zuckerberg is attentive to his company’s role in democracy, two months before the most consequential election of our lifetimes isn’t the right time to experiment.