The silencing of Donald Trump arrived last week in a “gradually, and then suddenly” manner that is typically reserved for the breaking of a dam or the explosive death of a star. One moment, it seemed, he was everywhere—propagating lies about election fraud and stoking riotous, violent mobs to storm the Capitol—and then, the next moment, he was gone.
In the span of a couple of days last week, Trump was banned or suspended by Twitter, Facebook, Instagram, Snapchat, and Twitch. YouTube suspended him for at least a week on Tuesday evening. Shopify took down the online stores for both his campaign and the Trump Organization. Stripe stopped processing payments for his campaign’s website. Reddit banned the r/DonaldTrump subreddit, and Discord removed the server connected to the pro-Trump group TheDonald.win. Although it was widely assumed that Trump would simply pivot to Parler, the social network catering to Trump supporters, it too went dark soon after Apple and Google removed the app from their stores and Amazon declared that it would no longer host it on its cloud computing services.
The platforms’ reasons for these moves were entirely valid and incredibly serious: Trump and his allies were inciting violence and spreading dangerous lies. Indeed, the dangers of their speech became unmistakably clear on Jan. 6, when Trump’s demands that his supporters “fight like hell” led to the deaths of at least five people, the construction of a gallows on the Capitol lawn, and camouflaged men roaming the halls carrying zip ties and rope.
Many observers found comfort in the platforms’ ability to point to community guidelines, content-moderation policies, and prior warnings as justification for their bans, because they suggested that something resembling neutrally enforced due process had occurred. Yet, as Mark Zuckerberg himself has acknowledged, the decision to ban Trump represented a dramatic shift in the platforms’ approach to the speech of the president and his allies. It was a decision that had less to do with the rule of law than with a dawning (and, arguably, much too belated) recognition of the harmfulness of speech the platforms had previously largely tolerated.
Ultimately, the decision to deplatform Trump reveals the tremendous power that companies like Twitter and Facebook possess over our public debate. Do these companies have the constitutional right to censor Trump? Sure. Should their stunning collective show of force still concern us? Absolutely.
It is the underlying dilemma of the First Amendment that no matter how many checks and balances we try to embed into our government structure, eventually we have to give someone or something the final power to decide who gets to speak, when they may speak, and what they may (or may not) say. The problem, however, is that every candidate to whom we might entrust this power is deeply flawed. The Supreme Court, meanwhile, has spent the last 50 years backing us into a doctrinal corner that has left us at the mercy of Big Tech.
So how did we get here? It starts with the long-standing American discomfort with giving too much power to regulate speech to legislators and elected officials. They are the ones, of course, who are charged with representing the will of the people, including defending the public and our democracy against dangers foreign and domestic—powers that seem like they might’ve come in handy to prevent the chain of events that led to invading marauders scaling the walls of the Senate chamber. Yet even incredibly popular speech regulations that are enacted by democratic institutions can have the effect of stifling speech that is important to democratic flourishing.
Enter the courts. The Constitution establishes the courts as our apolitical branch. We guarantee judges life tenure in order to free them to stand up for the rights of all speakers. Yet, obviously, judges are human beings and thus not immune from the biases and influences of their time. Like legislatures, courts can also be easily persuaded to go along with the unjustified repression of unpopular speech during times of mass political hysteria. Consider, for example, decisions from the McCarthy era, in which the Supreme Court rejected the First Amendment defenses of Communist Party USA leaders who were criminally prosecuted for propagating Communist ideology. Despite the government’s lack of any evidence that the defendants planned a violent revolution, the justices concluded that the mere teaching of Communist ideology posed a profound threat to the nation’s existence.
In an attempt to avoid repeating the errors of the past, therefore, the court has imposed increasingly rigid constraints on the government’s ability to regulate speech in general. In particular, it has insisted that the government lacks the power to regulate almost any kind of speech because of its message, no matter how pressing the public’s interest or how minor the burden the regulation imposes on speech. In case after case, the court has protected intentional lies and displays of violence, while setting a high bar for the regulation of threats and incitement. It has attempted to lock the courts, along with the political branches, into something of a constitutional straitjacket, from which they cannot depart even at moments of political turmoil.
While there is much to be said in favor of this approach, it also imposes very significant costs—not only to those who might be harmed by the speech but to the vitality of the marketplace of ideas itself. By enabling the pervasive harassment of women and minority speakers, for example, the modern First Amendment doesn’t just encourage speech; it also silences it. An unregulated marketplace of ideas can also make it extraordinarily difficult to distinguish truth from lies. In this respect, as the Trump years have demonstrated, it can threaten the same democratic values it is intended to foster.
For these reasons, many on the left have embraced another rigid rule that the court in recent decades has read into the First Amendment—namely, the rule that the First Amendment only constrains the actions of government actors (except in very unusual circumstances). During the mid-20th century, it was liberals who were the strongest critics of the idea that the First Amendment should only apply to government actors. Today, however, the strongest criticisms of this doctrine are coming from the political right, while many liberals are celebrating the fact that Twitter and Facebook have no First Amendment obligations to grant access to their platforms to any speech or speaker they dislike.
Yet liberals should be wary of the power of private actors like Twitter and Facebook. Although the platforms don’t have the coercive power that the government possesses, they nonetheless wield considerable control over our public discourse—opaque control that they can use (and have used) to mute the voices of some and promote the voices of others. Indeed, last Wednesday’s tragic events are nothing if not a stark real-world example of how the speech policies of these private companies can have a profound impact on the health and functioning of our democracy.
If modern First Amendment jurisprudence were a Venn diagram, then the strength of our public dialogue now lies in the intersection of two circles: the Supreme Court’s strict restrictions on the government’s ability to single out speech based on its content and its equally strict distinction between public and private regulation of speech. In this world, only powerful private companies can legally take the types of widespread and effective actions against dangerous and harmful speech that we have seen recently. Yet also in this world, they wield this broad power free from the checks of either the democratic process or the Constitution.
Thus, whether the deplatforming of Trump violated the Constitution should be only the beginning of the First Amendment discussion, not the end of it. It is possible to conclude that the blocking of Trump’s access to these sites was, on the whole, good for democracy and yet still acknowledge that it raises all kinds of questions: Has contemporary free speech law overcorrected, and does it now impose too many constraints on speech regulators? Should the court’s rules regarding incitement or false speech be relaxed? Do we need new laws to govern the platforms—laws ensuring that the decisions they make regarding speech are more transparent, less ad hoc, and reflect values beyond their motivation to make money? We ignore these questions at our peril. Today it is Trump who has been deplatformed, but who will it be tomorrow?
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.