A number of public thinkers are pining for a culture capable of hosting spirited debate in a neutral “marketplace” of ideas. In this vision, intellectual exchange is unencumbered by personal attacks or harsh judgment or, indeed—to preserve freedom of inquiry—the risk of professional consequences. And at the moment, many intellectuals seem most focused on curbing these “illiberal” tendencies on the left. The left, they say, has declared certain ideas off-limits for debate, dismissing those who want to debate them with insults or social opprobrium or even calls for firing. This leftist speech, the lament goes, is having a “chilling effect,” impeding the free flow of ideas and making good thinkers hesitant and risk-averse. If you espouse the wrong position, you may pay with an internet pile-on or even your livelihood.
This sounds like a nightmarish state of affairs indeed. But there’s something crucial missing in these analyses, which grow vague and blame “the present climate” when they draw their comparisons to Orwell’s 1984. To hear them tell it, it’s this climate that is responsible for unjust firings, even more than the actual employers. This climate is angry. This climate won’t be reasoned with. But what I think is largely responsible for this phenomenon they’re observing—without understanding—is Twitter. And the internet at large. And how years of arguing on social platforms, mixed with the incentives that they supply, have distorted not just the way most of us talk about things but also the way we manage ideological dissent. In short: Political discourse has been warped less by “cancel culture” or “illiberalism” than by the way social media platforms have been poisoned like wells, and now poison us in turn.
I get the longing for better discourse. I even share it. But blaming people on the internet—as most of us are, helplessly—for not engaging in “good-faith debate” doesn’t just misdiagnose the problem; it’s stunningly naïve. Have you met the internet? Chilled speech isn’t new. Members of marginalized groups online have from the start dealt with threats, insults, and harassment campaigns for the crime of articulating their ideas in public. But free speech defenders didn’t sound the alarm about the marketplace of ideas then. I’m not sure what’s changed.
My bigger objection, however, is this: Pundits who do their work online don’t get to be naïve. They especially should know better than to act as if the death of good-faith debate—which I agree is a problem—came out of nowhere, or out of identity politics run amok. You can’t cut the far-right out of the picture, as if “censorious” rhetorical strategies emerged out of a void. And you can’t separate the platforms on which political speech is happening from the effects you’re condemning. Anyone weighing in on the state of political discussion should know, and factor into their analysis, that social media has made an internet public square where good-faith debate happens a thing of the past, if it ever existed at all. (I came closest to experiencing such a thing back when there were blogs.) The fact is that on Twitter, where much political news gets generated and disseminated and discussed, disagreement is usually expressed through trolling, sea-lioning, ratios, and dunks. Bad faith is the condition of the modern internet, and shitposting is its lingua franca. On—yes—both sides. Look: A professional Twitter troll is president. Trolling won. Perhaps it’s time to acknowledge that despite their centrality, online platforms aren’t suited to the earnest exchange of big ideas.
I understand that’s frustrating, especially to those who wish to freely debate difficult questions with smart adversaries and can’t find any takers. You could call that refusal to debate “illiberalism,” I suppose, or you could recognize that there’s a history here. And if you want to know why people aren’t bothering to engage seriously or at length (or shout at you when you try), that history is worth trying to understand. For one thing, social media platforms got flooded by devil’s advocates who wasted the time and sapped the energy of people who were actually invested—sometimes cruelly, and for sport. That tends to weed out good-faith engagement. Add to this that most arguments worth having have been had and witnessed thousands of times already on these platforms, in multiple permutations. Those of us who’ve been here for a while know their tired choreographies, the moves and countermoves. If I see someone bring up “black-on-black crime” in response to an article about racist policing, I know how almost every step of the interaction will go should I choose to engage. Rather than learn from these exchanges, people of all persuasions on Twitter mostly enjoy the style of whichever “dunk” we happen to agree with. This isn’t universal, of course. One can try to engage in good faith, and some people do. But given that the reward for all that effort is likely to be mockery or contempt, one learns not to bother. “Black-on-black crime” becomes a cue to sign off. (Or lob an insult. Or quote-tweet with a mocking meme. There are lots of things Twitter is good for, and building solidarity among people who agree—sometimes by starting movements, sometimes by ruthlessly dunking on a minority opinion—is one of them.)
Now, you may wonder: Doesn’t this world-weary presumption that you know how arguments will go lead to paranoid readings and meta-debates that seem totally batshit to onlookers who aren’t internet-poisoned? Yup! And that crosses over into real-life engagements too, since at this point it would be foolish to insist that online patterns aren’t having offline effects. Take “All Lives Matter.” Most people by now understand how the phrase works to undermine social justice protests, but for a long time, it did exactly what it was meant to: It made people who knew what it was actually saying seem paranoid and crazy for objecting to an anodyne statement that seemed bighearted and self-evident. “Why would you refuse to debate someone who’s simply saying that all lives matter?” is the kind of question an Enlightenment subject longing for a robust exchange of ideas might ask. Well, the reason is that most of us have learned, through bitter experience in the mirror-halls of the internet, that it would be a waste of time. It probably wouldn’t be a true exchange. We’ve tried. We’ve watched others try. And we know by now what “All Lives Matter” signals, and that what it signals is orthogonal to what it says. Your fluency in this garbage means you take shortcuts: Maybe, if you’ve been online a lot, you don’t even bother to refute the text anymore. You leap to the subtext—which is that black people don’t deserve public advocacy or concern despite being disproportionately abused and killed by police. So maybe you don’t argue. Maybe you just call that position racist and call it a day.
To outsiders, that leap will look absolutely nuts. But that’s the point of a certain kind of troll-poisoned political messaging: to make the other side look paranoid and unhinged. It’s certainly what all the coded Nazi signals are for—the 14 words, the numbers, the OK hand sign that both is and isn’t a white power sign, the boogaloo junk. They’re all ways to divorce surface meaning from intentional subtext. And they work. Try explaining any of these to someone who isn’t online; convince them that Hawaiian shirts are the costume of choice for members of an extremist movement hoping to start a second civil war. Hawaiian shirts!
Yes, this dynamic is very bad for discourse. Yes, it inhibits intellectual exchange. Yes, it makes productive dissensus almost impossible. But that isn’t because of “cancel culture” or “illiberalism.” It’s because in this discourse environment, good-faith engagement is actually maladaptive. If you tried to carefully explain to every single person who posts All Lives Matter on the internet why they shouldn’t, and how they might not know that it sounds racist, you’d lose your mind. Many of them know what they’re saying and are doing so on purpose. The ones who do it innocently are rare. You could engage in good faith in hopes of finding the latter, but instead, people do something pretty rational given the context (and the volume of stuff they have to sort through): They take shortcuts. Filter. Classify. All Lives Matter = racist. Deadnaming someone = transphobe. If these exchanges feel abrupt and supercharged, it’s because a lot of people are at the end of their rope anyway—if you’d spent years fielding the same devil’s advocate arguments about the inferiority of your race or gender or sexuality, even a hint of one of those talking points might tempt you to shut the discussion down too.
It’s likely that knowledge gaps between people who are online too much and people who aren’t are making things a lot worse. Someone who isn’t online much might be shocked to see people at a protest accusing a nice-looking young man in a Hawaiian shirt of wanting a second civil war. It might indeed look like cancel culture gone mad. He’s just standing there! Civilly! Offering support to Black Lives Matter protesters, of all things! Can’t we all, whatever our disagreements, come together in support of a good cause?
It’s also true that people who’ve learned to read through texts (to whatever bummer of a subtext we’re used to finding there) can overdo it. We sometimes skip the content of the text itself and reflexively fast-forward to the shitty point we “know” is coming even if maybe it isn’t. This will frequently aggravate the other party, especially if they weren’t headed in that direction; it sucks to have people assume the worst about you. That’s all pretty bad for a healthy discourse, but it’s a learned response to a platform that has fundamentally skewed the cost-benefit analysis of engaging. The rational move has become to presume bad faith.
Even free speech—the concept at the heart of this debate—is embattled territory. Take free speech defender: The term will mean one thing to an idealist and something completely different to someone who has seen Reddit hordes viciously defend revenge porn and sites like r/beatingwomen, r/Jewmerica, and r/creepshots while people whose pictures got posted there begged for help. Free speech! they were told. (I used to be a free speech absolutist myself, but the banning of those toxic subreddits—the very act that violated the sensibilities of many free speech champions—ended up transforming a site known for its unfettered human perversity into one of the few places I visit to witness actual good-faith debate.)
Sure, it would be nice to be able to discuss hot-button issues civilly, even or especially from opposite ends of the spectrum. But the internet pressure-cooked rhetoric. Folks can watch the same argument be conducted a million times in slightly different ways now, and that’s interesting, and a blessing, and a curse. It’s produced a kind of argumentative hyperliteracy. If people on all sides can foresee every step of a controversy (including the backlash to the backlash), it makes perfect sense to meta-argue instead—over what X really means, or implies, or what, down a road we know well, it confirms. No, this isn’t conducive to rational exchanges on neutral ground. People talk past each other. Question each other’s motives. Sic their followers on their targets, or shitpost just because they can. We’re not reenacting the Lincoln-Douglas debates here. That all this is understandable does not mean that it is good. But these behaviors didn’t develop in a vacuum. I don’t know how (or if) we get sincerity back. I have no prescriptions; it seems to me we ought to get the descriptions right first. We can agree that things are bad, but it’s just not the fault of illiberalism that good faith is in short supply. If that’s where the analysis begins, I can’t actually tell whether that reaction is naïve or trolling. And I’m no longer sure which is worse.