Depending on whom you ask, the “Twitter Files” have been either a total flop or a vindication. Nowhere is that more true than in the debate over “shadow banning.”
Right-wingers have long alleged that Twitter (and other social media) suppresses conservative voices and runs interference for the libs. So now, Elon Musk, in his new capacity as Chief Twit, is granting internal peeks at network systems and communications to prominent right-leaning writers in his personal circle. As such, Substack stars Matt Taibbi and Bari Weiss have been tweeting out screenshots that demonstrate how the bird app became a liberal propaganda arm in collusion with the Democratic Party.
Except, not really. Taibbi’s first and second Twitter Files drops mainly clarified that the company had time and again followed its own rules: in halting the spread of Hunter Biden revenge porn, and in taking the prospect of banning politicians’ dangerous rhetoric rather seriously. Bari Weiss’ Thursday evening “report,” meanwhile, claimed to validate a long-held conservative belief: that right-wingers were being “shadow banned.” Taibbi promises even more shocking Twitter shadow-banning revelations to come.
But what, really, is “shadow banning”? And have the Twitter Files actually proved conservatives right all along?
If you, like me, were a Redditor during the early 2010s, you likely remember drama about “shadowbanning” (as one word) within various subreddits; on those forums, a word that started as a brief, throwaway Something Awful joke became a matter of serious import. As the Verge defined it 10 years ago, shadowbanning on Reddit entailed “an admin-enforced measure which lets the user post and browse the site normally but hides them from other users.” Basically, a shadowbanned Reddit account could use the site as normal, commenting where they pleased, but other Redditors wouldn’t see those comments for, usually, a few days at a time. The directive for this came from employed administrators at the top of the Reddit chain; individual subreddit moderators could not deploy such a measure (and could, in fact, get shadowbanned themselves). Most often, shadowbans were deployed to deal with spammers, but their use was later expanded to discipline Redditors who broke other important sitewide rules.
All this naturally became controversial, as it became difficult to tell whether or why someone may have been shadowbanned, and to figure out how to appeal the restriction. It often could appear as though admins were doling out such bans incorrectly or arbitrarily, to the detriment of the Reddit experience. And the prospect of being shadowbanned without notice was unsettling. If a Redditor’s posts weren’t getting voted up or down, they couldn’t tell whether their comments were simply boring or not appearing for others. By late 2015, there was enough frustration across the forums that Reddit did away with shadowbanning altogether, opting to punish misbehaving users through account suspensions instead.
The term experienced a resurgence just a couple of years later, when Twitter users began alleging that it was happening there. In 2017, BuzzFeed News reported that the company was “throttling” the reach of accounts that had engaged in abusive behavior. Other outlets began to call that shadowbanning—even though 1) limiting an account’s reach was quite different from making sure no one could see it at all, and 2) the affected tweeters were notified by the platform.
The following year, a Vice article claimed that Twitter was shadowbanning famous Republicans like Donald Trump Jr. by preventing their account names from autopopulating within the website’s search bar. This, too, was not a shadowban as previously understood: The Republican accounts themselves still appeared in Twitter search results, and it turned out the autopopulation measure was deployed in a manner that affected leftist accounts as well. Still, the controversy was pronounced enough that Twitter’s official account shared a blog post about how it did not, in fact, engage in shadowbanning (and the company later removed the offending search code). In the blog, two former Twitter executives reference “the best definition” of shadowbanning they could find, which was: “deliberately making someone’s content undiscoverable to everyone except the person who posted it, unbeknownst to the original poster.”
The company, fairly, denied any shadowbanning by that definition. Instead, the post outlined the types of actions it did take—namely, “ranking tweets and search results” based on guidelines that would downgrade “tweets from bad-faith actors who intend to manipulate or divide the conversation.” But the posts would still be visible and searchable; there was no wholesale or universal visual obscurity.
Nevertheless, various netizens kept running with the shadowban line, claiming it was happening with Instagram hashtags and YouTube comments. And right-wing pundits weren’t the only ones making the allegations—so were brand influencers and social justice advocates. Along the way, users seemed to expand the term to encompass just about any content moderation action, no matter the outcome.
As they’ve publicly admitted, social networks often do restrict certain words or links or accounts, without advance or public notification, for content moderation purposes—to stop spammy posts from spreading, to curb disinformation, to halt hate speech. Rarely are these complete “shadowbans,” as the content involved is still visible to a significant number of viewers. And sometimes, these very platforms may claim that a given person’s account was affected because of a faulty code deployment, nothing more.
Yet by now, few users buy this. There have been countless examples of digital forums denying the specific accusation of “shadowbanning,” only for it to be revealed that they personally interfered with the visibility of anti-authoritarian activism on TikTok or sex worker discussions on Facebook. Maybe none of that was technically shadowbanning, but who can really know?
People rightfully distrust the shadowy decision-making by such companies, the lack of clarity on their business and content operations, and their approach to ideological rifts. A provocative user whose online presence centers on politics may find that their overall engagement has fallen on recent posts, uncover no sensible explanation as to why that is, look at how influential figures like Donald Trump and Elon Musk invoke the specter of shadowbanning (however misleadingly), and come to the conclusion that this is the contagion that has come for them, too. At least, it’s a more self-satisfying conclusion than grappling with the fact that no one reads or likes your posts either because they suck or are actively harmful to other users and, as a result, should be reined in.
Where does that leave us now that Elon Musk is determined to uncover a shadowbanning agenda at Old Twitter? By all accounts, the revelations from Bari Weiss and co. still do not demonstrate any shadowbanning, either in its Reddit-era sense or by its Twitter definition. Perhaps the continued use of the word, then, reveals less about any real-life shadowbanning and more about some defining aspects of the modern social media experience. It shows the way that a word once informed by specific context can get hijacked beyond recognition, transforming into a convenient interjection to be used however one wishes (see also: woke, canceled, fascist). It also lays bare the amount of work it really takes to run a social network wherein one can complain about shadowbanning. At their best, the Twitter Files show that, well, content moderation is really hard—it’s reasonable people making tough decisions. Not only is Elon Musk learning that lesson firsthand, but he’s even celebrating the curbing of hate speech on his platform by denying “freedom of reach.”
One thing the shadowban debate does show is that unyielding belief in one’s persecution by Big Tech is an article of faith for netizens of all stripes, no matter how (in)accurate. The mission of the Twitter Files, at least as Musk and co. put it, is to increase transparency around the murky back-end deliberations that, for lack of straightforwardness, education, and understanding, fuel this persecution complex. Still, that supposed goal is undermined by employing a misleading definition of shadowbanning, and by invoking it as a convenient catchall grievance instead of as a term with actual meaning. Musk should be more careful about that, because it’s only going to fuel more suspicion of the way that even he runs Twitter down the line.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.