Even the most serious students of the law sometimes get the law wrong. That’s why I’m willing to cut some slack for Justice Clarence Thomas, who took the occasion this week to propose that some plaintiff, somewhere, should bring the right kind of Section 230 case to the Supreme Court. Thomas practically issued an engraved invitation for parties to bring a case that would enable the court to narrow the law whose core provision law professor Jeff Kosseff has rightly called “the 26 words that created the internet”: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This language, a single sentence in Section 230 of the Communications Decency Act, establishes a baseline of protection for internet platforms from being held liable for things published by their users. That’s why Section 230 is widely regarded as the law that permitted the internet as we know it today—from Facebook and Twitter to Wikipedia and Reddit, filled with user-generated content—to thrive.
Section 230 has had its critics ever since it was first passed as part of the omnibus Telecommunications Act of 1996. But the ongoing debate around Section 230 accelerated in late 2016, when the most prominent social media companies, including Facebook and Twitter, began weathering a ton of public criticism in the wake of the Brexit vote and the election of President Donald Trump. Back then, the critics, largely from the left, argued that companies should intervene to keep credulous users from spreading misinformation, particularly the disinformation spread by both foreign governments and domestic clickbait farms. Nowadays, others—notably conservatives—complain that the companies, in response to the earlier round of criticisms, are censoring too much content (specifically, too much right-wing content).
The irony here is that Section 230 was expressly designed not merely to permit these companies to curate user-generated content and remove the rotten stuff, but actually to encourage that kind of action. Nevertheless, as I’ve pointed out in a previous Slate article, some companies’ lawyers slid into thinking that Section 230’s protections might actually depend on their companies’ refusing to curate very much. That’s why internet companies in previous years frequently limited their interventions mostly to removing content they were bound by law to remove, such as alleged (or actual) copyright-infringing content or child pornography. That pattern persisted until recently, when, particularly in the wake of the pandemic, the platforms began banning misinformation about the coronavirus. More recently, the companies have cracked down on misinformation about voting, the baseless QAnon conspiracy theory, and, just this week, Holocaust denial. Then, on Wednesday, Facebook and Twitter took steps to limit the spread of a controversial New York Post article, angering Trump and others.

Many conservatives’ claims about Section 230 come down to whether social networks qualify as “platforms” or as “publishers.” The critics mistakenly believe that this distinction, which can’t be found in the statute itself, governs whether Section 230 should protect any given internet company—even though the law was passed to enable, and even to encourage, the companies to moderate controversial or problematic content.
That brings us back to Thomas, who adopted wholesale the conservatives’ reading of Section 230 in his statement this week that accompanied the court’s list of appeals it has chosen not to hear. (One of the cases the court turned down raised a Section 230 issue.) He begins by embracing a particular, idiosyncratic version of the platform/publisher distinction (although he uses the word distributor instead of platform):
Traditionally, laws governing illegal content distinguished between publishers or speakers (like newspapers) and distributors (like newsstands and libraries). Publishers or speakers were subjected to a higher standard because they exercised editorial control. They could be strictly liable for transmitting illegal content. But distributors were different. They acted as a mere conduit without exercising editorial control, and they often transmitted far more content than they could be expected to review. Distributors were thus liable only when they knew (or constructively knew) that content was illegal.
The justice invokes a New York state case from 1995, Stratton Oakmont Inc. v. Prodigy Services Co. In that case, Stratton Oakmont, the investment banking firm co-founded by Jordan “the Wolf of Wall Street” Belfort, sued Prodigy because a user had stated on the service that the firm was committing fraud. (The user turned out to be correct about that.) Prodigy sought to get the case dismissed on the grounds that it wasn’t responsible for what the user had posted, but lost that motion. In Stratton Oakmont, the court reasoned that the central question was whether Prodigy was a distributor or a publisher; the court basically decided (incorrectly) that it was a publisher. But Thomas states that the decision in Stratton Oakmont “blurred” the binary distinction between publishers/speakers and distributors. (Thomas is flatly wrong that a distributor acts “as a mere conduit without exercising editorial control.” Take note, because we’ll be coming back to this notion in a bit.)
To fix that blurriness, Thomas interprets Section 230 as having clarified and restored that distinction by providing one “definitional” protection for distributors and one “direct immunity” protection for publishers and distributors:
First, §230(c)(1) indicates that an Internet provider does not become the publisher of a piece of third-party content—and thus subjected to strict liability—simply by hosting or distributing that content. Second, §230(c)(2)(A) provides an additional degree of immunity when companies take down or restrict access to objectionable content, so long as the company acts in good faith.
Thomas seems to be saying that if your company meets the definition of publisher, it can be held legally liable for any content it ever carries, whether or not it originated the content. That notion flies in the face of what most legal scholars consider to be the foundational First Amendment case protecting publishers, New York Times Co. v. Sullivan (1964). In Sullivan and in the cases that followed it, the Supreme Court has held that the First Amendment requires that no publisher be held responsible for defamatory content without being shown to be at fault—e.g., by publishing falsehoods negligently or with “actual malice.” But the justice is untroubled by the fact that his framing of publisher liability would undo Times v. Sullivan—just last year Thomas let it be known, by concurring in the court’s refusal to hear another case, that he’s ready to dispense with that precedent altogether.
For First Amendment lawyers, proposing that Times v. Sullivan be overruled, and that newspapers face strict liability for anything they ever publish that doesn’t get all the facts exactly right, would mean declaring open season on newspapers. This explains a lot: If Thomas is ready to see traditional newspapers killed through litigation, it’s no surprise that he’s willing to toss Facebook or Wikipedia or Google on the rubbish heap as well.
But even more troubling is that Thomas interprets Section 230—specifically the 26 words—in a way that’s at odds with how most courts have read it. (Even worse, Thomas’ statement is already having an impact elsewhere in the federal government: Federal Communications Commission Chairman Ajit Pai has announced his intention to begin a rule-making proceeding that interprets Section 230 more or less the way Thomas has.) Since it was enacted in 1996, courts have typically read that section as protecting both publishers and distributors. In practical terms, this has meant that even services (like online newspapers) that operate primarily as producers of traditional editorial content normally won’t be held liable for the often fractious and sometimes even illegal content that appears within a “readers’ comments” forum hosted by the service.
In making the argument that Section 230 needs to be narrowed, Thomas presents himself as what legal philosophers would call a “textualist,” relying on what in his statement he terms “the most natural reading of the text” in its entirety. Like the late Justice Antonin Scalia, Thomas insists that the courts, when interpreting the Constitution or a federal law, must focus primarily on the words of the Constitution or statute itself. Thomas would like to see the Supreme Court interpret Section 230 in the “textualist” way he favors, hence his invitation to plaintiffs to challenge it so that he, together with like-minded justices, can cut Section 230 to fit this year’s fashions.
Yet what Thomas calls “the natural reading of the text” is hardly as obvious as he claims. Indeed, some courts have interpreted Section 230 as providing broad protection for both publishers and distributors not least because that appeared to those courts to be the most “natural meaning.” But what really hurts Thomas’ argument is his baffling reading of Stratton Oakmont v. Prodigy Services. Thomas invokes the Stratton Oakmont decision to provide his “natural meaning” for the language of Section 230. But even though he thinks the Stratton Oakmont decision “blurred” the distinction between publishers and distributors in 1995, he also somehow thinks the case correctly summarized earlier cases as establishing a binary distinction between publishers and distributors.
Thomas’ reliance on Stratton Oakmont’s review of prior law leads him into a profound, fatal misunderstanding of the meaning of Section 230. As veteran media lawyer Robert Hamilton has explained in a recent article at Techdirt’s Greenhouse, the Stratton Oakmont court got the relevant case law wrong. Hamilton, as it happens, won the federal case on which the Stratton Oakmont court primarily relied—a case that Thomas, in his characterization of the prior law, somehow fails to mention. That case is Cubby Inc. v. CompuServe Inc. (1991), in which one user sued CompuServe for libel over comments published by another user. CompuServe won because the court relied on a long-standing Supreme Court precedent, Smith v. California (1959). In the Smith case, a bookstore owner was prosecuted for selling an allegedly obscene book; in ruling for the defendant, the Supreme Court struck down a city ordinance that imposed strict liability. Writing for the majority, Justice William Brennan reasoned that strict liability for booksellers was inconsistent with the First Amendment:
By dispensing with any requirement of knowledge of the contents of the book on the part of the seller, the ordinance tends to impose a severe limitation on the public’s access to constitutionally protected matter. For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected, as well as obscene literature.
Interpreting the Smith precedent in the 1991 CompuServe case, Judge Peter Leisure noted that “the [Supreme] Court struck down an ordinance that imposed liability on a bookseller for possession of an obscene book, regardless of whether the bookseller had knowledge of the book’s contents.” Just as bookstores are distributors of other people’s content, so too are online services like CompuServe (and, later, Facebook and Twitter). In effect, CompuServe’s relationship to the content its users produce was analogous to a bookstore’s relationship to the books it carries.
But nowhere in identifying the protections the First Amendment gives to bookstores and later to online service providers did the Supreme Court in the Smith case or the federal district court in the CompuServe case require that, to qualify as a distributor protected by the First Amendment, a company must refuse to exercise any editorial choices. Booksellers have to make editorial decisions all the time! The Other Change of Hobbit can choose to carry only science fiction and fantasy; it doesn’t face increased risk of liability if it refuses to carry mysteries or biographies. My favorite D.C. bookstore, Politics and Prose, chooses to feature some books and not to carry others—and even hosts events boosting some authors while choosing not to do so for other writers. These are all editorial interventions that don’t subtract an atom of the bookstores’ traditional protections as distributors, outlined by the Smith and CompuServe cases. Yet the state court in the Stratton Oakmont case mistakenly interpreted editorial choices regarding content as depriving Prodigy of the First Amendment protections available to distributors.
Thomas and other critics who want to trim Section 230 protections aim to impose a simplistic binary taxonomy on online service providers: Either you’re a distributor that exercises no control, or, the moment you exercise any control at all, you’re a publisher. But even Thomas is compelled to acknowledge that “recognizing some overlap between publishers and distributors is not unheard of.” What he doesn’t recognize is that the “overlap” is precisely where distributors like bookstores and newsstands—as well as Facebook, Twitter, and Wikipedia—actually operate.
Despite Thomas’ purported reliance on the “natural meaning” of the words of Section 230, even he is compelled to rely on outside sources for his interpretation. “Congress enacted this statute against specific background legal principles,” he writes, explaining that a court must interpret a law by taking note of the “backdrop against which Congress” acted. “If, as courts suggest, Stratton Oakmont was the legal backdrop on which Congress legislated,” he writes, “one might expect Congress to use the same terms Stratton Oakmont used.” There’s one big problem with this notion, however: The whole point of Section 230 was to negate not only the result in Stratton Oakmont (Prodigy was held liable for content it didn’t originate) but also the reasoning that led the Stratton Oakmont court to that bad result. In reality, the meaning of Smith v. California is that, for First Amendment purposes, there aren’t just two classes of First Amendment–protected enterprises, but three:
1) common carriers, which exercise no editorial judgment as to the content they carry;
2) bookstores, libraries, and newsstands, which exercise some editorial judgment about what to carry and what not to carry, including post hoc decisions (as when the science fiction bookstore sends back the mystery books it mistakenly received); and
3) traditional publishers, including online newspapers and journals like the Los Angeles Times, which have First Amendment protections against strict liability but which can be held responsible for illegal or tortious content that they directly produce.
The actual “legal backdrop” for Section 230 was the need to expunge the false binary approach of Stratton Oakmont and restore the bookstore-appropriate middle category that applies to online services. Unfortunately for Thomas, this means going beyond his purported “natural meaning” of the text. In theory, Thomas could have done so by resorting to his other favorite approach to legal interpretation: “original understanding” analysis, which applies when statutory text is ambiguous. If he had done that, he would have discovered that Section 230(c)(2)(A) was added so that a service provider that “restricts access” to objectionable material, or that enables users to do so easily (the way Google Search’s “SafeSearch” function does), doesn’t automatically become a publisher for having done so. The “original understanding” of this section concerned software tools that users, the companies, or both might use to automatically screen out pornography or other material users may not want to see or that internet services may not want to distribute. Supporters of these kinds of tools typically referred to them as “filters”; critics labeled them “censorware.” In the 1990s, when the Communications Decency Act was passed in response to a moral panic about internet porn, it was commonly thought that a separate market would arise for such filtering tools. It mostly didn’t, but Google’s SafeSearch is a distant descendant of that idea.
But you don’t need to take my word for this. As it happens, former Reps. Chris Cox and Ron Wyden (now a senator), who wrote Section 230, are still with us. When it comes to the “legal backdrop” or the “original understanding” of Section 230’s language, we don’t have to speculate: We can still ask the originators themselves. Cox put it this way in his recent Senate testimony:
In an imagined future world without Section 230, where websites and internet platforms again face enormous potential liability for hosting content created by others, there would again be a powerful incentive to limit that exposure. Online platforms could accomplish this in one of two ways. They could strictly limit user-generated content, or even eliminate it altogether; or they could adopt the “anything goes” model that was the way to escape liability before Section 230 existed. We would all be very much worse off were this to happen. Without Section 230’s clear limitation on liability it is difficult to imagine that most of the online services on which we rely every day would even exist in anything like their current form.
To put it another way: If you value virtual communities in which any member can speak, but in which all members have to obey some basic rules, the only way to get there is through something very much like Section 230.