During oral arguments at the Supreme Court on Tuesday, a police officer asked several children to leave the courtroom after they fell conspicuously asleep. Who could blame them, though? The case, Gonzalez v. Google, was supposed to be a blockbuster, handing the justices an opportunity to eviscerate Section 230—a landmark law that’s frequently described as “the 26 words that created the internet.” Republican politicians have spent years gunning for the statute, which shields websites from liability over third parties’ posts, arguing that it allows Big Tech to silence conservative voices. Some progressives, meanwhile, are disgruntled by the expansive immunity it grants to monopolistic corporations like Meta and Google.
From the start of arguments, however, it was clear that Gonzalez will not be the case to challenge the consensus understanding of Section 230. If anything, the justices’ evident frustration and boredom—shared by courtroom spectators of all ages—illustrated how hard it’ll be to transform the law through the judiciary alone. It’s genuinely unclear whether courts can take away Section 230 protections without destroying the internet in the process.
Gonzalez has its roots in ISIS’s 2015 Paris attacks. In the aftermath of that tragedy, the family of one victim, Nohemi Gonzalez, filed a lawsuit against Google. They claimed that Google was liable for damages because it “aided and abetted” the terrorists. How? Because YouTube—which Google owns—failed to remove ISIS recruitment videos, then allegedly recommended those videos to users through an algorithm that sorts and suggests content. The plaintiffs had zero evidence that any of the Paris terrorists saw these suggestions. They simply speculated that users may have been radicalized into joining ISIS because of YouTube’s algorithmic recommendations.
At this point, you might ask: If there’s no proof the algorithm played any role in radicalizing or inciting the terrorists, why did the plaintiffs mention it at all? Why not sue over the mere existence of ISIS recruitment videos on YouTube, which is the true gravamen of the complaint, anyway? That’s where Section 230 comes in. The law, passed in 1996, generally bars lawsuits against a website for hosting other people’s expression—even if that expression is harmful and illegal. It states that “no provider” of “an interactive computer service shall be treated as the publisher or speaker of any information” posted by others. This shield breaks from the traditional rules of civil liability, which put media on the hook for what they publish.
That’s why the plaintiffs in Gonzalez didn’t just sue Google for hosting ISIS videos: The federal courts have long agreed that Section 230 bars such a claim. And so, instead, the plaintiffs targeted YouTube’s algorithm, where the case law is much less established. They argued that YouTube actively promoted dangerous speech through a program of its own creation, stepping outside the protection of federal law.
The district court dismissed the suit, citing Section 230. So did the 9th U.S. Circuit Court of Appeals, though two liberal judges, Marsha Berzon and Ronald Gould, harshly criticized the 9th Circuit precedent that shielded YouTube from the suit. Their opinions contributed to a cross-ideological concern about the current interpretation of Section 230: In 2020, Justice Clarence Thomas wrote that it was time to shrink the law down to size. These jurists from opposite ideological sides had very different concerns: Thomas shares the GOP’s paranoia that Big Tech is silencing conservative voices, while Berzon and Gould sounded alarmed that unaccountable monopolies could recklessly ignore the presence of hateful and violent speech on their own websites. But both paths lead to the same result. And when SCOTUS took up Gonzalez, it looked like the stars had aligned for a major judicial assault on websites’ legal immunity.
Except that didn’t happen. Instead, on Tuesday, the case fizzled out almost immediately. The basic problem is that the plaintiffs’ argument would not just weaken Section 230 but essentially destroy it. The plaintiffs’ lawsuit was designed to be the camel’s nose under the tent, a workaround that finally got courts questioning Section 230’s shield, an opportunity for them to chip away at the edges of Big Tech’s immunity. But the legal theory behind the suit turned out to be the opposite: an existential threat to the functioning of many (perhaps most) major websites. As Justice Elena Kagan put it, algorithms “are endemic to the internet; every time anybody looks at anything on the internet, there is an algorithm involved.” They’re how websites tackle the daunting task of “organizing and prioritizing material” when they are flooded with third-party content.
Every minute, for instance, about 500 hours of video are uploaded to YouTube. The company must use an algorithm to organize that content and present it in a coherent way. It is going to make some mistakes along the way, inadvertently promoting videos that are defamatory or violent or otherwise objectionable. And if each mistake opens up YouTube to civil liability, the company cannot continue to function.
The same is true of many other companies that weighed in on YouTube’s side—including Yelp, Reddit, Twitter, ZipRecruiter, and Wikipedia—which are terrified of getting sued into oblivion because their algorithms prioritize content that may be illegal. Luckily for them, the plaintiffs’ theory clashes with the text of the law, for at least two reasons. First, Section 230 protects websites when they act as a “publisher,” and organizing content is the essence of publication. When YouTube designs an algorithm that promotes videos to certain users, that process is a part of publication, too—and thus legally protected.
Second, Section 230 expressly protects any “interactive computer service” that chooses to “filter, screen,” or “organize” content. Filtering and organizing content, of course, is precisely what algorithms do. It makes no sense to claim that a website simultaneously gains and loses immunity by organizing speech. As Justice Brett Kavanaugh explained: “It would mean that the very thing that makes the website an interactive computer service also mean[s] that it loses the protection of [Section] 230. And just as a textual and structural matter, we don’t usually read a statute to, in essence, defeat itself.”
It was also Kavanaugh who delivered a remarkable defense of the law as it’s read today. “Congress drafted a broad text,” he told Eric Schnapper, who represented the plaintiffs. “That text has been unanimously read by courts of appeals over the years to provide protection in this sort of situation. You want to challenge that consensus,” but doing so could “crash the digital economy.” Why not let Congress “take a look at this and try to fashion something along the lines of what you’re saying?” Or, as he put it later: “Isn’t it better to keep it the way it is [and] put the burden on Congress to change that?” Or as Kagan told Schnapper: “We’re a court. We really don’t know about these things. You know, these are not like the nine greatest experts on the internet.”
Justices Neil Gorsuch and Sonia Sotomayor echoed Kagan and Kavanaugh’s hesitation. Chief Justice John Roberts, always mindful of business interests, fretted about an avalanche of ruinous lawsuits. “Billions of responses to inquiries on the internet are made every day,” he said. “Every one of those would be a possibility of a lawsuit” if plaintiffs could merely assert, without evidence, that the algorithm showed them “defamatory” or “harmful information.” The result, the chief justice warned, may be that “the internet will be sunk.” For his part, Thomas, who once sounded so eager to take up a case like this one, didn’t show much interest.
Only Justice Samuel Alito, a committed foe of Big Tech, made any real effort to take Section 230 down a notch. He asked Lisa Blatt, who represented Google, why YouTube’s recommendations don’t count as “YouTube’s speech.” And he tried to get Blatt to say that Google should be held liable “for posting and refusing to take down videos that it knows are defamatory and false.” But that’s a different issue for another day—and also, at heart, another policy question for Congress, not a justification for SCOTUS to unravel the internet.
There are, indeed, reasonable arguments that Section 230 needs new exceptions. Nonconsensual pornography (“revenge porn”) is an especially thorny and horrible problem, as some websites have refused to remove these images even after they’ve been identified as nonconsensual. But the justices are not the ones who should be making these decisions. On Tuesday, a majority of them appeared to recognize that fact, ready to dump all these quandaries into Congress’ lap, where they belong. That’s a far better option than breaking the internet by judicial fiat.