The Supreme Court’s decision last fall to hear a case about one of the legal cornerstones of the internet was concerning. Nothing good, it seemed, could come from its review of Section 230 of the Communications Decency Act, the law that protects those who maintain online forums from liability for how people use those spaces.
Advocacy groups and nonprofits submitted dozens of briefs and arguments seeking to influence the justices’ decision. During oral arguments in February, the justices were encouragingly concerned about what narrowing or removing Section 230 protections would do to free expression online. Still, given the court’s tumultuous, precedent-agnostic recent history, it seemed possible that, despite its stated reservations, it could pick apart the internet’s free-expression machinery.
Fortunately, last week the court declined to weigh in on what First Amendment scholar Jeff Kosseff has called “the 26 words that created the internet.” It disposed of Gonzalez v. Google in an unsigned, three-page decision, concluding that the case presented “little, if any, plausible claim for relief” and sending it back to the lower court. Emergency averted.
It’s easy to ignore a non-decision. But Gonzalez—a case in which the family of a terror attack victim contended that social media firms should be liable for making radicalizing ideas available to people who ultimately harm others—could have radically reshaped the internet. While Section 230 is an imperfect tool, an online universe without it would be characterized by substantially less free expression. If providers were fully liable for the things posted on their platforms, the flow of information would slow, and much more information, including political ideas, would never see the light of day. If justices had gone the middle route, narrowing the law but not striking it down, forum providers would have been left in legal purgatory as they sought to identify the new legal realm in which they functioned.
The court’s decision not to get involved was not, however, an endorsement of the law, which was put in place in 1996 to encourage good-faith content moderation and to shield those who create spaces for online discourse from an endless torrent of lawsuits. And Section 230’s time in front of the Supreme Court holds some important lessons for the future of online speech, because Gonzalez was just one of many storms the law still has to weather.
First, lawmakers from both parties dislike the law. The Biden administration filed a brief in the Gonzalez case arguing that justices should narrow the scope of Section 230 protections so those who maintain online spaces would be liable for some of the content on their spaces. The brief echoed long-standing arguments by Democrats that the law protects too much harmful speech, shielding technology firms from responsibility.
Crucially, that position reflects some politicization of Section 230. The First Amendment generally protects online expression and would safeguard most of what is published in spaces such as Instagram and TikTok. Section 230 merely spares providers from the countless lawsuits they would otherwise face, allowing ideas and innovation to flow more smoothly. Absent Section 230, providers would almost always prevail in those lawsuits, but the time and expense of fighting them would undermine their business models.
Republicans dislike Section 230 for different reasons, mostly because they believe Big Tech firms are biased in deciding what content to keep on their platforms. It’s a similarly politicized concern, since it is the First Amendment, rather than Section 230, that protects private companies’ right to control speakers and ideas in the spaces they’ve created. Section 230 helps facilitate providers’ right to ban, block, and otherwise moderate content and speakers, but it didn’t give them that right.
After the court’s dismissal last week, Section 230’s future remains in the hands of lawmakers, which, conceptually, is where it belongs. Practically, however, Congress presents another problem for the future of the provision. Lawmakers have repeatedly shown they are unwilling to understand Section 230, favoring sound bites over actually trying to revise and improve a law that influences the flow of ideas in society.
Lawmakers’ struggles with how the internet works are often on display when they call up tech leaders for flashy proceedings, like the March TikTok hearing. Time and time again, committee members’ questions betray a lack of understanding and nuance, both of which are crucial to internet regulation, especially when it comes to revising Section 230. Generative A.I. makes all this even more complex: when Sen. Lindsey Graham asked OpenAI CEO Sam Altman at a recent Senate hearing whether Section 230 applied to his company, Altman dodged the question.
There is a better way forward: Congress should commission a nonpartisan working group of experts to study alternatives to Section 230 and present options for revisions. We can snicker at lawmakers for their seemingly limited knowledge of networked technologies, but lurking beneath questions about Section 230 are complicated legal and technical concerns. Legal experts, software developers, and social scientists could all contribute valuable perspectives about what liability should look like in this emerging, A.I.-infused generation of the internet.
European lawmakers used a similar model of expert input to create the Digital Services Act last fall. The DSA establishes liability for Big Tech firms that host hateful, extremist, and false ideas. This model is not a good fit for the U.S. system, in which the First Amendment generally bars government regulation of speech, even hateful speech. But the DSA’s protections could spill out beyond Europe, which would make Section 230’s protections, in some ways, less relevant. This is because, while the DSA legally protects only European Union residents, Big Tech firms may revise their algorithms and moderation practices globally in their efforts to comply with the massive new law. In other words, a European law could circumvent American debates over Section 230 and force firms to change the types of ideas Americans see.
The EU’s General Data Protection Regulation, which came into force in 2018, had a similar effect on Americans’ privacy. The law has led Big Tech firms to change some of the ways they gather and use Americans’ information, in practice doing more to protect U.S. privacy than anything Congress has done in the past two decades or more. If the DSA has a similar effect, EU law and sensibilities about free expression will undermine Section 230’s relevance.
Throwing up our hands in the face of this possibility would be a mistake. The U.S. has been the epicenter of internet innovation since the network’s creation, and Section 230’s protections are a big part of that story. Section 230 created an environment that incentivized growth and experimentation, and it could continue to do so if it’s carefully revised.
So, sure, we can celebrate Section 230’s surviving its risky encounter with the Supreme Court—but not too jubilantly. In this case, the court did its job in recognizing when something wasn’t its to touch, but the waters are still choppy. The law faces criticism from both parties, lawmakers who often do not understand (or do not wish to understand) what it does, and a massive EU law that might just undermine it from across the Atlantic.
The most sensible step forward is a serious, nonpartisan effort to update a law that was written before social media, artificial intelligence, and other defining characteristics of online life took hold. That would be cause for real celebration.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.