This article was originally published in March 2021 to accompany the launch of the Section 230 Reform Hub.
2020 was the year of COVID-19 and lockdowns, of George Floyd and Breonna Taylor, and of a presidential election that included an impeachment and muted debate mics.
But in tech policy, 2020 may be remembered as the year that an arcane provision of the 1996 Telecommunications Act became front-page news. Both Republicans and Democrats used Section 230 as a political football to bolster their arguments on racism, misogyny, censorship, elitism, public health, and tech company power. Oddly, it was one of the few issues of agreement between the two presidential candidates, with both Donald Trump and Joe Biden calling for repeal.
In policy circles, there is active discussion about what Section 230 does and doesn’t do, and misunderstandings are rampant. But most experts agree that it has been the bedrock of the growth of the internet sector since it became law in 1996 and that it is a cornerstone of online expression. It has been referred to as the “26 words that created the internet” and as the internet’s Magna Carta.
The core idea of Section 230 is that content creators are liable for content they post online, but hosts are not. If John posts a defamatory comment on a website, he can be held liable, but the website cannot be. The protection applies to tech platforms serving as hosts, but tech platforms can be liable when they create content (think Netflix), and news publishers can use Section 230 as a defense when they serve as hosts (think the New York Times Cooking comment section). The protection also provides important procedural benefits in litigation: It enables defendants to get a suit dismissed before discovery, which is often the most expensive phase of litigation.
Section 230 has come under criticism by both the left and the right. Many on the left argue that it has enabled tech platforms to host harmful content with impunity, while many on the right argue that it has enabled tech platforms to disproportionately suppress conservative speech. The left is particularly concerned with harms against vulnerable communities, such as women and people of color. The right has alleged that tech companies’ content policies and enforcement practices reflect the viewpoints of left-leaning employees. Both sides point to Section 230 as the problem, even though they have opposing views on what that problem is.
As Section 230 became a fixture of election debates and Trump’s Twitter feed, legislators from both sides of the aisle introduced bills to try to address their concerns. Democrats introduced bills that reduced the scope of Section 230 protections in civil rights cases and terrorism cases, while Republicans introduced bills that sought to compel platforms to be more “neutral” in their moderation of online content. And there were even a few bipartisan proposals, which focused on child sexual exploitation, content moderation operations, and reduced protections for content that courts determine to be illegal. Some of the proposals may be reasonable; others could collide with the First Amendment.
With a flurry of bills introduced in 2020 and 2021—and roughly 12 bills introduced in the last four months of 2020 alone—it’s been tough for researchers, company employees, and policymakers to keep track. Even for tech policy diehards who spend their mornings reading tech policy newsletters and keep close tabs on press releases from congressional offices, trying to piece together the Section 230 reform puzzle can be a full-time job.
That’s why we created a project to track all legislation to reform Section 230. This Section 230 Reform Hub includes information on each bill that has been introduced in Congress to reform Section 230 since 2020. Each legislative summary includes the date the bill was introduced, its co-sponsors and status, a short overview of its substance, a description of the type of reform proposed, and a link to the full text. The hub covers all legislation from the last Congress, and it is updated as new bills are introduced in the current one.

The Section 230 Reform Legislative Tracker was originally created by Future Tense; the Tech, Law, & Security Program at the Washington College of Law at American University; and the Center on Technology Policy at the University of North Carolina at Chapel Hill. As of September 2022, it is being presented in partnership with Lawfare. In the 2020-2021 academic year, the legislative analysis was led by American University students Kiran Jeevanjee, Timothy Schmeling, and Irene Ly and Duke students Brian Lim, Niharika Vattikonda, and Joyce Zhou. In 2021-2022, it was led by UNC students Daniel Johnson and Noelle Wilson and Stanford student Meghan Anand. In 2022, it is being led by UNC student Daniel Johnson and Lawfare contributors Etta Reed and Brady Worthington.
Our hope is that this work will enable researchers to better understand the scope of the proposals that have been introduced and to keep tabs on their progress through the legislative process. Perhaps most important, we hope that gathering this legislation in one place will facilitate research on the policy mechanisms legislators are seeking to use to change a regulatory framework for online speech that has stood for decades, and will yield insights into which of those mechanisms may be more or less successful at achieving particular policy objectives.
Section 230 skeptics and defenders alike sometimes suggest that there are easy answers here. Alter everything or alter nothing. Prohibit speech or protect it. To many, Section 230 is the cornerstone of all that’s good about the internet, or the cause of everything that’s bad. But if reform were straightforward, we wouldn’t have a list of dozens of colliding and overlapping proposals. By cataloging the proposed reforms, perhaps we can help create a more productive discussion around the 26 words that have launched endless debates.
Read the Section 230 Reform Hub.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.