Future Tense

Where Do We Go From Here With Section 230?

Three legal scholars discuss the internet law that everyone seems to hate right now.

Photo illustration by Slate.

In 2020, Section 230 of the Communications Decency Act took a hell of a beating from all sides of the political spectrum. Section 230 grants broad legal immunity to interactive computer services for user-generated content and for the moderation of that content—and it made possible the internet we both love and hate today. President Trump and President-elect Joe Biden have both criticized it, for very different reasons, illustrating the unusual position Section 230 stands in today.

Recently, Mary Anne Franks, a professor at the University of Miami School of Law and the author of The Cult of the Constitution: Our Deadly Devotion to Guns and Free Speech; Jeff Kosseff, an assistant professor of cybersecurity law in the U.S. Naval Academy’s Cyber Science Department and author of The Twenty-Six Words That Created the Internet; and Mike Godwin, who has worked on internet rights issues for more than 30 years and is the author of The Splinters of Our Discontent, joined Future Tense editorial director Andrés Martinez to discuss the past, present, and potential future of Section 230.

Andrés Martinez: Hello all, thanks for agreeing to do this Slack roundtable. We are being duly reverential in sitting down to Slack about Section 230 at 2:30 p.m. The law seems to me a bit like Coldplay—we all loved it until we started hating on it (actually, I still love Coldplay), for reasons that aren’t always clear, and that are often contradictory. Jeff, you wrote the book on Section 230, so why don’t we start with you: Why is 230 under siege?

Jeff Kosseff: I’ve wondered about that a lot recently. And there isn’t really an easy answer.
People are angry at big technology companies. Some think that they do too much moderation, particularly of certain political viewpoints, while others think they do too little moderation. Section 230 has become the proxy for anger at big tech companies, even though changing 230 won’t necessarily fix the issues they are angry about. And 230 applies to more than just the biggest tech companies. (The views I express here are only my own, and don’t represent the Defense Department, Department of Navy, or Naval Academy.)

Andrés: Those are some useful clarifications, off the bat. What are some of the other common misunderstandings about what Section 230 does or doesn’t do that you all encounter?

Mike Godwin: I agree with Jeff’s caution that changing 230 won’t necessarily fix issues people think it might; if anything, its removal would lock in current Big Tech dominance in a lot of spheres.

Mary Anne Franks: SECTION 230 DOES NOT REQUIRE NEUTRALITY (sorry for shouting).

Andrés: Slack shouting is allowed!

Jeff: Yes, that is a perennial misunderstanding … If only I could fit what Mary Anne said on my license plate.

Mary Anne: Also, Section 230 is not what gives tech companies the right to moderate content. As private entities, they’re protected by the First Amendment for their speech, including speech like content labels and fact checking.

Jeff: I think Mary Anne articulated the most persistent (and baffling) myth. Another one that keeps coming up is that Section 230 enables copyright infringement. Section 230 always has had an exception for intellectual property law, which includes copyright. Yet media coverage and op-eds routinely blame 230 for copyright infringement.

Mike: As someone who was in the room where it happened, I know that Section 230 was designed to free online forums to police bad content without becoming legally liable for all that they missed. But many early tech-company lawyers missed the lesson. I think that should be acknowledged more.

Section 230 was designed to put into statute the court rulings of Smith v. California and Cubby v. CompuServe, which Stratton Oakmont (a 1995 New York court decision that held Prodigy to be a “publisher” liable for users’ content) misconstrued.

Mary Anne: Tech platforms, like restaurants and other private spaces, also have broad rights of exclusion wholly apart from Section 230’s protections, subject only to anti-discrimination law.

Mike: The First Amendment allows content moderation, but does not require that libraries and bookstores be generally liable for the content they offer. The general problem is one of orders of magnitude. Any normal human being (let’s call her editor-in-chief) could read all the content of the New York Times every day. But a million editors-in-chief couldn’t read all the content of Facebook every day.

Andrés: As Jeff noted, tech companies seem to get badgered for too much moderation and for too little moderation. As we saw in the recent congressional hearings on the matter, the CEOs of Facebook, Alphabet, and Twitter took turns fielding alternating “how dare you?!” and “why wouldn’t you?!” questions about their decisions regarding things like labeling the unreliability of Trump tweets and not permitting the sharing of dubious New York Post stories. But remind us why platforms can be biased if they choose to be?

Mary Anne: First Amendment.

Jeff: I think that many cases in which 230 has been a defense (both involving material left on the site and material taken down) would ultimately fail on the merits if fully litigated. A good deal of 230’s protections are procedural, saving the platforms from the hassle and expense of litigating on the merits.

Let’s say that a business is angry about a consumer review on Yelp. The business tells Yelp that the review is false and defamatory and demands that Yelp remove it. Without 230, Yelp probably is in a position of either taking down the review or litigating a defamation case—dealing with things like discovery as to whether the review was false. Of course, Yelp probably won’t want to do that, so Yelp likely will just remove the review, regardless of the merits.

Mike: Section 230 is also what gives #BlackLivesMatter and #MeToo the breathing space to indict the status quo without exposing the forums to defamation liability.

In that sense, it’s New York Times v. Sullivan reasoning—there needs to be enough “breathing space” under the law to make sure that important stuff gets said, and not just by newspaper staffs and broadcasters. (In Sullivan, the content at issue was an ad.)

Andrés: So why do we keep conflating/confusing Section 230 and platform regulation with the First Amendment?

Mary Anne: You know how people’s deeply felt intuitions about how science works produce a lot of what is called pseudoscience or junk science? Well, there’s a lot of pseudolaw, too, and it operates quite similarly. Something FEELS like law or a legal principle, so people assume it is one. This is perhaps nowhere more evident than with First Amendment law.

People FEEL that they not only have the right to speak, but that this is an affirmative right that must be honored everywhere. But that isn’t what the First Amendment does—it provides a negative right, and only against the government (though quite broadly conceived).

So. People take the principle that the government can’t punish you for speech, and they assume that this rule applies to nongovernment actors and means that they have the right to particular platforms of speech. And then something happens—their tweet doesn’t go viral, or it gets fact-checked, or their video gets removed—and they feel that their free speech has been violated.

Mike: Mary Anne, I feel your pain. I think a lot of people have intuitions that First Amendment activists are arguing reflexively out of cultish devotion rather than out of long-considered policy and principles grounded in human-rights traditions.

Andrés: Jeff, you mentioned that people wrongly accuse 230 of being a shield protecting platforms for violating copyright. It’s my understanding that there is no exemption from liability for violations of federal law. Is that correct?

Jeff: Correct, Section 230 doesn’t protect you from violations of federal criminal law.

Andrés: But, of course, not all bad behavior is a federal crime …

Mary Anne: Exactly. I think with nonconsensual pornography, for instance, what we need most is clear criminal prohibition at the federal level, which we don’t yet have. If we did, then platforms couldn’t raise a Section 230 defense to liability for their participation in that conduct, due to the federal criminal law exception.

And there’s a good lesson here about not seeing the criminal forest for the Section 230 trees. There are some abuses that need to be addressed directly by the law apart from Section 230 questions.

But Section 230 as currently written would still be an obstacle to justice for victims even if we had a federal law, because it would still bar civil claims arising from the abuse. That’s not a problem unique to nonconsensual pornography, and it is one of the most serious structural flaws of Section 230.

Andrés: To be fair regarding people’s 230/First Amendment confusion: Is there a legitimate issue in that the “sovereigns” now controlling our “public square” are now private platforms instead of the government? Maybe that means it is legitimate to migrate First Amendment principles (and rights) to private domains?

Jeff: There was a recent article in Lawfare, with which I disagree, that argued: “When a law regulates the dominant platforms’ content policies, the law’s downstream effects on the speech of users should determine whether it violates the First Amendment. The First Amendment is a public endeavor—not a mandate for big tech to redefine free speech in America.” But that article captures the thinking we are dealing with these days.

Mike: If the idea is to only protect speech that no one finds upsetting, if that’s the consensus these days—and maybe it is—we can go ahead and remove the First Amendment. To be clear, speech that is an element of a crime is unprotected under the law, provided that the other elements of the crime are there (such as mens rea, or, in the case of conspiracy, an act in furtherance of the criminal scheme or plan).

Mary Anne: Section 230 makes matters worse by using terms like “publisher” and “speaker,” feeding the conflation of Section 230 and the First Amendment, the latter of which is already misunderstood as an affirmative general right.

It makes people think that the internet is just some magic speech machine, when in fact many things done online aren’t speech at all, much less speech protected by the First Amendment.

Mike: And let’s not forget that the First Amendment isn’t the only backstop protecting our rights. There are of course the pesky provisions of Article 19 of the International Covenant on Civil and Political Rights, to which the United States is a signatory. Even if we repealed the First Amendment tomorrow, we’d be bound by treaty to honor Article 19, or at least pretend we do.

Andrés: So, where do we go from here? Does the broadcasting regulatory arena provide any useful lessons? TV broadcasters get a license from the FCC if they agree to certain stipulations. Is that a helpful analogy for internet speech reform? It’s clear government can’t regulate what we say online, but could it condition 230 immunity on platforms’ meeting certain standards?

Jeff: I think the broadcaster analogy is not terribly helpful, because you get into the Fairness Doctrine discussion, and the Supreme Court has correctly held that the internet is different from broadcast in terms of regulation. And conditioning immunity like that, even if you wanted to (which I don’t), would almost surely be unconstitutional.

Based on the debate this year, I think that the most likely outcome will be full repeal of Section 230, because there isn’t enough consensus as to how to reform it. I get the sense that there is an increased willingness to just throw our hands in the air and say, “repeal it.” This is particularly coming from the folks who are angry about being “censored” by social media. I think the result will be more “censorship,” or simply reduced outlets for user speech. But I don’t think it will lead to more effective moderation—that is what we need, and that is the hard part.

Mike: I think that if the early services (Yahoo, for example, and maybe AOL) had been front and center about the need to make post-hoc decisions to remove content—without acquiring the liability for content they overlooked or misjudged—we’d be in a different world today.

Mary Anne: I reluctantly do favor revising Section 230. For a long time, my view was that the problem wasn’t the text of 230 itself, but how courts were applying it. But courts have gone down so many bad roads now (or so far down the same bad road) that I think reform is necessary.

My two strongest reform recommendations are to explicitly limit Section 230’s protections to speech, and to deny immunity to intermediaries who exhibit deliberate indifference to unlawful conduct.

Andrés: This sounds a bit like conditioning immunity, despite Justice Kosseff’s objections.
But who would deny that immunity? A new FCC-like agency for digital platforms that some have been proposing?

Mike: With a colleague yesterday I was discussing what reform of Section 230 might look like, and, honestly, the thing that came to mind was to say more clearly in the congressional “findings” language that Congress expects services to make curation choices about content—not to be neutral.

Jeff: One idea that I like from the proposed PACT Act (co-sponsored by Republican Sen. John Thune and Democratic Sen. Brian Schatz) is providing some sort of mechanism for people to have material taken down if it has been adjudicated to be defamatory. So if Andrés writes a Facebook post about me, and I sue him for defamation and win, I can have that material taken down. Many platforms already honor court orders, but at least one court has held that platforms are not required to take down material in that scenario. My only concern is to ensure that court orders are not falsified. But that is helpful to provide people with the opportunity to have material taken down once adjudicated to be defamatory. As Carrie Goldberg has correctly pointed out, this needs to be available for family court orders as well.

Andrés: Jeff has gone out on a limb and said he believes the most likely fate for 230 in 2021 after all the debate about it in 2020 is a full repeal. Mary Anne and Mike, what’s your prediction?

Mike: Jeff has always been more pessimistic than I am, but part of that is that he never served as general counsel for Wikipedia, as I did. Nothing shakes up Congress faster than telling members of Congress that they’re going to shut down Wikipedia, which will be hard to explain to their children and grandchildren.

Mary Anne: Jeff may be better at reading the tea leaves here than I am, but I would honestly be shocked if Congress did anything that radical. I could see a messy, convoluted, try-to-please-both-sides-but-make-things-worse-or-basically-change-nothing bill passing.

Mike: Keep in mind that the GOP and the Dems have diametrically opposed complaints about Big Tech (at least as far as freedom of expression is concerned).

Andrés: Which raises the interesting question of whether they cancel each other out or agree to nuke it without agreeing why (which is, I guess, what Jeff is suggesting).

Jeff: Building off of Mike’s idea of talking about shutting down Wikipedia, I think that online platforms large and small might want to have a “230-free day,” to demonstrate what the internet would look like without 230.

Mike: By the way, the 2012 Wikipedia blackout was implemented worldwide—in countries outside the scope of the First Amendment. Except in places where Wikipedia is (sometimes) banned, like the People’s Republic of China, the blackout had an impact everywhere.

Jeff: Without 230, I think that some of the smaller platforms would just stop providing outlets for user content. The larger platforms probably would be much more likely to take down content. (That’s why it is baffling that people who are angry about tech “censorship” think that repeal will help them.)

Mike: Ironically, Facebook faces little risk. They’ll comply with whatever new burdens Congress throws at them, because they can afford to. What’s more, Facebook will use its compliance with statutory changes, whatever they are, as a defense.

Andrés: What is the ideal outcome for this debate as it is taken up under the Biden administration?

Mary Anne: I’d like the Biden administration to undertake a serious inquiry into Section 230’s role in online abuses that cause irreparable harm, especially relating to extremism, violence against women, and firearms. And based on the findings, to allocate serious resources to fighting these abuses and make clear recommendations to Congress about reforming Section 230.

Mike: I want people like Rose McGowan and Mona Eltahawy and DeRay Mckesson to be able to preach loudly on the big services without any company feeling it has to remove possibly defamatory content.

Jeff: Building on Mary Anne’s point—and I sound like a broken record—I sure would like a commission to gather facts about moderation. I know this is not a terribly bold political proposal, but we sure do need a better factual record before making sweeping changes to an important law, as I have written. In addition to widespread misunderstandings of 230, I think that very few people understand what is—and is not—possible with both A.I. and human moderation.

I want to hear from experts like Sarah T. Roberts and Tarleton Gillespie about how things actually work. There isn’t a simple solution here.

Mike: There is a premise behind the idea of reforming Section 230 that goes uninterrogated: that content policy on any large platform is top-down. This assumes that the entire online universe looks like Facebook and Twitter et al. But the different world of empowered users (Wikipedia, and the better-late-than-never reform-focused Reddit) scales better at fighting bad content. And you need Section 230 (or something like it) to empower the platforms to empower engaged users. (Just the way forum moderators moderated CompuServe’s various forums in the old days.)

Mary Anne: The online world without Section 230 would be a booming, buzzing confusion for a while, given how central a role it has played in the world we live in now. And it’s likely that many companies would overreact to the repeal and become far more conservative in what kind of content they leave up or the conduct they allow. But Section 230 is, after all, only one defense—its repeal wouldn’t mean that tech companies would suddenly become strictly liable for user-generated content. The First Amendment still exists. Robust protections for private entities to conduct business as they see fit would still exist.

There would be a flood of frivolous litigation, and most of those cases would get tossed.

Mike: They wouldn’t become strictly liable. But they’d have to defend. Only big companies can afford to defend a lot.

Andrés: Mike mentioned international law, Article 19 of the International Covenant on Civil and Political Rights, earlier, and related to that, I wonder if our 230 debates are too U.S.-centric? Should we worry about the global impact of removing 230 immunity from platforms, as a precedent and signal to governments elsewhere?

Mary Anne: I definitely think the debate is distressingly U.S.-centric. So much of the world has to live with decisions made in the shadow of Section 230 and the First Amendment, despite those being U.S.-specific laws. The internet reflects U.S. values more than the values of any other regime, and because it is global, the entire world is subjected to them.

Mike: Absolutely. I mentioned earlier the Wikipedia blackout I was involved in, and that was an example of how these issues play out globally.

My experience internationally has been that individual speakers around the world frequently express gratitude that the internet has given them new channels to speak truth to power. I’ve worked with NGOs on internet-law issues in about 20 different countries, give or take. In fact, I met my wife in Cambodia when doing internet free-speech work there for Freedom House.

Jeff: Interestingly, Congress also endorsed the global protection of 230 and First Amendment values when it passed the “libel tourism” law (the SPEECH Act) about a decade ago. It requires foreign defamation judgments to satisfy both the First Amendment and 230 before they can be enforced in U.S. courts, and I think that recognizes the competing values.

Andrés: Our hour is up! I want to thank you guys for participating in our Slack party, even if you did leave me hanging a bit on my Coldplay confession. (They did write a song about Section 230, btw… “Viva la Vida.” Or wait, was it “Fix You”?)

But seriously, many thanks. I know we will all continue to benefit from your wisdom and advice as these debates unfold in the new year and new administration.

Mary Anne: The less I say about Coldplay the better for us all.

Mike: NOT A FAN OF COLDPLAY HERE.

(Now you know what I shout about.)

Andrés: Well, Jeff has dropped off to get onto his next meeting, so I am going to assume he is a massive Coldplay fan, and so we have a tie.

Dec. 16, 2020: This article was updated to include mention of Mike Godwin’s book The Splinters of Our Discontent.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
