In October 1959, the Supreme Court heard the story of Eleazer Smith, a Los Angeles bookstore owner. Smith was convicted and given a 30-day jail term for the possession in his bookstore of a hardcover book, Sweeter Than Life, an erotic novel deemed obscene. Smith was convicted under a Los Angeles ordinance that said if you merely sold obscene materials, even if you hadn’t read them and didn’t know about them, you could go to jail. Fortunately for him, a civil liberties attorney took his case all the way up to the Supreme Court, which held that the Los Angeles ordinance violated the First Amendment.
That ruling set the stage for the creation of a law, Section 230, that has shielded internet giants and social media companies from legal liability for what users say on their platforms, in the same way that Eleazer Smith wasn’t liable for what was inside a book in his store. But now, the Supreme Court is set to hear a case on Section 230, one that could fundamentally alter big tech’s business model.
On Friday’s episode of What Next: TBD, I spoke with Jeff Kosseff, a law professor who wrote the book on the most important law underpinning the modern internet: Section 230 of the Communications Decency Act. Our conversation has been edited and condensed for clarity.
Lizzie O’Leary: You are probably sick of this, but you did write a book with the title The Twenty-Six Words That Created the Internet. Can you read them for me?
Jeff Kosseff: Sure. “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
How would you characterize the role that those 26 words had in setting the stage for the modern internet?
They’ve allowed companies of all sizes to build internet businesses around content that users create rather than content that the companies create. It’s really a risk-shifting strategy. It’s saying any legal risk of defamation or anything else is going to be put on the people who post the content, and it won’t be placed on the platform that hosts it.
To understand the landscape in which Section 230 was written, you have to remember the internet of the early 1990s. Back then, services like CompuServe or Prodigy hosted forums and message boards where people posted all sorts of things. But the companies took very different approaches to what users said on their sites. CompuServe didn’t do much moderation of the content on its service, while Prodigy did. This being the internet, users invariably said some offensive things, and both companies got sued, but it was CompuServe’s hands-off approach that was the more successful legal strategy. Why?
They’re able to get the case dismissed. The judge says, “You are just like Eleazer Smith’s bookstore. You didn’t know about this content, you had no reason to know, so we’re not going to hold you liable for defamation.” Now, a few years later, Prodigy tries the same thing, and its efforts are rejected. The judge says, “You’re different than CompuServe. Because you do all of this [moderation], you’re more like a newspaper than a newsstand, and just like a newspaper, you are liable for every single thing in your pages.” This is in 1995. It starts to get a lot of attention from Congress and in the media because it stands for the proposition that, if you moderate content, you actually can increase your liability.
In 1995, Congress rewrote the telecommunications law for the first time in 60 years. A lot of people, including lawmakers, thought the internet was a terrifying place full of weirdos and pornography. The older, less tech-savvy Senate attached the Communications Decency Act to their version of the telecom bill. The CDA made it illegal to knowingly send or show minors indecent content online, but over in the House, members were taking a very different approach.
You have a Republican, Chris Cox, and a Democrat, Ron Wyden, and they want to come up with an alternative. That’s Section 230. What Section 230 does is it solves this Prodigy problem by saying that if you’re an interactive computer service provider, you won’t be treated as the publisher of content that someone else provides. Rather than have the government criminalize certain types of constitutionally protected speech, we’re going to put it in the hands of these online services and also of the user.
In some sort of congressional magic, both the Communications Decency Act and Section 230 get put in the same part of the telecom bill, even though they conflict with one another. But the day that President Clinton signs it into law, you have civil liberties groups challenge the constitutionality of the Senate’s bill. That goes, within a year and a half, up to the Supreme Court, and they strike the CDA down. So basically all that’s left of this internet part of the telecom law is Section 230.
As tech platforms have grown in influence and power, Section 230 has become a pretty popular target for politicians on both the left and the right. But the courts have pretty vigorously upheld the law and ruled that it gives companies a broad liability shield, which is why the Supreme Court’s recent decision to even hear a 230 case is so significant.
This case, Gonzalez v. Google, centers on a young American college student, Nohemi Gonzalez, who was killed in a 2015 ISIS attack in Paris. Her family sued Google, claiming that YouTube, which is owned by Google, violated the Anti-Terrorism Act when its algorithm recommended ISIS videos to other users.
Now, the plaintiffs in the Gonzalez case said, “We’re not just seeking to treat YouTube as a publisher. What we’re going after YouTube for is the targeted promotion of certain content.” If people are searching for things related to ISIS, at least at the time, YouTube will then, at least according to the complaint, recommend similar content. This is part of the radicalization and propaganda process. What the plaintiffs in the Gonzalez case are saying is that it’s that sort of algorithmic promotion of terrorist content that is not within the scope of Section 230.
Now, courts—including the 9th Circuit, where this came from, and the 2nd Circuit—have rejected that. They say that’s part of the editorial and curation process that Section 230 protects. There have been some judges who have dissented, including in this case in the 9th Circuit, who have said, “No, this is different, and YouTube and other social media sites really contribute to the harms in how they target the content.”
Usually when the Supreme Court takes a case, it’s because you have what’s known as a circuit split. But on this issue, Section 230 really has not been read in different ways by different courts.
There’s something in the 9th Circuit’s opinion that I thought was interesting. The majority there wrote that Section 230 “shelters more activity than Congress envisioned it would.” When I look at that and think about the court taking this case, it seems to me like the 9th Circuit left a lot of space for either the Supreme Court or Congress to step in and say, “This is how Section 230 should be interpreted.” Am I overreading that?
I think that that’s definitely one possibility because the Supreme Court has never taken a Section 230 case before. This is a really difficult case with tragic circumstances. In other cases like this, judges have said, “We have to find that Section 230 protects the activities here, but we don’t like it. Congress, you have to do something about it.”
There’s no question that the political climate on both sides of the aisle is ripe for some sort of re-examination of Section 230, even if lawmakers haven’t been willing or able to do it themselves. One of the things I find interesting about it is that Republicans and Democrats bring it up for varying reasons, and it is sort of a political hobby horse. Do you think this is coming to a head?
I think it might be. It’s in the news a lot. It’s hard to fully distance the Supreme Court from the political realities. I think there’s a lot of misunderstanding about Section 230 from both the left and the right. They both have real concerns about the action or inaction of the Big Tech platforms, but they seem to think that Section 230 changes will be some sort of magical solution.
For the left, they really want content that they view as harmful to be taken down more frequently. The problem with that is that much of that content is constitutionally protected. So even without Section 230, the government can’t directly or indirectly cause the content to be taken down. On the right, they tend to be more concerned about what they view as liberal tech platforms unfairly censoring conservative viewpoints often because of their hate speech policies or misinformation policies.
The court has the ability to alter the way Section 230 is interpreted and, by extension, change the internet, but it’s unclear how far the justices are going to go. What kind of power does the court have here in terms of how Section 230 could be applied going forward?
They could address the very narrow circumstances involving this specific case and how Section 230 interacts with the Anti-Terrorism Act. But they also could perhaps say that any algorithmic promotion of content is not covered by Section 230. That would really radically change both Section 230 and how platforms operate. You question how some social media sites would even function without any algorithmic promotion. I think it’s just become so much a part of how we use the internet that I don’t think it’s really practical.
The Supreme Court could have early on said that all Section 230 says is that you’re not treating the platforms as publishers. They could still have that distributor liability that the magazine stands have so that they could be liable if they know or have reason to know of the content. That would change things pretty radically. Because then every time a platform got a complaint about content, they would face a choice of either taking it down or possibly defending a defamation case in court.
That seems like it would put them out of business.
Yeah. You think about Glassdoor, which relies on Section 230 quite a bit even more than Big Tech because they have fairly controversial content that businesses don’t really like. If you basically say 230 is just the magazine stand standard of liability, then Glassdoor would have a really strong reason to take down everything that people complain about. Then suddenly, every workplace that you look for on Glassdoor looks like it’s perfect because all the negative reviews are taken down.
You have spent so much time thinking about Section 230. And every time it comes up in a big political fight or some member of Congress talks about it or now when it is headed to the Supreme Court, I wonder what you think people miss?
They miss the fact that you’re not going to be able to fix everything about the internet by changing Section 230. Despite the fact that it had this really fundamental impact on these internet businesses, there’s so much that you can’t address by changing a statute.
A lot of it is about human nature. There’s so much of a focus on the supply of bad stuff on the internet. Just changing whether Facebook or Twitter can be sued for defamation, that’s not going to address the fact that people are doing some really harmful and dangerous things. I’m happy to see Section 230 still in the national dialogue, but I also think that we need to look at so many other social factors beyond that.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.