Future Tense

Tech Companies Get a Free Pass on Moderating Content

It’s time to change that.

Photo illustration by Slate. Image by BlackJack3D/E+ via Getty Images Plus.

In 1996, Congress faced a challenge. Lawmakers wanted the internet to be open and free, but they also knew that very openness could allow for noxious activity. Federal agencies could not tackle that problem alone—they would need help from tech companies. So Congress passed Section 230 of the Communications Decency Act, which provided a shield from liability for platforms that tried to moderate “offensive” content. Then-Reps. Chris Cox and Ron Wyden saw it as a way to protect “Good Samaritans” trying to “clean up the internet.” As Wyden put it in 2018, Section 230 offered protection from liability in exchange for “being responsible in terms of policing their platforms.” Twenty-three years later, Section 230 has come under fire from both sides of the aisle—from conservatives who claim that tech platforms are unjustifiably filtering or blocking their speech and from liberals who think that the same companies are not doing enough to filter or block hate speech and extremism online.


Although the purpose of Section 230 was clear, the statute’s language was not. Section 230(c)(1) did not explicitly condition the protection on responsible content-moderation practices. That silence gave defense lawyers a foothold to characterize the statute as a sweeping “immunity” far beyond what the drafters imagined or provided. Defense lawyers argued that the statute was solely concerned with the promotion of free speech. Unfortunately, courts took the bait. Lower federal and state courts have extended Section 230’s protection far beyond platforms that tried to “clean up” the internet but did an incomplete or overly aggressive job.

Over the past 20 years, courts have massively overextended Section 230’s legal shield. Platforms have been protected from liability no matter how irresponsible (or worse) their conduct and no matter how grave the harm inflicted. For instance, Section 230 has been extended to protect sites whose business is revenge porn, whose operators choose to post defamation, and whose role is getting a cut of illegal gun sales.


This overbroad reading of 230 has significant costs. Online abuse hosted on these “Bad Samaritan” sites makes it difficult for targeted individuals to enjoy life’s crucial opportunities. Because the abuse is prominent in searches of their names, targeted individuals cannot get or keep jobs. They close down their social media profiles, blogs, and sites, retreating into silence. Fear can drive them to move or change their names. The people targeted are disproportionately women and members of marginalized groups.

Section 230 is not a clear win for free speech or equal opportunity. As Mary Anne Franks has argued in her important new book The Cult of the Constitution, an overbroad interpretation of Section 230 has been costly to equal protection. The benefits Section 230’s immunity has enabled likely could have been secured at a lesser price.


The market is unlikely to turn this tide. Content that attracts likes, clicks, and shares generates advertising income, or, in the case of online firearm marketplaces, a cut of the profits. Salacious, negative, and novel content is far more likely to attract eyeballs than vanilla, accurate stories. Market pressure is not enough, and it should not have to be.

We need legal reform to ensure that platforms wield their power responsibly. We should keep Section 230’s legal shield but return it to its original purpose. One way to do that would be to exclude bad actors from the immunity. Free speech scholar Geoffrey Stone, for instance, suggests denying the immunity to online service providers that “deliberately leave up unambiguously unlawful content that clearly creates a serious harm to others.” A variant on this theme would deny the legal shield in cases involving platforms that have solicited or induced unlawful content.


Another approach would be to adopt the proposal that Benjamin Wittes and I have suggested: to condition the immunity on reasonable content moderation practices rather than the free pass that exists today. Under this approach, when a court considers a motion to dismiss on Section 230 grounds, the question would not be whether a platform acted reasonably with regard to a specific use of the service. For instance, if Grindr is sued for negligently enabling criminal impersonation on its dating app, the legal shield would not depend upon whether the company did the right thing in the plaintiff’s case. Instead, the court would ask whether the provider or user of a service engaged in reasonable content moderation practices writ large with regard to unlawful uses that clearly create serious harm to others. Thus, in the hypothetical case of Grindr, the court would assess whether the dating app had reasonable processes in place to deal with obvious misuses of its service, including criminal impersonation. If Grindr could point to such reasonable practices, like having a functioning reporting system and the ability to ban IP addresses, then the lawsuit should be dismissed even if that system fell short in the plaintiff’s case.

There is no one-size-fits-all approach to responsible content moderation. Unlawful activity changes and morphs quickly online, and the strategies for addressing unlawful activity clearly causing serious harm should change as well. A reasonableness standard would adapt and evolve to address those changes. Doing nothing is a choice. It says that we are willing to tolerate the harms people experience, especially women and minorities, so that platforms can continue to make money from their irresponsibility, or far worse.

This article was adapted from testimony Danielle Keats Citron delivered before the House Energy and Commerce Committee.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
