Future Tense

Five Ways to Address Online Speech Problems Without Gutting the Law That Created Today’s Internet

President Trump and former Vice President Biden agree on one thing: They don’t like Section 230 of the Communications Decency Act. Photo illustration by Slate. Photo by rootstocks/iStock/Getty Images Plus.

While Donald Trump and Joe Biden disagree about many things, they agree on at least one: Section 230 of the Communications Decency Act should be repealed. As tech journalist Casey Newton has said, “both candidates want to end the internet as we know it.”

On Wednesday, Twitter’s Jack Dorsey, Alphabet/Google’s Sundar Pichai, and Facebook’s Mark Zuckerberg will have a chance to defend the internet we know when they testify at a high-profile Senate hearing on Section 230 of the Communications Decency Act, the law that shields online intermediaries from liability for speech by their users.

Their testimony comes as both parties make tech policy part of their closing arguments in the 2020 campaign. Republicans contend that tech platforms disproportionately censor conservative political speech, while Democrats argue that the platforms have grown too powerful and permit too much speech that systematically harms marginalized communities.

Trump has called for “immediately” repealing Section 230. He has been so focused on the issue that he tweeted “REPEAL SECTION 230!!!” within 24 hours of returning to the White House after his coronavirus hospitalization. Biden has taken a similar position, calling in January for Section 230 to be “revoked” and proposing in May to eliminate Section 230 protections for tech platforms that knowingly host false content. Although the two candidates have different rationales for reforming Section 230, they are charting a similar policy path for the next administration.

As the election has approached, proposals to reform Section 230 have appeared almost weekly, and controversy after controversy has drawn attention to the topic. Legislators and academics from both sides of the aisle have proposed ways to curb problematic online content, and Justice Clarence Thomas has even urged the Supreme Court to take up the scope of Section 230 in an appropriate case. The tech industry’s competitors have joined the fight as well, with news publishers’ lobbyists arguing that Section 230 is a “license to build rage machines.”

In addition to the proposals from Biden and Trump, two bipartisan bills have received significant attention: the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT Act), introduced by Sens. Lindsey Graham, Richard Blumenthal, Josh Hawley, and Dianne Feinstein, and the Platform Accountability and Consumer Transparency Act (PACT Act), introduced by Sens. Brian Schatz and John Thune. Hawley has introduced numerous other proposals, including one that would make Section 230 protection conditional upon platforms demonstrating that they are politically neutral. In the last two months alone, two other bills have been introduced in the Senate (the Online Freedom and Viewpoint Diversity Act, introduced by Sens. Roger Wicker, Graham, and Marsha Blackburn, and the See Something, Say Something Act, introduced by Sens. Joe Manchin and John Cornyn) and one in the House (the Protecting Americans from Dangerous Algorithms Act, introduced by Reps. Anna Eshoo and Tom Malinowski).

Many of the proposed reforms would force radical redesigns of tech products in ways that would dramatically change online expression. Without the protection of Section 230, platforms like Twitter and YouTube would likely function more like traditional gatekeepers, such as broadcasters and publishers. Some people might be happy with a version of Facebook, Twitter, or YouTube that feels more like Netflix or a newspaper, where nothing is shared unless the intermediary approves it, but many would not.

We need a different blueprint for reform, as I outlined in a working paper published this week by the Day One Project, an initiative focused on developing actionable policies for the next administration. Rather than revoking or gutting Section 230, these proposals would deter and punish some of the most problematic online content while preserving the aspects of tech platform design that are critical to online expression.

First, Congress should modernize federal criminal law by prohibiting some of the most egregious forms of online speech. Section 230 does not immunize tech platforms against violations of federal criminal law or intellectual property law, such as child pornography and copyright infringement, but current law has big gaps where it has not kept pace with the harms of evolving technologies. If we are concerned about the use of online tools to organize violence in the streets, or the use of apps to mislead people of color about the time and location of voting, then we should look to Congress, not platforms, to provide the governing rules.

Voter suppression is one area that would benefit from strong congressional action. Many states prohibit deceptive practices in voting, but no equivalent federal law exists. To fill this gap, Congress should pass legislation prohibiting specific deceptive practices, such as intentionally misleading voters about the time and location of voting. Such a law would likely be challenged on First Amendment grounds, but it could survive court review if it is crafted narrowly and strategically. That is a high hurdle, but not an impossible one.

Second, Congress should pass a modified version of the PACT Act, introduced by Schatz and Thune in June. The bill would require platforms to comply with court orders to remove illegal content. It could be strengthened further with provisions that better protect online speech, such as ensuring that orders cover only individual pieces of content and do not compel platforms to engage in ongoing content filtering.

Third, the Federal Trade Commission should publish guidance that clarifies the blurry line between content hosting and content creation. Under Section 230, platforms are liable for content they create, even content they merely “develop” “in part.” But existing law provides few details on when “hosting” crosses over into “creation,” a question that grows only murkier as platforms build increasingly complex products that rely on artificial intelligence and algorithmic sorting. To provide more clarity, the FTC should hold workshops to gather information from users, platforms, and experts, and then issue guidance outlining criteria for determining when platforms “develop” content.

Fourth, platforms should build reporting features that make it easier to hold people accountable for their online speech. For instance, platforms could let users report election misinformation directly to an election monitoring organization or to a state attorney general’s office.

Fifth, platforms should provide better data for studying online expression. To make better data sharing possible, policymakers should create a safe harbor for platforms that share anonymized data with researchers studying the impact of content policies. The safe harbor would protect platforms from liability for subsequent security or privacy breaches by researchers, but only if the platforms comply with best practices for responsible data sharing, including robust privacy protections. Platforms and governments should also include data in their transparency reports that enables researchers to evaluate whether Section 230 reforms in the next administration bring benefits that outweigh their costs.

These five reforms will not solve all of the problems of online expression, but they will deter and punish some of the most harmful activity, provide more clarity on liability for users and platforms, and give us more data to inform future product design and policy development. If the next administration implements these reforms and avoids more draconian measures like revoking Section 230 entirely, we will be on a path toward a better internet.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.