Future Tense

Trump-Backed Rules for Online Speech: No to Porn, Yes to Election Disinformation and Hate Speech

[Photo caption: President Donald Trump and Sen. Lindsey Graham during an event about judicial confirmations, Nov. 6, 2019, in Washington. Drew Angerer/Getty Images]

As we move deeper into the scrum of election season, politicians are becoming more overt in telling platforms like Twitter or Facebook what legal speech they should tolerate, and what they should take down. Some of this pressure takes the form of draft legislation, including a recent proposal from the Justice Department and another from Republican Sen. Lindsey Graham. Both proposals were prompted by President Trump’s May 2020 executive order on social media, and revising the rules for platform content moderation reportedly remains one of the president’s top priorities for congressional Republicans. The draft laws are extremely revealing about their proponents’ values and priorities, which include taking down pornography and other “lawful-but-awful” online content, but not taking down things like hate speech and electoral disinformation. The proposals also tell us something about the tensions that will shape debates going forward as Congress considers changes to one of the country’s core internet regulation laws, known as Communications Decency Act Section 230.

Right now, CDA 230 gives platforms themselves broad discretion to take down user speech that they consider “objectionable,” even if that speech doesn’t violate the law. Republicans have claimed (with little support) that the law gives platforms cover for “conservative bias” in content moderation. In earlier proposals, Republicans like Sen. Josh Hawley called for platforms to protect all speech, or to somehow create politically “neutral” rules. The DOJ and Graham bills abandon that approach and instead spell out their drafters’ speech preferences. They would keep immunity in place only for taking down specified categories of lawful-but-awful speech, including pornography, barely legal harassment, and pro-terrorist or pro-suicide material. But platforms would face new legal exposure if they take down content for reasons not included in this government-approved list—such as Holocaust denial, white supremacist racial theories, and electoral disinformation. Apparently in these lawmakers’ value systems, platforms should be free to take down The Virgin Suicides, but not The Protocols of the Elders of Zion, recommendations to cure COVID-19 by ingesting bleach, or misleading information about voting by mail.

Both bills have plenty of other problems. Graham’s, for example, also proposes new liability for platforms that fact-check or label users’ posts, as Twitter has done repeatedly with Trump’s tweets, including one claiming that an election with mail-in voting would be “rigged.” Under Graham’s proposal, labeling these tweets would expose Twitter to lawsuits from people who said the tweets hurt them or violated their rights. Graham, who is defending his seat in the most competitive South Carolina Senate race in decades, apparently sees this as a top priority—he fast-tracked his bill for markup this week. It has been condemned by groups concerned with voting rights and criticized by academic experts, but has otherwise attracted little attention.

Democrats haven’t been shy about telling platforms how to moderate user speech, either—they just have different preferred rules. Recent letters from Democrats in Congress have urged platforms to take stronger measures against election- and COVID-related disinformation, “violent, objectifying or dehumanizing speech” about women, and white supremacist recruitment and organizing.

Whatever we think of Democrats’ and Republicans’ demands as a moral matter, they all have big problems as a constitutional matter. Both sides want platforms to take down speech that, offensive or harmful though it may be, is protected by the First Amendment. The government can’t evade that limit on its power by delegating enforcement duties to platforms and having them suppress legal speech on its behalf. In fact, making platforms enforce Congress’ chosen policies for legal speech would prompt two separate First Amendment challenges: one from users whose lawful speech got taken down, and another from platforms whose editorial prerogatives got taken away. These constitutional questions get messier as lawmakers add layers of legislative indirection. Can Congress, instead of requiring platforms to enforce certain speech policies, achieve the same end by threatening to withhold legal protections under laws like CDA 230? I don’t think that approach is constitutional, either. But that’s not stopping lawmakers from trying.

Congress passed CDA 230 in 1996 with the explicit goal of encouraging internet platforms to moderate speech—without telling them what speech policies to adopt. Instead, the law immunizes them for any action taken “in good faith” to restrict access to content the platform itself considers “objectionable.” That broad discretion has grown controversial in recent years as more of our speech has become concentrated on, and subject to the rules of, a small handful of megaplatforms. But across the internet ecosystem, CDA 230 provides a flexible standard supporting a broad diversity of speech policies—from the relative free-for-all of services like Gab to the restrictive rules for comments on sites like nytimes.com. CDA 230’s immunity for any of these platforms’ decisions to take content down is bolstered by the law’s second, and until recently more famous, immunity for leaving unlawful content up. Congress in 1996 calculated that both immunities were necessary to avoid the “moderator’s dilemma,” in which fear of liability for unlawful user speech deters platforms from trying to moderate content at all.

By freeing platforms to adopt and enforce policies against lawful-but-awful content, U.S. law has brought us an internet that—while very far from perfect—is at least, in most places, not completely overrun with Nazis, pornographers, and scammers. Updating or eliminating 230 immunity for platforms that moderate user content wouldn’t change that overnight. Platforms still would not have to tolerate those users or carry their speech. If users sue platforms to force them to host unwanted content, the platforms will almost certainly still win. That’s what happened when white nationalist Jared Taylor sued Twitter for taking down his tweets, and in dozens of other cases so far. But without 230, platforms will win more slowly, and at greater expense. Making it more costly and inconvenient for platforms to enforce certain speech policies is another way of putting the heavy thumb of government on the scale. It gives platforms a solid economic incentive to simply avoid moderating content outside the government-approved categories.

Republican proposals like Graham’s should do away with any illusion that lawmakers across the political spectrum are united in their goals for CDA 230 reform. They may agree that platforms like Facebook and YouTube are enforcing the wrong rules in moderating users’ speech, but they remain deeply divided about what the right ones would be. The rest of us can’t agree on that, either. Which means that maybe those members of Congress who enacted CDA 230 back in 1996 were on to something. If we want rules against lawful-but-awful speech, platforms are in the best position to provide those, because they are not bound by the same First Amendment limits as Congress. A diversity of speech rules, across a diversity of internet platforms, is better than any one speech rule that Congress—or Mark Zuckerberg—might come up with.

If we believe that, then the right path forward is not about mandating new platform speech policies at all, but about setting rules in areas like competition or privacy law to encourage platform pluralism and diversity. At most, lawmakers might tinker with the processes major platforms use in removing online speech, and in communicating with their users. The bipartisan PACT Act takes that approach, though it’s not without its own problems. It’s high time we move on to a more mature conversation about platform regulation. The current one, about whose speech rules are the right ones, isn’t getting us anywhere.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
