Future Tense

Revising the Law That Lets Platforms Moderate Content Will Silence Marginalized Voices

Google CEO Sundar Pichai testifies remotely during a Senate hearing to discuss reforming Section 230 of the Communications Decency Act on Wednesday. GREG NASH/Getty Images

If you’re an LGBTQ+ millennial and were lucky enough to have an internet connection, you likely owe a very particular debt to the internet. I’m a bundle of identities—Black, queer, conservative-minded—and the internet wasn’t just a formative space for me. It was a place where I could safely try on these identities and talk with others who were going through similar experiences. One of my friends joked that in 2010, all the gays decided to friend each other on Facebook, and he thought that was just lovely. There’s some truth to it—when I was closeted, it was a lot easier to friend a random gay person from Saskatchewan with a pride flag in his background than to come out.

The comforting anonymity in spaces made for and by queer or questioning people—whether in ‘90s chatrooms or on Reddit in the 2010s—was and is the first step for many in accepting their identities. We take them for granted now, but the creation of these spaces was never preordained. The internet could have evolved in an entirely different direction—one that didn’t empower minority groups and small communities to meet and organize online with little to no regulatory friction. The internet went in this direction because Congress chose to codify common law precedent and extend First Amendment protections to online communities and moderators. We also owe a debt to the laws that allowed online communities to flourish—laws like Section 230 of the Communications Decency Act. Section 230 established that legal liability for illegal content online should be aimed at the individual who shared it rather than the platform that hosted it. It is based on the very reasonable principle that individuals, rather than the tools they use, hold responsibility for their own actions, and it’s what allowed the internet to make it to this point.

Today, Section 230 is under attack from both the political right and left. On Wednesday, the Senate Commerce Committee held a hearing focused on this law that created the internet, and the CEOs of Twitter, Facebook, and Alphabet/Google were called to testify. Many of the questions demonstrated that lawmakers are misinterpreting, intentionally or otherwise, or obfuscating the meaning of the law for political purposes. The left believes that somehow dismantling Section 230 will stop people from sharing misinformation or harassing people online. The right believes that removing 230 protections will mean less censorship of conservative speech.

Both sides are wrong. Yet in the 116th Congress, at least 10 bills have been introduced to significantly alter Section 230’s protections. The Department of Justice has crafted model legislation amending the law. The Trump administration has directed federal agencies to narrow digital free speech protections, and the Federal Communications Commission has started the process of “clarifying” the law.

These measures are animated by different concerns. But what they all have in common is that they flip the current incentives for websites to allow speech.

For instance, Sen. Lindsey Graham’s EARN IT Act amends 230 in an effort to attack online child exploitation. It’s a laudable goal, but the bill is so broad that it could force websites to restrict anonymous speech, which is especially important for queer youth. Other bills would limit the categories of speech protected by 230.

If the EARN IT Act or other 230 reform proposals become law, platforms, both large and small, would put more roadblocks in the way of publishing speech online. If I were a lawyer for Facebook or Reddit and Section 230 were revoked, I’d urge them to remove all content that had a hint of controversy to protect us from legal liability—even if it meant removing legitimate, constructive speech. That’s exactly what happened after Congress passed SESTA-FOSTA, which was designed to hold platforms liable for user speech that knowingly assists, supports, or facilitates illegal sex trafficking conduct. It is the most recent carve-out to Section 230’s broad digital free speech protections. Though perhaps well-intentioned, the law, research has shown, made sex work far more dangerous. It also spurred websites like Reddit to take down forums that hosted legal speech out of an abundance of caution, and its constitutionality is being challenged in federal court. SESTA-FOSTA reduced speech online and didn’t even achieve its primary objective: Law enforcement rarely uses it to prosecute sex traffickers because it already could under existing law.

SESTA-FOSTA demonstrates what happens when you weaken digital free speech protections: It raises the costs for platforms that host user-generated content and causes those platforms to reduce speech to protect themselves from legal liability. Both of these pressures would limit free speech online. Altering 230 would also force platforms to adopt policies that would make social media less user-friendly in order to ensure every post doesn’t lead to a lawsuit. (As Scott Lincicome of the Dispatch put it, “who wants to wait a day to post a tweet?” That’s what could happen under these proposals.) And, as with SESTA-FOSTA, it would ultimately most affect groups that struggle to communicate safely.

And despite what Graham and other Republicans think, revoking CDA 230 would also affect conservative speech. As Reason’s Robby Soave explains, conservatives have successfully utilized the internet to circumvent the mainstream media and reach new audiences. Revoking 230 would jeopardize that success. Conservatives are natural free speech champions. We understand that it’s a prerequisite for a free society and that government regulations on speech have often harmed our ability to be heard. Twitter’s recent disastrous decision to ban the sharing of a New York Post story on Hunter Biden and Facebook’s call to throttle distribution of that same piece have rightly infuriated Republican members of Congress (and others). But if you have a problem with private platforms deciding how to moderate content users post, then your problem isn’t actually with 230—it’s with the First Amendment and the Supreme Court.

Every online community would be affected by these changes. But those communities most ignored by society and mainstream media would face the brunt of the impact.

That the LGBTQ community is extremely online is not a novel observation—queer writers like Emily VanDerWerff have written about how Reddit and the early days of the internet created affirming spaces for the trans community. The existence of Gay Twitter™ is a testament to how the queer community has molded and been molded by the internet and social media, for better and for worse.

Something I hear often from much older queer people is how difficult it was in their time to connect with others like them, let alone organize for social and political change. Your choices were few: You could move to the city and hope it was welcoming, sign up for the handful of queer publications, or just try to make it where you were.

Queer people have a microphone on the internet that they were consistently denied for generations. The hostility and indifference from the mainstream have receded significantly for numerous reasons, one of them being that the internet made it almost impossible to ignore us. It gave us a platform to speak freely, and we used it to tell our stories and our truths. If Section 230 is narrowed or revoked, that platform will remain, but it will be diminished. There are likely marginalized groups today who have not yet won mainstream acceptance and who will be denied this same opportunity if they do not have open internet platforms that allow users to generate their own content.

But policymakers on both the right and left who are attacking Section 230 aren’t reckoning with the costs of their positions. Instead, they’ve settled on it as an easy way to express legitimate frustration with social media platforms and their moderation practices.

To be clear, there are risks to both reforming and maintaining Section 230. I’m as concerned about Section 230’s impact on civil rights laws as I am about the awful moderation decisions of Big Tech companies. For instance, Public Knowledge’s Bertram Lee has criticized Section 230 for making it more difficult for civil rights lawyers to sue platforms that host discriminatory ads, and Twitter’s moderation decisions have been questionable, to say the least. But I’m also a free speech absolutist and a diehard Section 230 advocate because of how instrumental America’s free speech ideals have been to civil rights movements and because it aligns with my conservative values. There is a cost associated with that position—in the form of harassment online, misinformation, and moderation decisions that I disagree with. As any person of color or queer individual could tell you, the internet isn’t always an escape from prejudice. Minority groups online face harassment every day, and platforms need to do more to protect them—but it shouldn’t take the threat of narrowing Section 230 to force companies to take those measures. As the public, politicians, and companies debate whether we should change America’s exceptional digital free speech protections, they should take stock of the benefits of the current approach and understand what they’re jeopardizing by limiting them.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.