Future Tense

The Law That Let Silicon Valley Stay Clueless

But also made the internet what we have today.

Photo illustration by Slate. Images by Thinkstock, Twitter, Facebook and Google.

The internet didn’t have to turn out this way. There is an alternative future, one where walled gardens like Facebook and Google didn’t morph into overgrown safe havens for Nazis and Kremlin agents to hide and thrive. One where misinformation didn’t spread like wildfire. One where women and members of minority groups didn’t cringe to open their apps. But here we are.

“We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years,” Twitter’s then-CEO Dick Costolo wrote in an internal memo to employees in 2015. Now, more than two years later, the company is finally starting to take strong action this month, rolling out new policies to bar hate speech and unverifying white nationalists, thus demoting their bigoted content from top news searches. After years of lawsuits and harrowing victim complaints, Facebook is experimenting with new ways to rid its platform of nonconsensual porn, actually asking victims to upload a naked photo of themselves to Facebook, ostensibly to train software to detect the picture if it’s ever posted again. It’s an extremely awkward proposition, but it’s something.

So why has it taken online companies this long to take meaningful, if clumsy, action? Because they don’t bear much, if any, liability for what happens on their services—thanks in large part to the Communications Decency Act, which Congress passed in 1996. The law was originally intended to prevent minors from accessing porn and other obscene content, but the Supreme Court struck down those provisions the following year for violating the First Amendment, reasoning that determining what is and is not indecent online is too difficult, and that labeling potentially offensive content is a less restrictive alternative to prohibiting it altogether.

But one part of the CDA wasn’t axed: Section 230. That part of the law says that, in general, websites are not responsible for the things their users do or post. That provision is one of the main reasons why the most recognized and powerful internet companies in the world are all American. Without it, Yelp or Amazon could be sued when a user posts a damning review. Wikipedia might look more like the work of public relations professionals who complain about negative entries, or it might not exist at all. The internet would look completely different—for better and for worse.

“If Facebook was charged every time someone posted something defamatory or tortious on their wall, Facebook would be sued out of existence,” said Mary Anne Franks, a law professor at the University of Miami School of Law and vice president of the Cyber Civil Rights Initiative.

This year, major internet companies have been fighting to maintain that legal immunity. Toward the end of October, while lawyers from Google and Facebook were preparing to sit in the congressional hot seat over the Russian troll infestation on their websites, other attorneys representing the same companies were meeting with lawmakers to oppose the Stop Enabling Sex Trafficking Act. If passed, the bill would weaken Section 230 of the Communications Decency Act by making it possible to hold companies liable for publishing information that’s “designed to facilitate sex trafficking,” such as by selling online ads to known sex trafficking operations. Google spent record sums on lobbying this year, much of which went to defeating the anti–sex trafficking bill. In recent days, the Internet Association, a lobbying group whose members include Google, Twitter, and Facebook, has changed its stance to support the bill, possibly in a spirit of compromise, considering the other political fights on the horizon.

But by opening up liability for what users post, internet companies “could wind up taking a ‘better safe than sorry’ approach to hosting their users’ speech,” Nuala O’Connor, president of the Center for Democracy and Technology, said in a statement on an anti-SESTA advocacy website. (CDT receives some funding from Google.) “Anything controversial, unpopular, or outside the mainstream could be viewed as a major risk of liability that” many internet companies “simply couldn’t afford to take on.” It’s a slippery-slope argument, and it’s true that creating exemptions that would make websites liable in some cases could cause companies to proactively clamp down on other kinds of speech to avoid a lawsuit.

But for all it has given us, Section 230 of the CDA has also protected some of the worst parts of the internet. If a small-town factory pollutes the water supply, the company can be held legally responsible for the negative consequences. But on the internet, armed with the protection granted by the CDA, “you can reap all the rewards for whatever it is you’re producing, and you basically will be accountable for none of the negative things that you might also be producing,” says Franks. “So unlike a factory, people can’t sue you for the negative side effects of your online product.” While internet companies enjoy immunity from the legal consequences of hate speech on their platforms, users who are swarmed with it are left to deal with the emotional abuse. A visionary game developer abandoned her career opportunities following a barrage of online death threats. Teenagers are viciously bullied. Reputations are ruined.

You can sue the individuals responsible for defamation, if you’re able to discover their identities, though users who engage in abuse online often do so anonymously. But thanks to the CDA, it’s incredibly difficult to sue the companies that helped the message spread. Before Section 230, anything you could sue a user for, you could sue the platform for, too. Not every such case would have succeeded, but more of them would have gotten further than they can today.

And unlike in our offline lives, where civil rights laws prohibit restaurants or hotels or workplaces or schools from discriminating against people based on their race or gender, there’s no corollary online that would prohibit a social media company from hosting a user-organized group that kept out black people. Racists have been spreading vile memes and building robust online communities for years, even before social media. It wasn’t until the Unite the Right rally in Charlottesville, Virginia, when these online hate communities spilled into the streets, that a deluge of companies started to take noticeable action. In the days before and after the rally, internet companies started shutting down the accounts of associated hate groups. Airbnb and Facebook were among the first, and after the weekend turned deadly and scenes of racist mobs saturated social media and television, more online businesses started kicking hate groups off, too: Google, GoDaddy, Spotify, OkCupid, and PayPal all started making it harder for neo-Nazis to use their services.

Perhaps, as with online harassment, internet companies simply found it easier to look the other way before Charlottesville. The timing invites speculation that they acted because it would have looked bad not to, rather than out of conviction that providing forums for hate groups is actually wrong.

It’s hard to say what the global internet would look like if Section 230 had never become the law of the land. Would YouTube have even been possible?

But the fact that major tech companies have decided to embrace the Stop Enabling Sex Trafficking Act suggests that we are heading into a new era—one in which Google, Facebook, and others are accepting that they can’t remain immune to liability forever. They will survive this fairly minor change to the Communications Decency Act should it pass Congress. But if we are starting to slide down this slope, we have to ask: Doesn’t this just entrench their power further? The companies that grew up during the age of Section 230 had an advantage that the next generation of startups may not.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.