A couple of weeks ago, I went to lunch with a prominent journalist who wanted to ask me about Wikipedia. I had been general counsel for the Wikimedia Foundation for a few years during a time when the online encyclopedia had really taken off in growth and funding. The journalist was curious how Wikipedia remains so information-rich and useful when the rest of the internet (in his view) is filled with divisive, corrosive misinformation. I said that Wikipedia couldn’t exist without the work that cyberlibertarians had done in the 1990s to guarantee freedom of expression and broader access to the internet.
The journalist took the more jaundiced view, one I’ve heard many times: that the internet has brought us to the unhappy historical moment we’re now living in, and that the only way to rescue society is to impose more discipline online, through tougher laws and fewer legal and constitutional protections.
I hear this too-much-free-speech argument a lot these days, but I can’t get used to it. For 30 years, I have been a cyberlibertarian or—the term I prefer—an internet lawyer. Sure, I’ve worked on copyright law, encryption, broadband access, digital privacy, data protection, and more. But the roots of my career have always been in civil liberties and criminal law. That is, I’ve (mostly) been arguing against censorship and against those who want to punish (mostly) law-abiding people for what they say or do with their digital tools on or off the internet.
But increasingly, I’m hearing from politicians, activists, and people like my journalist friend who say that maybe we 1990s-era internet activists blew it. The story goes that we were so shortsighted in our focus on things like internet free speech and digital privacy that we overlooked a whole spectrum of long-term threats posed by digital technologies, the companies that sell them, and the governments that deploy them. This perspective suggests that the internet freedom my colleagues and I championed has instead chained us all by corrupting democracy and poisoning relationships.
In recent years, my views have evolved. I no longer think that tolerance of disruptive speech is invariably the best answer, although, even now, I believe it’s typically the best first response. I also think the too-much-free-speech folks are being shortsighted themselves, because we’ve entered an era in which we need more disintermediated free speakers and free speech, not less.
Although the First Amendment was chiseled into the Bill of Rights in the 18th century, most of what we’re referring to when we talk about American free-expression law is only about a century old. Still, cases like Near v. Minnesota (1931), New York Times v. Sullivan (1964), and Brandenburg v. Ohio (1969) have always seemed as foundational to me as the First Amendment. And the 20th century’s First Amendment cases helped inform the development of international freedom-of-expression principles in the Universal Declaration of Human Rights (1948) and the International Covenant on Civil and Political Rights (1976).
When I was finishing law school in the late 1980s, I loved digging into those cases, but I also enjoyed taking frequent breaks from studying to participate in the earliest digital forums online—bulletin-board systems but also bigger distributed systems like Usenet, where I could talk with people from all over the world. By my last semester of law school, my interests in criminal law, free expression, digital technologies, and online forums had converged, and I was hired as the Electronic Frontier Foundation’s first staff lawyer in 1990.
In the earliest days of EFF, a big part of the job of expanding cyberliberties was just the work of getting the legal and constitutional issues recognized. Today’s EFF has an admirable and diverse portfolio of casework and public advocacy, but 30 years ago we were a fledgling civil liberties startup focused more on consciousness-raising and proof of concept. My primary work as an attorney in those early years centered on advising other lawyers about handling hacker cases, email-privacy cases, and some of the earliest defamation and obscenity cases.
Some commentators, including April Glaser in a 2018 article for Slate, have interpreted EFF’s early years as disproportionately anti-government and “incomplete” because the organization did not address the fact that corporate decisions, no less than government decisions, could and frequently did undermine “justice, human rights, and creativity.” But we spent plenty of time chiding private corporations, ranging from the early IBM-Sears platform Prodigy to the incumbent telephone companies, for falling short of citizens’ reasonable expectations about free expression, privacy, and access to the larger digital world.
In addition to advising other lawyers and speaking and writing about cyberliberties issues, I also practiced some law myself in the 1990s. Most importantly, I was co-counsel in Reno v. ACLU (1997), a constitutional challenge to the Communications Decency Act that rocketed through a trial-court victory and a quick appeal to the Supreme Court. The Supremes voted unanimously to strike down most of the CDA, which was aimed at banning “indecent” but otherwise legal pornography from the internet. Our victory left in place only the act’s Section 230, which was designed to empower internet companies to remove offensive, disturbing, or otherwise subscriber-alienating content without being liable for whatever else their users posted. Without that protection, the worry went, companies might be afraid to censor anything, because in doing so they would take on responsibility for everything. But now Section 230 is in some legislators’ crosshairs because the companies (in Congress’ view) censor either too much or not enough. The Reno case established the fundamental constitutional and statutory protections for new online forums and did so in such a massive, categorical way that it left me wondering, for a year or two in the late 1990s, whether I ought to retire from civil liberties work, my job being mostly done. I took time off and finished a book about my EFF years, from the early days to the CDA fight, in 1998. (Internet law was moving fast back then, and I published a revised, expanded edition five years later.)
I was wrong to think the big wrangles were over, though. For one thing, debates in the United States about digital copyright, encryption, surveillance, and building broadband access were becoming more heated. Notably, after having failed to block the spread of encryption technologies in the 1990s, the post–Sept. 11 U.S. government began to explore ways of compelling the tech companies to break or sidestep encryption in response to warrants and subpoenas. Government demands of this sort—not just from the United States—have only gotten worse in the past few years, a trend that will continue, to judge from the anti-crypto stylings of Attorney General William Barr.
For another, EFF and other American cyberlibertarians gave inadequate attention to the international environment. We rationalized this U.S. focus because we weren’t yet big enough to be present elsewhere around the world, and because the U.S., as Internet Ground Zero, encountered lots of cyberliberties issues earlier than most other nations did. But since my departure from EFF in 1999, I’ve worked with activists in more than two dozen countries whose constitutions and laws may be different, but whose issues regarding censorship, privacy, and human autonomy are strikingly similar to our own.
In addition, the original big internet-policy debates never disappeared or even really shrank that much—they have been reincarnated in new forms and new places, as when the movie industry promoted civil and criminal cases against programmers who published source code that, in effect, explained how to bypass DVD copy protection. (The idea that publishing source code might itself be a crime underpinned some of the hacker cases in 1990, the year EFF was founded.)
Another thing we clearly got wrong is how large platforms would rise to dominate their markets—even though they never received the kind of bespoke regulated-monopoly partnership with governments that, generations before, the telephone companies had received. In most of today’s democracies, Google dominates search and Facebook dominates social media. In less-democratic nations, counterpart platforms—like Baidu and Weibo in China or VK in Russia—dominate their respective markets, but their relationships with the relevant governments are cozier, so their market-dominant status isn’t surprising.
We didn’t see these monopolies and market-dominant players coming, although we should have. Back in the 1990s, we thought that a thousand website flowers would bloom and no single company would be dominant. We know better now, particularly because of the way social media and search engines can build large ecosystems that contain smaller communities—Facebook’s Groups is only the most prominent example. Market-dominant players face temptations that a gaggle of hungry, competitive startups and “long tail” services don’t, and we’d have done better in the 1990s if we’d anticipated this kind of consolidation and thought about how we might respond to it as a matter of public policy; after all, the concern about monopolies, unfair competition, and market concentration is an old one in most developed countries. Still, I have no reflexive reaction either for or against antitrust or other market-regulatory approaches to address this concern, so long as the remedies don’t create more problems than they solve.
What’s new and more troubling is the revival of the idea, after more than half a century of growing freedom-of-expression protections, that maybe there’s just too much free speech. There’s a lot to unpack here. In the 1990s, social conservatives wanted more censorship, particularly of sexual content. Progressive activists back then generally wanted less. Today, progressives frequently argue that social media platforms are too tolerant of vile, offensive, hurtful speech, while conservatives commonly insist that the platforms censor too much (or at least censor them too much).
Both sides miss obvious points. Those who think there needs to be more top-down censorship from the tech companies imagine that when censorship efforts fail, it means the companies aren’t trying hard enough to enforce their content policies. But the reality is that no matter how much money and manpower (plus less-than-perfect “artificial intelligence”) Facebook throws at curating hateful or illegal content on its services, and no matter how well-meaning Facebook’s intentions are, a user base edging toward 3 billion people is always going to generate hundreds of thousands, and perhaps millions, of false positives every year.
On the flip side, those who want to restrict companies’ ability to censor content haven’t given adequate thought to the consequences of their demands. If Facebook or Twitter became what Sen. Ted Cruz calls a “neutral public forum,” for example, they might become 8chan writ large. That’s not very likely to make anyone happier with social media.
Still others, on both the left and the right, argue that weakening (or outright removing) Section 230’s protections would bring the tech platforms into some kind of reasonable balance. These would-be reformers haven’t given enough attention to what law professor Eric Goldman has called “the moderation dilemma.” Alternatively, as in this 2019 piece by Matt Schruers, the newly appointed president of the Computer & Communications Industry Association, it’s sometimes called “the moderator’s dilemma,” where opposing incentives lead either to the suppression of viewpoint diversity or to websites “plagued with off-topic content, trolling, and abuse.”
One reason we need to keep Section 230 safe—a reason I didn’t have the foresight to champion back in the 1990s—is that it’s crucial to fighting disinformation: It allows internet platforms to curate their content without necessarily increasing liability. My colleague Renee DiResta and I have been arguing in the past year or two that empowering tech companies to partner with governments and multistakeholder efforts in fighting disinformation is properly characterized as simply good cybersecurity. I remain skeptical as to whether tactics like microtargeting and demographic profiling, whether used by political campaigns or foreign governments, are as effective at manipulating people as some critics fear, but I see nothing wrong with using legal and policy tools to stop malicious actors from trying to use these tools.
I’ve come to believe our society should take reasonable steps to limit intentionally harmful speech, but I also find myself increasingly embracing a broader, more instrumentalist vision of freedom of speech than I typically championed in the 1990s. Back then, I was much more focused on encouraging tolerance and pluralism—the idea that an open, democratic society should be willing to let people say outrageous things, to the extent possible, because we ought to be strong enough in our democratic convictions to endure disturbing dissent. I still believe that, but here in 2020 I’m also haunted by the challenges we face everywhere in the world in this century, ranging from climate change to income inequality to the (not-unrelated) resurgence of populist xenophobia and even genocidal movements.
It’s been argued that internet forums for free expression have incubated real-world violence. But humanity’s capacity for war, violence, and self-destruction predates social media, and today’s internet platforms are often the first channels where we see the evidence of crimes (Myanmar’s persecution of the Rohingya, for example, or Chinese repression of the Uighurs) that governments and closed societies used to be better able to hide. More important, though, is the fact that the problems we’ll face in this century are going to need everyone’s attention and contributions—not just those of our leaders and policymakers and journalists and thought leaders. They’ll need help from people we love and people we hate, from you and from me.
That’s the biggest thing I learned at the Wikimedia Foundation: When ordinary people are empowered to come together and work on a common, humanity-benefiting project like Wikipedia, unexpectedly great and positive things can happen. Wikipedia is not the anomaly my journalist friend thinks it is. Instead, it’s a promise of the good works that ordinary people freed by the internet can create. I no longer argue primarily that the explosion of freedom of expression and diverse voices, facilitated by the internet, is simply a burden we dutifully have to bear. Now, more than I ever did 30 years ago, I argue that it’s the solution.