Introducing the Free Speech Project

[Image: "Free Speech Project" logo. Holly Allen/Slate]

Hi Future Tensers,

The presidential election is underway, and it looks like we’re in for quite a ride. Ever since 2016, we’ve been fretting about our vulnerability to misleading news and targeted online ads, and now that primaries have started, we’re being forced to wonder once again about the trustworthiness of election technologies—just look at the Iowa Democrats forgetting to test their calamitous app. So, we thought 2020 would be a good year to take a closer look at the debates swirling around how technology is challenging our traditional commitment to free speech. Enter the Free Speech Project, a yearlong series of events and articles we’ll be bringing you in partnership with American University Washington College of Law’s Tech, Law, & Security Program.

Kicking things off, Mike Godwin takes issue with the idea that we’re suffering from too much free speech online that must somehow be controlled. Mike—an early internet rights activist, a former Wikimedia general counsel, and the father of Godwin’s Law—writes about how his perspective on online speech has changed over the past 30 years: At first, “I was much more focused on encouraging tolerance and pluralism—the idea that an open, democratic society should be willing to let people say outrageous things. … I still believe that, but here in 2020 I’m also haunted by the challenges we face everywhere in the world in this century, ranging from climate change to income inequality to the (not-unrelated) resurgence of populist xenophobia and even genocidal movements.”


Wish We’d Published This
“Going the Distance (and Beyond) to Catch Marathon Cheaters” by Gordy Megroz, Wired.

3 Questions for a Smart Person
Jennifer Daskal is a professor and faculty director of the Tech, Law, & Security Program at American University Washington College of Law and a scholar-in-residence at New America. I spoke with her about the future of free speech and democracy.

Margaret: What’s one incident that demonstrates the challenge of dealing with online speech?
Jennifer: It’s hard to pick. The struggles with malicious interference in elections are a microcosm of the broader questions about how to identify malicious actors, how to curtail harmful speech without banning foreign speech, how to distinguish between what is true and what is false, what to do with information that’s deemed false, and who decides what’s false. All of these issues come to the fore when trying to deal with the aftermath of a coordinated effort to influence the U.S. election in 2016, ongoing efforts in 2018, and elections throughout the world.

What do you see as the future of malicious interference in democracy?
What we need to remember is that a lot of this is not new. It has always been the case that governments and individuals engage in propaganda across borders to achieve a variety of political goals. What’s new and different is the speed, ease, and scope of the potential influence efforts because of digital interconnectedness. This is an issue that we’ll be struggling with for some time because the ultimate goal is to protect the integrity of elections and preserve commitments to free speech. The risk is that in our efforts to clamp down on legitimate harms, we do so in ways that harm core freedoms.

Is there an approach toward digital speech that the U.S. should borrow from other countries?
The U.S. is in a unique position because of our First Amendment. Our First Amendment curtails what the government can do in terms of restricting individual speakers and provides broader protection of speech than any other nation. In the United States, much of the core decision-making is delegated to the tech companies curating the content. The world, and particularly the Western world, has moved very quickly from proclaiming the benefits of a free and open internet (and making it a centerpiece of our internet agenda), to recognizing the very real-world harms that can be perpetuated online. We haven’t figured out a way to balance the need to address those harms with the need to protect and preserve free speech.

To learn more, check out this Future Tense story Jennifer wrote in October: “A European Court Decision May Usher In Global Censorship.”

Future Tense Recommends
I am constantly behind on the latest releases, but I recently watched I Am Mother, a sci-fi film on Netflix released over the summer. Inside a bunker sealed off from the outside world after an extinction event, a robot raises a single human as her “daughter,” without any other contact with humans or the outside world. When an external intruder arrives, the daughter is forced to challenge her only authority figure and is confronted with the possibility that her entire life is based upon a lie. The infusion of complex morality into a solid thriller makes me thoroughly enjoy thinking about the ethics behind A.I.s.—Anthony Nguyen, Future Tense program coordinator

What Next: TBD
In the latest episode of Slate’s technology podcast, Lizzie O’Leary talks to Joshua Chin, a Wall Street Journal reporter based in Beijing, about how “Coronavirus tests China’s surveillance state.”

Upcoming Future Tense Events
If you’re in Washington, join us on Feb. 20 for an evening screening of Parks and Recreation with Jen Pahlka, founder of Code for America, and Cecilia Muñoz, vice president for public interest technology and local initiatives at New America, at Landmark E Street Cinema.

And on Feb. 24, don’t miss out on “Redefining Free Speech for the Digital Age,” an afternoon event at American University where we’ll kick off the Free Speech Project and look to past debates over speech to inform the future.

Margaret from Future Tense

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.