So, what if I maliciously and knowingly write something false here about the poke bowl place a block away from me?
I’d clearly be liable, but my poke friends would probably want to go after deeper pockets.
Slate would be on the hook, too (sorry, guys) as the creator of the content that would also be published on its website. But who else? I called up Jeff Kosseff to walk through the chain of defamatory liability to find out.
Jeff, a former journalist turned cybersecurity professor at the U.S. Naval Academy, is like the character at the beginning of every apocalyptic movie who single-mindedly obsesses over something to the amusement or concern of those around them, until … For Jeff, that something is Section 230 of the 1996 Communications Decency Act—or, as he calls it in his brilliant 2019 biography of the legislation, The Twenty-Six Words That Created the Internet: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” It’s the reason why Facebook, Reddit, Wikipedia, and other platforms aren’t liable for content published by their users. He’s also written for Future Tense about the ’90s lawsuit against America Online that set up today’s internet speech battles and the Boy Scout who became one of the earliest victims of online bullying.
Under Section 230, Jeff says, neither Sailthru (the email platform used by Slate for this newsletter) nor your email provider is responsible for the damage to said poke bowl’s reputation. By the same token, if I tweet out a link to this newsletter and post it on Facebook, neither of those two social media platforms is responsible.
Since he wrote The Twenty-Six Words That Created the Internet, the good news for Jeff is that Section 230 has taken center stage in urgent debates around the power of Big Tech; the frustrating news is that the legislation’s notoriety hasn’t prevented the spread of misunderstandings about its meaning and purpose, much to the humorous despair of Jeff’s Twitter account.
When I spoke to Jeff on Thursday afternoon, I asked whether he thought 230 would get mentioned in that evening’s Trump-Biden debate. He laughed, saying he thought there was a decent chance, but that it was a bittersweet prospect given that both presidential candidates have mischaracterized the law’s meaning.
On Thursday night, 230 went unmentioned, but not to worry—Jeff’s still having quite a moment. The Federal Communications Commission is launching a rule-making proceeding to issue an interpretation (or reinterpretation?) of the law, and on Oct. 28 the Senate Commerce Committee has subpoenaed the CEOs of Twitter, Alphabet (which owns Google), and Facebook to a hearing with the if-ever-there-was-a-leading-question title of “Does Section 230’s Sweeping Immunity Enable Big Bad Behavior?”
Most intriguing of all, Justice Clarence Thomas has issued a statement that almost reads like an invitation for plaintiffs to bring forth cases seeking a radical revision of how courts have interpreted 230 to date. Mike Godwin (whose own famous law was confirmed in the presidential debate, but that’s a Hitler story for another time) wrote about what he considered the justice’s worrisome move for us, reviewing the case law and shifting distinctions between content publishers and distributors. He concludes: “If you value virtual communities in which any member can speak, but in which all members have to obey some basic rules, the only way to get there is through something very much like Section 230.”
Of course, animating all this 230 action is the longstanding ire on the left that tech platforms are not doing more to curate and filter speech, and the mounting ire on the right that they are doing too much. Many on the right seem to think this is a “gotcha” moment in which they can prove that Twitter and Facebook no longer deserve the immunity offered to neutral platforms (or distributors), not when they are now doing more to police against disinformation around the election, pandemic, and other issues—acting more like publishers exercising editorial judgment.
This notion that 230 is meant to dissuade content moderation by our Big Tech public square gatekeepers—rather than encourage it—is a longstanding misinterpretation of the law, which now runs the risk of being reinterpreted as the real thing, through legislative, regulatory, or judicial action.
The confusion around 230 is just one part of the far greater challenge about how to apply our well-established 20th century free speech norms to the digital age—in particular, the relevance of the First Amendment itself in an age when government has become a marginal player, in comparison to private Big Tech players, in regulating Americans’ speech.
Typical of such confusion was a tweet last Wednesday afternoon from Republican Sen. Chuck Grassley of Iowa, which read: “To all the Journalists who worship freedom of the press (as I do) u should voice ur outrage about the selective censorship from Big Tech like Facebook & Twitter. It violates a constitutional right.”
The senator, of all people, should know better. He has been on the Senate Judiciary Committee since I was in middle school. Nowhere in the Constitution does it say anything about whether a private platform like Twitter should or shouldn’t have to share dubious stories from the New York Post—or my malicious poke bowl tweets for that matter.
To further consider this growing disconnect between traditional First Amendment cases and our ongoing fights over permissible online speech, join us Wednesday, Oct. 28, at 11:30 a.m. Eastern, for our next Free Speech Project event, “Do We Need a First Amendment 2.0?” New America CEO Anne-Marie Slaughter, University of Chicago law professor Geoff Stone, and other prominent legal scholars will discuss. BYOPoke.
Here are some of the other things we’ve cooked up in the recent past of Future Tense:
Wish We’d Published This
“How the World’s Biggest Slum Stopped the Virus,” by Ari Altstedter and Dhwani Pandya in Bloomberg Businessweek.
Future Tense Recommends
Mia Armstrong, Future Tense contributor, writes: More than $2 trillion in suspicious payments over 18 years, detailed in more than 2,000 reports submitted by banks to the U.S. Treasury Department. These are the so-called FinCEN Files, obtained by BuzzFeed News and reviewed in conjunction with the International Consortium of Investigative Journalists. “Suspicious Activity: Inside the FinCEN Files” is an alarming and intriguing five-episode podcast series that dives into that investigation, exploring “how banks profit off terror and organized crime—and the ways that governments fail to stop it.”
What Next: TBD
On this week’s episode of Slate’s technology podcast, host Lizzie O’Leary, who is back from maternity leave, spoke with the Washington Post’s Tony Romm about the Department of Justice’s new antitrust case against Google. Last week, guest host Celeste Headlee and Evelyn Douek, a lecturer at Harvard Law School and affiliate at the Berkman Klein Center for Internet & Society, discussed Facebook’s decision to ban Holocaust denial, and whether it signals a new era of content moderation on the social network.
Upcoming Future Tense Events
• Wednesday, Oct. 28, 11:30 a.m. Eastern: Do We Need a First Amendment 2.0? with New America CEO Anne-Marie Slaughter; Geoffrey R. Stone, Edward H. Levi distinguished service professor of law at the University of Chicago; Neil Richards, Koch distinguished professor in law, Washington University School of Law; Jennifer Daskal, professor and faculty director of the Tech, Law, & Security Program at American University Washington College of Law; Ciara Torres-Spelliscy, professor of law, Stetson University; and Daphne Keller, director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center
• Tuesday, Nov. 10, noon Eastern: Governing for the Future, with Kim Stanley Robinson, author of the new novel The Ministry for the Future; Peter Schlosser, the vice president and vice provost of Arizona State University’s Julie Ann Wrigley Global Futures Laboratory; science fiction author and humanitarian worker Malka Older; and Ed Finn, director of ASU’s Center for Science and the Imagination.