Future Tense

“Tech People Are the Last People I Would Trust to Regulate Speech”

The creator of Godwin’s law interviews David Simon about Nazis on Twitter and so much more.

David Simon next to the Twitter logo.
Photo illustration by Slate. Photos by Astrid Stawiarz/Getty Images for the 2014 Tribeca Film Festival and Twitter.

The minute after I watched the first episode of The Wire, I found myself asking: Is this the best show ever to be on television? (It was.) So of course I’ve followed David Simon’s work through his post–Hurricane Katrina New Orleans series Treme and later his ’70s-porn-era New York City drama The Deuce. Like me, Simon once paid his rent primarily as a journalist, but he leveraged his newspaper years into creating TV drama that, if anything, was as good as (or maybe better than) the best journalism I’d seen until then—capturing crime and social problems with a consistent recognition that our real-life heroes, like our real-life villains, have a gift for being their own worst enemies.

On Twitter, Simon has won a unique reputation as a prolific hurler of baroque insults targeting those he believes are poisoning the social media platform. After the 2016 election, people in my feed would tag me regarding Simon’s tweets comparing both Twitter trolls and genuinely monstrous people like Syria’s President Bashar al-Assad to Hitler, Nazis, fascists, and the like. Some clearly hoped that, as the creator of Godwin’s law, I might render a verdict against him as a Godwin’s lawbreaker, but I had already written that informed, knowledgeable Nazi comparisons won’t earn my criticism. At its best, I saw Simon’s frequently colorful exercise of his First Amendment rights as high-quality performance art.

Twitter’s management took a different view and has suspended Simon twice this year so far for his invective, some of which is aimed at Twitter’s masters. When he came back for the second time, I asked him whether he’d be willing to be interviewed about Twitter and social media generally. He quickly agreed, and after a phone call working out the details, we began this interview in Twitter’s direct messages, shifting midway to email. Our exchanges ranged from talk about what Twitter is doing wrong to larger social ills that seem to be undermining American democracy. This transcript has been lightly edited and condensed.

Mike Godwin: You’ve relished using Twitter to challenge trolls and racists and other objectionable tweeters, only to get suspended—“sent to Twitter jail”—more than once. You’ve quit Twitter, but now you’re back. Can this relationship be saved?

David Simon: Not much of a relationship, I gotta say. There is no human intellect with which to engage, just the Great Algorando [Simon’s word for the mystery personnel superintending Twitter’s algorithmic search for policy-violating tweets] in the Twitter basement, which is an epic fail in terms of creating any ethical paradigm that anyone should respect. Twitter has no answer to being a repository for all manner of libel, intolerance, and organized disinformation. Nor do they seek an answer. It is a platform that thrives on today’s open warfare between fact and falsehood. Instead, they police decorum. How does anyone seriously engage with that? And with regard to my own experiences, I certainly doubt CEO Jack Dorsey or anyone capable of voicing his logic is going to get on the phone or fire off emails in order to muster a coherent explanation. I’m not holding my breath, anyway.

You’ve written that Twitter can’t just bail on the issue of content and let the whole public forum be poisoned by bad speech, but you’ve also said you’re a strong First Amendment/free speech guy. How do you square these two ideas?

I can’t conjure a social media platform moving at the speed of Twitter—with the limited human resources that they are willing to support on their current profit margins—that can actually regulate and police disinformation, libel, and harassment. Not well, anyway. To do that job, they’re going to need—dare I say it—some trained journalists. Editors, by name. Fact-checking is labor-intensive, and it’s skilled labor. So it isn’t happening—not in the near future, not for the most fundamental responsibility of any media site: policing disinformation and preserving accuracy. So, OK. It’s going to be a free-for-all, and the lies and affronts will be across the internet before the truth gets its boots on. That’s the given.

But if that is, in fact, the given, then the last thing that Twitter should be doing is policing decorum, or trying to leach hostility from the platform. Why? Because the appropriate response to overt racism, to anti-Semitism, to libel, to organized disinformation campaigns is not to politely reason with such in long threads of fact-sharing. All that does is lend a fundamental credence to the worst kind of speech—which, grievously, seems to be the paradigm that Twitter prefers at present. It’s a paradigm that offers two basic choices: Ignore the deplorati—which allows the dishonesty or cruelty to stand in public view and acquire the veneer of credibility by doing so. Or worse, engage in some measure of serious disputation with all manner of horseshit, which also grants trash the veneer of credibility.

In 1935, the reply to Streicher or Goebbels quoting The Protocols of the Elders of Zion and asserting that Jews drink the blood of baptized Christian babies is not to begin arguing that “no, Jews do not drink Christian baby blood” and deliver a long explanation of The Protocols as a czarist forgery in chapter and verse. The correct response is to call Julius Streicher a submoronic piece of shit, marking him as such for the rest of the sentient, and move on to some more meaningful exchange of ideas.

So it is with Twitter. If I’m gonna exist there, I’m not going to let the most rancid shit stand on my feed as if it’s plausible, but nor am I going to treat it as deserving of serious argument. I’m gonna call it out quickly and block—and do so with as much flair and performance as I can so at least the process won’t be boring. But effectively, what I am doing is marking the [land mines] for the rest of the platoon to block as well. It’s a permanent, quotidian task—but given that Twitter is not going to become a responsible news organization that fact-checks the commentary and regulates it on that basis, what else can we do?

I agree that human beings are better than algorithms (at least for now), but there’s lots of evidence that human beings screw up these curation issues, too, isn’t there? Even if Twitter staffed up with people (even journalists!) to respond to complaints about terms of service violations, wouldn’t there still be complaints about bias and unfairness?

As there always are—even in the most consistently edited media and on the most carefully regulated platforms. Everyone is arguing about what gets play on the New York Times op-ed [page], or in the Letters column. But I’d rather take my chances arguing with and defending myself to a sentient human than be arbitrarily tagged by a flat-brained algorithm. If you are going to police your site, then make the effort to at least entertain an appellate process that helps you establish the basic context to proceed with banning people or censoring opinion. To this moment, having been banned twice for comic hyperbole, I’ve not had either a written reply to my appeal of the absurdity or a conversation with any living soul at Twitter.

In fact, the cheese-eating mooks actually took down one of the tweets unilaterally without ever engaging me. I wouldn’t delete the tweet, and they would not return me to the platform until I did. So, OK, fair enough. I was willing to quit and just leave the thing up there as evidence that it was neither harassment nor threat. But no, rather than engage on the merits, they quietly deleted the tweet after several weeks while leaving the form demanding that I do so on my account. Someone alerted me that it was gone, and after checking repeatedly and seeing as much, I finally deleted what wasn’t there, if only to tell Jack Dorsey once again that he deserves boils.

Why are they not honing a process by which they might address the excesses of their algorithmic interventions? Or defend those interventions? Because they’re not good at this stuff. And their programmatic response sucks. And if they have to explain themselves in a cohesive and thoughtful way, they’re going to fall on their ass. They can’t explain it in detail—as it is actually applied on a case-by-case basis—so they won’t. On the ethics of all this, Twitter is a fucking mess.

Twitter gets savaged for hosting obvious trolls, but since the elections of 2016, Facebook has been taking a lot of heat for so-called filter bubbles, echo chambers that intensify extreme opinions, plus its news feed, on the theory that the algorithmically picked news sources push you toward extremes. But Facebook’s de-emphasizing the news feed has forced news sources that rely on internet advertising, like Slate, to take a hit.

I barely use Facebook, and only then for my private connections with friends and family. I’m there under an assumed name. And I’m actually less of a student of and participant in that particular agora. Fact is, after years of resisting it and seeing it as a flawed vehicle for arguing or discussing anything seriously, I got on Twitter as a means of promoting my television programming and, occasionally, some bit of prose work on my blog. Or of highlighting other content that I thought had merit. It was in the last election cycle that I began to realize, to my chagrin, that public rhetoric is now arriving at light speed on these social platforms. They are already in effect the first news cycle, and with regard to the worst kind of spin and rumor, there is often scarcely a second news cycle in which facts ever catch up. Or so it seems.

So I’ve been drawn into the national argument where it seems to begin. But again, it’s a corrupted platform. If you can’t sever the bots and professional trolls and find a decent argument with someone else who is really wrestling with stuff, then to what purpose? The best you can do with a troll or bot is use them in the same fashion that Edgar Bergen used Charlie McCarthy—as a rhetorical prop. If you tell me that Facebook is any different, then maybe I should dump Twitter and die on the other hill. But either way, the idea that all of these platforms are subject to political manipulations and agitprop is, by now, obvious. There are no gatekeepers. There is no commitment to police for accuracy. The metadata delivered by users can be repurposed into political weaponry by interested parties. And they have no viable institutional response to these realities.

It’s as if they can’t solve murders, robberies, and rapes in this town, so rather than confront the long and hard journey of real police work, the folks at Twitter are going to make this the least-jaywalkingest ville in Christendom.

But to be clear, I have no interest in encouraging anyone in any authoritative capacity to ban speech on a platform that has become de facto part of our national agora. And given how miserable Twitter has thus far proved itself at being capable of discerning even sarcasm or comic hyperbole, and how tolerant it is as a platform for the overt and organized slander and libel of individuals and cohorts, tech people are in fact the last people I would trust to regulate speech.

[At this point, Simon and I both realized we were getting more essayistic—so we migrated to email.]

I want to come back to the idea of employing more journalists, more editors—especially since those jobs are scarce. It’s great if more reporters and editors are working, but doesn’t the whole idea of social media, the whole success of it, spring from “disintermediation”? From being able to step up and say something to the public without having to get an editor’s approval? The way we talked about the internet in the early days—not just social media but the internet itself—was that it opened the door for everyone to speak to large audiences. That said, it was obvious from the outset that some high percentage of the speakers was going to end up being dopes. Or worse.

If I had possession over Judgment Day and the resources of Twitter, here is what I would do:

I would not throw open my review process to fretting about name-calling or comic hyperbole or even exchanges of abject contempt and disgust because, as we all know, there is plenty on the platform that deserves a hailstorm of contempt and disgust. Instead, I would use my limited resources to open the gates to complaints about intellectual frauds, libels, and disinformation campaigns. And I would empower Twitter users to be, if not the ultimate arbiters of these issues, then a force, in a fundamental way, that begins to self-police the site.

How? Same way as users now report what they perceive to be “offensive” content, I would demand that they raise their game and raise the stakes to reporting that which can be empirically demonstrated to be false. There’s your disintermediation. The users themselves deliver complaints that go to the heart of Twitter’s fundamental weakness: This is a libel. That is a lie. Let them call it out and deliver the empirical proof. Let them be the police and then have Twitter—in conjunction with some in-house research equivalent of Snopes or some other fact-checking forum—be the court of jurisdiction for claims that originate with Twitter users themselves. That limits Twitter’s responsibility to only the fact-checking that is requested organically by users, not extending its responsibility over the whole of the content. It also makes it imperative for objecting users to bring intellectual and journalistic rigor to their complaints, further girding the process. And it creates a standard that makes it possible for Twitter to remove those who can be evidenced to be not merely in error about facts, but purposefully and repeatedly employing libel and disinformation.

And here is the stick I would employ:

If it can be demonstrated that a user’s content is subjective, well, that is a function of rhetoric and beyond any sanction.

If it can be demonstrated that a user’s content is empirically false, but there is no evidence of an intent to mislead or libel, then a request to remove the falsehood could be undertaken and the user could be given the choice of removing the tweet or self-correcting publicly.

If it can be demonstrated that a user’s content is part of a continuing and persistent pattern of employing disinformation, fraud, or libel, then the account can be suspended.

Isn’t this a more fundamental use of limited journalistic resources than to stop David Simon from telling some racist troll he ought to consider succumbing to a nonlethal skin disorder? And if Snopes can do this as an online resource, how the fuck is it so elusive for Twitter?

Snopes sometimes seems to be limping along as a nonprofit based on donations (maybe some big donations from the companies, like $100,000 from Facebook in 2017, but not too big). But subsidizing Snopes seems like something that would be well within even Twitter’s uncertain profitability.

Great to hear. Let them do it. Immediately. Just [recently] James fucking Woods showed up on Twitter to once again declare George Soros to be a Nazi collaborator. Never mind that Snopes has thoroughly and impartially dismissed this claim—Woods is still rambling around on the platform repeating the big lie. Wouldn’t it be great if an in-house component simply flagged that tweet, alerted the bitter little fuckmook as to its fraudulence and gave him the opportunity to remove it himself or, even better, asked him to post a corrective and apologize like a grown-ass human? And when he fails, suspend his libelous account. Now there is an actual deterrent to using Twitter for organized libel and disinformation.

I’ll give you a couple of examples where I think the social media platforms have done good in a way that traditional media never have managed to do (although there’s been some symbiosis here). The first is #MeToo. It’s suddenly become possible for many more women’s voices to be heard. (And those of men too—as with Kevin Spacey.)

The second is #BlackLivesMatter. Everybody who knew anything about policing and criminal law, both in cities and in small-town and rural environments, knew that people of color were more at risk in encounters with police. But now, all of a sudden, we can publicize police violence, or even explosions of insane verbal racism. Isn’t the radical empowerment of individuals who need it some kind of a balance for all the dumbass things other Twitter twits do?

Indeed. And you’re arguing in some of this for the power that exists in the ubiquity of the smartphone, with its instantaneous video capability. No disputing that revolution, and it is overwhelmingly for the better to have first-generation evidence of what is occurring with regard to authoritarian action or to off-the-cuff remarks or affronts by people. Sure.

But be careful about claiming that unfettered access by anonymous complainants to social media platforms has done a singular service to the real work of #BlackLivesMatter or #MeToo. With #MeToo in particular, I would argue that traditional journalism—with its elaborate construct for proving accusations, documenting patterns of behavior, and confronting offenders and knocking down their false counterclaims—is what delivered Weinstein, Moonves, Toback, Cosby, and others. Yes, the initial spark may be a bubbling of complaint—some on the record, but much of it anonymous—on social media. But then the rigor of journalistic investigation establishes the credibility of the narrative. What the New Yorker and the New York Times did with Weinstein was magnificent, and it was definitive in a way that the rumored rage of social media can never be. They worked the claims and confirmed and published the totality of the story. And there was a totality. Regrettably, I can point to some case studies in which the level of accusation, even if we credit the claims for what they are, actually does a disservice to #MeToo by flattening all allegation—however important or however modest—into the same claimed affront.

Same thing with Black Lives Matter. We have reached a point where every act of police violence or every filmed police shooting has its turn on social media. This is for the better. But we have also reached a point where it’s clear that not every act of police violence or every police shooting is unjustifiable. Often, law officers are heedless, brutal, indifferent, and even sadistic. That is now rightfully grist for new media, and as a result it is an issue being highlighted for address by old media. Sometimes, the police are in a fight not of their choosing, in which case it is not police brutality if the cops win the fight. In Baltimore, we just went through that pregnant social media pause when city police, who have all kinds of deserved credibility issues, said they shot someone who was shooting at them. The rumor mill began to churn a bit until the department released the video. And yes, this time the police were returning fire in a running gun battle with one of the pursuing officers wounded.

Point being that no one sentient doesn’t see the value in all of this first-generation video content now being delivered. And social media is the delivery platform, to be sure. But what comes behind the delivery of that material still matters as much as it ever did: particularly when you become aware of how even video content can be manipulated, edited, and deconstructed by interested parties. I am as exhilarated as anyone by the digital revolution and what it allows ordinary people to acquire of the world and deliver with immediacy. But I am also intent on what an impartial, professional journalist acquires when he corroborates and contextualizes the video.

I like the idea that users themselves can be, and are, more empowered to answer false facts and raise questions about fake news—supplementing or complementing the traditional press. But Twitter and Facebook in particular are feeling pressured to “do something.” Often by governments. Many conservatives are absolutely certain that the platforms are biased against them and are censoring them. Progressive activists are equally certain that they’re the targets of censorship. After Brexit and the election of President Trump, governments around the world are looking for someone easy to blame for the weird political moment we’re in. Internet platforms are new, so they’re an easy target. The way TV used to be. And movies and radio before that.

OK. I don’t dispute that Twitter and other such platforms are being bashed from all points of the political compass. Same for old media for all of its history. That goes with the job.

I’m saying they have responded by doing the wrong fucking something. They are responding in such a way that they are, in effect, normalizing the worst kind of organized disinformation and hate speech. They have set up a both-sides construct that is disturbingly reminiscent of the Trumpian reaction to Charlottesville.

It’s a kind of abdication.
