Trump and Twitter Go to War


S1: On Tuesday, this little hyperlink popped up on Twitter under two of the president's tweets, with a blue exclamation point. It said, "Get the facts about mail-in ballots." And if you clicked on the link, it led to news articles. They explained that the president's tweets, which claimed things like, and I'm quoting here, "there is no way, zero, that mail-in ballots will be anything less than substantially fraudulent," were, in fact, unsubstantiated. When you saw that, what was your reaction?

S2: Well, it's always surprising when Twitter takes action. Twitter is the slowest-moving of our tech companies, and their M.O. is to consider things for many years and then sort of come up with a half measure after the fact.

S3: That's Casey Newton. He is the Silicon Valley editor for The Verge, and he's written about Twitter's decision-making process a lot over the past few years: how the company has dealt with online harassment, how they've been slow to remove people like conspiracy theorist Alex Jones, and how, since the 2016 election, Twitter has been trying to figure out what to do when the president tweets something inaccurate, especially about voting.

S2: And I think a lot of people had sort of given up on the idea that Twitter would ever take any action because there had been so many previous Trump tweets that seemed to some people to have gone over that line.

S1: Did you realize, when you saw that notification, like, oh man, this is gonna be a thing?

S2: Yeah. I think from the moment it appeared online, we all just braced ourselves for the Trump outburst that was sure to follow. And, you know, it arrived soon enough.

S1: The president’s reaction to Twitter’s fact check was two pronged. First, there was the stuff that Casey was expecting. Trump’s tweets saying big tech was censoring him before this year’s election and predictable news coverage. Then there was this other potentially much bigger deal.

S4: Thank you very much. We’re here today to defend free speech from one of the gravest dangers.

S1: Last night, just two days after Twitter appended fact-checking links to his tweets, Trump signed an executive order.

S4: Therefore, today I’m signing an executive order to protect and uphold the free speech rights of the American people.

S1: And this order could undo the legal underpinnings of how Twitter and other social media companies do business.

S4: My executive order calls for new regulations under Section 230 of the Communications Decency Act to make it that social media companies that engage in censoring or any political conduct will not be able to keep their liability shield.

S3: Today on the show, Casey breaks down the fight between Twitter and President Trump: what it means when a social media platform tries to fact-check the president, and how Trump's response has the potential to change the Internet for all of us. I'm Lizzie O'Leary, and you're listening to What Next: TBD, a show about technology, power, and how the future will be determined. Stay with us.

S1: One of the reasons why Casey Newton was surprised at Twitter’s decision to add a fact check to the president’s tweets is the company’s history. It’s not just that they tend to drag their feet when it comes to difficult decisions. Twitter is a much smaller company than, say, Facebook, and they have actively tried to do as little moderating as possible on their platform.

S2: Its early employees liked to say that Twitter was the free speech wing of the free speech party. And while that did enable a lot of free debate, and while employees thought it was really valuable for movements like the Arab Spring and Black Lives Matter, Twitter also developed a huge problem around harassment, particularly for women and people of color, that just took them forever to get under control.

S1: Yeah, I've gotten rape threats. I've gotten people saying we know where you live.

S2: And it's so, so disturbing. Twitter mostly turned a blind eye to those complaints in its early days. But finally, after sustained public pressure, they started to turn that around. And so in recent years, we have seen them become more and more comfortable with the idea of aggressive moderation of some of those threats and other bad posts.

S1: Yeah. How do they compare to other tech companies that have to deal with similar issues? This is something that does happen on YouTube, on Facebook, et cetera.

S2: So one of the things that we've seen over the past few years is that the big social networks have been increasingly willing to add labels underneath problematic content. So if you go on YouTube and you search for a video saying that the earth is flat, you'll find it, but it'll have a link to Wikipedia underneath the video telling you that that is a hoax. On Facebook, they have a third-party fact-checking program now. So if some piece of content goes viral and it's full of lies, a fact checker will look at it and then Facebook will place that label underneath. Twitter has been the slowest of those platforms to experiment with these labels, but it has started adding them. And it really stepped that effort up earlier this year as the COVID-19 pandemic went around the globe, and I think it got some confidence that it could do that. And so this next move comes from an announcement they made in January that they were going to try to stop the spread of election-related misinformation, particularly misinformation around the voting process and how people vote.

S1: And that is why those six words, "get the facts about mail-in ballots," ended up appended to Trump's tweets about mail-in voting being ripe for fraud. But it wasn't just those tweets that set off this latest debate over speech on the platform.

S3: On Tuesday, Trump tweeted a debunked conspiracy theory about MSNBC and Joe Scarborough's involvement with the death of a former staffer. The widower of that staffer had asked Twitter CEO Jack Dorsey to remove the president's tweets, calling them horrifying lies. But the tweets stayed up and did not get a fact check clarifying that the theory had been debunked. I asked Casey to take me inside that decision-making process, to explain what gets a fact check and what doesn't.

S2: Let's start with the mail-in ballots. One of the things that the platforms have decided, and I would include YouTube, Facebook, and Twitter in that, is that the voting process should remain sacred and that their platforms should not be used as tools to spread misinformation around that process. So that includes things like a post that says the election is on a day that it isn't, or that discourages people from voting, or says your vote doesn't matter. The platforms have sort of agreed that they're going to take down any of that content, in part because it's really easy to draw a bright line around. Right? You are not actually ruling out that much content when you tell your user base, you can't say the election is on the wrong day. So when Trump comes along and starts this disinformation campaign that voting by mail is illegal, it actually just crosses that bright line.

S1: But when it comes to the other set of tweets, there are different rules at play.

S2: Now, when it comes to Joe Scarborough and this ridiculous conspiracy theory that he was involved in his staffer's death, on one hand, that is super harmful. It's super offensive. It's the sort of post that makes us all say, gosh, how could you leave something like this up? But if you work at Twitter, what are you saying? Is the rule that you cannot speculate about crimes? As one former Twitter employee told me, well, say goodbye to every true crime podcast and tweets about them. So that gets tricky. And it gets more tricky because here you have the president of the United States talking to a former member of Congress. These are public figures, right? Public officials, current and former, engaging in what is effectively political speech, because, of course, the whole reason that Trump is going after Scarborough is that Scarborough is one of his few Republican critics. Now, I think there's a good argument that you don't want the social platforms to try to determine the boundaries of free speech on the Internet. When it comes to politics and political speech, they've said, we're going to stay out of it.

S1: When we think about sort of who gets what in terms of speech on a social network, I mean, there is this question of, like, is the president of the United States just different? Does he get his own rulebook?

S2: The platforms have tried to make that the case. They have done everything they can to not act because, again, there's very little incentive for them to do that.

S1: Well, it's a historical record.

S2: Yeah. And Twitter has said in the past that they believe in making an exception for tweets that they would remove for other users when it comes to public officials, because they think that there should be a newsworthiness exemption. And we make this exemption in other cases. If you want to post a photo of the My Lai massacre on Facebook, you can do that, because that photo is seen to have historical significance, and so Facebook has decided, we will leave that up under an exemption. The president going on Twitter and making some sort of vague, indirect threat doesn't feel real good. But Twitter has opted to leave it up because the thought is, well, it's probably better that we know what this guy is thinking than leave it all in the shadows.

S1: At the same time, they have taken down videos from both Bolsonaro and Maduro in March. I mean, is that within the realm of possibility for Trump, or is it still American company, American president?

S2: I think everything is in the realm of possibility. And I think what you have seen is a slow ratcheting of these platforms taking more action than they had been in the past. And I think one reason why Trump reacted with the outrage that he did is because he has identified something important that has power over him. So he has to intervene and stop it now.

S1: That intervention came Thursday, when Trump signed the executive order. The order takes aim at some of the protections these platforms have relied on, protections that are part of something called Section 230 of the Communications Decency Act. More than any other law, it's enabled the growth and popularity of social media platforms. That sounds wonky, but it's really, really important to these companies. Without it, Facebook, Twitter, and YouTube as we know them wouldn't exist.

S2: This is a law that enables platforms to host a maximum amount of speech without worrying that they will get sued into oblivion if they moderate it. And it's explicitly written to go beyond the First Amendment, so that platforms can remove things that are permissible under the First Amendment but that you or I would not want to see on a social network. Right? So, I don't want to see Nazi content on Facebook. Section 230 is what enables them to go in and remove it. Before Section 230, platforms that existed in the olden days of the Internet, like AOL or Prodigy, would get sued. They would just face these lawsuits that were making content moderation impossible for them, that were making them legally liable for what their users posted. So we had this law that essentially enabled the Internet that we enjoy today.

S1: There’s this funny tension here to me because conservatives often complain that the platforms are infringing on free speech. But the First Amendment protects you from the government, not from private companies. Couldn’t you actually make an argument here that an executive order infringes on the, quote, unquote, speech of the companies?

S2: Absolutely. Like, up is down. Left is right. We are fully through the looking glass here. Like this is about the federal government trying to infringe on the speech rights of a private company. That is the entire story. And it’s really the only way we should be talking about this.

S1: Why do you think people look at these companies as if they are public utilities and therefore, you know, First Amendment protections should apply when really technically a private company can say, yeah, no, you can’t say that here.

S2: You know, in the United States in particular, we grow up with this free speech ethos that says, you know, I can say whatever I want, and it becomes part of our identity. And when you start talking, and I'm using finger quotes, "on the Internet," it feels like the same thing. So it sort of feels like the same rules should apply. And because Facebook, Twitter, and YouTube dominate our consumption of the Internet, for a lot of us they don't really even feel like private companies anymore. A conversation that you're having on Facebook feels like a conversation you're having in a public park, right? Or maybe a shopping mall would be a better example. But I think Americans are just very uncomfortable with the idea that the founder-CEOs of these companies, you know, who in many cases have total control over the company, or at least, in the case of Mark Zuckerberg, has total control. People are uncomfortable with the idea that Zuckerberg can tell them what they can say or can't say. And by the way, so is Mark Zuckerberg. Like, Mark Zuckerberg also doesn't want to have to weigh in. But if he is not going to do it, someone has to do it. And so then the question becomes, who has the power to regulate speech?

S1: It can be hard, I think, to put all of this stuff in context in real time, to have an understanding of like how big a deal is this? Is it this week? Is it something we’ll remember in a month or five years from now? Will we look back on this week as something pivotal and momentous? Do you have any sense of kind of where this falls in the spectrum of these companies histories?

S2: Well, personally, I'm planning on forgetting all of 2020. So this is where I'm at emotionally. So I hope in five years I don't remember this conversation. Nothing personal. But here's what I can tell you: if this leads to a rewriting of Section 230, either one that Congress does or one that is enforced by the courts under this executive order, then yeah, that'll really change the Internet. And the way that it's most likely to change the Internet, by the way, is not that we'll suddenly have a lot more free speech; it's that we'll suddenly have a lot less, because companies will not be able to legally moderate speech in the way that they've been doing it. Now there's a senator, Josh Hawley, who wants to get rid of Section 230 because he basically thinks content moderation should be illegal. And what he has never reckoned with publicly is that if that becomes the case, then Facebook and Twitter are all going to become overrun with Nazi content, pornography, and all sorts of other things that will just kill the businesses. Right? Because nobody wants to spend time in those spaces. So it still feels like most of the debate that we're having about this is super bad faith, is not leading anywhere productive. But at the same time, it is happening, so we have to pay attention.

S1: We're in the middle of a pandemic. So much information is flying around on different platforms. Do you think, at the leadership level of these various platforms, the coronavirus pandemic has imbued any of this with a little more sense of stakes or urgency?

S2: Sure. And the platforms have actually been unusually willing to intervene when it comes to misinformation about the coronavirus. This has been a place where they have been willing to be arbiters of truth and to get in there. And, you know, if you make a video that says wearing a mask is going to activate the virus, as one hoax video famously did, they're going to remove that from their platforms, because they realize how high the stakes are. I think it's really interesting to think about social networks during the pandemic and during the Trump administration, because we see how there's been a lot of really good information spreading on Twitter in particular, but also Facebook and YouTube, from doctors who are sharing their experiences, who are sharing what they're learning in real time. And because of the Internet, because of this free and open exchange of ideas that we have, a lot of really good information is surfacing. And in many cases, it's better information than we're getting from the federal government. So while, you know, I spend most of my time criticizing social networks, I think it's important to recognize that there is an upside to having these open platforms where people get to say whatever they want. Which, of course, is the free speech principle that the country was founded on.

S3: Casey Newton, thank you very much.

S2: Thanks for having me.

S3: Casey Newton is the Silicon Valley editor for The Verge. That's it for the show today. TBD is produced by Ethan Brooks and hosted by me, Lizzie O'Leary, and it's part of the larger What Next family. TBD is also part of Future Tense, a partnership of Slate, Arizona State University, and New America. This year, Future Tense is collaborating with the Tech, Law, and Security Program at American University Washington College of Law on the Free Speech Project, an editorial and event series examining the ways technology is influencing how we think about speech. On June 2nd, at 4:00 p.m. Eastern, the Free Speech Project will host an online event on the promise and peril of regulating political advocacy on social media. For more information and to RSVP, visit newamerica.org/events. What Next: TBD will be back in your feed on Monday. Have a great weekend.