S1: This ad free podcast is part of your slate plus membership.

S2: Welcome to If Then, the show about how technology is changing our lives and our future. I'm Will. And on today's show, I'm going to be joined by a key player in the ongoing drama around Facebook: the company's former chief security officer, Alex Stamos. Hey, everyone, welcome to If Then, coming to you from Slate and Future Tense, a partnership between Slate, Arizona State University, and New America. We're recording this on the afternoon of Tuesday, November 20th. My co-host, April Glaser, is off again this week. She's been on assignment in Butte County, California, covering the devastating wildfires there. Today, we're going to discuss the fallout from last week's explosive New York Times exposé about Facebook.

S3: It's a story that has changed the way many people look at the social network and its leaders. The headline was Delay, Deny and Deflect: How Facebook's Leaders Fought Through Crisis. It painted CEO Mark Zuckerberg and Chief Operating Officer Sheryl Sandberg as by turns out of touch, image-obsessed, and mercenary in their response to the growing problems of misinformation and foreign election interference on their platform between 2016 and 2018. Details included how the company privately and publicly minimized the prevalence of Russian misinformation, even to its own board. As public opinion turned against it, Facebook employed more ruthless corporate tactics, hiring a D.C.-based consultancy specializing in campaign-style opposition research to plant negative stories about other tech giants, as well as to try to undermine anti-Facebook activist groups by linking them to the financier George Soros. Alex Stamos has been at the center of this story, both as a critic and an advocate for Facebook. The Times story revolves partly around reports that Zuckerberg and Sandberg stifled or somehow downplayed his revelations that their platform was still not free of Russian interference in the months after the 2016 election. We'll get his side of that story, as well as his perspective on Facebook's missteps and what he thinks the public and media get wrong about the company. We'll also talk about what some solutions to its problems might look like, including, potentially, government regulation. And finally, I'll be joined again by our producer, Max Jacobs, for Don't Close My Tabs, some of the best things we saw on the web this week. That's all coming up on this week's If Then. It's been a bad year for Facebook. Not a bad year like they missed their revenue growth targets in a quarterly earnings report, but bad like they've been accused of contributing to genocide in Myanmar. They've been hauled before Congress to explain how they let tens of millions of users' personal information get harvested and exploited by Cambridge Analytica. They were hit by a data breach in which the profile information of some 29 million users was stolen by parties that remain unknown and at large. And last week came a New York Times investigation that paints the company's leaders as more concerned with their public image than with owning up to the scope of Facebook's problems. Along the way, several key executives have departed the company, citing differences with leadership, including our guest today, Alex Stamos. Stamos served as the chief security officer for Facebook from 2015 until August of this year. Prior to that, he was the CSO at Yahoo. Currently, he's a Hoover fellow and an adjunct professor at Stanford University, and the director of the new Stanford Internet Observatory, a technical research group focusing on understanding and mitigating abuse of new technologies. After the New York Times story came out last week, Stamos penned an op-ed in The Washington Post titled Yes, Facebook Made Mistakes in 2016, but We Weren't the Only Ones. Alex Stamos, welcome to If Then. Thanks, Will, glad to be here.

S4: Yeah, I'm glad to have you on here. We've been talking about doing this for a while, and I think this is a great opportunity to reflect on everything that's transpired at Facebook in the past two or three years. Well, not everything, but the stuff that concerns your role as chief security officer in particular, and where that leaves us now. So I wanted to dive right into the New York Times piece, which, of course, is the subject of controversy at the moment. The crux of the story to me was this: Facebook has, over the past couple years, presented itself as an idealistic, mission-driven company that's been blindsided by the ways in which bad actors have abused its platform, and that is trying earnestly to confront those problems as they come up. They always say they're taking it seriously, and they've acknowledged that they've been too slow to respond in some cases. But the Times piece painted a little bit of a different picture. It suggested that leaders like Mark Zuckerberg and Sheryl Sandberg were not just slow to grapple with the consequences of some of these problems, but actually wanted to downplay them. And when you brought them up, they got upset and were concerned about how it would look, not only to the public, but maybe to Republicans on Capitol Hill, or that sort of thing. So I wanted to get your take on this. Which vision of Facebook leadership resonates with you? Are they earnest idealists who have just been blindsided, or have they been calculating and more concerned with their image, with looking bad, than with really tackling the problems?

S5: Well, it's a serious question. I think from the outside, people are kind of confused about what's going on, and that dichotomy you described is the challenge. What's going on at Facebook is there are a lot of people who care a lot about the abuse of the platform and stopping it. There are also people whose job it is to look out for the company from a comms and a legal and policy perspective. And I think what you're seeing from the outside is that sometimes the groups that work on safety and security are able to get their message out, are able to drive change internally, and you'll see a burst of activity. And sometimes the people who are really worried about the way that the company is seen are winning that battle. And so you end up with this kind of inconsistent view of what's going on. And for the most part, I think that's actually unfortunate for the company, because the truth is that a lot more was happening, especially in the late 2016, early 2017 time period, than is publicly known. And because of these concerns about not inserting itself into the controversy after the election of Donald Trump, I think Facebook missed this huge opportunity to demonstrate that the company is part of the solution, not part of the problem. And because of that missed opportunity, now everything else that comes out is seen through the lens of the idea that Facebook doesn't care, which is not accurate, at least for the people that I worked with.

S1: Right. Okay. So The New York Times story opens with a scene of Chief Operating Officer Sheryl Sandberg seething because you had brought up to the Facebook board of directors the problems that you saw with suspicious Russia-linked activity on the platform. She felt blindsided by that and said, "You threw us under the bus," according to the story. Since the story came out, you have sought to clarify what really happened there. I think a lot of people took away the idea that she was dissuading you from investigating. You've said no, she wasn't dissuading you from investigating, but it did make it look like she was more concerned with the company's image than with sharing what could have been critical information about what had transpired.

S5: So I have both compliments for the Times authors and some frustration. For the things that I personally saw, I have no factual objections to what the Times wrote. I don't remember her saying something like "threw us under the bus," but certainly that argument happened. You know, I was not expecting to relive one of the more difficult professional moments of my life on the front page of The New York Times, but that's fine. One of the challenges I have with the Times reporting is that it mixes up the timelines a bit. One of the things you've got to understand is that this was not just a rolling disclosure externally; Facebook learned in various ways about what happened in 2016. So during the election of 2016, we saw activity that we attributed to the GRU, the main intelligence directorate of the Russian military. That activity was reconnaissance activity that ended up expressing itself as breaking into email accounts that did not belong to and were not controlled by Facebook. Often what Facebook will see is recon activity, where intelligence agencies are looking into potential targets. So that information was reported to the FBI, and that was kind of the model under which we operated back in the day: companies would give information to the FBI, especially if the targets were American citizens, and it was up to the government to try to figure out how to do both victim disclosure and possibly public disclosure. So there was no public announcement of that during the 2016 election. And then immediately after the election, there was a big look into the fake news crisis and kind of an analysis of what was driving what people were calling fake news. I really don't like the term fake news, because, one, obviously it's been co-opted by the president to mean real news that he dislikes, but also because even when it was being used at that time, it was incorrect, in that most of the propaganda that's being pushed is not falsifiable information. It is expressions of very aggressive political positions meant to drive divisive narratives. So one of the problems we've had is this rolling discovery and therefore rolling disclosure externally. And in each of those moments, I think we missed an opportunity at Facebook to come out and say: This is everything we know. We're not done yet. We're going to keep on going. And because of that, people start to push back. Now, the Sheryl situation, that was much more specific. At that point, Sheryl knew about, obviously, all the stuff we had found. We had put together a plan to announce it in September 2017, and I'd gone to the board, as was my responsibility as CSO, and briefed them on what was going on. In that briefing, I told them that I didn't think this was over yet, that there was no way for us to determine what percentage of Russian activity we possibly could have found, since we weren't getting help from anybody. The government wasn't helping us. The other tech companies, we would send them information and get pretty much nothing back from them. We really had no external indication of whether we had caught 90 percent or 5 percent of Russian activity at that point. And so I expressed that to the board. And what Sheryl was angry about the next day was that she felt that that message, this is not over yet, was not something that she really understood I was going to say. Which is reasonable. I did not enjoy getting chewed out.
But the truth is, this was an incredibly tough situation. These were a lot of people under a huge amount of stress. And, you know, sometimes when you're in the NFL, you've got to take a hit, and that's what happened here. She later came to me and kind of apologized, and we worked it out, and we had a good working relationship from then on. But I think people are overreading the Times piece, taking that moment to be about covering things up. It's more about internal expectations management and who was informed and in the loop. I think it's easy to over-pivot on that specific anecdote and come up with an assumption about what was going on that's not true.

S1: Okay, fair enough. So let me ask you about a different aspect of that Times story. This was the aspect where it said that on multiple occasions and in multiple different ways, Facebook downplayed problems like misinformation, hate speech, and Russian election interference out of concern for how it would look politically, and in particular out of concern for riling up Republicans or getting the Republican majority on Capitol Hill angry at Facebook. Some of that was attributed to Joel Kaplan, Facebook's vice president of global public policy. How did that play out for you? I mean, were there circumstances in which you were aware that political considerations were part of the calculations here, as opposed to just considerations about what's best for users or what's the right thing to do?

S5: The people we often had to negotiate with to put details in these reports, or in the blog posts that we did, was the policy team. And they did sometimes push back on that. Nobody ever expressed to me, we're doing this because of the Republicans. I do think that before the election, the public discussion of the overall fake news problem was probably muted by the fact that the company did not want to be seen putting its finger on the scale. And when I look at all of the activity in 2016, it feels like there were a bunch of very powerful institutions, including Facebook, but also including the mass media and the FBI and the White House, who were all assuming Hillary was going to win. All of those groups have a significant portion of people who wanted her to win. And there was a lot of decision-making based upon the theory of, we can take care of this later, after Hillary's president, and everything will be fine. That's what you had with the FBI not coming out and disclosing all of the Russia investigations going on. That's what you saw in the media, really amplifying anti-Hillary messages, including anti-Hillary messages planted by the GRU themselves. And you saw it at Facebook, trying to kind of quietly take care of this problem and not come out and make a statement that might be interpreted as saying support for Trump is a fake news phenomenon. And basically, Hillary lost and the world changed. And a lot of people have looked back at those decisions and said that perhaps they were trying to over-pivot toward a neutrality that meant that they weren't being neutral, and that if the situation were reversed, this information certainly would have been disclosed. And I think that's something the companies are going to have to continue to deal with in these situations. No matter what you think the political impact is going to be, Facebook and the other tech companies are going to have to say: this is the standard by which we will decide whether we disclose something when we figure it out. Because clearly, trying to do the "we're going to hold our information so that we don't put our thumbs on the scale" thing, even if you were in the right place making that decision at the time, is always going to look like a cover-up later, no matter what the outcome is.

S6: You're talking about, for instance, that moment in The New York Times story, back in 2015, when there was a post by Donald Trump that was flagged as potential hate speech.

S1: And there had to be a decision made as to whether to allow that on the platform. The report said that Zuckerberg himself got involved in that decision, in looking at whether Facebook was going to remove that as hate speech or let it stand. Then there was also, I think you've alluded to, throughout 2016, the question of, as you started to see these problems of misinformation and coordinated propaganda activities, did Facebook take an active role in trying to address those or eliminate those, or did it sort of sit back and let it happen for fear of being accused of taking a political stand?

S7: Yeah, that's right. I think the question of taking Trump's post down is one of those situations where we've all got to think very hard about what kind of power we want these tech companies to have. I mean, personally, I found pretty much everything Trump wrote during the campaign to be personally insulting and disgusting. And I think you could make a strong argument that a lot of those things, if said by somebody else, would have been taken down by Facebook. So I don't think Facebook should make the argument that they judged Trump's posts just like any random person's. But the flip side is, we've got to think about what level something has to rise to before these tech companies censor a political candidate from a major party in a democratic election. Right? If you take a real step back, here's a company that controls a platform that, at that time, almost two billion people used; hundreds of millions of Americans are using Facebook products. And in the executive ranks of this company, the vast majority are most likely Hillary supporters, if not massive donors to the Democratic Party. If that company made the decision of, we are going to silence somebody from the other party, it had better have a really, really, really good reason. There's a little bit of a banana-republic kind of feel to having the platform itself, just like, in older times, a radio station influenced by the government, or the phone network, making a decision to put its thumb on the scale. And I think, again, looking back at a situation where everybody is reading the FiveThirtyEight forecast and feeling like Hillary has it in the bag, creating a situation where Donald Trump loses and part of his argument is that he lost because the tech industry conspired to silence him, I think that's the kind of thing they were afraid of, and perhaps for good reason. If you're going to get involved like that, you've got to have a really, really high bar. And a lot of the Trump material, which again I think would have been taken down if it came from somebody else, was right there on the line. And if you're going to make that call, I think the tie goes to the runner, if the runner is a candidate in a major election.

S6: Yeah, I get that. But I also can imagine it must be a little bit frustrating, in your position as someone whose mandate is to enforce policies in a consistent way, to see what you've described: the policy team, the political considerations, the considerations about the optics, how will this look, how will this play in the media and on Capitol Hill, coming in and swaying those decisions as to how to enforce Facebook's policies.

S7: So, to be absolutely clear, it was not my job to decide whether Donald Trump's post stays up. My job as chief security officer was first to build systems to protect the company and the platform from attack, and then to understand adversarial abuse of the platform. It was not hate speech or content policy; that's a dedicated team. But you're right that that is a problem. And I think what it indicates to me is that the companies cannot make these decisions in a completely black-box manner. And that's what's happened now: all of the tech platforms, Facebook, Google, YouTube, and to a much lesser extent Apple, though Apple started to go into this with their podcast app, have now demonstrated that you can work the refs. You can either get them to believe that they are being too tough against you, in which case they will not enforce their policies, or you can get them to take down content from the other side. And the fact that they make these decisions in a black box without really explaining, there's always a little explanation, but there's not a real explanation of, this is how we're applying our rules and the precedent that has been set before to make this decision, means that everybody believes that the best answer is just to turn up the volume of criticism of the companies. And that's what you've seen from both sides: everybody wants to turn up the volume of, we believe the content decisions you're making are unfair. Everybody believes that their side is the victim. And one of the only ways out of this is that companies are going to have to, A, be much more transparent about these decisions, and, B, probably move to a model where the decisions are being made outside of the companies themselves. You know, in the United States, we can't generally have the government make those decisions, because all of the speech that we're talking about is First Amendment protected speech; none of it is illegal under U.S. law. But I do believe there needs to be total transparency in this, because if a decision like "we're not going to take Donald Trump down" is made, it needs to be made in a way that sets a precedent, so that the other side that disagrees can at least think to itself, well, this is going to break our way one day. But when you make all these decisions in a vacuum, in a black box, then nobody has any confidence that the fairness that has been shown to the other side will ever be applied to them.

S1: Right. In an ideal world, you might have these very clear policies that govern everything, and then the platform could just enforce them with total objectivity and you wouldn't have these debates. But the reality is that a lot of these questions, as you've pointed out, are hard questions. I mean, there is no clear, objective standard for whether a Donald Trump post counts as hate speech or not. There can be a blurry line between propaganda and fake news. This is stuff that requires human judgment. And you've suggested that you'd be interested in seeing some of that human judgment move outside the realm of the company itself. So I know that Mark Zuckerberg proposed, in the wake of this New York Times story, to set up a sort of appeals body, an independent appeals board, so that if you feel your content was taken down from Facebook unfairly, you can get a hearing elsewhere. Is that the kind of model that you have in mind?

S7: Yeah, I think so. The devil's in the details here. This is going to be incredibly difficult. You're never going to be able to provide the same kind of due process on content decisions that's provided by the legal system. Facebook has tens of billions of pieces of content per day that could possibly be moderated, and they make millions of moderation decisions. On the back of the envelope, you could probably argue that every day Facebook is making more decisions than the entire U.S. legal system does all year. So clearly you're not going to have a trained, quote-unquote, judge sit there and come up with some super-reasoned decision for every single takedown. But I think what you can get to is that these big decisions, like are we going to take Donald Trump down or not, or are we going to take down this fake news site, or the Alex Jones decision, those kinds of decisions could be made in a much more public forum, perhaps by a mixture of employees of the company and external experts, and done in a way that creates precedent that can then be enforced at scale by the community operations folks, which are effectively the call-center people at Facebook, the people who make these decisions millions of times per day, and then eventually be enforced by the machines, by AI. And I think it's going to be really important as we move forward for the precedents that get set to be transparent, even if it's unrealistic that every time somebody has something taken down for hate speech, they get to go through a massive process. That's just impractical and would chew up the entire system.
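To put a rough shape on that back-of-the-envelope comparison, here is a tiny illustrative calculation. Both figures below are placeholder assumptions for the sketch, not numbers from the interview or from Facebook; plug in whatever estimates you find credible.

```python
# Back-of-the-envelope scale comparison; all inputs are illustrative placeholders.
platform_decisions_per_day = 5_000_000   # assumed: "millions of moderation decisions" a day
us_court_cases_per_year = 80_000_000     # assumed: rough order of all U.S. court filings per year

print(f"Platform, one day:     ~{platform_decisions_per_day:,} decisions")
print(f"Platform, one year:    ~{platform_decisions_per_day * 365:,} decisions")
print(f"U.S. courts, one year: ~{us_court_cases_per_year:,} cases")
```

Whatever estimates you use, the gap in volume is what makes individually reasoned, court-style review of every takedown impractical, which is the point he is making about precedent plus at-scale enforcement.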

S8: All right. We're going to take a quick break. When we come back, we'll have more from our interview with Alex Stamos.

S1: In your op-ed for The Washington Post, you said that we need more clarity on how these companies make decisions and what powers we want to reserve to our duly elected government. What did you have in mind there? I mean, is that a way of calling for more regulation, or what role did you have in mind for the government in these types of decisions?

S7: So I do think we need more regulation. When you look at regulation, one of the other problems is that people are smashing all the platforms together into one, but they're also not teasing out the fact that any one tech platform actually has four or five different components that are of different interest from a disinformation perspective. So if you look at Facebook, it has a peer-to-peer messaging service; that's Messenger. It has a way for you to have a personal persona. It has pseudo-anonymous personas; those are the pages, for corporations and the like, which were a specific tool that was misused by the Internet Research Agency and other Russian groups. It has recommendation engines, although they're not as important as people make them out to be; they're much more important on sites like YouTube. And then there's the advertising platform. In that order, I was thinking of that going from the bottom up. At the top, you have the parts of the platform that have the most ability to both amplify messages and to put messages in front of people who did not explicitly choose to see them. And I think that's where you have the fewest free-expression issues and where you have to focus the most on getting at the amplification. So I think, for regulation, Congress should start around online advertising. One of the challenges going into 2016 is that the companies were still interpreting Nixon-era advertising laws, which were written really for TV and print advertising. The tools that are available to advertisers online were never anticipated by Congress and have not been regulated. And so I do think we need to regulate, first, to have transparency. But also, what's going on right now is that the companies themselves are taking it upon themselves to decide who is allowed to advertise in the United States and what is considered an issue ad. That is probably a decision that they should not be making by themselves. I think that is a decision that needs to be made democratically. And the process of saying, this person is a legitimate PAC and is allowed to advertise, and these people are not legitimate, that is probably a decision that should be made by the government. I think there are ways you can create lightweight processes by which political advertisers can register, probably with the FEC, and get tokens that they can take to the advertising platforms to run ads, along with basic definitions of what a political issue ad is that can then be enforced by the companies, but where the interpretation of what that is is made in a democratic manner. Because that's a really powerful tool, and that makes a lot of sense.
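To make that registration-and-token idea concrete, here is a minimal sketch. It is purely hypothetical: no such FEC API or token format exists today, and the shared-secret HMAC scheme, field names, and values are all assumptions chosen to keep the example short. In practice a public-key signature, so that platforms hold no secret, would make more sense.

```python
import base64
import hashlib
import hmac
import json
from typing import Optional

# Hypothetical: the regulator signs a small blob naming a registered political
# advertiser; ad platforms verify the signature before accepting issue ads.
REGULATOR_KEY = b"placeholder-key-shared-with-platforms"  # assumption for the sketch

def issue_token(advertiser_id: str, committee: str) -> str:
    """Regulator side: sign a registration record and hand it to the advertiser."""
    payload = json.dumps({"advertiser_id": advertiser_id, "committee": committee}).encode()
    sig = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str) -> Optional[dict]:
    """Platform side: check the signature before letting the ad buy proceed."""
    encoded, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(REGULATOR_KEY, payload, hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(payload)   # registered advertiser: allow the ad
    return None                      # unregistered or tampered token: reject

token = issue_token("C00123456", "Example PAC")
print(verify_token(token))
```

The point of the sketch is the division of labor he describes: a democratic body decides who is legitimate and what counts as an issue ad, and the platform's job reduces to checking a credential and enforcing those definitions.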

S1: And another arena in which regulation often comes up with respect to the big tech platforms is data collection, privacy, and data use. You've worked at two companies now, Yahoo and Facebook, that collect and store tons of sensitive personal information on their users, and you had the extremely difficult task at both those companies of keeping all of that data secure.

S6: You announced that you'd be stepping down around the time of the Cambridge Analytica scandal at Facebook, which revolved around information that Facebook, as a matter of policy, allowed developers to have on its users; I think that dated to before your time at Facebook. But shortly after you left the company, there was another data breach, and this was a breach in the more classical sense, where hackers got in and exploited some loopholes in the system to steal the profile information of 29 million users. I think that came as a surprise to a lot of people who followed Facebook closely, because you did have a reputation for doing a good job of protecting the information and defending the platform against hacks. Did it surprise you to see that breach happen? And then I guess my follow-up question is, is that kind of thing inevitable when you collect so much sensitive data on so many people? Is it even possible to keep it safe indefinitely?

S7: I do think breaches are inevitable. And you point to one of the directions you've got to think about: one of the ways you can reduce the impact of a breach is reducing the data that you have that could be stolen. So first off, I was surprised by that specific flaw. What's interesting is that the flaw was in a privacy component; it was in a part of the platform that was built to increase people's privacy. But it was doing something very, very dangerous, which is allowing you to impersonate somebody else, and that's just technically a very difficult thing to do securely. But the problem in Facebook's case is that this would not be an easy kind of problem to solve with better privacy policies, because for the most part, the information that was stolen was information that Facebook collected because it needed it to operate. And when I think of the privacy issues, people kind of smash them all together. But there's really the information that you give these products because they need it to operate; then there's the information they need but keep too long; and then there's information they collect that they shouldn't have at all. And I do think we need federal privacy regulation, for multiple reasons, partially because the lack of competent privacy regulation in the United States means that the states and the EU are stepping in to fill that gap. And I think we can learn, especially from the mistakes of GDPR, and do a better job in the United States of having a consistent privacy policy. But when we do that, we've got to think about mostly trying to crack down on companies collecting data that people are surprised they have. Whereas if I upload my photos to Google Photos, which I do, I am not shocked that Google has my photos; that is how their product is supposed to work. If Google has my GPS location from every single place that I took a photo, that might be somewhat surprising to a non-technical user. If they have my GPS location from all the time, then that's a real problem, and that's the kind of thing that needs to be cracked down on. So I do think we need privacy regulation, but we've first got to think about what it is that they're going to have. And to the extent that you're building platforms that allow people to communicate, anything you upload to that platform that other people can see, that company has. And I think that's also why it's really important for us to continue to put pressure on the companies, in situations where it's possible, to create end-to-end encrypted models, where this information is encrypted in a way that even if there is a breach, that data can't be stolen. There's a tradeoff here. WhatsApp is end-to-end encrypted. Facebook allowed the founders of WhatsApp to put all of this information beyond the reach of the company, which probably, honestly, cost Facebook billions of dollars a year. But the tradeoff is that it makes it a lot harder to fight a bunch of different kinds of abuses on WhatsApp. And from where I sit right now, I think it is better to make the tradeoff toward privacy, but it turns out not to be an incredibly simple tradeoff. And federal privacy regulation should probably encourage those tradeoffs to go in the right direction, but it's also going to have to be tempered by concern for the way that these products are being used right now to cause harm.
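A minimal sketch of the end-to-end encrypted model he is describing, using the PyNaCl library; this is illustrative only, not how WhatsApp or Facebook actually implement it. The point is that the service stores and relays only ciphertext, so a breach of the service exposes nothing readable.

```python
# pip install pynacl
from nacl.public import PrivateKey, SealedBox

# Each user generates a keypair on their own device; the private key never
# leaves the device, so the service only ever sees public keys and ciphertext.
recipient_key = PrivateKey.generate()
recipient_pub = recipient_key.public_key

# The sender encrypts to the recipient's public key before uploading.
ciphertext = SealedBox(recipient_pub).encrypt(b"meet at the usual place at 6")

# This is all the service ever holds; a breach of its servers yields only this.
print(len(ciphertext), "opaque bytes stored server-side")

# Only the recipient's device, which holds the private key, can decrypt.
print(SealedBox(recipient_key).decrypt(ciphertext))
```

The tradeoff he mentions falls out of this directly: because the service holds only ciphertext, it also cannot scan that content to fight abuse.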

S3: Was that part of your decision to leave Facebook, that tension you felt, where you're trying to keep everybody's data safe but you're working within the constraints of a company whose business depends on collecting more and more and more of it?

S7: I left for personal reasons, as well as the fact that there had been organizational changes that meant that I was no longer able to impact these issues as much as I wanted to. And I just wasn't interested in sticking around and having kind of public responsibility for these things when I couldn't fix them. I mean, there is a tension, right, if you're trying to keep things secure and you're collecting lots of data. But again, Will, the stuff that was stolen out of that breach was stuff that people intentionally gave to Facebook and knew it had. There is not a model in which you have a social network that allows people to, for example, look you up by your email address, where those email addresses don't exist. I think you can create some models where end-to-end encryption keeps it secure. But in the actual Facebook model, of, I have a profile and I share that profile with hundreds of thousands of people, you're not going to be able to have that data encrypted in a way that it's always out of scope. And so, unfortunately, if you're going to have platforms where people have one-to-many communication like that, there is always going to be the possibility of a breach and that information being taken.

S6: And that makes sense. I just want to quickly point out that some of the people who were affected by the breach did have information taken that they had not shared with others, stuff like the last 10 places they had logged in from. But your point overall is well taken.

S7: But again, that's one of those hard tradeoffs, because knowing the last 10 places you've logged in from is a critical part both of you being able to look and see whether somebody is taking over your account, and of Facebook knowing where account takeovers are happening. So, you know, one of the interesting things that people overlook is that the truth is, every single day there's a huge amount of data theft online, mostly because of the reuse of passwords. And when I was at Facebook, the systems that catch this, that catch somebody coming in with the right username and password who we think is a bad guy, caught between half a million and a million account takeovers per day. Right. And so that's a great example of one of those difficult things: if you want to catch that, you have to keep a history of what networks you've logged in from, what browsers you've logged in with, what IP addresses, what GPS locations in some cases. If you threw that data away, you couldn't do that kind of detection. And that's one of the hard tradeoffs. I think we're going to see that in Europe now with GDPR, especially with one of the articles that allows for the right to be forgotten, for people to say, I want you to forget everything about me. That provision is probably going to be used by people who want to clean up their history. And if we go back to the investigation into what happened in 2016, that would have been impossible if Facebook was only keeping logs of some of this identifying information for 30 days or 60 days. So it's not a clean tradeoff, and that's one of the difficult things we're going to have to wrestle with over the next year or so as we consider what the laws are going to be in the U.S.
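To make the tradeoff concrete, here is a minimal sketch of the kind of check he is describing. The field names, weights, and scores are assumptions for illustration, not Facebook's actual system: a correct password alone is not treated as proof of identity; the login is also compared against the account's retained history, which is exactly the data a short retention window would delete.

```python
from dataclasses import dataclass

@dataclass
class Login:
    network: str    # e.g. ISP or IP prefix seen at login
    browser: str    # browser / device fingerprint
    country: str    # coarse location

def takeover_risk(history: list[Login], attempt: Login) -> float:
    """Score a password-correct login against recent history (illustrative only)."""
    if not history:
        return 0.5                       # nothing retained: can't tell good from bad
    score = 0.0
    if attempt.network not in {h.network for h in history}:
        score += 0.3                     # never seen this network before
    if attempt.browser not in {h.browser for h in history}:
        score += 0.4                     # new browser/device fingerprint
    if attempt.country not in {h.country for h in history}:
        score += 0.3                     # new country
    return score                         # above some threshold: challenge or block

history = [Login("198.51.100.0/24", "Chrome on Mac", "US")] * 10
attempt = Login("203.0.113.0/24", "Firefox on Windows", "RU")
print(takeover_risk(history, attempt))   # 1.0 here: very unlike this account's history
# If the history were purged after 30 or 60 days, there would be nothing to compare
# against, and the check degrades toward "password correct, let them in."
```

That is the retention dilemma in miniature: the same login history that powers takeover detection and after-the-fact investigations is also the data privacy rules push companies to delete quickly.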

S3: So Facebook has obviously taken a lot of steps since 2016 to try to address these problems: misinformation, foreign propaganda campaigns. Is Facebook better prepared today than it was before? And do you think that the 2020 election is one where we'll see fewer problems of this sort than we did in 2016, or about the same, or could it be worse somehow?

S7: It's hard to know what's going to happen. So first, I don't think we're going to have to wait till 2020. If you look at the last sets of accounts Facebook took down, one, there's a trend toward Instagram, which is going to be a more difficult platform for Facebook to protect, because Instagram, like some other platforms such as Twitter, does not require your account to be tied to your real identity. And so one of the policy tools that was available to Facebook, against people sitting in St. Petersburg and saying they're from Wyoming, has no real equivalent on Instagram. So just from a product-design and policy-design standpoint, Instagram is going to be a more difficult situation. But the other thing you can learn from that is that Instagram is a much younger population, probably a much more liberal population, than on the Facebook product. And so you might want to read into that that the Russians are aiming left, and that would make sense if their goal is to drive division in the United States. Then one of the things you'd like to see, I'm sure, is the most radical possible candidate make it out of the Democratic primary, and for most people in 2020 to look at the choices they have between Trump and whomever from the Democratic side and throw up their hands in exasperation. So we don't have to wait till 2020. I think they're going to get engaged really soon on trying to push candidates toward a radical position, away from the middle, on the Democratic side.

S3: Right. So I guess the remaining part of the question is: this has been described by Mark Zuckerberg and others as sort of an arms race between the platforms and the bad actors who would exploit them. Who's winning that arms race right now?

S7: The companies are never going to just win, right? As long as we live in a free society, people are going to be able to inject misinformation or disinformation into that society. And we haven't really talked about it, but that is also going to be via the media. Probably the most effective component of the Russian campaign in 2016 was their ability to drive stories about Hillary that were laser-focused on keeping Bernie voters home. And as long as we have a free press and no Official Secrets Act, and we're not requiring people to have IDs to create social media accounts, as a society we will always be vulnerable, to a certain extent, to foreign interference. And I think that's just going to have to become a reality we become used to. What the companies can shoot for is not to eliminate all of it; it's to increase the cost to these adversaries of building personas with large audiences, to increase the chance that any single one is caught and thrown away, and to reduce the spread of the disinformation to a point at which it disappears into the noise of all the other stuff that's going on. And I do think that is possible, but I think it's going to require a real coordinated effort. I think, again, we're going to need legislation from Congress around online ads. Google and Facebook have taken steps here; there are a thousand other companies in the ad-tech ecosystem, and they've done nothing, because they're not legally required to. We're going to need to standardize those rules, and the companies will have to continue to work with federal law enforcement to look at people trying to break them. You're going to see more fake IDs; you're going to see people trying to smuggle ads through actual American groups; you might see direct financing of radical groups in the United States. So I don't think things are going to get better. I think if you're the Russians, you look at 2016 and it seems like a success. You have not been punished. And so they might be back at it, and you might have other U.S. adversaries seeing this as a low-cost way to influence the United States and to neutralize some of the asymmetries in traditional military and cyber power. And so I think 2020 might be pretty crazy, because we quite possibly will have multiple different countries involved, all of whom will have different geostrategic interests and might be using totally different types of disinformation.

S1: All right. I do think you have a good point, by the way, that it’s not just Facebook, that the mainstream media got played as well by Russia and WikiLeaks. And we haven’t heard maybe as many conversations about what the mainstream media has done to re-evaluate their processes to make sure that doesn’t happen again. Alex Stamos, thank you so much for joining me.

S9: Thanks, Will. Appreciate it.

S8: One final quick break. And then don’t close my tabs. Some of the best things we saw on the Web this week.

S3: It’s time again for don’t close my tabs. And joining me again this week is our producer, Max Jacobs. Max, what tab could you not close this week?

S10: Hey, Will, and apologies to our listeners that I'm not April Glaser, but thank you for having me back. We'll forgive you for not being April. Thank you. I should also mention you may hear some jumping up and down in the background, or some kind of weird noises. We are near the Tough Mudder studios here at Slate, and after a certain time of day, they start Tough Muddering. So that's what you might be hearing.

S6: I remember that from when I worked in the New York office. They would just get really intense in the evenings, as people started grunting and groaning. All right, but Max, what tab did you want to share with us?

S10: So I realized I was on here last week and I also was talking about the fire, and that's not the only thing that I'm thinking about or can talk about. But it seemed like it was worth just mentioning, because that's the reason, as we alluded to earlier, that April is not with us on the show. She went back up to Butte County and has been doing really great reporting on that. She just filed a story this morning; it was the cover story on Slate, called Trapped in the Fire Zone. I haven't gotten all the way through it, but it's harrowing. It's really well written and really well reported. And we just wanted to support her work on that and hope that people will check it out, because we think it's really important.

S6: It is really important. It's a terrible story, and it's even more terrible to me because it doesn't feel like a one-off. I mean, it feels like these are the kinds of fires that California and much of the West are going to be dealing with for years and decades to come, thanks to climate change. But there are some great details in April's piece. There are the pink ribbons on the flyers that people put up to tell the authorities there are no bodies inside and they don't have to inspect further. She spent time at the refugee camp at the Walmart, where the air is nearly unbreathable, but that's hardly even a consideration compared to the cold at night. So just some really important reporting on the terrible crisis that's going on in Northern California right now.

S10: Yeah, yeah. Agreed. Well, what tab could you not close this week?

S1: All right. I'm trying to follow your lead, Max, and do something somewhat lighthearted. Oh, wonderful. So my tab is from The New York Times; the headline was Why Standing Desks Are Overrated. And I received this headline with a certain measure of frustration, as someone who uses a standing desk; I almost intentionally didn't finish this one.

S6: So, yeah, I sort of hate-read it, because everybody convinced me to get this damn standing desk, and now I have it, and now you tell me it's overrated. But it's actually an interesting story to read. The big takeaway: there was all this research a few years back, and maybe you remember the slogan, sitting is the new smoking. It was research that correlated sitting for lots of hours during the day with all kinds of bad health outcomes; I think you even die earlier. But now there's been new research that looks more closely at the question of people who sit at work versus stand at work. So, let's throw out the people who are sitting all day because they're on their couch, or maybe they're immobile, or unemployed, or whatever else; let's look just at the people who are sitting at work. And actually, the correlation goes away. There's no longer a correlation between sitting and most of these bad health effects. And of course, there could be some confounding factors there as well. The article does note, well, maybe people who stand at work, a lot of them are doing intense physical tasks, and so they're having some health drawbacks that are canceling out the benefits from standing. But in any case, it doesn't seem like sitting at your desk is the big problem. And certainly for cardiovascular health, the research seems to be pretty clear that standing isn't all that much better than sitting. Standing is not exercise would be, I guess, if you try to distill this latest round of research into one handy slogan: standing is not exercise. It is not that great for your cardiovascular health. But I did take away one thing from it that justified, to me, my investment in the standing desk, which is that standing really can still be much better if you have lower back pain or neck pain. I had developed arthritis from stooping over my laptop all day for years on end, and so I can keep my standing desk knowing that at least it's helping me with that, even if it's not going to prolong my life.

S10: Yeah. I think I will still rotate between them and mostly sit, which is nice. And I think that I knew standing wasn't exercise, although it was kind of nice to think of it that way for a little bit. Yeah, deep down, I knew it wasn't exercise.

S6: Yeah, maybe there's a little bit of a straw man there. Come to think of it, I don't know anybody who thought that the standing desk was actually, you know, pumping up your heart rate or anything. But in any case, I'd say if you can alternate between standing and sitting, ideally, that's the best we can do for now. Great.

S2: All right. That's our show. You can get updates about what's coming up next by following us on Twitter at @ifthenpod. To tell the truth, I haven't been the greatest at updating that, but I will try to get you updates if you follow that account going forward.

S11: You can email us at ifthen@slate.com. You can send us your tech questions, show and guest suggestions, or just say hi. And I realize, I feel bad, somebody emailed us a few weeks ago asking about our new show music and who it was by. The name of the song is Loaded Up, and it's by Andrius Jan Shay Heri. I'm not saying that right, I believe, but it's like J-M-S-H-E-R-E. Thank you for writing.

S12: All right. Thanks to everybody who writes us. You can follow me and April on Twitter as well. I'm @WillOremus; April is @aprilaser. Thanks again to our guest, Alex Stamos; you can find him on Twitter at @alexstamos. And thanks to everybody who has left us a comment or a review on Apple Podcasts or whatever platform you use to listen. We really appreciate your time. If Then is a production of Slate and Future Tense, a partnership between Slate, Arizona State University, and New America. Our producer is Max Jacobs. And thank you to Merritt Jacob for extra engineering help this week. Thanks to Nick Holmes at Occupy Studios in Newark, Delaware. We'll see you next week. Indeed.