If Then

Alex Stamos Is Still Living the 2016 Election

Facebook’s former security chief, who reportedly clashed with Sheryl Sandberg, reflects on how the social network handled Russian misinformation—and how it can do better next time.

Alex Stamos speaks at the WIRED25 Festival in San Francisco on Oct. 14, 2018. Phillip Faraone/Getty Images

It’s been a bad year for Facebook, but not the kind that involves ugly earnings reports. It’s been bad because the company has been accused of contributing to genocide in Myanmar, and bad because its executives have been hauled before Congress to explain how they let tens of millions of users’ personal information get harvested and exploited by Cambridge Analytica. The company was also hit by a data breach in which the profile information of some 29 million users was stolen by unknown parties. And then, just last week, came a New York Times investigation that painted the company’s leaders as more concerned with their public image than with owning up to the scope of Facebook’s problems.

Along the way, several key executives have departed the company, citing differences with leadership. Those include Alex Stamos, who was the chief security officer for Facebook from 2015 until this August. According to the Times, Stamos clashed with Facebook’s top brass over his investigation into the Russian misinformation campaign on Facebook in 2016. After the Times story came out, Stamos told his version of what happened in an op-ed in the Washington Post titled “Yes, Facebook Made Mistakes in 2016, but We Weren’t the Only Ones.” He expanded on those reflections—and discussed the many challenges with user privacy, misinformation, and content moderation that the social network faces—in an interview with Slate’s tech podcast If Then this week. Our conversation has been lightly edited for clarity.

Read or listen to our conversation below, or get the show via Apple Podcasts, Overcast, Spotify, Stitcher, or Google Play.

Will Oremus: I wanted to dive right into the New York Times piece which, of course, is the subject of controversy at the moment. The crux of the story, to me, was this: Facebook has, over the past couple years, presented itself as this idealistic, mission-driven company that’s been blindsided by the ways in which bad actors have abused its platform. It’s trying earnestly to confront those problems as they come up. They always say they’re taking it seriously. They’ve acknowledged that they’ve been too slow to respond in some cases. The Times piece painted a little bit of a different picture. It suggested that leaders like Mark Zuckerberg and Sheryl Sandberg were not just slow to grapple with the consequences of some of these problems but actually wanted to sort of downplay them, and when you brought them up, they got upset and were concerned about how it would look, not only to the public, but maybe to Republicans on Capitol Hill.

I wanted to get your take on this. Which vision of Facebook leadership resonates with you? Are they earnest idealists who have just been blindsided, or have they been calculating and more concerned, maybe, with their image or with looking bad than with really tackling the problems?

Alex Stamos: That’s an interesting question. I think, from the outside, people are kind of confused about what’s going on, and that dichotomy you described is the challenge. What’s going on at Facebook is there’s a lot of people who care a lot about the abuse of the platform and stopping it. There are also people whose job it is to look out for the company from a communications and a legal and a policy perspective. I think what you’re seeing from the outside is, sometimes, the groups that work on safety and security are able to get their message out, are able to drive change internally, and you’ll see a burst of activity—and sometimes the people who are really worried about the way that the company is seen are winning that battle, and so you end up with this kind of inconsistent view of what’s going on.

For the most part, I think that’s actually unfortunate for the company, because the truth is that a lot more was happening, especially in the late 2016/early 2017 time period, than is publicly known, and because of these concerns about not centering itself in the controversy after the election of Donald Trump, I think Facebook missed this huge opportunity to demonstrate that the company is part of the solution, not part of the problem. Because of that missed opportunity, now everything else that comes out is seen through the lens of the idea that Facebook doesn’t care, which is not accurate, at least for the people that I worked with.

The New York Times story opens with the scene of Chief Operating Officer Sheryl Sandberg seething because you had brought up, to the Facebook board of directors, the problems that you saw with suspicious Russia-linked activity on the platform, and she felt blindsided by that and said, “You threw us under the bus,” according to the story. Since the story came out, you have sought to clarify what really happened there. I think a lot of people took away the idea that she was dissuading you from investigating. You’ve said no, she wasn’t dissuading you from investigating, but it did make it look like she was more concerned with the company’s image than with sharing what could have been critical information about what had transpired.

I have both compliments for the Times authors and I have some frustration. For the things that, personally, I saw, I have no factual objections to what the Times wrote. I don’t remember her saying something like “threw you under the bus” but, certainly, that argument happened, and it was … I was not expecting to relive one of the more difficult professional moments of my life on the front page of the New York Times, but that’s fine.

One of the challenges I have with the Times reporting is that it mixes up the timelines a bit. One of the things you’ve got to understand is that this was not just a rolling disclosure externally; Facebook has learned, in various waves, about what happened in 2016. During the election of 2016, we saw activity that we attributed to the GRU, the main intelligence directorate of the Russian military. That activity was reconnaissance that ended up expressing itself as them breaking into email accounts that did not belong to and were not controlled by Facebook, but often what Facebook will see is recon activity where intelligence agencies are looking into potential targets.

That information was reported to the FBI, and that was kind of the model under which we operated back in that day—companies would give information to the FBI, especially if the targets were American citizens, and it was up to the government to try to figure out how to do both victim disclosure and, possibly, public disclosure. There was no public announcement of that during the 2016 election.

Then, immediately after the election, there was a big look into the fake news crisis and kind of an analysis of what was driving what people were calling fake news which … I really don’t like the term fake news because, one, obviously it’s been co-opted by the president to mean real news that he dislikes, but also because, even when it was being used at that time, it was incorrect in that most of the propaganda that’s being pushed is not falsifiable information. It is expressions of very aggressive political positions meant to drive divisive narratives.

One of the problems is we’ve had this rolling discovery and, therefore, rolling disclosures externally. At each of those moments, I think we missed the opportunity for Facebook to come out and say, “This is everything we know. We’re not done yet. We’re going to keep on going,” and because of that, people started to push back.

Now, as for the Sheryl situation exactly, at that point, Sheryl knew about, obviously, all the stuff we had found. We had put together a plan to announce it in September of 2017, and I had gone to the board, as was my responsibility as CSO, and briefed them on what was going on. In that briefing, I told them that I didn’t think this was over yet, that there was no way for us to determine what percentage of Russian activity we possibly could have found, and since we weren’t getting help from anybody … so the government wasn’t helping us. The other tech companies, we would send them information. We’d get, pretty much, nothing back from them. We really had no external indications of whether we had caught 90 percent or 5 percent of Russian activity at that point.

I expressed that to the board, and what Sheryl was angry about the next day was that she felt that that message of, “This is not over yet,” was not something that she really understood I was going to say, which is reasonable. I did not enjoy getting chewed out, but the truth is this was an incredibly tough situation. These were a lot of people under a huge amount of stress. Sometimes, when you’re in the NFL, you’ve got to take a hit, and that’s what happened here. She later came to me and kind of apologized, and we worked it out, and we had a good working relationship from then on. But I think people are over-reading that moment in the Times piece as being about covering things up, when it’s more about the internal expectations of management and who was informed and in the loop. I think it’s easy to overpivot on that specific anecdote and come up with an assumption about what was going on that’s not true.

OK, fair enough. Let me ask you about a different aspect of that Times story. This was the aspect where it said that, on multiple occasions and in multiple different ways, Facebook downplayed problems like misinformation, hate speech, and Russian election interference out of concern for how it would look politically and, in particular, out of concern for riling up Republicans or getting the Republican majority on Capitol Hill angry at Facebook. Some of that was attributed to Joel Kaplan, Facebook’s vice president of global public policy. How did that play out for you? Were there circumstances in which you were aware that political considerations were part of the calculations here, as opposed to just considerations about what’s best for users or what’s the right thing to do?

The people we often had to negotiate with to put details in these reports or the blog posts that we did were the policy team, and they did sometimes push back on that. Nobody ever expressed to me, “We’re doing this because of the Republicans.” I do think, before the election, the public discussion of the overall fake news problem was probably muted by the fact that the company did not want to be seen putting its finger on the scale.

When I look at all of the activity in 2016, it feels like there were a bunch of very powerful institutions, including Facebook, but also including the mass media, the FBI, and the White House, who were all assuming Hillary was going to win. All of those groups had a significant portion of people who wanted her to win, and there was a lot of decision-making based upon the theory of, “We can take care of this later, after Hillary is president and everything’s right.” That’s what you had from the FBI not coming out and disclosing all of the Russia investigations going on. That’s what you saw in the media really amplifying anti-Hillary messages, including anti-Hillary messages planted by the GRU themselves. That’s what you saw at Facebook in trying to quietly take care of this problem and not come out and make a statement that might be interpreted as saying support for Trump is a fake news phenomenon.

Basically, Hillary lost, and the world changed, and a lot of people have looked back at those decisions and concluded that perhaps they were overpivoting toward a neutrality that meant they weren’t actually being neutral, and that, if the situation had been reversed, this information certainly would have been disclosed. I think that’s something the company is going to have to continue to deal with in these situations. No matter what you think the political impact is going to be, Facebook and the other tech companies are going to have to have, “This is the standard by which we will decide whether we disclose something when we figure it out,” because, clearly, trying to do the, “We’re going to hold our information so that we don’t put our thumbs on the scale,” even if you were in the right place making that decision at the time, is always going to look like a cover-up later, no matter what the outcome is.

You’re talking about, for instance, that moment in the New York Times story back in 2015 when there was a post by Donald Trump that was flagged as potential hate speech, and a decision had to be made as to whether to allow it on the platform. According to the report, Zuckerberg himself got involved in that decision, looking at whether Facebook was going to remove the post as hate speech or let it stand.

Then there was also … I think you’ve alluded to, throughout 2016, the question of: As you started to see these problems of misinformation, of coordinated propaganda activities, did Facebook take an active role in trying to address those or eliminate those, or did it sort of sit back and let it happen for fear of being accused of taking a political stand?

Yeah, that’s right. I think the taking-Trump’s-post-down situation is one of those situations where we all have to think very hard about what kind of power we want these tech companies to have. I mean, personally, I found pretty much everything Trump wrote during the campaign to be personally insulting and disgusting, and I think you could make a strong argument that a lot of those things, if said by somebody else, would have been taken down by Facebook. I don’t think Facebook should make the argument that they judged Trump’s posts just like any random person’s, but the flip side is we’ve got to think about what level something has to go to before these companies censor a political candidate from a major party in a democratic election, right?

If you take a real step back: Here’s a company that controls a platform that, around that time, almost 2 billion people used, and hundreds of millions of Americans are using Facebook products, and the executive ranks of this company, the vast majority of them, are most likely Hillary supporters, if not massive donors to the Democratic Party. If that company made the decision of, “We are going to silence somebody from the other party,” you’ve got to have a really, really, really good reason, right? There’s a little bit of a banana republic feel to having the platform itself put its thumb on the scale, just as there would have been, in older times, if a radio station influenced by the government, or the phone network, made that decision.

I think, again, looking back, in a situation where everybody is reading the FiveThirtyEight forecast and feeling like Hillary has it in the bag, creating a situation where Donald Trump loses and part of his argument is that he lost because the tech industry conspired to silence him, I think that’s the kind of thing they were afraid of and, perhaps, for good reason. I think, if you’re going to get involved like that, you’ve got to have a really, really strong reason, and a lot of the Trump material, while, again, I think it would have been taken down from somebody else, was right there on the line. If you’re going to make that call, I think the tie goes to the runner if the runner is a candidate in a major election.

Yeah, I get that, but I also can imagine it must be a little bit frustrating in your position, as someone whose mandate is to enforce policies in a consistent way, to see what you’ve described: the policy team, the political considerations, and the concerns about optics, how this will look and how it will play in the media and on Capitol Hill, coming in and swaying decisions about how to enforce Facebook’s policies.

To be absolutely clear, it is not my job to decide whether Donald Trump’s posts stay up, right? My job as chief security officer was, first, to build systems to protect the company and the platform from attack and then to understand adversarial abuse of the platform. Hate speech and content policy were not my area; that’s a dedicated team. But you’re right that that is a problem.

I think what it indicates to me is that the companies cannot make these decisions in a completely black-box manner. What’s happened now is that all of the tech platforms—Facebook, Google, YouTube, to a much lesser extent Apple, but Apple is starting to get into this with their podcast app—they have now demonstrated that you can work the refs, and you can either get them to believe that they are being too tough against you, in which case they will not enforce their policies, or you can get them to take down content from the other side.

The fact that they make these decisions in a black box without really explaining—there’s always a little explanation, but there’s not a real explanation of, “This is how we are applying our rules and the precedent that has been set before to make this decision”—means that everybody believes that the best answer is just to turn up the volume of criticism on the companies. That’s what you’ve seen from both sides, that everybody wants to turn up the volume of, “We believe the content decisions you’re making are unfair.” Everybody believes that their side is the victim.

One of the only ways out of this is that companies are going to have to, A) be much more transparent about these decisions and, B) probably move to a model where the decisions are being made outside of the companies themselves. In the United States, we generally can’t have the government make those calls, because all of the speech that we’re talking about is First Amendment-protected speech. None of it is illegal under U.S. law, but I do believe there needs to be total transparency in this because, if a decision like, “We’re not going to take Donald Trump down,” is made, then it needs to be made in a way that sets a precedent, so that the side that disagrees can at least think to itself, “Well, this is going to, one day, break our way.” When you make all these decisions in a vacuum, in a black box, then nobody has any confidence that the fairness that has been shown to the other side will ever be applied to them.

Right, so in an ideal world, you might have these very clear policies that govern everything, and then the platform could just enforce them with total objectivity, and you wouldn’t have these debates. But the reality is that a lot of these questions, as you’ve pointed out, are hard questions. I mean, there is no clear objective standard for whether a Donald Trump post counts as hate speech or not, right? There can be a blurry line between propaganda and fake news. This is stuff that requires human judgment.

You’ve suggested that you would be interested in seeing some of that human judgment move outside the realm of the company itself. I know that Mark Zuckerberg proposed, in the wake of this New York Times story, to set up a sort of independent appeals body so that, if you feel your content was taken down from Facebook unfairly, you can get a hearing elsewhere. Is that the kind of model that you have in mind?

Yeah, I think so. The devil’s in the details here. This is going to be incredibly difficult, right? You’re never going to be able to provide the same kind of due process on content decisions that’s provided by the legal system. Facebook has tens of billions of pieces of content per day that could possibly be moderated, and they make millions of moderation decisions. Back of the envelope, you could probably argue that, every day, Facebook is making more decisions than the entire U.S. legal system does all year, right?

Clearly, you’re not going to have a trained “judge” sit there and come up with some super-reasoned decision for every single takedown, but I think what you can get to is that these big decisions—like, are you going to take Donald Trump down or not, or are you going to take down this fake news site, or the Alex Jones decision—those kinds of decisions could be made in a much more public forum, perhaps by people who are a mixture of employees of the company and external experts, and done in a way that creates precedent that can then be enforced at scale by the community operations folks, who are effectively the call center people at Facebook, the people who make these decisions millions of times per day, and then, eventually, by the machines, by A.I.
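
As a rough illustration of what “precedent set in public, enforced at scale” might look like, here is a minimal sketch in Python. The precedent records, policy categories, and lookup logic are all hypothetical, invented for this example; they are not a description of any system Facebook actually runs.

```python
from dataclasses import dataclass

# Hypothetical precedent record: a high-profile call made in a public forum
# (say, a mixed panel of company employees and outside experts), written down
# so frontline reviewers and automated classifiers can apply it consistently.
@dataclass
class Precedent:
    case_id: str
    policy_area: str   # e.g. "hate_speech", "coordinated_inauthenticity"
    rationale: str     # the published explanation of why the call was made
    action: str        # "remove", "label", or "leave_up"

PRECEDENTS = [
    Precedent("2018-001", "hate_speech",
              "Dehumanizing attack on a protected group", "remove"),
    Precedent("2018-002", "major_candidate_speech",
              "Newsworthy statement by a candidate in a major election", "leave_up"),
]

def enforce(policy_area: str) -> str:
    """Scale step: apply the published precedent instead of re-litigating every
    post; anything without a precedent escalates back to the public forum."""
    for p in PRECEDENTS:
        if p.policy_area == policy_area:
            return p.action
    return "escalate"

print(enforce("hate_speech"))            # -> "remove"
print(enforce("novel_harassment_case"))  # -> "escalate"
```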

I think it’s going to be really important, as we move forward, for those precedents that get set to be transparent, even though it’s unrealistic that, every time somebody has something taken down for hate speech, they get to go through a massive appellate process. That’s just impractical and would chew up the entire system.

In your op-ed for the Washington Post, you said that we need more clarity on how these companies make decisions and what powers we want to reserve to our duly elected government. What do you have in mind there? I mean, is that a way of calling for more regulation, or what role did you have in mind for the government in these types of decisions?

I do think we need more regulation. When you look at regulation, one of the other problems here is that people are smashing all of the platforms together into one, and they’re also not teasing out the fact that any one tech platform actually has four or five different components that are of different interest from a disinformation perspective.

If you look at Facebook, it has a peer-to-peer messaging service. That’s Messenger. It has a way for you to have a personal persona. It has pseudo-anonymous personas, so those are the pages for like corporations. That’s a specific tool that was misused by the Internet Research Agency and other Russian groups. It has recommendation engines, although they’re not as important as people make them out to be. They’re much more important in sites like YouTube. Then there’s the advertising platform.

I was thinking of that list going from the bottom up. At the top, you have the parts of the platform that have the most ability both to amplify messages and to put messages in front of people who did not explicitly choose to see them. I think that’s where you have the fewest free-expression issues and where you have to focus the most on getting rid of the amplification.

I think, for regulation, Congress should start with online advertising. One of the challenges, going back to 2016, is that the companies were all still interpreting Nixon-era advertising laws, which were written, really, for TV and print advertising. The tools that are available to advertisers online were never anticipated by Congress and have not been regulated, and so I do think we need to regulate, first, to have transparency. But what’s going on right now is the companies themselves are taking it upon themselves to decide who is allowed to advertise within the United States and what is considered an issue ad. That is probably a decision that they should not be making by themselves. I think that is a decision that needs to be made democratically.

The process of saying, “This person is a legitimate PAC and is allowed to advertise, and these people are not legitimate,” is probably a decision that should be made by the government. I think there are ways you can create lightweight processes by which political advertisers register, probably with the FEC, and get tokens they can take to the advertising platforms to run ads, along with basic definitions of what a political issue ad is. Those definitions can then be enforced by the companies, but the interpretation of what counts is made in a democratic manner, because that’s a really powerful tool.
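
A minimal sketch, in Python, of the kind of lightweight token flow Stamos describes. The registration IDs, the HMAC-based token format, and the function names are assumptions invented for illustration; no such FEC or ad-platform API exists today, and a real design would have the platform verify a signature or call the regulator rather than share a key.

```python
import hashlib
import hmac
import secrets

# Hypothetical regulator-side step: a registered political advertiser (say, a
# PAC that has filed with the FEC) is issued an opaque token bound to its
# registration ID.
REGULATOR_KEY = secrets.token_bytes(32)   # held by the regulator

def issue_token(registration_id: str) -> str:
    tag = hmac.new(REGULATOR_KEY, registration_id.encode(), hashlib.sha256)
    return f"{registration_id}.{tag.hexdigest()}"

# Hypothetical platform-side step: before accepting an issue ad, the ad
# platform checks the token. (Sharing the key keeps this sketch self-contained;
# it is not how a production system would verify.)
def verify_token(token: str) -> bool:
    registration_id, _, tag = token.partition(".")
    expected = hmac.new(REGULATOR_KEY, registration_id.encode(), hashlib.sha256)
    return hmac.compare_digest(expected.hexdigest(), tag)

token = issue_token("PAC-12345")
print(verify_token(token))               # True: registered, the ad can run
print(verify_token("PAC-99999.forged"))  # False: rejected by the platform
```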

That makes a lot of sense. Another area in which regulation often comes up with respect to the big tech platforms is data collection, and privacy, and data use. You’ve worked at two companies now, Yahoo and Facebook, that collect and store tons of sensitive personal information on their users. You had the extremely difficult task, at both of those companies, of keeping all of that data secure. You announced that you would be stepping down around the time of the Cambridge Analytica scandal at Facebook, which revolved around the information that Facebook just sort of allowed developers to have on its users as a matter of policy. I think that that dated to before your time at Facebook.

Shortly after you left the company, there was another data breach, and this was a breach in the more classic sense, where hackers got in and exploited some loopholes in the system to steal the profile information of 29 million users. I think that came as a surprise to a lot of people who followed Facebook closely, because you guys did have a reputation for doing a good job of protecting the information and defending the platform against hacks. Did it surprise you to see that breach happen? Then, I guess, my follow-up question is: Is that kind of thing inevitable? I mean, when you collect so much sensitive data on so many people, is it even possible to keep it safe indefinitely?

I do think breaches are inevitable, and one of the ways you can reduce the impact of a breach is reducing the data that you have that could be stolen.

The problem, in Facebook’s case—first off, I was surprised to hear of that specific flaw. It’s interesting because that flaw was in a privacy component. It was in a part of the platform that was built to increase people’s privacy, but it was doing something very, very dangerous, which is allowing you to impersonate somebody else, and that’s just technically a very difficult thing to do securely. One of the problems for Facebook here is that this would not be an easy kind of problem to solve via a better privacy policy, just because, for the most part, the information that was stolen was information that Facebook collected because it needed it to operate, right?

When I think of the privacy issues, people kind of smash them all together, but there’s really the information that you give these products because they need it to operate, and then there’s the information they need but keep too long, and then there’s information they collect that they shouldn’t have at all. I do think we need federal privacy regulation for multiple reasons, partially because the lack of competent privacy regulation in the United States means that the states and the EU are stepping in to fill that gap. I think we can learn, especially from the mistakes of GDPR, and do a better job in the United States of having a consistent privacy policy.

When we do that, what we’ve mostly got to think about is cracking down on companies collecting data that people are surprised they have. If I upload my photos to Google Photos, which I do, I am not shocked that Google has my photos. That is how that product is supposed to work. If Google has my GPS location from every single place that I took a photo, that might be somewhat surprising to a nontechnical user. If they have my GPS location all of the time, then that’s a real problem, and that’s the kind of thing that needs to be cracked down on.
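
As a small, concrete illustration of that “collect only what the product needs” point, here is a sketch of stripping the GPS block from a photo’s EXIF metadata before upload, using the Pillow imaging library. This is a generic, hypothetical client-side example, assuming a reasonably recent Pillow is installed; it is not how Google Photos actually handles location data.

```python
from PIL import Image   # Pillow

GPS_IFD_TAG = 0x8825    # standard EXIF tag that points to the GPS info block

def strip_gps(src_path: str, dst_path: str) -> None:
    """Drop location metadata before upload: the photo itself is what the
    product needs; the GPS trail is the part a nontechnical user would be
    surprised to find sitting on a server."""
    img = Image.open(src_path)
    exif = img.getexif()
    if GPS_IFD_TAG in exif:
        del exif[GPS_IFD_TAG]
    img.save(dst_path, exif=exif)

# strip_gps("vacation.jpg", "vacation_no_gps.jpg")  # hypothetical filenames
```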

I do think we need privacy regulation, but we’ve first got to think about what it is these companies are going to have. To the extent that you’re building platforms that allow people to communicate, anything you upload to that platform that other people can see, that company has. I think that’s also why it’s really important for us to continue to put pressure on the companies, in situations where it’s possible, to create end-to-end encrypted models where this information is encrypted in a way that, even if there is a breach, that data can’t be stolen.
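
A minimal sketch of that end-to-end model, using the PyNaCl library’s public-key “box” construction: keys live only on the users’ devices and the server stores and relays only ciphertext, so a breach of the stored messages exposes nothing readable. This is a toy illustration of the concept, not the Signal/WhatsApp protocol, which adds key verification, forward secrecy, and much more.

```python
from nacl.public import PrivateKey, Box   # PyNaCl

# Each user generates a keypair on their own device; the private keys never
# leave the devices, so the service operator cannot read message contents.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The server stores/relays only `ciphertext`; a breach of that store leaks
# nothing readable without one of the device-held private keys.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```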

There is a trade-off here. WhatsApp is end-to-end encrypted. Facebook allowed the founders of WhatsApp to put all of this information beyond the reach of the company, which probably, honestly, cost Facebook billions of dollars a year, but the trade-off is that it makes it a lot harder to fight a bunch of different kinds of abuse on WhatsApp. From where I sit right now, I think it is better to make the trade-off toward privacy, but it turns out not to be an incredibly simple trade-off. Federal privacy regulation should probably encourage those trade-offs to go in the right direction, but it’s also going to have to be tempered by concern for the ways these products are being used right now to cause harm.

Was that part of your decision to leave Facebook, that tension you felt where you’re trying to keep everybody’s data safe, but you’re working within the constraints of a company whose business depends on collecting more and more and more of it?

I left for personal reasons as well as the fact that there had been organizational changes that meant that I was no longer able to impact these issues as much as I wanted to, and I just wasn’t interested in sticking around and having kind of public responsibility for these things where I couldn’t fix them. I mean there is a tension, right, if you’re trying to keep things secure and you’re collecting lots of data, but again, the stuff that was stolen out of that breach was stuff that people intentionally gave to Facebook and knew it had. There’s not a model in which you have a social network that allows people to, for example, look up your email addresses where those email addresses don’t exist.

I think you can create some models where end-to-end encryption keeps that secure, but in the actual Facebook model of “I have a profile and I share that profile with hundreds or thousands of people,” you’re not going to be able to have that data encrypted in a way that it’s always out of scope, and so, unfortunately, if you’re going to have platforms where people have one-to-many communication like that, there is always going to be the possibility of a breach and that information being taken.

Sure, and that makes sense. I just wanted to quickly point out that some of the people who were affected by the breach did have information taken that they had not shared with others, stuff like the last 10 places they had logged in from, but your point, overall, is well taken.

Again, that’s one of those hard trade-offs: Knowing the last 10 places you’ve logged in from is a critical part both of you being able to look and see whether or not somebody has taken over your account and of Facebook knowing whether account takeovers are happening.

One of the interesting things that people overlook is that, every single day, there is a huge amount of data theft online, mostly because of the reuse of passwords. When I was at Facebook, the systems that catch this, the ones that catch somebody coming in with the right username and password who we think is a bad guy, caught between half a million and a million accounts that were being taken over per day, right? That’s a great example of one of those difficult things. If you want to catch that, then you have to keep a history of what networks you’ve logged in from, what browsers, what IP addresses, and in some cases what GPS locations. If you throw that data away, you can’t do that kind of detection, and that’s one of the hard trade-offs.
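
A minimal sketch of the kind of history-based check Stamos is describing: score a login attempt against the networks, browsers, and coarse locations previously seen on the account, and challenge it when nothing matches. The fields, thresholds, and scoring here are illustrative assumptions, not a real detection system, and it shows the retention trade-off directly: delete the history and the check stops working.

```python
from dataclasses import dataclass

@dataclass
class Login:
    ip_prefix: str   # coarse network fingerprint, e.g. first two octets
    browser: str     # user-agent family
    country: str     # coarse geolocation

def takeover_risk(history: list[Login], attempt: Login) -> float:
    """Crude anomaly score in [0, 1]: how little this attempt resembles the
    account's past logins. With no retained history there is no baseline."""
    if not history:
        return 0.5   # nothing to compare against: neither trusted nor hostile
    familiar = [
        any(h.ip_prefix == attempt.ip_prefix for h in history),
        any(h.browser == attempt.browser for h in history),
        any(h.country == attempt.country for h in history),
    ]
    return 1.0 - sum(familiar) / len(familiar)

history = [Login("73.92", "Chrome", "US"), Login("73.92", "Chrome", "US")]
print(takeover_risk(history, Login("73.92", "Chrome", "US")))    # 0.0: familiar
print(takeover_risk(history, Login("185.13", "Firefox", "RU")))  # 1.0: challenge it
```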

I think we’re going to see that in Europe now with GDPR, especially with one of the articles that allows for the right to be forgotten, for people to say, “I want you to forget everything about me,” where that law is probably going to be used adversarially by people who want to clean up their history. If we go back to the investigation into what happened in 2016, that would have been impossible if Facebook had only been keeping logs of some of this identifying information for 30 or 60 days, and so it’s not a clean trade-off, and that’s one of the difficult things we’re going to have to wrestle with over the next year or so as we consider what the laws are going to be in the U.S.

Facebook has obviously taken a lot of steps since 2016 to try to address these problems—misinformation, foreign propaganda campaigns. Is Facebook better prepared today than it was before, and do you think that the 2020 election is one where we’ll see fewer problems of this sort than we did in 2016, or about the same, or could it be worse somehow?

It’s hard to know what’s going to happen. First, I don’t think we’re going to have to wait until 2020. If you look at the last sets of accounts Facebook took down, one, they’re starting to trend toward Instagram, which is going to be a more difficult platform for Facebook to protect, because Instagram, like some other platforms such as Twitter, does not require your account to be tied to your real identity. One of the policy tools available to Facebook was catching people lying about who they are, being in St. Petersburg and saying they’re from Wyoming. There’s no real equivalent to that on Instagram, so, from a product design and policy design standpoint, Instagram is going to be a more difficult situation.

The other thing you can learn from that is that Instagram has a much younger population, probably a much more liberal population, than the Facebook product, and so, as a result, you might want to read into that that the Russians are aiming left. That would make sense if their goal is to drive division in the United States. One of the things they would like to see, I’m sure, is the most radical possible candidate make it out of the Democratic primary, so that most people in 2020 look at the choice they have between Trump and whoever emerges from the Democratic side and throw up their hands in exasperation. So I don’t think we have to wait until 2020. I think they’re going to get engaged really soon on trying to push candidates on the Democratic side toward a radical position, away from the middle.

Right, so I guess the remaining part of the question is this has been described by Mark Zuckerberg and others as sort of an arms race between the platforms and bad actors who would exploit them. Who’s winning that arms race right now, do you think?

The companies are never going to just win, right? As long as we live in a free society, people are going to be able to inject misinformation or disinformation into that society. We haven’t really talked about it, but that is also going to happen via the media. Probably the most effective component of the Russian campaign in 2016 was their ability to drive stories about Hillary that were laser-focused on keeping Bernie voters home. As long as we have a free press and no official secrets act, and we’re not requiring people to have IDs to create social media accounts, we will always, as a society, be vulnerable to a certain extent to foreign interference. I think that’s just going to have to become a reality we get used to.

What the companies can shoot for is not to eliminate all of it. It’s to increase the cost to these adversaries of building personas with large audiences, to increase the chance that any single one gets caught and thrown away, and to reduce the spread of the disinformation to the point at which it disappears into the noise of all the other stuff that’s going on. I do think that is possible, but I think it’s going to require a real coordinated effort.

I think, again, that we’re going to need legislation from Congress around online ads. Google and Facebook have taken steps here. There are a thousand other companies in the ad ecosystem, and they’ve done nothing because they’re not legally required to. We’re going to need to kind of standardize those rules, and the companies are going to have to continue to work with federal law enforcement to look at people trying to break them. We’re going to see more fake IDs. You’re going to see people trying to smuggle ads through actual American groups. You might see direct financing of radical groups in the United States.

I don’t think things are going to get better. If you’re the Russians and you look at 2016, it seems like a success, and you have not been punished, so they might be back at it, and you might have other U.S. adversaries seeing this as a low-cost way to influence the United States and to neutralize some of the asymmetries in traditional military and cyber power. I think 2020 might be pretty crazy, because we quite possibly will have multiple different countries involved, all of whom will have different geostrategic interests and might be using totally different types of disinformation.