The Industry

“America’s Failure to Lead Is Going to Come Back and Bite Us”

Sen. Mark Warner wants Congress to get tough with Big Tech. How would he do it?

Virginia Sen. Mark Warner
Alex Wong/Getty Images

The last several years have reshaped how many Americans see the massive tech platforms that monopolize the time we spend online. There’s the Russian-abetted role played by social media during the 2016 election, whose aftermath has forced executives from Facebook, Twitter, and Google to repeatedly explain to Congress what they knew about Kremlin-linked content designed to widen divisions in American life—and why they didn’t do more to stop it. There’s our deepening understanding of how these companies’ targeted advertising systems can lead to discrimination by age, race, gender, and more. There are the very real privacy concerns they have forced us to confront, from the Cambridge Analytica scandal to the recent massive Facebook hack. Among some Republicans, there’s the more dubious worry that the social media platforms are plagued by anti-conservative bias. On both sides of the aisle, there appears to be a growing feeling that Congress must do something to regulate how these companies treat our information.

One of the loudest voices in the Senate pushing for concrete action is Virginia Democrat Mark Warner. He is the vice-chairman of the Senate Intelligence Committee, which continues to investigate Russian interference in the 2016 election. And he has begun to urge Congress to move beyond its high-profile hearings with tech leaders like Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey and craft meaningful new regulations. In a recent interview for Slate’s tech podcast If Then, Warner discussed why Democrats and Republicans gave Silicon Valley a pass for so long, what kinds of privacy protections could realistically be approved by Congress, and how U.S. cybercapabilities are falling behind those of countries like China and Russia. Our interview has been edited for clarity.

Read or listen to our conversation below, or get the show via Apple Podcasts, Overcast, Spotify, Stitcher, or Google Play.

April Glaser: Earlier this year, you released a white paper outlining various possibilities for how Congress might be able to do more to ensure that these tech companies aren’t making our elections awful and acting irresponsibly with our personal data. In your opinion, what’s the most urgent issue with U.S. tech companies and social media platforms that can be addressed by Congress?

Mark Warner: Well, I think we need to move past where we’ve been the last 10 years, where people in business and in politics have been totally enamored with the social media companies and the tech companies. Amazon, Google, Facebook, Twitter—they’ve all been wonderful success stories. But starting in 2016, we’ve seen the kind of dark underbelly of social media—how, in the case of our elections, Russians were able to come in and intervene in massive ways with fake information, with disinformation. That was the political context; what we’ve also seen has been manipulation around stock prices and around advertising clickthroughs. My background was in technology: I was in the wireless industry, a co-founder of Nextel. So I come with a little bit of knowledge. Many of my colleagues had very little. What I’ve tried to do in this paper is to say, “Here are 20 ideas, not all of them good, broken into three buckets, on how we might think about guardrails.”

One bucket is around user authentication and data authentication. Should we have some right to know whether someone who represents themselves a certain way on the internet really is that person, or a right to know whether we’re being communicated to by a human versus a bot? Should we be able to know, for example, where a post originates? None of these will solve all issues, but there is that whole question around authentication.

The second bucket is around privacy—something I know, April, that you’ve been working on for some time—and everything from first-party consent to the whole slightly clunky GDPR approach to privacy protections.

The third bucket is questions around whether there are pro-competition tools: Some of these enterprises have become so large and so powerful that there could actually be market-based solutions that might provide some relief. For example, being an old telecom guy, it used to be really hard [for consumers] to move from one company to another until there was number portability. Should we increase data portability? If you had increased data portability, you could take all of the data that you have on Facebook and move to a new platform. How do you also guarantee interoperability? One of the issues I raised, for example, with [Facebook COO] Sheryl Sandberg was, wouldn’t it be great if users could know not only how much data Facebook or Google or Twitter has on us by individual data points, but also how much that is worth to those companies on a monthly or quarterly basis? So bringing more transparency, both to data and to pricing, might open up areas for new competitors to come in, competitors that might end up mediating between a user and a platform to provide different levels of security based upon a user’s wants.

Will Oremus: You talked about those three buckets, and there are a lot of good ideas in the paper. Where do we start, though? Which one do you think is most urgent, or which one can Congress actually achieve meaningful action on in the near-term future?

Warner: I think that’s an open question. What I try to tell my friends in the tech community is that we are one incident away from a massive overreaction. Let me give you my semi-worst-case prediction. Someone will pull off a major cyberhack, a la Equifax or what we’ve seen recently with Facebook, but instead of exposing the personal information of 30 million users, say, 300 million. They’ll take that personalized information and use it to communicate with individuals in a way that makes folks open up the message or the post, and then they’ll see a deepfake video of a politician or a business leader or others. And then policymakers will come in and go too far. So I do hope the companies will work with us more closely. Some of the low-hanging fruit, and this will not solve all problems by any means, is just this notion that Twitter and Facebook said they were willing to move on: letting individuals know whether they are being communicated with by a human being or a bot.

Now, there is nothing intrinsically wrong with being communicated with by a machine, but maybe at least having that data point would allow people to make judgments about how much they want to believe. I also think, and this gets more cumbersome when we talk about massive amounts of data moving into the cloud, that if somebody says they’re Will, posting from the Valley, but the post is actually originating in St. Petersburg, Russia, maybe there ought to be a geo-indicator that would pop up. Again, you could still judge the post, but you could at least know that it may not be originating from where the so-called user indicates. Those are things that I think members can get their arms around, and they don’t seem too intrusive.

A much more intrusive area, one that folks understand would face huge pushback and would be way too radical to start with, is the whole question of getting rid of anonymity on the web and moving toward identity validation. We have seen countries such as Estonia, which has had so much interference from Russia, where the population basically made an agreement to validate identity with both biometrics and enhanced passcodes. You might end up then with two webs: one that is still kind of the wild, wild west, and one where there is identity validation. Because we are seeing a move, unfortunately, toward the Balkanization of the internet, as more and more countries try to maintain local control over their users’ data. India is the most recent example; China, obviously, is more on the extreme.

What’s doable, Will, in the short term, I think, is the easier stuff around human versus machine, maybe geocoding, and maybe some of the areas around first-party consent and privacy. I’m very interested in the pro-competitive areas, recognizing that government is pretty slow on the regulatory front. If there is a way to do this on a more competitive model, that would probably take a little more work in terms of educating members.

Glaser: So, Senator, you say that you fear an overreaction, but it seems that right now Congress really isn’t doing much in terms of getting behind some of the policy options you’ve outlined. Bills have been proposed, but it doesn’t seem that any have a lot of support. I am curious, though: How powerful are the tech lobbyists in influencing Congress here, or in causing Congress to drag its feet? Whether it’s lobbyists, public interest groups, or constituents, I’m curious who’s at the table in these conversations.

Warner: Well, the conversations in a meaningful way have started literally in the last six to nine months. For a year to a year and a half after the 2016 elections, Facebook and Twitter kind of dragged their feet, thinking that we would simply go away, and Google, frankly, didn’t even engage. And Google, still to its detriment, is refusing to engage in a meaningful way. So I think it was only after, for example, the Cambridge Analytica scandal that it became clear, at least to Facebook, that we weren’t going away, and that policymakers and Americans were demanding action. Look at the decreased use among millennials on Facebook—although they may simply be moving to Instagram and other Facebook properties, so it’s not like we’re moving to a more competitive landscape.

It’s only been recently that [these companies] have fully engaged. Among the Democrats, there was this kind of enamored feeling toward the tech community at large, particularly out of the Obama administration. And among the Republicans, there was a natural inclination not to be for any kind of regulatory structure, combined with the fact that many of the folks I work with don’t even understand these technologies’ basic business model. That has made this a bit of an education process in moving both parties—Democrats to the point of saying, “Hey, you’ve got to have some guardrails; that doesn’t mean you’re anti-tech,” and Republicans to say, “Hey, just because we’re talking about privacy, about how we can increase competition, you’ve got to look at guardrails as well.”

In an area like this, where America has always taken the lead, our failure to take the lead has allowed, for example, the Europeans to move forward with GDPR. What I am starting to see is in kind of cousin areas—another place where I am very active is trying to make sure that with the internet of things, next-generation connected devices, we build in basic security on the front end. That should have been a no-brainer. We still haven’t gotten that passed, so now you’re seeing Japan starting to take the model of my bipartisan legislation and use it for their own internet of things security rule-making. I think America’s failure to lead in a lot of these areas is going to come back and bite us.

Glaser: You say in your policy paper that there is no form of deterrence now against foreign manipulation on U.S. social media. I’m curious if you have a sense of what an appropriate response from the U.S. would look like here.

Warner: Well, since the paper, we are seeing government up its game. Let me speak to government first and then talk about the companies. Part of this is government, and part of this is our structure. If Russia, with its hackers out of the IRA or out of its spy services, is creating fake accounts and trying to interfere or hack into our election security, it is the responsibility of the CIA to follow that abroad, or of the NSA to have the cybercapabilities to intercept those communications. Once somebody presses send, though, and that information appears on your device here in America, all that responsibility is transferred over to the FBI and the Department of Homeland Security. So there’s this real wall, which I think worked for a long time, between our domestic services and our foreign services. I’m not saying it needs to be rethought, but it does make things more challenging in this realm of misinformation and disinformation.

Also, I would argue that in the cyberrealm we’ve not had a cyberdoctrine since 9/11. We need one. This goes back; this is not just a problem with Trump. It’s Obama, it’s Bush. That’s meant that near-peer adversaries like Russia and China have been able to steal our intellectual property, a la China, or hack into the Office of Personnel Management, a la China, or interfere in our elections, a la Russia with the IRA and the GRU. We’ve been reluctant to use any of our tools to push back, so we’ve kind of been a punching bag. Now, that gets us into the whole realm of offensive cybercapabilities, which would probably be a longer conversation than we can have today. But we need an articulated cyberdoctrine. Frankly, not just us: The West at large should have policies that say, if you use certain cybertools (and we can define which ones), you know that we’re going to have retribution, that we’re really going to be willing to punch back. But that’s one conversation.

On the companies’ part, they have also started to up their game, and we’ve seen Facebook and Twitter take down accounts. Not as much action from Google. We’ve seen Microsoft, for example, a couple of months ago indicate Iranian and other accounts it was taking down. They have upped their game some, but what we’ve seen from the most recent Facebook hack, or the much more egregious Google Plus breach that the company sat on for six months before even reporting it, shows this is still not a top priority for the companies.

Glaser: What are your thoughts on these companies trying to get their products into China and forgoing human rights protections that they’d adhere to in the U.S. and also potentially exposing their artificial intelligence work, which the U.S. military is interested in using as well, to the Chinese government? Is there a role for Congress to play in constraining how these companies move into China?

Warner: Absolutely. And I say this as somebody whose thinking has changed dramatically on China. Five years ago, I thought there was the ability for us to rise peacefully together. I still would hope for that, but consider the reality of the threat that China poses: the fact that the Alibabas and Baidus and Tencents are in a sense almost agents of the state, and that we see Huawei and ZTE in the telecom area trying to dominate the 5G standards. I think it is remarkable that some of these American companies, purely for financial gain, are willing to sacrifice their principles and give up their crown jewels to try to get access to the Chinese market. So I’m leading a bipartisan effort to get some more of this information declassified, not only to warn the tech companies but to warn others: buyer beware.

I’m not saying we don’t do business with China, but I’m saying we need a greater sense of protection. And frankly, some of these companies, particularly companies that say they want to do no evil—I don’t know how they can square that with providing search engines that give the Chinese government enormous ability to spy on and surveil its own people. I would hate to see an American company be part of that. So we need to do more disclosure to our American companies. We need to press them a little bit: Really, are you willing to sacrifice all of your principles to get into this market? And, again, I think we need to be willing to call out some of these Chinese tech companies, who frankly are very much tools of the Chinese Communist Party, at least indirectly if not directly.

Oremus: Sen. Warner, let’s say you have a constituent who comes to you and says, “Look, I use Google, I use Facebook, and I’m really afraid about how they’re harvesting all my data.” We had the Google Plus breach that exposed personal data. We had the giant Facebook hack and the Cambridge Analytica scandal. Your constituent says, “I use these services, and I’m afraid, and I don’t feel like I have a choice.” Now, when they testified to Congress, these companies said, “Oh, of course people have choices. There are plenty of other social networks; there are search engines.” Do you buy that answer? Is that what you tell your constituent, or is there something else?

Warner: No, I don’t buy that answer. You can’t opt out, even if you’re not on Facebook: You may have friends who are, who all have information about you. These are companies with as much power, if not more, than even the giant trusts of the railroad, chemical, and shipping industries at the beginning of the 20th century. I think we are going to have to have a reckoning with them. But I have been concerned—I don’t want to undercut the American companies only to have them replaced by Chinese companies that may have even more information and even fewer restraints. That’s why, for example, on the information piece, I’m really intrigued by this idea of more transparency.

If a user really knew how much information Facebook or Google has about them, and if we actually had pricing transparency as well—because a lot of Americans believe, “Oh my gosh, this is all free stuff.” This isn’t free. This is companies harvesting information about each of us and monetizing it. If we had more transparency on that, it might inject more competition, or it might move us more quickly toward putting up some guardrails. Again, I don’t want to stop innovation, and I don’t want to slow it with undue regulation, but I frankly believe this is a personal security threat and a national security threat. I honestly believe, in a certain sense, looking at our $700 billion defense budget, that we may be buying the world’s best 20th-century military in terms of tanks and trucks and rockets, whereas our near-peer adversaries like Russia and China are realizing that cyber and misinformation and disinformation may be the tools of conflict in the 21st century. And I’m not sure we’re fully prepared.

Glaser: What can Americans who are concerned about these issues do?

Warner: I’d love to give you a clearer answer. I’d say write or email your congressmen or senators, but for many of the members, maybe the young aide reading the message will understand it; I’m not sure some of our members will. One of the things we’ve done on the Intelligence Committee is spend an awful lot of time trying to educate folks about how, in this case, the Russians were using these tools. And I was really proud when we had Jack Dorsey and Sheryl Sandberg: Nobody went off and started speculating about bias and algorithms. When they got to the House, it was a very different matter. I think our questions were more serious. But boy, oh boy, we do need to continue to educate members, hopefully in a bipartisan way, so that we can get to the point of some guardrails.

Individuals should continue to contact their congresspeople and senators, particularly if they have concerns about the amount of information these companies have about us. We have this huge concern that the government has all this information on us as individuals. I can assure you that if you’re an active Facebook or Google user, those companies have more information about your personal habits, what you do and where you shop and what you’re interested in, than the United States government has.