Future Tense

How to Regulate Facebook

Mark Zuckerberg says he’s open to more oversight. What should it look like?

Facebook CEO Mark Zuckerberg on April 18 in San Jose, California.
Justin Sullivan/Getty Images

As Facebook continues to clean up the fallout of the Cambridge Analytica scandal, it has become clearer and clearer to users and policymakers alike that the company probably can’t be trusted to regulate itself. Though Facebook CEO Mark Zuckerberg said in multiple interviews on Wednesday that he’s open to the idea of some form of regulation, by the looks of it, the company hasn’t always adhered to the federal rules it was already supposed to be following. For example: Under a consent decree issued by the Federal Trade Commission in 2011, Facebook was required to get permission from users before accessing private data about them beyond what they had explicitly agreed to. Yet for years the company allowed thousands of developers to collect not only data from people who downloaded their Facebook apps but also data on all of those users’ friends.

That’s how Aleksandr Kogan, a professor contracting for the political-data firm Cambridge Analytica, was able to collect data on more than 50 million Facebook users even though only 270,000 people actually downloaded his Facebook app, a personality quiz called “thisisyourdigitallife.” Facebook changed its policy in 2014 to prevent developers from gathering data on the friends of people who downloaded their quizzes or games, but by that time, hundreds of millions of people’s data may already have been swept up by app developers without their consent, according to Sandy Parakilas, a whistleblower who used to work on Facebook’s app security team and spoke to the Guardian earlier this week. Facebook could face fines of up to $40,000 per violation, and the FTC is currently investigating whether the company broke the rules.

So, stipulated: Facebook’s record of adhering to old regulations is troubled enough that we should be skeptical of its enthusiasm for adhering to new ones. But what kinds of regulations are called for to rein in a company that collects, and profits from, the data of 2.2 billion people? And how can the officials crafting them ensure that they work?

Let’s start with a comprehensive internet-privacy law, which the U.S. does not have. “We need a privacy bill of rights that we pass through Congress,” Sen. Ed Markey told NPR on Thursday, stressing that it should “guarantee that every American know when information is being gathered about them, know when that information is being used other than how the consumer wanted it to be used, and third, and most importantly, they have a right to say no.” Even if Congress did pass such a law, it’s not clear how it would be enforced. In Europe, however, we may soon have a road map.

By May 25, tech companies operating in the European Union will have to adhere to a new set of data-collection laws, the General Data Protection Regulation, which is supposed to ensure that users can consent to the collection of data about them and that companies are clear about how that data is used. Tech companies are also supposed to give consumers the ability to access the data that companies hold on them, allow users to correct inaccurate personal information that companies have inferred about them, and limit how algorithms are allowed to process their data, in addition to other provisions intended to protect internet users in the EU.

Considering Facebook spent more on federal lobbying last year than in any year in the company’s history—and Google, which would also likely be affected by such a federal online-privacy law, spent more money on lobbying than any other company in the U.S.—a privacy law similar to the European regulations would face a steep climb. Internet companies that don’t comply with the new suite of European data policies could face fines as high as $24.8 million or 4 percent of a company’s annual global revenue, whichever is higher. For a company as big as Facebook, violations of EU privacy laws could cost billions, so it’s safe to assume internet firms won’t want to risk similar consequences in the U.S.
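To see why that fine structure reaches into the billions, here is a minimal sketch of the arithmetic. The $24.8 million floor and 4 percent rate come from the article; the roughly $40 billion revenue figure is an assumption approximating Facebook’s 2017 annual revenue.

```python
def max_fine(annual_revenue_usd, floor_usd=24.8e6, rate=0.04):
    """Return the larger of the flat floor or 4% of global revenue,
    mirroring the GDPR-style 'whichever is higher' rule."""
    return max(floor_usd, rate * annual_revenue_usd)

# Assumed ~$40 billion in annual revenue for a Facebook-scale company:
print(max_fine(40e9))   # 4% of $40B = $1.6 billion, well above the floor

# A much smaller company with $100 million in revenue hits the floor instead:
print(max_fine(100e6))  # $24.8 million
```

Because the penalty scales with revenue rather than capping at a fixed amount, the largest companies face the largest exposure.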

But short of a comprehensive privacy law, there are narrower but still important actions Congress could take that would go a long way toward ensuring that the data we all create, often inadvertently, and that is collected about us is less likely to be used in ways we never anticipated or consented to.

“One place where we could start would be with a uniform and robust data breach notification standard that’s not watered down and has penalties,” said Danielle Citron, a law professor at the University of Maryland who specializes in internet privacy and free speech online. Such a law would require that consumers and the government be swiftly alerted when consumers’ data has been stolen in a hack or has landed somewhere without their consent. Citron emphasized that any new data breach law shouldn’t be weaker than current state data breach notification laws, since any new federal requirements would likely pre-empt them. Laws vary from state to state, but most require private companies and government agencies that experience a security breach involving personally identifiable information to notify those affected within some set amount of time. Penalties would be important here, too. “If you make companies liable in some way and they have to internalize some of the costs,” Citron said, “they’re going to take security more seriously.”

Even though Facebook’s spillage wasn’t technically a breach—the data went out Facebook’s front door, and no security vulnerability was exploited—the end result is the same: Tens of millions of Facebook users’ data ended up in a place it wasn’t supposed to. So lawmakers might be well-advised to include provisions that force companies to keep any data they collect and share with third parties on a tighter leash. They could start, for example, by establishing federal standards that limit what data companies are reasonably allowed to collect about their users in the first place and that make those limits explicit to consumers. “When you take a quiz to determine whether you are more like Princess Leia or Chewbacca, it’s not reasonable to also take your friends list,” Citron said. “When you buy something from Amazon, should Amazon get to install permanent cookies that travel with you all over the web?” she asked. “We collect data like gluttons, and then we overshare it.”

Facebook has since changed its broadly permissive developer data-sharing policies so that they no longer include information about users’ friends who didn’t download the app. But without a set of federal guidelines that specify some reasonable standard, there’s nothing stopping the company from, say, giving app developers your location or email address when there’s absolutely no reason for them to have it. In fact, in a Wednesday statement intended to assuage concerns over the Cambridge Analytica controversy, Zuckerberg said he’s instituting a new set of policies to limit what developers can collect on Facebook users to just people’s photo, name, and email address. But there’s nothing to stop the company from adjusting that policy in the future.

The Federal Trade Commission, which is supposed to act as a consumer protection watchdog, could also do more to proactively investigate how well companies are protecting consumers, according to Ryan Calo, a law professor at the University of Washington who specializes in online privacy. “The FTC has the authority to really look behind the digital veil and figure out what these companies are doing, but the FTC really doesn’t do that,” said Calo. And when the agency does find a problem, it could issue fines and enter into consent decrees, like the one Facebook might now be in violation of. Laws aren’t easy to pass, and pressuring the FTC to use the full extent of its oversight authority may well be a faster avenue toward regulating companies in the business of overbroad data collection, like Facebook.

Still, Zuckerberg did say that he isn’t flat-out opposed to regulation, and in his interview with Wired on Wednesday he gestured toward one existing bill, the Honest Ads Act, which would require online political ads to follow the same disclosure rules as political ads on radio and television by including information about who paid for them. But Zuckerberg also said that he doesn’t expect the bill to pass; besides, he said, Facebook is taking steps on political-ad disclosure anyway. That response was a sly pitch for Facebook to regulate itself without more federal oversight, and it addressed an altogether different (albeit important) issue from the one highlighted by the Cambridge Analytica scandal, in which the company hemorrhaged data on tens of millions of users.

If Zuckerberg is really open to more government oversight and enforcement, though, and doesn’t plan to push back hard against whatever Congress or federal regulators have planned, then he should agree, unequivocally, to testify before Congress, as a growing chorus of legislators has asked him to do. Instead, Zuckerberg told Wired that he’d do it only if he’s “the most informed person at Facebook in the best position to”—which isn’t exactly a yes. If Zuckerberg can’t even commit to a symbolic act to demonstrate that his company is putting its users’ privacy first, are we really sure he’ll commit to any legally binding ones?

Read more from Slate on Cambridge Analytica.