Future Tense

How Big Tech Turns Privacy Laws Into Privacy Theater

Some privacy professionals are shut out of the process, but others don’t even realize their complicity.

Three "see no evil, speak no evil, hear no evil" monkey statues sitting in front of laptops.
Photo illustration by Slate. Photos by nantela/iStock/Getty Images Plus, niwate bunlue/iStock/Getty Images Plus, and Pongasn68/iStock/Getty Images Plus.

When whistleblower Frances Haugen testified before Congress in October, she revealed that Facebook designed and marketed a harmful product with full knowledge of its dangers.
It isn’t remarkable that Haugen came forward, nor are the misdeeds she revealed. What’s remarkable is that Big Tech whistleblowers are rare finds in a sea of foot soldiers. And what’s most shocking about those foot soldiers is that many of them don’t even realize how complicit they are in their employers’ crusade to undermine our privacy.

I am a lawyer and sociologist, and one of the things I study is how law is implemented on the ground. So beginning in 2016, I spent nearly four years researching how tech companies do privacy: how they comply with privacy law, how they integrate privacy into design (or fail to), and how that work fits into the larger structure and operation of the organization. I was embedded inside three companies, sitting in on meetings, observing and interviewing workers, and reviewing confidential documents. I also interviewed more than 100 current and former engineers, privacy professionals, product managers, salespersons, and lawyers from the largest Silicon Valley tech companies. What I found isn’t cause for much optimism.

Privacy law is manifested in practice as a litany of “Agree” buttons to consent to data collection and a series of long, convoluted statements of data collection practices that are supposed to give users enough notice about what companies do with our data to enable us to make informed decisions. Almost everyone you ask—policymakers, practitioners, academics—agrees that this system is inadequate. The last time most of us read a privacy policy is never. And even if we did and were offended by what we saw, we rarely have privacy-protective alternatives.

Newer privacy laws like the General Data Protection Regulation in Europe and the California Consumer Privacy Act, as well as orders from the Federal Trade Commission, feature more notice and rights to access, correct, move, and delete our data, combined with a series of internal compliance obligations for companies. Despite the lack of comprehensive privacy legislation in the U.S. (though there are new proposals both federally and in the states), most big tech companies effectively have to comply with these rules, whether because they want to do business in Europe or California, are subject to an FTC consent decree, or recognize that what the FTC requires of one company sets a standard for all companies. (If the FTC has dinged a competitor’s practice for invading consumer privacy, it won’t hesitate to do the same to you.)

Under these rules, companies have to complete privacy impact assessments: reports that are supposed to detail the privacy impacts of a new product. How does it track behavior? What options do users have to limit tracking? What data collection, if any, is necessary for the product to work? And many other questions. Companies also need to develop internal rules and policies, host training for all employees, and change the ethos of the company toward privacy. They have to keep records and audit their privacy practices regularly. In theory, an army of internal privacy professionals is supposed to engage in ongoing monitoring and advocate for privacy from the inside.

In practice, though, it doesn’t always work that way. Companies like to tell us that they “care” about our privacy or that our “privacy is important” to them, but the truth is that tech companies systematically co-opt both their employees and the law, so that everyone—even those who consider themselves privacy advocates—and everything they do—even tasks that seem privacy-protective—end up serving their employers’ data-extractive needs.

During four years of research, I found privacy impact assessments reduced to simple box-checking. At one company, for example, the general counsel’s office went so far as to reduce a privacy impact assessment to a chart with “yes” and “no” columns next to questions like, “Will there be collection of personal information from customers?” with a note preceding the chart telling employees to “always check no.” Everyone I spoke to reported using privacy impact assessments to assess litigation risks to the company rather than privacy risks to consumers.
Trainings, which were supposed to highlight privacy’s importance and teach engineers about integrating it into their work, were often a quick 30 minutes during onboarding. And audits were even worse. When you hear “audit,” you probably think of an independent expert coming into a company to review corporate conduct. Instead, the FTC requires privacy “assessments” that are based almost exclusively on executives’ attestations of their own compliance. I saw several audit reports where proof of compliance was a simple one-sentence letter from the general counsel appended to the back of the audit as an exhibit, stating, “We are in compliance with Section 5.2 of the FTC Consent Decree.”

How do privacy professionals let this happen? Sometimes they can’t help it. One tactic I saw companies use to hobble the privacy office is to limit employees’ access to the design process. The pattern is common: The company assigns a privacy expert to a product or engineering team. That lets them tell the world they are integrating privacy throughout their business units. But the company’s reporting practices block the privacy expert from doing much good. They require an engineer, who likely never had any real training in privacy, to be the one to spot a privacy issue. The engineer must report it to their manager (likely also a coder), who reports it to their manager (again, likely another coder), who reports it to the product lead, who then brings it to the privacy expert at a specially scheduled meeting. In that system, most, and sometimes all, privacy issues are either missed or addressed ad hoc by a first- or second-year programmer.

Here’s another example. Tech companies boast about hiring many new privacy engineers, technologists whose expertise is in integrating privacy into design. But many privacy engineers are treated like ex post auditors, reviewing other people’s code after all the work is done. At that point, there is little a privacy engineer can do other than make small tweaks to millions of lines of code. No one wants to be the person who stops a project at the eleventh hour, wasting all that investment.

But where some privacy professionals are shut out of the process, others don’t even realize their complicity. At one social media company, a team of privacy professionals and lawyers showed me their work product over the previous six months. “We are busier than ever,” the chief privacy officer said. And they were right: They had copious files of reports, policies, assessments, trainings, documents, vendor agreements, and so on. They were doing a lot of work, all of which was assigned to them by their bosses and all of it about transparency, notice, and choice.

“Has all of this work translated into stronger privacy protections in the products you create?” I asked. The only answers they offered were about how their privacy policies were more readable, how their customers had more opportunities to accept or decline cookies, how they had improved transparency. Transparency isn’t a bad thing, of course, but it’s not an end in itself. No one’s privacy is going to be materially improved by tech companies making privacy policies more readable or by giving users a thousand-page document about all the data they have on us.

When I asked about more material impact on the designs of new technologies, such as anti-tracking defaults or designed-in limitations on data collection, storage, and processing, I got more incredulous looks than anything else. “But that’s not what we do,” they said. “That’s not what privacy law is.”

I heard that a lot in four years of studying Big Tech. Some privacy professionals, lawyers, and engineers are so used to focusing primarily on notice, choice, and transparency that they can’t even imagine alternatives. Their habits have become the norm, the common-sense approach, and suggesting that maybe the company shouldn’t collect information on race, or shouldn’t collect more information than it needs, or shouldn’t use dark patterns to manipulate disclosure is so far out of bounds that the only possible response is an incredulous “Huh?” They don’t realize that they have been handed a deliberately narrow agenda, focused only on the very thing—notice and choice—that gives companies license to do whatever they want with our data.

This is what’s behind all those Big Tech calls for privacy regulation. Mark Zuckerberg can honestly say he wants Congress to regulate the information industry because he knows that whatever regulation Congress passes isn’t likely to mean much. He knows he can turn proceduralist rules into shams, and he knows most of his staff is either too constrained to do anything about it or unaware of what’s really going on.

What’s even worse is that none of this is illegal. Not only is there no law against check-box privacy, but after decades of neoliberal and anti-regulatory hegemony, performative legal compliance is what passes for public governance. We need an entirely new way of thinking about and writing privacy laws, because Big Tech has gotten too good at manipulating process-based laws for its own benefit. Instead, we should be thinking about: interrogating and regulating the algorithms on which the information economy is based; strict limits on data collection; criminal and civil liability for executives who lie to us about our privacy; strong labor protections for employees, so management can’t fire someone who does research that challenges the bottom line or speaks up against predatory and data-extractive behavior; civil rights remedies for data-driven discrimination; and, ultimately, structural changes in the relationship between public institutions and the information industry. Only with these (and many other) non-reformist changes can we begin to shift the balance of power away from predatory tech companies and their pliant compliance machines.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
