The Industry

The Cambridge Analytica Scandal Is Over and Nothing Has Changed

The use case that began this mess—voter targeting—is nearly as ripe for manipulation as it always was.

Former Cambridge Analytica CEO Alexander Nix.
Photo illustration by Slate. Photo by REUTERS/Henry Nicholls.

Cambridge Analytica declared bankruptcy on Friday, announcing that it would shutter most of its operations because “the siege of media coverage has driven away virtually all of the Company’s customers and suppliers.” The announcement was a major, though unsurprising, development in the saga surrounding Facebook and the voter-targeting firm, a relationship that has led to congressional hearings, a federal investigation, and a public debate about the data collection that undergirds the business model of so much of today’s internet—that your email and social networks and music and maps are all free, as long as you give up your personal data in return.

It’s been nearly two months since the story that Cambridge Analytica had inappropriately accessed the data of as many as 87 million Facebook users began to saturate headlines, so let’s review what’s changed: Though Cambridge Analytica is in the process of shutting down, its investors and executives aren’t getting out of the elections business. After all, SCL Group, Cambridge Analytica’s parent company, isn’t dissolving, and it’s not apparent that any of the 18 other companies and affiliates that operate under SCL Group are either. Facebook, for its part, has taken solid and important steps to tighten how data is allowed to seep beyond its platform’s walls, and Congress seems at least interested in reining in how companies like Facebook and Google collect, store, and use the unfathomable amount of user data they siphon up in order to sell hypertargeted ads. But there’s no indication any of these changes will touch the end game that got Facebook into this mess in the first place: data-driven political advertising.

On Sunday, the Guardian published a story based on an interview with Christopher Wylie, a former Cambridge Analytica employee turned whistleblower, about how exactly the firm took data from Facebook users and funneled it into its work for the Trump campaign in 2016. By combining a user’s Facebook likes with public voter data and other demographic information, the company’s voter-profiling model might determine that a person is neurotic, according to one example offered by Wylie. If the company was trying to communicate a message about jobs and the economy, the ad sent to that person would emphasize security. A conscientious person, according to Wylie, might instead receive an ad “about the opportunity to succeed and the responsibility that a job gives you. If it’s an open person, you talk about the opportunity to grow as a person,” he said. According to Wylie, although the company promised Facebook in 2015 that it had deleted the data it had bought from the maker of a personality quiz, it still held onto models derived from the original tranche of user information. (Cambridge Analytica has said that it did not use any of the data from the Facebook quiz in the 2016 election, and Facebook is currently auditing the company to see whether it truly deleted the data.)
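
To make the mechanism concrete: what Wylie describes boils down to scoring each voter on personality traits and then serving whichever framing of the same underlying message best matches the dominant trait. The following is a minimal, purely illustrative Python sketch of that selection step, not Cambridge Analytica’s actual code. The trait names follow the “Big Five” (OCEAN) model Wylie alludes to, and the scores and ad copy are invented.

```python
# Purely illustrative sketch, not Cambridge Analytica's actual system.
# Trait names follow the "Big Five" (OCEAN) model; scores and ad copy
# are invented for the example.

# Hypothetical output of a profiling model: trait scores in [0, 1].
voter_profile = {
    "openness": 0.31,
    "conscientiousness": 0.82,
    "extraversion": 0.44,
    "agreeableness": 0.57,
    "neuroticism": 0.66,
}

# One framing of the same "jobs and the economy" message per trait,
# paraphrasing the examples Wylie gave in the Guardian interview.
framings = {
    "neuroticism": "A steady job means security for you and your family.",
    "conscientiousness": "A job is a chance to succeed, and a responsibility.",
    "openness": "A job is an opportunity to grow as a person.",
    "extraversion": "Great jobs connect you with your community.",
    "agreeableness": "Good jobs let you support the people around you.",
}

def pick_framing(profile: dict[str, float]) -> str:
    """Return the ad framing keyed to the voter's highest-scoring trait."""
    dominant_trait = max(profile, key=profile.get)
    return framings[dominant_trait]

print(pick_framing(voter_profile))  # -> the conscientiousness framing
```

The hard part, turning Facebook likes and voter files into trait scores, is assumed away here; the sketch shows only the final step, in which one message is re-framed per psychological profile.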

It would probably not be possible to amass that same data set now: In 2014, Facebook stopped letting app developers scoop up data on people’s friends without those friends’ permission. And last month, the company made it a bit harder for developers to obtain as much information as they used to, setting up roadblocks and approval processes that limit third parties trying to get their hands on Facebook data. But the company hasn’t announced that it will stop letting political advertisers target people with ads based on their race or their level of education or their sexual preference or their religious identity or any other category the site has placed its billions of users into; Google probably isn’t stopping either. Meanwhile, the data in the Cambridge Analytica stash, as well as data held by other third-party app developers, could very well still be out there. Facebook also still allows app developers to access users’ likes, which a group like Cambridge Analytica could use to build voter profiles and send manipulative political messaging.

Already, the data-driven ad-targeting model that Google and Facebook exemplify is far more worrying than conventional advertising, in which buyers purchase ads to reach a demographic but cannot target individuals. When politics are added to the mix, sensitive indicators like income, prejudices, and mental health can come into play. It’s not clear how refined a company like Cambridge Analytica truly is at this kind of profiling. But its ambitions to psychographically target voters are scary—and while the effectiveness of this kind of advertising is up for debate, it’s certainly still allowed.

And the people behind Cambridge Analytica may be the ones who still do it. While the firm itself might soon cease to exist, its former chief data officer and temporary CEO, Alexander Tayler, started a new company called Emerdata last August with SCL Chairman Julian Wheatland and former Cambridge Analytica CEO Alexander Nix. Emerdata’s board includes Rebekah and Jennifer Mercer, daughters of the secretive billionaire Robert Mercer, who was Cambridge Analytica’s primary financier as well as the biggest donor to Trump’s campaign. In a Channel 4 News investigation, SCL Group founder Nigel Oakes said that Emerdata was established in order to acquire Cambridge Analytica and SCL. And Nix, who was suspended from Cambridge Analytica after a Channel 4 investigation revealed he had offered unsavory methods to swing an election for a client, has a new company, Firecrest Technologies, founded in March, which shares an address with Emerdata and SCL Group. If the data that Cambridge Analytica claimed it deleted still exists in derivative form, as Wylie alleges, then some of the people who wrongfully obtained it in the first place may still have access.

So what does this mean for the future of our elections? Will the midterms be noticeably less awful than the 2016 cycle? Sure, there may be fewer Russian operatives posing as American activists, thanks to Facebook and Twitter bans on accounts associated with the Kremlin-backed Internet Research Agency. Or the Russian operatives may simply have gotten smarter, or be working from another address—it’s hard to predict. And political ads may now include information about whatever random super PAC paid for them, as Facebook has recently promised. But none of that changes the fact that Facebook and Google and other ad companies will still let campaigns and other political groups target voters based on their race and class and even mental health, if they can find an indicator for it, to try to manipulate how we vote. And while there are some legislative proposals that would force Facebook and other big data collectors to reveal what they actually collect on people, there is no proposal now that would limit hypertargeted data profiling in an election cycle, because ultimately no one in power seems to consider it a problem. Meanwhile, for the most part, the advocacy groups that have long worked on these issues are publicly calling on the companies to self-regulate and become more transparent, instead of, for example, launching clear public campaigns to demand that lawmakers make these companies collect less data or stop allowing ad buyers to leverage that data to try to influence elections.

Data targeting can be a form of racial profiling and economic profiling and religious profiling. It can be used to try to suppress voting, or to send ads for subprime loans to certain demographics and not others. Of course, it can also be used to get important information to the people who most need it, like health advice, resources for pregnant women, and information about local polling stations. But that only means any regulation limiting how data targeting is used will be difficult to craft, not impossible. We still have no hint that meaningful regulations—ones that truly curb how companies are allowed to hand over user data to third parties and sell ads targeted at our precise interests, skin color, economic class, level of education, and even bigotries—are coming. And incidents such as Facebook’s repeated failed promises to curb housing discrimination leave plenty of room to doubt that our elections will be any less vulnerable to divisive manipulation—either by professional firms or shadowy propagandists—anytime soon.