Future Tense

Are Algorithms the New Campaign Donation?

It’s difficult to assess their market value, and they can move between organizations easily. That’s a problem.

Alexander Nix, Steve Bannon, Robert Mercer, and Christopher Wylie. Photo illustration by Slate. Photos by Henry Nicholls/Reuters, Oliver Contreras/For The Washington Post via Getty Images, Peter Nicholls/Reuters, and Pascal Rossignol/Reuters.

The past few weeks have generated tremendous news about the intersection of algorithmic technologies and elections. When the New York Times and the Guardian revealed that the data-analytics firm Cambridge Analytica had stolen Facebook profiles to build the voter-profiling tools used in the 2016 Trump presidential and pro-Brexit campaigns, it was clearly bad news for some powerful people and tech companies. The whistleblower behind it all, Christopher Wylie, testified to the U.K. Parliament on March 27 and made dozens of explosive allegations, including the trading of campaign services for cushy contracts and mineral rights in the developing world, deceitful official testimony from executives, and possibly even murder. As far as parliamentary testimony goes, it was a wild three hours, even if not all of Wylie’s claims pan out.

However, in the long run, the most consequential claim Wylie made is that Robert Mercer, the GOP megadonor and Cambridge Analytica’s chief financier, may have funded the development of the behavioral models at the heart of this story at a loss. You might ask: So what? A conservative billionaire sank a lot of his own money into building a wonkish tool to help Republican candidates find voters more effectively. That doesn’t seem particularly odd.

Yet this development heralds a new and dangerous age of algorithmic electioneering, one that warrants significant legal scrutiny. Such behavioral models move far more freely between political entities than voter data itself does. That mobility lets organizations that are legally required to remain separate coordinate on core campaign decisions, particularly if the models are licensed at prices far below their hard-to-determine fair market value.

As reported by Mother Jones recently, Cambridge Analytica was founded in the wake of the voting-day collapse of Mitt Romney’s data operations. On the margins of an election post-mortem meeting in New York City in early 2013, GOP operative Mark Block (Herman Cain’s notorious cigarette-smoking campaign manager) introduced major GOP donors Robert and Rebekah Mercer to Alexander Nix, the CEO of the British election-intelligence firm Strategic Communication Laboratories, or SCL. Also at the meeting aboard the Mercers’ yacht was Steve Bannon, then chairman of Breitbart News and future Trump adviser. Mercer made his fortune by using financial algorithms to find countless tiny marginal advantages in the stock market, launching the age of high-speed trading. He is a true believer in the supposed power of data analytics to make enormous events happen one small change at a time.

Nix offered Bannon and the Mercers behavioral models that would combine intimate psychological profiles and the vast troves of demographic and consumer data already bought by political campaigns from commercial data brokers. These models would enable political campaigns to target cultural content and political advertising to people who were susceptible to tailored messages and who were likely to share that content organically within their social networks, magnifying its reach. Nix’s sales pitch must have worked, because the shell company Cambridge Analytica was formed in December 2013 with a reported $15 million investment from the Mercers and with Bannon as vice president and board member, adding the firm to the long list of Mercer-Bannon ventures. The company existed for one reason: to hold intellectual property licensed from SCL—namely, those behavioral models and the dashboards to interpret and use them—for use in U.S. elections. This corporate structure meant that SCL would theoretically not run afoul of laws preventing foreign nationals from participating in leadership roles in U.S. elections.

Shortly thereafter, under contract with SCL, the University of Cambridge psychologist Aleksandr Kogan started illicitly sending the company extensive Facebook-profile data. (Disclosure: I have worked for Facebook as a consultant on issues not directly related to this topic.) Kogan persuaded and/or paid 270,000 Facebook users, some of them Mechanical Turk workers, to take a scientifically validated personality quiz that required them to grant Kogan permission to collect the entire history of their “likes” on the platform.

Those likes provided a cheap, easily accessible behavioral record of billions of people all in one place, conveniently formatted for machine analysis. That’s a gold mine for anyone who wants to understand or manipulate people—that is, if they can determine how to correlate those inexpensive behavioral records with more meaningful traits that are typically far more expensive to observe. So Kogan built a mathematical model that leveraged correlations between (cheap) “likes” and (expensive) major personality traits. And because of the permissive nature of Facebook’s API at that time, SCL was also able to collect the behavioral history of all those users’ friends, turning 270,000 quiz takers into 87 million records, by Facebook’s latest estimate.
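The core statistical trick here is simple enough to sketch in a few lines. The following is a toy illustration on synthetic data, not Kogan’s actual model (whose features and training procedure have never been published): it fits a ridge regression that predicts an “expensive” survey-measured trait from a “cheap” binary matrix of likes, then scores a user who never took the quiz at all.

```python
# Toy sketch: predict a survey-measured personality trait from binary "like" data.
# All data below is synthetic and all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_users, n_pages = 1000, 200  # hypothetical: quiz takers x distinct pages
likes = rng.integers(0, 2, size=(n_users, n_pages)).astype(float)  # 1 = liked

# Pretend a handful of pages genuinely correlate with one trait.
true_weights = np.zeros(n_pages)
true_weights[:10] = rng.normal(0, 1, 10)
trait = likes @ true_weights + rng.normal(0, 0.5, n_users)  # "expensive" quiz score

# Ridge regression: w = (X'X + lam*I)^-1 X'y, mapping cheap likes to the trait.
lam = 1.0
A = likes.T @ likes + lam * np.eye(n_pages)
w = np.linalg.solve(A, likes.T @ trait)

# The fitted model can now score anyone whose likes are known -- including
# friends who never took the quiz, which is what made the friend data valuable.
new_user = rng.integers(0, 2, n_pages).astype(float)
predicted = float(new_user @ w)
print(round(predicted, 2))
```

Once the weights are learned from the 270,000 quiz takers, applying them to the other 87 million profiles is just a matrix multiplication, which is why the quiz sample could be so small relative to the population it scored.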

That is a lot of potentially useful information about voters, particularly when combined with the largely unregulated trove of personal data all large campaigns now hold. Because Facebook’s “custom audience” advertising platform allows advertisers to target small groups of people in specific locations based on behavioral markers (their “likes”), SCL and its subsidiaries now held what was potentially a back door to the mental character of hundreds of millions of American voters. In 2015, Facebook demanded that Cambridge Analytica, SCL, Wylie, and Kogan delete the data sets for violation of the Facebook terms of service. (They apparently failed to comply, and Facebook barred them from the site this year.) But by then, the models were already built and the purloined data was no longer necessary as long as they had other large data sets.

Soon thereafter, Cambridge Analytica started working with its first Republican clients. Those clients included the ForAmerica PAC, run by right-wing media critic Brent Bozell III and funded primarily by the Mercers; ForAmerica soon gained notoriety for effectively using social media to persuade its followers to harass the Obama White House. Cambridge Analytica also worked with the Mercer-funded John Bolton Super PAC to create psychologically targeted ads in a U.S. Senate race. Block’s polling firm was also involved in the process of bulking up the models, racking up consumer complaints about aggressive phone-polling practices. In addition to these campaigns, Cambridge Analytica worked in a half-dozen other state races, for both official candidate campaigns and PACs. According to Wylie, these efforts were focused on testing Bannon’s tactics for his culture war. What makes this rogues’ gallery of GOP-connected PACs important is that Mercer funding was often on both sides of the contract, and the algorithm “learns” from each campaign, effectively transferring expertise and economic value across organizations.

Cambridge Analytica’s first major client was Ted Cruz’s 2016 Republican presidential primary campaign. Cruz, the Mercers’ first preferred candidate in the race, hired it to run the majority of his digital operations, which appeared to be a mistake. Numerous reports highlight how Cambridge Analytica bungled basic campaign operations, such as not launching Cruz’s campaign website on time. When Cruz dropped out and the Mercers backed Trump—including allegedly requiring him to hire Bannon—Cambridge Analytica and its SCL contractors were integrated into the Trump digital team based in San Antonio, Texas. Reporting indicates that they were used primarily for behavioral modeling to fine-tune targeted advertising in swing states and to suggest emotional themes and keywords for Trump to reinforce in stump speeches. By its own account, the Trump digital campaign tested up to 175,000 unique advertisements in a single day (40,000–50,000 on most days) based on these integrated profiles, bombarding the Facebook advertising platform with experiments. These algorithm-driven tests learned how best to activate negative personality traits in the campaign’s own voters and to suppress turnout among its opponent’s core voters. Despite the White House’s denials that the campaign used any stolen Facebook data, Nix told the BBC that “legacy data models” were transferred from the Cruz campaign to the Trump campaign.
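The “learning” in this kind of mass ad experimentation can be illustrated with a standard technique. The sketch below is entirely hypothetical—the Trump campaign’s actual system has never been published—but it shows how an epsilon-greedy bandit loop converges on the best-performing ad variants from nothing more than click feedback, which is all such a system needs to transfer what it has learned to the next campaign.

```python
# Hypothetical sketch of iterative ad testing via an epsilon-greedy bandit.
# The ad names and click-through rates below are invented for illustration.
import random

random.seed(42)

true_ctr = {"ad_A": 0.010, "ad_B": 0.025, "ad_C": 0.018}  # hidden "true" rates
shows = {ad: 0 for ad in true_ctr}
clicks = {ad: 0 for ad in true_ctr}

def choose_ad(epsilon=0.1):
    # Explore a random variant 10% of the time; otherwise exploit the variant
    # with the best smoothed click-through estimate. Laplace smoothing
    # ((clicks+1)/(shows+2)) avoids dividing by zero for unseen variants.
    if random.random() < epsilon:
        return random.choice(list(true_ctr))
    return max(shows, key=lambda ad: (clicks[ad] + 1) / (shows[ad] + 2))

for _ in range(50_000):  # impressions on the scale the article describes per day
    ad = choose_ad()
    shows[ad] += 1
    if random.random() < true_ctr[ad]:  # simulate whether this impression clicks
        clicks[ad] += 1

estimates = {ad: round(clicks[ad] / shows[ad], 4) for ad in shows}
print(estimates)
```

Nothing in the loop cares where the feedback comes from, which is the article’s larger point: the same model can keep accumulating value whether a PAC or an official campaign is paying for the impressions.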

Leaving aside concerns about the content and efficacy of the psychologically targeted ads, the transfer of behavioral models between political organizations in this manner marks a new age of campaign financing. Algorithmic models integrate new data and “learn” from quantitative feedback. They do not care whether they are learning from a PAC’s data or an electoral campaign’s data, despite U.S. election law requiring that such entities not coordinate on decisions, staffing, or messaging. Indeed, given skepticism about whether psychographic profiling is nearly as effective as Cambridge Analytica claimed, the safest assumption is that personality scores were neither magic nor snake oil. Instead, it seems they were simply one more feature among many by which the profiled voters could be sorted and their responses to ads tested. In other words, the one thing we can be sure of is that psychographic profiling provided one more way to transfer knowledge and economic value between campaigns and organizations.

This raises a question: Are two organizations running the same behavioral-profiling algorithm, trained on the same prior data sets, truly making independent decisions? Furthermore, if SCL is using the same models across its shell-company subsidiaries in the U.S., the U.K., Canada, and any of the other 100 nations it has operated in, are these models a nexus for international campaign-finance fraud? When the Republican National Committee hired SCL’s Canadian subsidiary AggregateIQ, or AIQ, to build its core data platform, code-named Ripon, did it realize that it was using data stolen by foreign nationals from a U.S. corporation, Facebook? (AIQ is also at the heart of an inquiry in the U.K., where its fees are alleged to have facilitated campaign-finance fraud.) Now that RNC leaders realize this, will they destroy the models? Will Facebook refuse the RNC access to its platform until the models are destroyed?

Additionally, with Mercer-backed organizations on both sides of many of these transactions in the U.S., it is important that we know whether Cambridge Analytica was charging fair market rates for these technologies. If the rates were not fair, as Wylie alleged in his parliamentary testimony, then the license to use the models may amount to an enormous in-kind donation to the campaigns, one that exceeds legal limits. (Relevant portions of Wylie’s testimony can be found at the 11:15, 13:32, and 13:41 time stamps.) Similarly, if PACs are being used to collect data and train these models, is the transfer of the improved model between PACs and campaigns also an illegal in-kind donation? A well-run campaign would have meticulously documented the value of these donations, but I do not hold out hope that the Trump campaign did so. Likely the only way to discover the truth is if regulators or prosecutors with subpoena power decide that the facts warrant an investigation. Notably, now that Cambridge Analytica has become a toxic brand, the major players have reorganized under a new shell company named EmerData.

Behavioral models are a new form of economic value in political campaigning, so we can expect them to be, or soon become, a nexus of political corruption. At the moment, it appears that electioneering algorithms are a loss leader for the Mercers’ culture war, and our current electoral regulations are not up to the task of addressing that. Most ominously, the iterative nature of these machines means that what we have seen so far is always just a dry run.
