The Industry

Facebook’s Political Ads Are Still a Black Box

So far, the company’s effort to make its election-related ads more transparent—and keep Russian propaganda out—isn’t nearly good enough.


One of the many, many problems on Facebook during the 2016 presidential campaign was that the company had left a door open for Russian internet trolls, and the trolls hopped right through it. The Kremlin-backed Internet Research Agency spent election season pumping propaganda—much of it pro-Trump, all of it designed to stoke political, racial, or religious tensions—into the news feeds of U.S. voters with the help of Facebook’s ad-targeting system. And when people saw these posts (thousands of which were really ads because someone had paid for more users to see them), they had no idea where this content had actually come from.

This year, Facebook has been working to clean up the advertising system that facilitated all that misbehavior by requiring more disclosure from advertisers so that you know who’s paying for you to see their message. The company told me it is adding more features to this system, like information about where a page is managed. That’s good, because so far, while the information that Facebook’s transparency portal has furnished is interesting, it’s inadequate to the task of making Facebook as transparent as it needs to be so that the 2018 elections aren’t as cacophonous and shady on social media as they were in 2016.

To help users get a better picture of how much certain groups are spending on Facebook this election cycle, and where, the company in May unveiled a public archive of political ads that run on the site. Last week, Facebook let the sunshine in for all of the ads it runs. That means that if a Facebook page pays the social network to promote a post for a brand, issue, or political figure, users can see who paid for the ad, a range of the amount that was paid, other ads the same page purchased, and some of the targeting info used in promoting the post, like the ages, gender, and location of the users who saw it. Political ads now also include a small line of text on top saying who paid for them, not unlike the disclaimers on televised political ads that say who sponsored the ad and whether the candidate approves the message.

The thinking behind this effort is that with more transparency about Facebook ads, watchdog groups, journalists, regulators, and the American public will be able “to see these ads and then know more about who’s actually behind them,” said Katie Harbath, Facebook’s global politics and government outreach director, on a call with reporters in May. Any additional transparency is surely a good thing: While Facebook can have a flattening effect on sources of content—in the way that everything you publish on Facebook, from kids’ photos to New York Times articles to Pepe memes, looks like a Facebook post—more information that’s easily clickable can never hurt. Ad disclosures can help users make inferences about the motivations behind the content they see, and they can help researchers understand how disinformation efforts like the one that boiled over in the 2016 U.S. election spread, not to mention how aboveboard political spending works. But more transparency doesn’t always mean more clarity, and while Facebook now gives us the names of groups that buy political ads, it’s not always obvious, by any stretch, what that information means. Take this post:

The ad came up when I searched for “Red Hen” recently in Facebook’s database of political ads, wondering how political groups were using the Virginia restaurant that asked White House press secretary Sarah Huckabee Sanders to leave one evening in June. The account behind that ad, Donald Trump Is My President, has nearly 1.8 million followers, making it one of the largest unofficial pro-Trump pages on Facebook. According to Facebook’s transparency center, the ad was paid for by a group called Pigeon Media USA. The same group has purchased all the ads from the Donald Trump Is My President Facebook page, totaling 86 ads since the first week in May, when Facebook started adding political ads to its database. All of the ads from Pigeon Media take users to news stories on the same website, Ilovemyfreedom.org. Pigeon Media USA appears to have no presence online. Nor is there any mention of Pigeon Media on Ilovemyfreedom.org, a website packed with far-right retellings of the day’s news.

It took me a day, but I found the people behind Pigeon Media, its affiliated Facebook pages, and Ilovemyfreedom.org through a combination of phone calls to numbers listed on previous versions of the site's pages retrieved from the Internet Archive, a business-records search, and a Twitter search for one of the authors of articles on Ilovemyfreedom.org, which lists only first names. It turns out they're a trio of Trump supporters who found one another during the then-candidate's campaign because they were each operating separate popular Facebook pages. They're not so mysterious. What is a mystery is what Facebook users are supposed to do when they look up the source of an ad and its purchaser is a group they've never heard of, one that doesn't have a website and could be based anywhere. That's especially true if they don't have hours to spend tracking the ad buyer down.

A search for ads placed by Pigeon Media USA using Facebook’s advertising-transparency center.
Screenshot from Facebook political-ad archive.

Facebook’s political-ad-transparency center isn’t fancy. When you type in a keyword or the name of a page, it retrieves a list of all the ads that match. For each ad, the system tells you the dates it was shown to people, the ratio of men to women who saw it, a breakdown of the audience by state, a broad range of the total number of people who saw it, and a broad range of how much was spent (for example, less than $100, or between $100 and $499). The archive lets you see all the ads a particular page bought, but it also may show you ads that have nothing to do with what you searched. If you type in Beyoncé, you’ll get results including dozens of ads that make no mention of Beyoncé and a sprinkling that do. (It’s possible that the people who bought these ads targeted users who are interested in Beyoncé, but the system doesn’t tell us, and Facebook would not clarify.) Once you start using the archive, its imperfections become obvious.

This opaqueness doesn’t just afflict ads targeting users on the political right. Another set of ads I found, from the page America Rise Up, lists “three American friends who care” in its “paid for by” field in the archive. That page has nearly 2,400 followers and an About page riddled with grammatical errors, with no affiliated organization, email, events, or even a link to a website. Still, it’s boosted anti-Trump posts and received thousands of likes and shares. I wrote to the page to ask more about its aims, since it’s paying to boost posts, and in response got an email with a Word document attached. That document detailed in clunky English why one of the women who claims to be behind the page was inspired to start it. Defending Donald, another Facebook page with almost no information listed, has been spending money to boost its political posts, too. It has more than 8,500 followers and links to a website, Defendingdonald.com, which has no information about who runs it either. It’s great that Facebook can give us the names behind the ads these pages run—and frustrating that that’s about all the transparency center gives.

As part of its efforts, Facebook has implemented a new vetting protocol requiring users who place political ads to verify their identity and location, but it doesn’t share that information with users. To get verified, Facebook asks those placing political ads in the U.S. to provide a U.S. driver’s license or passport, a U.S. residential mailing address, and the last four digits of their Social Security number. These steps are good—but definitely not foolproof if the goal is to keep out foreign meddling in Facebook political ads. It also doesn’t address the quality or intentions of the ads these pages purchase.

If the goal is to make Facebook ads more transparent so users can trust that foreign actors aren’t trying to fool them this time around, it shouldn’t take so much legwork to actually follow the money. It’s much better to have some information than no information, but it’s hard to imagine how simply knowing the name of the group that bought the political ad and a little bit about who saw it is going to help thwart foreign interference come November.

The requirements for “political” ads on Facebook have caused some consternation, with news organizations fretting about having to comply with the verification requirements in order to promote their content on the social network. That the company is using a broad definition of political—beyond what the Federal Election Commission requires for broadcast and print ads—is on the whole a good thing, according to Young Mie Kim, a professor of journalism and political science at the University of Wisconsin–Madison who studies political advertising online. It’s important, she said, that Facebook’s stricter requirements apply to advocacy around issues and not just ads for candidates. But the company could go further. “We know there are ways to work around this,” Kim told me. “On the face of it an ad could look like nonpolitical content, but then when you click through it the ad could take you to political content, so nonpolitical content could be used to target particular populations and used to spread misinformation.” It also probably isn’t good enough that Facebook is doing this voluntarily. The FEC currently has no clear regulations for how social media companies are supposed to handle political ads, leaving open the possibility that self-regulation will be inconsistent and unaccountable.

A Facebook representative told me that the transparency center is a work in progress and that the company is looking into including information in the archive about the location where pages are managed, which could help to catch non-U.S. election-interference operations. Facebook will also continue to use machine learning to take down fake accounts in bulk. It’s also true that even the current transparency requirements do make it harder for foreign actors to create fake pages, since even if they do succeed in buying an ad, a journalist or other watchdog could in theory find them in the ads database. (Well, maybe: If many actors manage to fool the system, flagging them one by one will be like trying to stop a waterfall with a bucket.)

However strict Facebook’s ad-buying system becomes, it’s a safe bet that Russia—and perhaps other foreign actors and domestic groups that would rather not be public—will once again try to use Facebook and other social networks to sow confusion, division, and distrust in the runup to the 2018 elections. The question that remains is how social media companies like Facebook will stay ahead of their game.