The Industry

Can Facebook Fix Itself Before the Midterms? It’s Trying.

Facebook’s chief security officer, Alex Stamos—shown here testifying before the Senate in 2014 in his previous role at Yahoo—is spearheading the company’s effort to crack down on fake news, propaganda, and Russian election interference. Win McNamee/Getty Images

Facebook has been under fire since America’s 2016 presidential election for the way that Russian agents, fake-news sites, and political campaigns used the company’s platform to spread propaganda and misinformation. On Thursday, the social network laid out its plan to prevent a repeat of those problems in November’s midterms as well as other elections around the world.

“[N]one of us can turn back the clock, but we are all responsible for making sure the same kind of attack [on] our democracy does not happen again,” said Facebook VP of Product Management Guy Rosen in a briefing with reporters, the transcript of which Facebook published as a blog post. (Slate was not present at the briefing.) “And we are taking our role in that effort very, very seriously.”


The four-pronged plan seems sober and well thought out, as far as it goes. It includes mechanisms to fight fake accounts, false news stories, shady and misleading ads, and foreign agents interfering with elections. Close followers of Facebook will not be surprised, however, to learn that the company’s proposals stop well short of the sort of deep, structural changes that many critics believe it needs. And they come at a time when studies suggest that clickbait and misinformation are still flourishing on the platform.

They also come on the same day that an internal 2016 memo from a Facebook executive leaked to BuzzFeed, showing that the company pursued its quest for growth even as it became aware of some of the platform’s societal downsides.


Some of the tools Facebook discussed on Thursday are new; others have been previously announced. The briefing served to tie them all together in a single plan that’s starting to look more comprehensive and proactive than past efforts that have felt more piecemeal and reactive. It’s the kind of plan you’d expect from a company that’s beginning to understand—belatedly—just how influential it has become in the spread of information worldwide.


Facebook’s core problem-solving strategy is to use human reviewers and machine learning algorithms in tandem, with each helping the other.

For instance, the company has been partnering with third-party fact-checkers to review stories flagged by users as possibly false—an approach that was initially derided as insufficient. Nonetheless, the company has been busily expanding its fact-checking partnerships to additional countries and to individual U.S. states, the latter via a partnership with the Associated Press. That should broaden the reach of the social network’s efforts considerably. Meanwhile, Facebook disclosed Thursday that a tweak to its algorithm designed to limit false stories’ virality has indeed reduced their spread by more than 80 percent, on average.


And Facebook is now using the fact-checkers’ work to train machine learning models to flag other possibly false stories for review. If implemented well, such a hybrid human-machine system should improve with time, potentially to the point where it actually makes a dent in the flow of fake news.
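To make the idea concrete, here is a minimal sketch of how fact-checkers’ verdicts could be used to train a simple text classifier that surfaces further suspect stories for human review. This is an invented illustration of the general human-in-the-loop approach, not Facebook’s actual pipeline: the example headlines, the model choice, and the review threshold are all assumptions made for the sketch.

```python
# Illustrative sketch only: a generic human-in-the-loop text classifier.
# All data, model choices, and thresholds are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed labels supplied by human fact-checkers (1 = rated false, 0 = rated true).
checked_stories = [
    "Senator secretly replaced by body double, insiders claim",
    "City council approves new budget for road repairs",
    "Miracle fruit cures all known diseases overnight",
    "Local library extends weekend opening hours",
]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(checked_stories), labels)

# New, unreviewed stories: the model flags the most suspicious ones for
# fact-checkers, whose verdicts would then be added back to the training set.
incoming = [
    "Shocking cure doctors don't want you to know about",
    "School board meeting rescheduled to Thursday",
]
scores = model.predict_proba(vectorizer.transform(incoming))[:, 1]
for story, score in zip(incoming, scores):
    if score > 0.5:  # arbitrary review threshold for the sketch
        print(f"flag for fact-check ({score:.2f}): {story}")
```

The feedback loop is the point: each batch of human verdicts enlarges the labeled set, which in turn should make the next round of automated flagging more accurate.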

Likewise, Facebook said Thursday that it’s getting better and faster at identifying and blocking fake accounts and political pages of foreign origin. “We’re now at the point that we block millions of fake accounts each day at the point of creation before they can do any harm,” said Samidh Chakrabarti, a Facebook product manager. “We’ve been able to do this thanks to advances in machine learning, which have allowed us to find suspicious behaviors—without assessing the content itself.”
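The phrase “without assessing the content itself” is worth unpacking: the screening keys off behavioral signals around account creation rather than what an account posts. Below is a rough, hypothetical sketch of what scoring a sign-up on such signals might look like; the specific features, weights, and threshold are invented for illustration and do not reflect Facebook’s actual criteria.

```python
# Illustrative sketch only: scoring new sign-ups on behavioral signals rather
# than content. The features, weights, and threshold are invented.
from dataclasses import dataclass

@dataclass
class Signup:
    accounts_from_same_ip_last_hour: int
    seconds_to_complete_registration: float
    friend_requests_in_first_minute: int

def looks_automated(s: Signup) -> bool:
    """Crude rule-based stand-in for a learned model over behavioral features."""
    score = 0.0
    if s.accounts_from_same_ip_last_hour > 20:
        score += 0.5   # burst of registrations from one address
    if s.seconds_to_complete_registration < 2:
        score += 0.3   # form completed faster than a human plausibly could
    if s.friend_requests_in_first_minute > 50:
        score += 0.4   # immediate mass friending
    return score >= 0.6  # block at creation if the score clears the bar

print(looks_automated(Signup(100, 0.8, 200)))  # True: blocked at creation
print(looks_automated(Signup(1, 45.0, 0)))     # False: allowed through
```

In practice, that scoring would be a trained model over far richer signals, but the design choice is the same one Chakrabarti describes: catch suspicious behavior at the moment of account creation, before any content exists to moderate.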


That’s crucial, because while Facebook is doubling its number of human reviewers and moderators from 10,000 to 20,000, that’s still not nearly enough to secure a platform of more than 2.2 billion users around the world. Still, the humans are essential, because even state-of-the-art A.I. lacks the nuance to reliably make the right calls on its own.

The company is also starting to take the initiative in identifying and investigating possible cases of election interference, rather than just responding to user reports, Chakrabarti said. He elaborated briefly on an example that CEO Mark Zuckerberg mentioned last week:

We first piloted this tool last year around the time of the Alabama special Senate race. By looking specifically for foreign interference, we were able to identify a previously unknown set [of] Macedonian political spammers that appeared to be financially motivated. We then quickly blocked them from our platform.


Finally, Facebook began testing advertising-transparency tools in Canada that it will gradually bring to the United States in the run-up to the midterms. The company said it will soon begin the process of verifying advertisers’ identities and countries of origin, and marking political ads as such in users’ feeds.


I’ve noted before how Facebook is gradually beginning to shift from “move fast and break things” to “move slow and fix things.” But the midterms are rapidly approaching; the company can only do so much in preparation for them. So the approach Facebook laid out on Thursday could best be described as an attempt to have it both ways: to move fast and fix things.

To truly fix the platform’s propensity to spread misinformation and sow division would require a more thorough overhaul of the news feed algorithm—a step that Facebook did not mention Thursday. It now appears that Facebook is at least undertaking the fight against the likes of Russian agents, trolls, sensationalist publishers, and unscrupulous political campaigns in earnest. But as long as the news feed revolves around hyperpersonalization and is optimized for user engagement, the company will be fighting an uphill battle against the dynamics of its own platform.
