The Industry

Facebook Will Never Run Out of Moles to Whack

The latest disclosure of Russian election meddling reveals the limits of social media’s new dedication to fighting false news.

Photo illustration by Slate: the Facebook logo fused with a Russian flag.

It turns out last week was an awkward time for Facebook to show off its election-interference “war room.” On Wednesday, the company invited journalists to its Menlo Park, California, headquarters to tour the hub of its efforts to stem disinformation campaigns in the run-up to the midterm elections. Two days later, however, we got a glimpse of how well that war is going—and the answer is discouraging.

On Friday, the Department of Justice unsealed a criminal complaint charging Elena Alekseevna Khusyaynova, a 44-year-old woman from St. Petersburg, with serving as the chief accountant of “Project Lakhta,” a covert disinformation campaign targeting the U.S. midterms. Behind this effort was the Internet Research Agency, the Russian troll operation that puppeted an army of fake social media accounts to cleave open Americans’ political and cultural fault lines and help Donald Trump win the 2016 presidential election. The 39-page complaint reveals the ways the Russians’ digital-propaganda tactics have evolved even in the past two years, and it suggests it may be too much to expect that Facebook will ever truly curb such malicious uses of its service unless it fundamentally rethinks the architecture of the platform.

The complaint details how Russian operatives continued to instrumentalize U.S. social media networks long after Facebook, Twitter, and Google testified to Congress in October 2017 about their efforts to rid their platforms of inauthentic behavior from the Kremlin-connected Internet Research Agency. The investigation found that between January and June 2018, Project Lakhta’s operating budget was more than $10 million—roughly $1.7 million a month. Despite thousands of accounts being removed from Twitter and a cleanup and election-security effort that Facebook executives have called the biggest shake-up at the company “since our shift from desktop to mobile phones,” it’s clear that state-sponsored disinformation agents have not been deterred. Just as they did in 2016, the propagandists created fake users in order to stoke particular political tensions—or several at once. Take this March 22 missive from @johncopper16, a fake hard-line conservative-activist Twitter persona called “Marlboro Man” that tweeted to nearly 5,000 followers:

Just a friendly reminder to get involved in the 2018 Midterms.

They are motivated

They hate you

They hate your morals

They hate your 1A and 2A rights

They hate the Police

They hate the Military

They hate YOUR President

It’s not surprising that foreign experts in social media deception would be hard to catch. I could create a second Twitter profile right now under a fake name and post some divisive political content; as long as it didn’t explicitly aim to disenfranchise voters or harm someone, it wouldn’t be at risk of removal, even if I intended to use it to deceive. On Facebook, creating a new profile under a false name is a little harder to do, but it’s not rocket science. Because of the scale of these platforms, it would of course be impossible for Facebook to monitor activity at a granular level. But it has vowed to try to stop larger, coordinated efforts, like Project Lakhta. Facebook did assist the Department of Justice in its investigation, and the fact that the effort was eventually caught—though not necessarily stopped—is heartening. But the Internet Research Agency’s propaganda efforts started in 2014—meaning it got away with polluting online political discourse for years—and the DOJ complaint alleges that the accounts it identified were operational as late as this summer. No doubt the IRA is still up to no good on U.S. social media.

Teaching social media users to be better news consumers—as Facebook has tried to do—will only accomplish so much in stemming this problem, especially since the goal of these efforts isn’t necessarily to deceive via a single post or tweet but to create the illusion of a groundswell of sentiment. Once an inauthentic account has a platform, the content it posts looks just like content from a real user or a reputable news outlet. A Facebook account belonging to “Bertha Malone”—actually a Russian troll, according to the Department of Justice’s charging documents—shared Islamophobic and anti-immigrant pro-Trump memes within the same blue frame as any other account, eliciting the same emoji reactions above the same invitations to comment. On Facebook, there’s a flattening effect in which all content has equal aesthetic standing. The same is true on Twitter and on YouTube. If there’s a surge of this stuff, Facebook can’t prevent it from blurring together as users scan through their feeds.

Yes, the social networks have all made efforts to downgrade or flag articles that fact-checkers have rated false and to de-emphasize hateful content posted by users. Still, if enough people in your community are sharing and commenting on a post, chances are you’re going to see it. And when a piece of misinformation is shared directly (say, via private message or to a private group on the Facebook-owned WhatsApp), there’s really not much algorithmic filtering can do. A study of information spreading on WhatsApp ahead of the Brazilian elections found that, of a sample of 100,000 widely shared political images, more than half offered information that was either false or misleading.

Facebook is designed so that anyone can make a group that can become popular and anyone can buy an ad; on Twitter or YouTube, anyone can make an account. If a tweet by a troll becomes popular enough, it might end up in the carousel at the very top of a Google search page or even in a news story. A March study in the Columbia Journalism Review found that 32 of 33 major American news outlets published stories with an embedded tweet from an Internet Research Agency troll account, including HuffPost, the Washington Post, Slate, and Fox News. There’s nothing stopping anyone with a Facebook page from creating a political group and posting divisive memes that may go viral. Which is why even after Facebook kicked 470 Internet Research Agency accounts off its platform last September and Twitter told Congress it had removed 2,752 accounts last October, those removals could never be enough. Twitter says it has seen huge improvements in its efforts to stop malicious bots before they even make accounts, and Facebook said that in the first three months of 2018 alone it purged more than half a billion fake accounts from its platform. This is important and hard work, but state-sponsored trolls have kept on posting, as the DOJ charges show. No matter how hard these companies moderate, there are always new moles to whack.

None of this is very satisfying—particularly if you harbor any hope of an election that isn’t plagued by disinformation and hate seeded by foreign and domestic social media users hoping to confuse or rile up voters. Short of abolishing social media from the face of the Earth, which isn’t a good or fair idea, it’s hard to imagine that much will actually change without a radical design overhaul. The companies could rethink allowing anonymity, an idea toyed with in a policy paper by Democratic Sen. Mark Warner, though even platforms that require real names are bedeviled by propaganda. Or they could more fundamentally rethink the radical openness that has allowed their walled gardens to grow out of control and subsume the internet. That could mean dramatically increasing curation—even beyond the thousands of moderators Facebook and others are adding—or leveraging the scale of the platforms to better provide the information necessary to meaningfully participate in a democracy. It probably also means reconsidering the economic model that free social media is premised on, which mimics broadcast in that the content is “free” but you have to watch advertisements. Only unlike with broadcast radio or television, on social media we also pay with our personal data, which informs which ads we see, what groups we’re encouraged to join, and how the news we see is curated. While Facebook hasn’t entirely ruled out a paid, ad-free tier, it surely won’t abandon its core model anytime soon.

What about a technological solution? Facebook CEO Mark Zuckerberg told Congress earlier this year that an artificial intelligence fix for the platform’s misinformation problem could be just a few years away. But professors who study artificial intelligence argued in the New York Times last week that A.I. capable of understanding multiple viewpoints and discerning culturally significant attitudes isn’t even close to possible now. Which leaves us, for the time being, with limited A.I. buttressed by human moderation.

Last week, Facebook shared some independent research showing it’s getting better at fighting off false news. “Because it’s evolving, we’ll never be able to catch every instance of false news — though we can learn from the things we do miss,” product manager Tessa Lyons wrote. It’s a realistic attitude, and correct in its acknowledgment that determined bad actors will always try to find new ways to take advantage of the platform. And the fundamentals of the platform—and of peers like Twitter and YouTube—work to those bad actors’ advantage.

This is where policy could come in. Without regulatory requirements or the threat of legal consequences, these companies will be left to police themselves—and will likely remain reluctant to make larger-scale changes to, say, the way they present information and allow new members to sign up. Politicians are starting to wise up to this fact. If Democrats can take off the rose-colored glasses through which they often assess technology companies, and Republicans can set aside their aversion to regulation, they probably have an opening to address the ways misinformation is allowed to fester on social media and muddy our elections. It will be too late, of course, for the one happening in two weeks.
