Future Tense

Facebook Is Cracking Down on “Inauthentic” Content

Facebook’s Adam Mosseri, vice president for the news feed, oversees the algorithm that shapes how people use the social network.

Photo by Steve Jennings/Getty Images for TechCrunch

Facebook is constantly tinkering with its news feed algorithm, and the latest changes include yet another salvo against publishers of spammy, misleading, or otherwise irksome content. They also include what might reasonably be interpreted as a salvo against Twitter—an attempt to make Facebook a better venue for breaking news and real-time discussion.

The changes, announced Tuesday in a blog post, are unlikely to make a dramatic impact in the short term. But they’re a good bellwether of the social media giant’s ever-evolving priorities. Right now it’s clear that those include making its news feed both more trustworthy and more timely—in recognition, perhaps, of the company’s growing influence as a news source.

First, the company is introducing several new criteria to its news feed ranking algorithm—the software that decides which posts you see in your feed—designed to boost what the company calls “authentic” content while downgrading inauthentic content. It’s part of a long-running effort by Facebook to keep publishers and brands from gaming the news feed by, for example, begging users to like and share their posts.

These tweaks may look like part of the social network’s campaign against fake news, which emerged as a troubling cottage industry in the run-up to the 2016 U.S. presidential election. But while Facebook has taken several steps to fight fake news, the company tells me that isn’t the explicit goal in this case. Fake news sites could be affected by the latest changes, but the target here is broader, encompassing clickbait, like-bait, domain-spoofing, and posts that plagiarize or duplicate content originally published elsewhere.

Facebook wouldn’t say exactly what new signals it’s using to identify such posts, because it believes disclosing them would make them easier to circumvent. But the company did give one example: If a given post is prompting a significant portion of users to click “Hide post,” Facebook will show it to fewer people on the theory that it’s not authentic.
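Facebook hasn't published the formula, but the "Hide post" example suggests a straightforward demotion signal: once the share of viewers hiding a post crosses some threshold, its ranking score drops. A minimal sketch of that idea—the threshold and penalty values here are invented for illustration, not Facebook's actual numbers:

```python
def adjusted_score(base_score, impressions, hides,
                   hide_threshold=0.05, penalty=0.5):
    """Demote a post whose hide rate exceeds a (hypothetical) threshold.

    All constants are illustrative guesses; Facebook does not disclose
    the real signal or its weighting.
    """
    if impressions == 0:
        return base_score
    hide_rate = hides / impressions
    if hide_rate > hide_threshold:
        # Shown to fewer people: score is cut, so it ranks lower in feeds.
        return base_score * penalty
    return base_score
```

So a post hidden by 8 percent of viewers (`adjusted_score(100, 1000, 80)`) would rank at half its original score, while one hidden by 1 percent keeps its score untouched.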

Facebook also explained how it came up with the new criteria. First, it paid a team of people to manually flag Facebook pages that were posting spam, clickbait, and the like. It then used those pages’ posts as examples to train a machine-learning model to identify the characteristics of inauthentic posts. Those characteristics are now being incorporated into the broader algorithm that ranks all posts in users’ feeds. That means posts from any Facebook page that share key traits with posts from spammers will be downgraded, while posts that do not share those traits will get a slight boost.
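That pipeline—human labels feeding a machine-learning model whose output becomes one signal among many—can be sketched in miniature. Everything below (the word-frequency features, the Naive-Bayes-style scoring, the tiny training set) is a stand-in for illustration; Facebook's real features and model are undisclosed:

```python
from collections import Counter
import math

def train(labeled_posts):
    """Count word frequencies per class from human-labeled example posts."""
    counts = {"spam": Counter(), "ok": Counter()}
    for text, label in labeled_posts:
        counts[label].update(text.lower().split())
    return counts

def spam_score(counts, text, alpha=1.0):
    """Naive-Bayes-style log-odds that a post resembles the flagged examples.

    Higher scores mean more spam-like; a ranking system could subtract
    this from a post's overall score.
    """
    spam_total = sum(counts["spam"].values())
    ok_total = sum(counts["ok"].values())
    vocab = set(counts["spam"]) | set(counts["ok"])
    score = 0.0
    for word in text.lower().split():
        # Laplace smoothing (alpha) keeps unseen words from zeroing out.
        p_spam = (counts["spam"][word] + alpha) / (spam_total + alpha * len(vocab))
        p_ok = (counts["ok"][word] + alpha) / (ok_total + alpha * len(vocab))
        score += math.log(p_spam / p_ok)
    return score

# Hypothetical hand-labeled examples standing in for the flagged pages.
labeled = [
    ("like and share to win", "spam"),
    ("share this now like and win big", "spam"),
    ("new study on climate policy", "ok"),
    ("city council passes budget", "ok"),
]
model = train(labeled)
```

A post begging for likes and shares would score higher (more spam-like) under this toy model than an ordinary news post, mirroring the downgrade-versus-slight-boost behavior the company describes.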

The second suite of changes Facebook announced Tuesday is intended to surface more timely posts in users’ feeds. While a post’s recency is already a criterion in the ranking system, the company is adjusting the signals it uses to identify posts that might be of immediate interest at any given moment. For instance, it will temporarily upgrade posts that relate to currently trending topics, such as an ongoing sporting event involving a team you’re interested in. It will also boost posts that a lot of people are commenting on or liking at the moment, at the expense of others that might have gotten a flurry of engagement several hours ago.
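The mechanics Facebook hints at—engagement that decays with age, plus a temporary upgrade for trending topics—could be modeled as a time-decayed score with a trending multiplier. Again, the half-life and multiplier here are purely illustrative assumptions:

```python
import math

def timeliness_boost(base_score, age_hours, trending=False,
                     half_life=6.0, trend_multiplier=1.5):
    """Decay a post's engagement value over time; upgrade trending topics.

    half_life and trend_multiplier are invented constants, not
    anything Facebook has disclosed.
    """
    # Exponential decay: a post loses half its value every `half_life` hours,
    # so a flurry of likes right now outweighs the same flurry from this morning.
    decay = 0.5 ** (age_hours / half_life)
    score = base_score * decay
    if trending:
        # Temporary boost while the related topic is trending.
        score *= trend_multiplier
    return score
```

Under these assumptions, a post getting heavy engagement right now outranks one that got the same engagement twelve hours ago, and a post tied to a live sporting event gets a further lift for as long as the topic trends.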

For years the Facebook feed has functioned as a sort of catch-up on noteworthy activity that transpired over the past day or two. These changes should nudge it a little more in the direction of Twitter, which is oriented around real-time discussion.

So how will all of this affect your page or your feed? Probably not as much as you’d think, at least in the short run. Whenever Facebook alters its ranking system, it tends to err on the conservative side to minimize unintended consequences. Over time, those changes that seem to be working as intended will be gradually reinforced, while those that aren’t helping will be dropped or revised. This helps to explain why Facebook seems to have targeted the same types of manipulative content on so many different occasions, using so many different tactics.

That said, these tweaks have real effects, and they’re often discernible in retrospect as subtle turning points in the social network’s evolution. Perhaps this will be the one that positions Facebook to fill the vacuum left by Twitter, should the latter falter or get sold.