A smartphone with icons of news organizations is surrounded by shield icons from NewsGuard, star ratings, traffic lights, a piece of poop, and other rating symbols.
Photo illustration by Natalie Matthews-Ramo/Slate. Photo by Charles Deluvio/Unsplash.
The Industry

Just Trust Us

In the era of fake news, a cottage industry of startups is competing to turn media credibility into a booming business. Do we really want that?

In August, an analyst for a little-known U.S. startup called NewsGuard tried to contact the editors of the Daily Mail Online, the U.K.-based news and celebrity-gossip site that ranks as one of the internet’s most-read publishers. NewsGuard staff called twice and emailed twice, asking questions about the Mail’s policies on deceptive headlines, source linking, and editorial transparency. It got no response.

This week, that silence came back to bite the Mail Online. The headline in the Guardian, a left-leaning rival to the populist Mail, seemed to carry a note of glee: “Don’t Trust Daily Mail Website, Microsoft Browser Warns Users.”

NewsGuard, it turns out, had spent the previous eight months developing “reliability ratings” for major online news outlets, and earlier this month it struck a deal with Microsoft to incorporate those ratings into the tech giant’s Edge browser as an optional setting. That’s when the Guardian noticed that the Mail Online had been tagged by NewsGuard with a “red” label, a reliability score of 3 out of 9, and the following warning: “Proceed with caution: This website generally fails to maintain basic standards of accuracy and accountability.” For Microsoft Edge users with the “News Ratings” feature turned on, that warning appeared alongside every link to the Mail Online—whether in Google search results, Facebook or Twitter feeds, or the Mail’s own homepage.

The Mail cried foul. “We have only very recently become aware of the NewsGuard startup and are in discussions with them to have this egregiously erroneous classification resolved as soon as possible,” a spokesman said. The two companies are now in talks, both confirmed, with the Mail pleading its case for a better review. For now, NewsGuard is standing by its ratings. If the Mail wants a better one, it will have to improve some of its standards, said Steven Brill, the prominent U.S. media entrepreneur who is NewsGuard’s co-founder and co-CEO.

How did Brill’s company become online journalism’s latest referee? And what does it mean if NewsGuard, or another fledgling credibility-rating project, begins to wield outsize influence over which news organizations garner the most trust on the internet? This would be a dramatic change, and maybe a welcome one. For much of this decade, the news business has seen its fortunes rise and fall at the whim of algorithms, such as the one that ranks stories in Facebook’s news feed. Those algorithms tended to emphasize the catchiness of a given headline over the reputation of the publisher, which helped to fuel fake news and shrill sensationalism. But the Mail’s run-in with NewsGuard may presage a new phase: one in which the big tech platforms’ algorithms begin to incorporate measures of a news outlet’s trustworthiness, while a handful of startups and nonprofits vie to be the arbiters behind those ratings.

The trust industry is quietly taking shape. Should we trust it?

Of all of Facebook’s crimes against the news, the most serious may be this: Its news feed radically changed the basis upon which people decide what stories to read. In the pre-Facebook world, people picked a news outlet first (by buying a print edition or visiting a homepage) and then browsed the headlines on offer. As Facebook gradually became the default news source for hundreds of millions of readers, that order reversed: People scanned headlines from their news feed first and clicked accordingly, generally without regard to the source.

The NewsGuard rating of the Daily Mail Online
NewsGuard’s “nutrition label” for the Mail Online, as it appears in the Microsoft Edge browser on an iPhone. Screenshot from iPhone

The shift sent shockwaves around the media world. Rage-inducing, polarizing, or otherwise tantalizing headlines became the primary factor in setting the news agenda, and opportunists learned to game Facebook’s algorithm with clickbait and propaganda. Old-school news outlets that penned straightforward, factual headlines saw their traffic dry up, while unverified content from unscrupulous startups, scammers, and even foreign agents rose to the top of people’s reading lists.

That’s an oversimplification, of course. Google News, and the internet more broadly, had a similar influence before Facebook came along. Reddit, Twitter, and other sites continued to have this effect after the Facebook news feed became everyone’s front page. And the effects weren’t all negative—the “gatekeepers” of the mainstream media had serious blind spots that social media helped to both expose and fill. (Think of the attention that Twitter funneled to the Ferguson protests, which were initially underreported by the mostly white, coastal legacy media.) But the upshot was the decimation of careful newsgathering operations and an erosion of the distinction between real and fake news.

Now that much of the news landscape lies in smoking ruins (along with chunks of Western democracy), Facebook and its critics have begun to acknowledge their responsibility and think about how to give more weight to publishers’ credibility, if only to ameliorate the company’s PR crisis. The solution that they’re hitting on more and more is to start rating news sources. These ratings, presumably, could be incorporated into news-feed algorithms or search results as a way of re-establishing source credibility—and not sheer clickability—at the center of the news industry’s incentives. Facebook is already doing a version of this work by feeding into the news-feed algorithm the results of user surveys about trusted news sources.

But if ranking stories based on clicks or likes was disastrous, ranking online publishers’ credibility brings its own set of problems. Not least of these is that credibility means different things to different people, and even the best-intentioned arbiters will be subject to both their own biases and outside pressure. The hope is that these systems rebalance the incentives facing online news companies. The risk is that they simply replace one set of self-interested media clearinghouses with another.

To understand one way the ranking of news credibility can go awry, consider the recent example of NuzzelRank. In June, a new list began making the rounds in media circles: a ranking of major U.S. news sources by “authority,” developed by Nuzzel, a generally well-liked news-monitoring tool, which claimed to assess thousands of top news sources. “It is our long term plan to create the most comprehensive news search system on the Internet,” Nuzzel proclaimed, detailing how it would use software to surface reputable sources and filter out spam. Its search system would be sold as part of a subscription service, but the authority rankings would be public.

It sounded useful enough—and then people looked at the list. At the top were the New York Times (9.1 on a 10-point scale), the Washington Post (8.7), and the Atlantic (8.3). And then came … TechCrunch? At 8.3, the tech blog known for comprehensive but often credulous coverage of Silicon Valley startups tied with the Atlantic as the third-most authoritative news source. That put it just ahead of the Guardian, CNN, Bloomberg, and the New Yorker. Startups such as Medium, BuzzFeed, and Business Insider made the list ahead of such stalwarts as NPR, BBC, and the Economist. (Slate scored a 7.5.)

Lo and behold, it was also TechCrunch that had the “exclusive” news story heralding NuzzelRank’s launch. Its standing on the list was so absurd that TechCrunch’s own story called it “both flattering and a little nuts.” NuzzelRank, it became quickly apparent, was not going to save the media or democratic discourse anytime soon.

Jonathan Abrams, Nuzzel’s founder and CEO, demurred when I asked him in November what lessons he took from the rankings’ chilly reception. “I don’t know if we learned that much,” he said. “A lot of journalists and news organizations complained to us about the ranking. But that wasn’t a surprise.” Abrams said Nuzzel would continue to refine its ranking system, though he declined to share exactly how the scores were calculated—a red flag for those hoping that our new arbiters of the news would be more transparent than the old ones. Abrams did not respond to a request for further comment last week, and as of this week, the rankings appear to have been removed from public view.

NuzzelRank hasn’t even been the least serious bidder in the race to nutrition-label the news. That would probably be “Pravduh,” Elon Musk’s apparently once-real plan for a site that he said would rate the credibility and “core truth” of both individual reporters and articles. (It doesn’t appear to have gotten off the ground.)

But there is an array of more earnest journalistic-credibility initiatives that have sprouted from the wreckage of the 2016 elections and that are worth surveying, like the Trust Project, the Journalism Trust Initiative, the Trust & News Initiative, and others. (Their similar-sounding names and penchant for partnering with one another can make them hard to keep track of.) Some are nonprofits, and those in particular have largely been cautious about slapping scores or rankings on news stories or sources, preferring to build opt-in coalitions or fund experiments in building reader trust. Indicators made by the Trust Project are already being used by Facebook and Google News, not yet to rank the news but to label or otherwise sort it. That project has been backed by Craigslist founder Craig Newmark and Google’s vice president of news, Richard Gingras, among others.

Much of the Trust Project’s work isn’t really user-facing yet, and it hasn’t stirred much controversy. That would differentiate it from the for-profit NewsGuard, the company that gave the Mail Online a digital speeding ticket. In addition to the Microsoft integration, NewsGuard’s other publicly available products are browser extensions, including one for Chrome. After you install it, links to news articles in your search results and social media feeds appear with color-coded shields beside them.

A green shield indicates that the news source “generally meets basic standards of accuracy and accountability.” Green shields aren’t limited to outlets like the New York Times and the Wall Street Journal; smaller sites like WBUR.org get them too. YouTube and Wikipedia links have a blue shield with the letter I inside, signifying a platform that includes unvetted user-generated content. Links to relatively new online publishers, like the Trace and the Outline, get a gray shield with a horizontal line, meaning they’re “still in the process of being rated.” And a handful of sites, including Breitbart and RT.com, get an ominous red shield with an exclamation point inside, meaning, “Proceed with caution.” Hover over each shield and an explanation of the site’s rating pops up.

Of course, the kind of person who would install this browser extension is probably already a close reader of the news. As of Wednesday, the extension has been downloaded more than 40,000 times, and many of those were on public-library systems that are connected to hundreds of devices, Brill said. But NewsGuard wants to be much more than a widget.

The deal to incorporate NewsGuard shields into Edge illustrates its ambitions. This particular integration is unlikely to dramatically shift the news landscape, as Edge holds a minuscule share of the mobile-browser market. But it’s the same type of partnership that NewsGuard hopes to strike with larger platforms in the future. (NewsGuard said Microsoft is paying it a licensing fee, but it declined to discuss further details of the deal.)

Right now, NewsGuard employs a team of 25 human researchers to evaluate news outlets. The startup’s co-founders, Brill and former Wall Street Journal publisher Gordon Crovitz, have developed a set of nine standardized criteria against which to judge each publication. (The criteria were informed by the Trust Project’s indicators, which some newsrooms are already incorporating.)

Some are basic, like “clearly labels advertising” and “does not repeatedly publish false content.” Others are more subjective, like “gathers and presents information responsibly,” “handles the difference between news and opinion responsibly,” and “avoids deceptive headlines.” Get green check marks in enough of those categories and your publication gets the green shield of approval. Enough red marks, and everything you publish will come tagged with that red warning shield, at least for those users who have the feature enabled.
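In rough outline, the system works like a checklist: a site passes or fails each criterion, and the tally determines the shield. The sketch below illustrates that idea only; the criterion names are paraphrased from this article, NewsGuard’s actual formula and cutoff are not spelled out here, and the threshold used is an assumption.

```python
# Illustrative sketch of a checklist-style rating, NOT NewsGuard's actual method.
# Criterion names are paraphrased from the article; two are guessed placeholders,
# and the pass threshold is a hypothetical cutoff chosen for illustration.

CRITERIA = [
    "clearly labels advertising",
    "does not repeatedly publish false content",
    "gathers and presents information responsibly",
    "handles the difference between news and opinion responsibly",
    "avoids deceptive headlines",
    "reveals who's in charge",
    "discloses ownership and financing",
    "regularly corrects errors",        # assumed name
    "provides contact information",     # assumed name
]

def shield(green_checks: set, threshold: int = 5) -> str:
    """Map the set of criteria a site passes to a color label.

    The article reports sites passing 6 of 9 criteria rated green
    and sites passing 3 or 4 of 9 rated red; `threshold` here is
    a stand-in cutoff, not a published rule.
    """
    score = sum(1 for criterion in CRITERIA if criterion in green_checks)
    return "green" if score >= threshold else "red"
```

For example, a site passing all nine criteria would come back `"green"`, while one passing only three or four would come back `"red"`, consistent with the scores the article attributes to the Mail Online and Al-Jazeera.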

Links to news outlets in a browser with the NewsGuard extension
NewsGuard’s browser extension for Google Chrome puts the startup’s color-coded credibility ratings alongside the links to each major publication. Here, the shields appear alongside search results on Google.com. Screenshot from Google Chrome

That sounds straightforward, but consider whether the article you’re reading right now “handles the difference between news and opinion responsibly.” Certainly, I’ve tried to gather and present the information in this story responsibly, but I’ve also mixed in some of my own views with the facts—as most Slate stories do, by design. (It’s a magazine of opinion and news analysis, after all.)

Indeed, Brill told me NewsGuard struggled with whether to give Slate a red mark for failing to separate news from opinion. But it’s that very kind of edge case that he believes necessitates human, rather than algorithmic, judgment. “We do what algorithms don’t,” he said. “Algorithms don’t call people for comment.” Brill and Crovitz did: Last year, they called Slate’s top editors and challenged them to defend Slate’s policies. Fortunately, unlike the Mail Online, Slate picked up the phone. Ultimately, NewsGuard was satisfied that Slate’s mix of news and opinion was made clear to readers, and was journalistic rather than overtly partisan in its intent. Slate received a perfect score and a green shield.

Some conservatives might object to that decision. Then again, much of the left is likely to object to FoxNews.com receiving a green shield from NewsGuard as well. (Although not a perfect score: It gets six green check marks out of nine total criteria. For comparison, left-leaning MSNBC gets seven green marks on NewsGuard’s checklist.) Crovitz noted that the rating applies specifically to the website and not to, say, the network’s more conspiratorial prime-time TV hosts. But it was FoxNews.com that promoted the Seth Rich story, an infamously unsubstantiated conspiracy theory about the death of a Democratic National Committee staff member, before Sean Hannity picked it up.

The conservative Daily Caller also gets a green shield from NewsGuard, barely, as does Ben Shapiro’s hard-right Daily Wire. (Both score six green check marks.) Breitbart and Infowars do not pass muster. Al-Jazeera, as the Middle East’s leading news outlet, is one of the more surprising red shields, scoring just four green check marks. It got red marks not only for “reveals who’s in charge” and “discloses ownership and financing,” but also for “gathers and presents information responsibly.”

Brill insisted the ratings are not meant to blacklist publishers or tell people which sites not to read. “If you get a red, all it says is ‘Proceed with caution,’ ” he said. But that could be a tough sell if a site gets labeled red on a major news platform and sees its traffic drop off as a result. Crovitz added that NewsGuard encourages publishers to complain if they feel their sites have been mislabeled—something he said few sites have done so far.

Brill and Crovitz know they’re going to be accused of political bias from all sides, and they say they’re prepared. It might help that Brill personally leans left, and Crovitz leans right. They also note that none of their criteria involves evaluating a site’s political leanings, and Brill claimed they’ve been “brutally disciplined about applying those nine criteria unflinchingly.” Their research team includes veterans of journalism and fact-checking operations.

But the objectivity that NewsGuard takes such pride in can lead to some surprising outcomes, like giving the far-right blog Legal Insurrection a green shield while giving the liberal Daily Kos a red one. Legal Insurrection was so proud of its seal of approval that it wrote an article trumpeting its NewsGuard endorsement.

Alexios Mantzarlis, who heads the nonprofit Poynter Institute’s international fact-checking network, told me he appreciates what NewsGuard is trying to do, especially with the nutrition labels. “If they stopped there, that’s where I’d feel completely comfortable and encourage the endeavor,” he said. But Mantzarlis worries that rating entire publications on a red-green scale can be reductive. Noting the red shield for Al-Jazeera and the green one for Fox News, he said, “It feels like one of those recipes where the ingredients all look right, but then you follow it closely and the result isn’t great.”

Whether NewsGuard’s shields become ubiquitous or a footnote in the history of online journalism will depend on the willingness of the large tech platforms to license its product. Brill believes a European Union agreement, little-known stateside, might help to force their hand. Google, Facebook, Twitter, and Mozilla (maker of the Firefox browser) have all signed on this year to the European Commission’s Code of Practice on Disinformation, which commits them to various measures to tackle false news on their platforms.

If it sounds like an empty bureaucratic gesture, well, it might be. But Brill and Crovitz are counting on it to have teeth, and they’ve been making regular trips to Brussels to try to persuade these platforms that adopting NewsGuard is their best path toward satisfying the agreement. If this or other arguments fail to convince Big Tech, NewsGuard will fail too.

Whatever happens, we’re nearing the end of the era in which tech companies served up the news without regard to its veracity or the reliability of its source. And so far, it’s clear they’re more comfortable turning to third parties to help them evaluate content than performing that thankless task themselves. Already, Facebook is using a network of third-party fact-checking organizations to review articles flagged by either its software or its users as potentially false. Those deemed bogus are less likely to be shown to Facebook users in their feeds.

The question now is whether those partnerships will meaningfully address the platforms’ misinformation problems or just provide public relations window dressing. A recent Guardian report found some of Facebook’s fact-checking partners souring on the social network, concerned that the for-profit company’s goals don’t align with their own. For instance, two former Snopes employees told the Guardian they felt Facebook prioritized stories involving its own advertisers—a claim Facebook has denied—and compelled them to devote resources to satirical stories not worth debunking.

A world in which Facebook, Google, and Twitter give their users better and easier ways to evaluate news stories’ credibility seems like an improvement over the current free-for-all. If shoddy journalistic practices now mean your story appears on Facebook with a red shield beside it, that could not only help readers avoid false news but also help shift the incentives for publishers. An outlet with a marginal score might suddenly find it more profitable to try to improve its correction policy or ownership disclosures in hopes of attaining a green rating than to gin up its headlines in pursuit of clicks. Perhaps we will see more organizations explain the reporting methods of their investigations, as the New York Times now sometimes does.

Phone settings allowing you to turn on NewsGuard extension
Microsoft’s Edge mobile browser now includes an option to turn on “news ratings” in the settings menu on Android and iOS devices. Screenshot

It’s also possible to imagine a nightmare scenario in which the ratings authorities become too powerful, their subjective decisions baked into every algorithm and profoundly shaping what people read. Media companies would try to game the green shields the same way they gamed Facebook’s algorithm—or worse, curry favor or influence behind the scenes.

Or perhaps platforms will resist giving up that much control to a third party and find more modest ways to appease their regulators and critics. Even Brill seems to be counting on any developments to be more subtle than profound: He said being baked into algorithmic rankings is not what NewsGuard wants and that he’s more interested in an optional, informational product that users can turn off or on as they choose, like the “family filter” on a web browser.

NewsGuard’s confrontation with the Daily Mail Online may be its first real test. No one seemed to care too much that it had given the Mail only three green check marks and a red shield until Microsoft began incorporating that rating into its browser. Suddenly, it was a controversy worthy of coverage by the BBC.

From the perspective of the Mail, which is by some measures the world’s largest online newspaper, it’s no doubt maddening to be flagged as an unreliable source by Microsoft’s browser thanks to the subjective judgments of an analyst at a U.S.-based startup that it had never heard of. The Mail publishes vast amounts of content, and the majority of it is basically accurate. It has broken big stories, including 2016 allegations that former U.S. Rep. Anthony Weiner carried on a sexual online relationship with a 15-year-old girl even after he’d resigned from Congress.

On the other hand, NewsGuard’s dim view of the Mail isn’t unwarranted. Both the Mail Online and the U.K. print newspaper that spawned it have a well-earned reputation for playing fast and loose with facts and citations. And it isn’t entirely an accident that NewsGuard couldn’t reach the Mail for comment back in August: The site does not disclose contact information for any of its reporters or editors, and it funnels reader complaints through an online form system. That’s precisely the opacity that earned it an X from NewsGuard in two categories related to editorial accountability. (I was able to verify, however, that at least one of NewsGuard’s emails never reached the intended recipient’s inbox.)

Speaking of accountability, NewsGuard’s rating system may have its pitfalls, but it seems like an improvement over Facebook in at least one important respect. When Facebook builds results from its “trust survey” into the news-feed algorithm, it doesn’t tell anyone what those results are or how specific news outlets are being affected. For all we know, it may already have the Mail—or any other outlet—rated as unreliable, and those outlets have no recourse.

The Mail may not like having to defend its editorial standards to a startup like NewsGuard, but at least it has that opportunity. Brill said the startup will consider any appeal to its ratings and will post any written complaints from publishers as part of its nutrition labels. But, he added, the Mail’s best chance at an improved rating would be to simply change some of its practices.

Tech companies forcing publishers to change how they do business is nothing new. And the rise of NewsGuard and its ilk might just be another annoying set of hoops to jump through at a time when online media is already hard-pressed. Fortunately, these hoops don’t require news organizations to contort their coverage in pursuit of maximizing traffic. And if NewsGuard’s clash with the Mail Online is any indication, it seems like it’s at least annoying the right kind of people.