Google’s “featured snippets”—those info boxes at the top of search results that display things like basic biographical information for prominent people, or the wrong way to caramelize onions—have long been a source of confusion. Why, for instance, would Google so confidently state that President Warren G. Harding was a member of the KKK?
Now Google, long notorious for its lack of transparency, has launched a behind-the-scenes series to explain what goes into a search, starting with the announcement that it’s trying to correct the “featured snippets” function. It claims that the feature currently has a failure rate of just 2.6 percent—not bad, until you realize that with 3.5 billion queries being processed every day, that means 91 million searches’ worth of bogus information. And while this clearly needs to be addressed, the proposed tweaks may not be as effective as Google hopes.
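The 91 million figure follows directly from the numbers Google cites, as a quick back-of-envelope check shows (note that, like the article, this assumes the 2.6 percent failure rate applies across all daily queries, not just those that trigger a snippet):

```python
# Sanity check on the figures cited above.
failure_rate = 0.026       # Google's reported snippet failure rate
daily_queries = 3.5e9      # approximate Google searches per day

bad_results_per_day = failure_rate * daily_queries
print(f"{bad_results_per_day:,.0f}")  # 91,000,000
```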
You might wonder why Google doesn’t scrap snippets altogether, but they make sense as part of a larger business strategy. There’s certainly demand for a quick, correct answer at the top of the page, and they’re also valuable for voice-activated tech, because they offer a single decisive reply to a query instead of forcing the user to listen to lots of results. The problem is that the search engine doesn’t always choose correctly, which leads to Google Home cheerfully informing Twitter users that “every woman has some degree of prostitute in her” and “Obama may be planning a communist coup d’É-T-A-T.” (Home apparently can’t pronounce coup d’état.) Given that 600 million people worldwide use voice assistants at least once a week, it’s crucial that the answers they provide are the right ones. So why does it sometimes go so wrong?
The answer, as Future Tense has previously reported, lies chiefly in the shift toward the “semantic web,” which standardizes the sorting and structuring of web pages so that computers can read them directly. This allows Google to scrape data from other sites, often obscuring the source in the process. The idea is that all of this will lead to smarter, more relevant results, even as the curation of information is being done by algorithms instead of humans.
As a result, it’s possible to game the system to an extent, and search engine optimizers have documented methods of doing so. If you anticipate the questions users are most likely to ask and the way they’re likely to ask them, you can score a featured snippet in just a few days. This may sound obvious, but it also gets at something important: The snippets you see have to do with the way your search is formulated, inadvertently confirming whatever biases you may have had at the outset. For example, as Google acknowledges in its blog post, when you ask “are reptiles good pets,” you’re told that “for every home and owner there is a suitable reptile”—even as the snippet for “are reptiles bad pets” informs you that it’s cruel to keep them at all. At a time when people trust search engines more than either the media or social media, Google has repeatedly been shown to draw from the worst of both, with potentially serious consequences.
To address this, the company is taking a two-pronged approach. It will introduce labels that will allow users to be more specific about their queries, and it will include more than one snippet in response to a single search.
The labels make sense, even if they’re a little clumsy and reductive in their execution. The example Google gives is setting up call forwarding: the infobox can give you a more complete answer if you specify your carrier by clicking one of the proposed labels, such as “att,” “tmobile,” or “on landline.” (It’s unclear how comprehensive the labels will be, or how intuitively this will translate to voice-activated assistants.)
Including more than one snippet seems less likely to be successful, especially for a search engine that, by its own admission, historically hasn’t “weigh[ed] the authoritativeness of results strongly enough for rare and fringe queries.” In its announcement, Google says that multiple snippets might help in cases where different formulations of a question turn up “contradictory information,” implying that it plans to show users both sides of the argument. Unfortunately, last April’s updated search quality rating guidelines didn’t magically resolve the algorithm’s tendency to scrape info from obscure, biased web pages. And promoting another source with the opposite view purely because it’s contradictory sounds a bit like putting a creationist on a panel discussing evolution in the interest of “balance”: It runs the risk of fostering more falsehoods than it squashes.
It’s unclear how confident Google is in its proposed fixes, though it’s perhaps noteworthy that immediately before the announcement, the frequency of featured snippets fell dramatically. As of Wednesday evening, they’re one-fourth as likely to appear in search results as they were before. One perhaps promising sign? If you search “can google fix featured snippets,” you don’t get an info box.