Meathead Goldwyn has some long-standing beef with Google.
For the past nine years, the pitmaster and publisher of AmazingRibs.com—one of the internet’s leading authorities on all things barbecue—has warned against the tyranny of Google recipe rankings, which determine how and what millions of Americans cook.
Among other things, Google’s algorithms decide which versions of canonical dishes get prime billing in its search results. The search engine cherry-picks and spotlights only the culinary data points it considers important in its “rich” result links and recipe carousels. Google even displays a handy five-star rating alongside recipes from most major food sites, providing a quick shorthand for quality and eliminating the need to click through infinite variations of a dish. If you’re searching for cornbread recipes, you’re probably just going to click the one with five stars or the most ratings next to it.
But recipe ratings, like much of the “structured data” that Google privileges in its search results, often say less about the caliber of a recipe than they do about who published it. Sites supply the user-generated star ratings that Google pulls, and there are plenty of ways to skew them. Doing so doesn’t change a site’s place in the search rankings, but it does affect how many searchers click through—and, arguably, exposes the emptiness of the entire rating system.
To test that notion, I scraped more than 2,000 Google-ranked recipe ratings from a dozen popular cooking sites in early December. As of this writing, AllRecipes.com—the “distinctly unglamorous,” crowdsourced recipe bank best known for its users’ creative use of canned soups—commands higher average star ratings than both NYT Cooking and Bon Appétit do.
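For the curious, the mechanics of a pull like that can be sketched in a few lines of Python. Recipe sites expose their Google-facing ratings as schema.org JSON-LD inside `<script type="application/ld+json">` tags, so collecting star ratings is mostly a matter of parsing those blocks. This is a simplified sketch, not the exact method used for the article's sample; the embedded HTML is invented, and real recipe pages are messier.

```python
import json
import re

def extract_ratings(html):
    """Pull aggregateRating values from schema.org JSON-LD blocks in a page."""
    ratings = []
    # Find every JSON-LD script block -- the format Google reads for rich results.
    for block in re.findall(
        r'<script type="application/ld\+json">(.*?)</script>', html, re.DOTALL
    ):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue
        # A page may carry a single schema object or a list of them.
        items = data if isinstance(data, list) else [data]
        for item in items:
            rating = item.get("aggregateRating", {})
            if "ratingValue" in rating:
                ratings.append(float(rating["ratingValue"]))
    return ratings

# Invented markup standing in for a scraped recipe page.
page = '''<html><head>
<script type="application/ld+json">
{"@type": "Recipe", "name": "Cornbread",
 "aggregateRating": {"ratingValue": "4.6", "ratingCount": "212"}}
</script>
</head></html>'''

print(extract_ratings(page))  # [4.6]
```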
“I think I was the canary in the coal mine—the first food writer to warn about” how Google displays recipes in search results, said Goldwyn, who first wrote about the “pain” and “panic” of Google’s recipe search system in 2011. Since then, he has watched some of his site’s best-loved recipes slide off the first page of Google results, supplanted by “oven-baked” barbecue and “crockpot” ribs.
“But it’s Google’s world, and we just live in it,” Goldwyn said. “If you’re trying to make a living on the internet, you have to worship Google.”
It’s difficult to overstate the power Google has over food publishers: Most major food and recipe sites derive two-thirds or more of their visitors from the search engine, said Faith Durand, a digital food publishing veteran and editor in chief of the Kitchn. The holy grail, for recipe sites of any size, is the featured recipe carousel at the top of Google’s search results page.
Those three slots on desktop, and four on mobile, earn 75 percent of the clicks on any given search term, from “barbecue” to “vegan tomato soup,” said Liane Walker, the managing director of the membership-based consultancy Foodie Digital. Walker’s two-year-old firm is one of several agencies in a growing microindustry aimed at helping food bloggers boost their search engine optimization. As the food publishing field has grown more crowded, publishers have fought harder to access the legions of home cooks using Google search.
“Her life’s work,” Walker said of one client, without a trace of irony, “has gone into getting on the first page for sourdough bread. It took six years.”
Like all SEO strategies, the process of ranking a recipe on Google is both arcane and tedious. Food publishers must first play by all the regular rules of the Google algorithm, accounting for factors like site load time (fast), dwell time (high), and backlinks (numerous).
To land a recipe in the almighty carousel, publishers also need to adhere to an exacting set of data-formatting specifications called “recipe schema,” which standardize recipes across websites so that Google and other tech companies can parse them. Ratings are one possible attribute of recipe schema; so too are yield, nutrition, ingredients, and cook time, which Google also surfaces in “rich” search results—links jazzed up with photos and other contextual information, which tend to see far more clicks than their plain-Jane neighbors.
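To make that concrete, here is roughly what a minimal schema.org Recipe object looks like, carrying the attributes mentioned above. Every value is invented for illustration; the authoritative vocabulary lives at schema.org, and Google publishes its own guidelines for which fields it reads.

```python
import json

# A minimal schema.org Recipe object of the kind Google parses for rich
# results and the recipe carousel. All values here are invented.
recipe_schema = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Skillet Cornbread",
    "recipeYield": "8 servings",
    "cookTime": "PT25M",  # ISO 8601 duration: 25 minutes
    "recipeIngredient": ["1 cup cornmeal", "1 cup buttermilk", "2 eggs"],
    "nutrition": {"@type": "NutritionInformation", "calories": "198 calories"},
    # The user-supplied star rating Google displays beside the result.
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.7",
        "ratingCount": "312",
    },
}

# Publishers embed this as a <script type="application/ld+json"> tag
# in the page, where Google's crawler picks it up.
print(json.dumps(recipe_schema, indent=2))
```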
But there’s been little scrutiny of the quality and the usefulness of Google’s recipe data, even as it ferries tens of millions of home cooks around the web. Star ratings are particularly suspect, as they are collected, moderated, and supplied by site publishers—who arguably have nothing but incentive to inflate them. (In a statement, Google said it penalizes publishers “if we find that a website has intended to deceive people with review snippets.”)
Even if a recipe’s ratings do reflect the opinions of its reviewers, that’s often less a measure of recipe quality than a sign of how skilled a publisher is at ginning up rave reviews from fans and disincentivizing bad reviews from critics. Sites that don’t actively moderate their comments sections, for instance, tend to have far lower recipe ratings than those that do—a product of both healthier comment section culture and fewer fake or drive-by reviews.
But there’s no industry standard for what counts as a fair or ethical level of moderation; every publication plays by its own rules. At Foodie Digital, for instance, Walker recommends her bloggers delete ratings and reviews in cases where the commenter is abusive or has significantly altered a recipe, and that they discourage bad scores by requiring a comment whenever a reviewer leaves a rating below five stars. That latter feature comes standard in Recipe Maker, the most popular web design plug-in for generating recipe schema.
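The arithmetic behind that skew is easy to demonstrate. A toy simulation (every rating below is invented) shows how the two moderation choices described above—deleting unwanted reviews, and gating low scores behind a required comment—push an average upward or thin out the critics:

```python
def average(ratings):
    """Mean star rating, rounded the way sites typically display it."""
    return round(sum(ratings) / len(ratings), 2)

# Invented raw ratings for one recipe: a mix of fans and drive-by critics.
raw = [5, 5, 5, 4, 4, 3, 2, 1, 1]

# Policy 1: delete reviews judged abusive or off-recipe. Here we assume,
# aggressively, that everything under 3 stars gets swept up by that rule.
moderated = [r for r in raw if r >= 3]

# Policy 2: require a comment for any rating under five stars. Assume some
# would-be low raters walk away rather than write one (invented survivors).
gated = [5, 5, 5, 4, 3, 1]

print(average(raw))        # 3.33
print(average(moderated))  # 4.33
print(average(gated))      # 3.83
```

Neither policy changes the recipe; both change the number Google shows next to it.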
At AmazingRibs.com, meanwhile—where recipes averaged 3.77 stars in my sample—a message asks readers not to rate a recipe until after they’ve cooked it but doesn’t require that they comment or log in. At New York Times Cooking (4.46 stars), newsroom editors cull abuse and “unproductive” comments, said Emily Weinstein, the vertical’s editor. But editors don’t touch recipe ratings and have no way to know if reviewers cooked the recipe before rating it.
In other words, high ratings don’t always mean a recipe is good, and low ratings—of the sort that plague domestic empress Martha Stewart, for instance—don’t mean a recipe is bad. Stewart averages 3.49 stars across the first 150 recipe results for Martha Stewart Living, a particularly poor showing when you consider that online reviews, on balance, tend to be pretty generous.
Does that mean Stewart’s team of food editors and recipe developers can’t cook? That her audience, which skews older, is not sufficiently motivated to leave reviews? Or does it really signal, as Walker suspects, that Martha’s publisher hasn’t “adequately resourced” comment section moderation? In a statement, Meredith Corp., which publishes Martha Stewart Living, said that the company requires users to log in before rating recipes and that it has “filtering and content moderation capabilities” to keep things from getting abusive or explicit.
“There are so many confounding factors around this data,” said Durand, of the Kitchn, where recipe raters are required to log in and the average recipe earns around 4.3 stars. “I would, no pun intended, always take recipe ratings with a grain of salt.”
But to Goldwyn, the AmazingRibs guy, almost any attempt to convert recipes into signals for search risks cheapening them. He hates the apparent widgetization of something he considers a craft. “The great M.F.K. Fisher would not be found by Google,” Goldwyn likes to say, by which he means that a lot of classic recipes would not perform well in an industry that demands teams of data engineers, community moderators, and search engine consultants.
And yet, Goldwyn understands there’s no use resisting Google. In the past five years alone, he’s spent $300,000 rebuilding his site to Google’s specifications. In January, he’ll relaunch the site again, this time using Recipe Maker, which promises clean recipe schema and more intensive moderation of reviews and star ratings.
“It’s interesting,” Goldwyn said archly. “Engineers are telling chefs how to write and publish recipes.”