The Industry

Google’s Advertising Platform Is Blocking Articles About Racism

Google has an opaque algorithm for protecting advertisers from offensive content. Kenzo Tribouillard/AFP via Getty Images

On Martin Luther King Jr. Day this year, the Atlantic decided to recirculate King’s famous “Letter From Birmingham Jail,” which the magazine had run in its August 1963 issue and republished, in print and online, in 2018. Several hours later, the publication’s staff noticed that Google’s Ad Exchange platform, which serves many of the ads on the Atlantic’s website, had “demonetized” the page containing the letter under its “dangerous or derogatory content” policy. In other words: As part of its efforts to protect advertisers from offensive internet content with which they would not want their products to be associated, Ad Exchange had locked out one of the most important texts of the civil rights movement.

Google controls more than 30 percent of the digital ads market. A big chunk of that business happens through Ad Exchange, a marketplace for buying and selling advertising space across the web. According to its publisher policies, Google does not monetize, or allow advertising on, “dangerous or derogatory content” that disparages people on the basis of a characteristic that is associated with systemic discrimination—race, gender, sexual orientation, disability, etc. As the policy outlines, this might look like “promoting hate groups” or “encouraging others to believe that a person or group is inhuman.” Because of the scale of Google’s ad-serving business, however, it can’t enforce this policy on the front lines by hand, so instead the company uses an algorithm that, in part, scans for offensive keywords in articles. But the system doesn’t always take context into consideration. Several mainstream publishers, including Slate, have had articles demonetized under this policy when covering race and LGBTQ issues.
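
Google doesn’t disclose how that scanning works, but the failure mode publishers describe is consistent with matching articles against a list of flagged terms without weighing the surrounding context. Purely as an illustration, here is a minimal sketch of that kind of filter; the function name, term list, and threshold are hypothetical, not anything Google has confirmed.

```python
# Illustrative sketch only: context-blind keyword flagging of the sort publishers
# describe, not Google's actual system. The term list, names, and threshold are
# hypothetical stand-ins.
import re

FLAGGED_TERMS = {"slur_a", "slur_b"}  # placeholder tokens, not a real slur list


def contains_derogatory_content(article_text: str, threshold: int = 1) -> bool:
    """Flag an article if it contains at least `threshold` flagged terms.

    Nothing here distinguishes a slur used to attack someone from one that
    is quoted, reported on, or analyzed; the same words produce the same
    flag regardless of context.
    """
    words = re.findall(r"[\w']+", article_text.lower())
    hits = sum(1 for word in words if word in FLAGGED_TERMS)
    return hits >= threshold
```

A filter built this way catches hate speech and journalism about hate speech alike, which is the gap the appeals process is meant to close.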

For example: Last Thursday, Google informed Slate’s advertising operations team that 10 articles on the site had been demonetized for containing “dangerous or derogatory content.” The articles in question covered subjects like white supremacy, slavery, and hate groups, and most of them quoted racial slurs. They included pieces on the racist origins of the name kaffir lime, the 2017 police brutality movie Detroit, Joe Biden’s 1972 Senate run, and a Twitter campaign aimed at defaming Black feminists, all of which contained quotes with the N-word. Another, about the use of offensive words in tournament Scrabble, referenced a book with the N-word in its title, and a demonetized Dear Prudence column reproduced a reader letter asking for advice about a racist nephew who had used an ethnic slur for Middle Eastern people. Articles about the end of slavery in Massachusetts, the legacy of “assimilation,” and Twitter debates, as well as a podcast transcript from the Slow Burn season on white supremacist David Duke, either quoted or described racist views.

Needless to say, the articles were not promoting the discriminatory ideologies associated with these slurs but rather reporting on and analyzing the contexts in which they were used.

Once flagged by the algorithm, the pages were not eligible to earn revenue through Ad Exchange. Slate appealed the moderation decisions through Google’s ad platform last Thursday morning, as it normally does when it believes a demonetization is unjustified. Not long after, as part of the reporting of this story, I contacted Google’s communications department, which said it would ask the engineering team to look into the flags. The pages were remonetized by Friday morning.

Other publications told me they’ve run into similar issues with the “dangerous or derogatory content” policy. BuzzFeed has had articles covering racism and sexual orientation demonetized, such as a profile of former Breitbart editor Katie McHugh, an investigation into Chicago police officers writing racist posts on Facebook, a list of distinguished queer people of color from history, a feature on controversies surrounding Pride events, a news post about a woman who yelled slurs at a CVS, and a column about Tyler Perry. BuzzFeed did not appeal the decisions, so the articles are still demonetized.

Google’s advertising policies for publisher content were recently in the public eye when the company removed the far-right blog Zero Hedge from Ad Exchange and threatened to do the same to the Federalist, a right-wing news and commentary site, in June. However, Google took those enforcement actions because of racist reader comments that kept appearing on the sites’ articles. Some commentators took the episode as evidence that Big Tech censors conservatives, but Google did the same thing to the tech blog Techdirt in 2018. In Slate’s case, Google’s algorithm flagged content within the articles themselves rather than the comments. (Slate puts comments on separate pages from its articles.)

“We have strict policies against content that promotes hatred, intolerance, discrimination or violence against others and have automated tools to help quickly uncover the use of slurs. We recognize that context can matter, in news reporting for example, so publishers always have the option to appeal the decision and we will quickly review,” a spokesperson for Google said in a statement. The spokesperson declined to disclose what exactly had triggered the flagging mechanism in the case of the recent demonetizations on Slate out of concern that it might give bad actors information about how to game the system, but did admit that the moderation algorithm can at times single out keywords associated with discrimination without recognizing the context. The spokesperson framed it as a trade-off between being able to flag offensive content immediately with automatic triggers for keywords and allowing publishers to earn revenue on journalism dealing with racism. She also noted that it could be tricky to whitelist a publication like Slate so that it would never get flagged for these keywords because they could appear in derogatory contexts in the comment pages.

How big of a problem are demonetizations like this for a publisher like Slate? “We decide what to cover based on what we think is important for our audience and what we think is important for society, and that won’t be impacted by Google’s demonetization decisions,” said Dan Check, Slate’s CEO. He further noted that while revenue from Google ads is important, Slate has many other revenue streams. And because Slate publishes hundreds of articles per month, demonetizing 10 of them likely wouldn’t make much of a dent.

Still, the demonetizations underscore the fact that publishers’ ability to make money from their work remains partially at the mercy of opaque algorithms that can be tweaked at any time, possibly with significant consequences for their business. That concern is especially pronounced for news organizations whose mission is to focus on topics, like racism and discrimination, that aren’t necessarily ad platform–friendly.

It does make sense that Google would err on the side of demonetizing too much, as opposed to too little, when it comes to “dangerous or derogatory content”—because there’s a lot of offensive stuff on the web, certainly, but also because the advertisers that spend billions with Google are notoriously skittish about placing ads next to content that is even remotely controversial. Apart from Google’s own policy enforcement mechanisms, advertisers themselves can choose to limit where their ads appear based on content, which can be a disincentive for covering certain stories. In March, for example, when the coronavirus began severely affecting the U.S., publishers saw huge spikes in traffic as readers sought out information on the pandemic. Yet businesses didn’t want their ads appearing next to news about sickness and death, so many publishers weren’t able to capitalize on a coverage area where there was intense reader interest.

While Slate’s articles were remonetized the day after I contacted Google’s PR team, the appeals process can often take longer, and filing an appeal doesn’t always mean that an actual human employee at Google will get involved to look at the context. Indeed, the effectiveness of Google’s system hinges on how quickly those appeals are handled. A rapid process that gets humans involved early and often shouldn’t pose too many problems. But in an environment in which so much of online journalism is still funded through advertising, sluggish enforcement that relies too heavily on automation could dissuade publishers from quoting slurs in any context, or even from devoting more resources to covering issues having to do with racism and other forms of discrimination in the first place.
