Future Tense

There’s No Easy Tech Fix for Online Hate Speech

Mark Zuckerberg says A.I. is on the way to help. But you should be skeptical.

Photo illustration by Natalie Matthews-Ramo/Slate (a robotic hand over angry, hate-spewing emojis). Photos by Thinkstock.

When Facebook CEO Mark Zuckerberg went before Congress last week, the nominal focus was on the Cambridge Analytica scandal. But legislators also grilled him on a whole menu of other issues, including fake news, foreign manipulation, and Facebook’s business model. Diamond and Silk came up a lot. At one point, Sen. John Thune, a Republican from South Dakota, asked Zuckerberg about the company’s content screening policies—and specifically, how it draws “the line between what is and what is not hate speech.”

Zuckerberg’s answer focused on a speculative future in which artificial intelligence will keep Facebook civil. “We’re developing A.I. tools that can identify certain classes of bad activity proactively and flag it for our team at Facebook,” he said. He acknowledged that those tools are hard to perfect because “determining if something is hate speech is very linguistically nuanced,” but he predicted that A.I. will start getting the hang of those nuances “over a five- to 10-year period.”

There are reasons to be skeptical of this, though. For one thing, breakthroughs in technology are always five to 10 years away.

More importantly: Computers are incredibly bad at reading.

Of course, the very act of performing a simple Google search demonstrates that automated systems can spot words in strings and groups and various permutations, and collect them from across the internet almost infinitely faster than a human could ever hope to. But to reliably identify hate speech, an A.I. would have to know what words or phrases mean, in a changing world occupied by human beings. And there’s no clear evidence that A.I. will master meaning in the next five to 10 years—or, for that matter, ever.

To understand the depth of the challenge Zuckerberg has taken up, I called Dan Faltesek, a researcher at Oregon State University and author of Selling Social Media: The Political Economy of Social Networking. His assessment was blunt: “It won’t do what they say it’s going to do, so I think right now they’re overpromising and they’re going to underdeliver, as a lot of these folks do.”

Faltesek studies, and teaches his students to use, the same kinds of content-screening tools that Facebook hopes will someday clean up its act. The current methods—used, for instance, by corporations monitoring their social media mentions—are startlingly dumb. “Topic modelers” prune the text of articles and highlight the most common repeated terms. “Sentiment analyzers” simply identify terms that have already been assigned particular emotional values by human trainers and tally the results. These systems see texts as mere collections of words or phrases, rather than as larger structures with complex internal relationships. “The computer is not semantically intelligent,” Faltesek told me. “It’s just scanning for that combination of letters.”
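
For a sense of how shallow that scanning is, here is a minimal sketch, in Python, of the kind of lexicon-based scorer Faltesek is describing. The word list and weights are invented for illustration, not taken from any real product; commercial tools use much larger lexicons, but the underlying tallying of pre-labeled terms is the same.

```python
# A minimal sketch of a lexicon-based "sentiment analyzer": it tallies scores
# for pre-labeled terms and never considers meaning. The lexicon and weights
# below are invented for illustration.

FLAGGED_TERMS = {
    "hate": -2,
    "awful": -2,
    "love": 1,
    "great": 1,
}

def naive_sentiment(text: str) -> int:
    """Sum the pre-assigned scores of any known terms, ignoring everything else."""
    score = 0
    for word in text.lower().split():
        score += FLAGGED_TERMS.get(word.strip(".,!?"), 0)
    return score

# The scorer has no sense of negation, sarcasm, or context:
print(naive_sentiment("I love this community"))          # 1 -> reads as positive
print(naive_sentiment("I don't hate anyone, honestly"))  # -2 -> reads as negative
```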

To prove the point, Faltesek recently took digitized short stories by Franz Kafka and “jammed them into a sentiment analysis system to see what it came up with. And it can’t really read them.” Kafka’s work includes plenty of the hurdles to machine reading recently cataloged by a group of Indian computer scientists, including “complex phrasal structures” and the fact that words can have different meanings in different contexts. Those challenges have already proved amazingly resistant to computerized solutions—IBM researcher H.P. Luhn, one of the pioneers of automated data processing, outlined many of them in 1957.

Even without semantic comprehension, simple keyword-flagging algorithms can reduce the workload on humans—but even they hardly seem ready for prime time. In September, Google unintentionally drove the point home when it publicly debuted its Perspective comment-screening system. Tests of the algorithm showed, among other things, that self-identifying as black or gay was deemed “toxic”—apparently because such terms had been flagged as contentious by designers, and the algorithm couldn’t judge otherwise based on context. We see the real-world outcome of such false positives with increasing regularity, such as when YouTube restricted otherwise benign videos whose titles merely included the words gay or lesbian. The Electronic Frontier Foundation warns that such overpolicing by A.I. would disproportionately “impact marginalized communities, journalists who report on sensitive topics, and dissidents in countries with oppressive regimes.”
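
The same blindness explains the false positives. Below is a hypothetical word-list filter of the sort just described (its blocklist is invented for illustration, not drawn from Perspective or YouTube), showing how benign self-description gets flagged while genuinely hostile phrasing sails through.

```python
# A hypothetical keyword filter illustrating the false-positive problem:
# once identity terms land on a block list, context no longer matters.
# The blocklist below is invented for illustration.

BLOCKLIST = {"gay", "lesbian", "black"}

def is_flagged(text: str) -> bool:
    """Flag the post if any blocklisted word appears, regardless of context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

print(is_flagged("Coming out: my life as a gay teenager"))  # True  -- a false positive
print(is_flagged("You people disgust me"))                   # False -- missed entirely
```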

At the same time, all forms of automated screening would likely have much less impact on those determined to evade it. “All of these people who are putting content on these systems, they are all adapting and highly creative,” said Faltesek. “They’re always playing within the game created by the A.I., so they’ll always find ways around it.” They might not have to be much more creative than Chinese social media users, who already deploy code words to beat their government’s dumb filters.
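
As a sketch of how little creativity that evasion requires, consider an exact-match filter like the hypothetical one below: a single character swap is enough to slip past it.

```python
# A hypothetical exact-match filter and the trivial substitution that defeats it.
# The banned term is a stand-in, invented for illustration.

BANNED = {"badword"}

def exact_match_filter(text: str) -> bool:
    """Block the post only if a banned term appears verbatim."""
    return any(term in text.lower().split() for term in BANNED)

print(exact_match_filter("you are such a badword"))  # True:  caught
print(exact_match_filter("you are such a b@dw0rd"))  # False: one character swap evades it
```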

Zuckerberg’s present-day solution isn’t necessarily more viable. In October, Facebook announced plans to hire 10,000 more human content screeners, at least until Zuckerberg’s A.I. dreams come true. That’s a stunning number considering that Facebook had only a little more than 25,000 employees in late 2017, but it’s probably still not enough. Speaking to Congress last week, Zuckerberg was specifically asked about allegations that Facebook helped spread hate speech that led to ethnic cleansing in Myanmar, and said he was working to hire more Burmese-language content reviewers. There are more than 7,000 languages spoken on Earth—how many of those deserve robust localized screening teams? Facebook currently screens much of its offensive content “reactively,” once it’s reported by users. Will it, on similar principles, only hire more local screeners in a nation after the genocide has begun?

Growing its workforce to the needed scale to responsibly oversee its billions of users worldwide, in short, could begin to cut into even Facebook’s stratospheric profits, putting its A.I. efforts into a frantic three-way race with both mounting public pressure and the stark mathematics of capitalism. Even huge improvements in current sorting-and-flagging approaches would probably still need humans in the loop to review flagged material. And other methods are still in their infancy—Faltesek says he has tried using much-vaunted neural networks without better results.

Pushing beyond that, to a system that could actually exercise dynamic and informed judgment about humans’ online communication, remains in the realm of science fiction. “The test to know when we actually have artificial intelligence,” said Faltesek, “will be if it has autopoiesis”—a biological term for systems that create and sustain themselves. “Can it make its own text? Can it cause itself to exist?” In other words, he believes that before Facebook can truly automate its moderation systems, we will have to reach the so-called Singularity, creating intelligent digital life in our own (cognitive) image.

And even that epochal achievement might not be enough—after all, even humans can’t easily agree on what’s hateful, obscene, or harassing, and often miss the true intent of communication. Just look at Americans’ collective, yearslong failure to recognize that the entire “alt-right” was exchanging coded endorsements of white supremacy. Terms that many now consider deeply offensive only became taboo through lengthy campaigns of public persuasion that convinced a majority of people that they were being used to motivate and organize material oppression.

Viewed as history, those processes can seem deceptively linear. But they’re actually messy, and can remain unresolved for decades. Many Americans still believe that denigration of Muslims reflects a justified fear of violence and social breakdown. Is cracker a racial slur or a gesture of resistance against an ethnic group that itself engages in systematic racial oppression?

Facebook would clearly like to become a primary channel for public discourse around the globe and seems finally to be acknowledging that the resulting profits come with serious responsibility. And there is certainly a role for even simple algorithmic filtering here—social media would almost certainly be a better place without brazen ethnic slurs, just as it’s a better place without ISIS propaganda. But if Facebook wants to automate away thorny judgments, it won’t just have to invent a machine smarter than any we’ve ever seen—it will have to create something more empathetic and insightful than we are ourselves.
