Like every other major social media company since the 2016 election, Twitter has spent the past year and a half arguing with itself and its users over how to police harmful speech on its platform. The latest stage of this debate comes as the company faces criticism for declining to outright ban conspiracy theorist Alex Jones, as Facebook and YouTube now have. In recent weeks, those discussions have begun to center on a concept that Twitter seems to think should be the north star of efforts to make the social network a more welcoming, productive, and above all safe public commons: dehumanization.
The idea of using dehumanization as a rubric for deciding when and how to police content comes out of conversations that Twitter’s leadership has had with teams across the company about what it can do to “help customers feel safe as it relates to hate speech, driven by a principle of minimizing real-world harm,” wrote Del Harvey, Twitter’s vice president of trust and safety, in a letter to Twitter employees last week. “Academic research has consistently linked dehumanization with violence,” Harvey noted, adding that acts of dehumanization often precede acts of violence and that dehumanizing speech not only can hurt the people it targets but can also fan hatred in those who read it. The company said it is reviewing a dehumanization policy with staff and is also working on other changes to how it moderates potentially harmful behavior on its platform.
Harvey is right about dehumanization. Before someone acts violently against another, it’s not uncommon for that person to see the other as less than human. And it isn’t only physical violence where the effects of dehumanization are at play: Racism, sexism, and other prejudicial behaviors and systemic abuses are often symptoms of some flavor of dehumanization, wherein one person sees another person, or even a whole class of people, as less human than herself. Not all violence, of course, is preceded by dehumanization—violence can stem from competition or distrust, too—but a lot of it is.
So, yes, thinking about hate in terms of dehumanization makes sense—to a point. But relying on the concept of dehumanization as the sole basis for determining harm on the internet does not. It’s not only an incomplete way to address many of the problems with Twitter and other social media platforms, which have pulled hateful ideas and fringe theories closer to the center of public life; dehumanization is also an easily misapplied standard—one that could be used against the very groups whose humanity is most often diminished.
Let’s start with why dehumanization is an imperfect window into harm. On the internet we’re all already digitized, less human versions of ourselves. And while it is unequivocally true that hate against real people online can translate into harm in real life, the fact that we don’t know precisely who others are online makes dehumanization an inadequate yardstick for knowing when discussions that center on someone’s lack of humanity are harmful. It’s much easier to call someone fat or stupid or incompetent or a racist slur online than it is face to face. It’s easy to pile on, and the person on the receiving end of the vitriol may disappear offline in response, making it even easier to keep the hate going against their avatar. These effects are almost certainly amplified by the design of social networks—particularly Twitter, which allows for anonymity and automated accounts.

In many ways Twitter is an anti-empathy machine, and for the company to start caring about dehumanization now would force it to reckon with its own role in how we all see one another as a little less human when we’re on it. Log on to Twitter, and it doesn’t take long to see people I know to be kind in person rip to shreds someone they have never met. Maybe that person said something wrong or even inexcusable, but Twitter’s setup incentivizes its users, at least in part, to take the most uncharitable interpretation possible and respond with the sharpest jab or wittiest zinger. Negativity gets engagement—an issue Twitter CEO Jack Dorsey seemed to acknowledge in a Washington Post interview this week. Twitter is dehumanizing, and it’s not just the bigots who make it so.
While there are plenty of bigots engaging in dehumanizing speech, not all of the harmful content on Twitter fits so neatly into that box. A standard of dehumanization, for example, doesn’t help you moderate lies—a problem worsened by the fact that we see everyone as a little less human on Twitter. With his repeated claims that the parents of the victims of the Sandy Hook massacre are lying, Alex Jones didn’t suggest that these people were less than human; he asserted that their lived experience wasn’t real and that the murder of their children was a hoax. When he alleged that the survivors of the Parkland school shooting were actors working on behalf of a plot to craft gun control regulations, he wasn’t asserting that those teenagers were less than human; he was saying they were capable of playing a complex, politically motivated role. That’s not dehumanization. He was exploiting the fact that, since most people don’t know who these victims are in real life, it’s impossible for them to know that the victims don’t have an ulterior motive. It was hateful and harmful, but it didn’t call their humanity into question. It was an untruth. Still, it caused harm.
Consider the harassment I’ve experienced on Twitter: Someone once made a fake account pretending to be me and tweeted, in my name, about how much I hate white people. That wasn’t an act of dehumanization against me. It was a lie. When a lie circulates online about someone, it’s possible to believe it not because you think that person is less than human but because you don’t know who they are. On the internet, anyone can pretend to be whomever they want.
If Twitter is truly committed to making its platform less harmful, it will have to not only look for acts of dehumanization but also be forthright about a broader system of values it promises to abide by. If users flagging the tweets of bigots can cry “dehumanization!” then so can bad-faith actors like the far-right trolls who mobilized to convince Disney to fire director James Gunn over a series of vile, but clearly joking, years-old tweets. Dehumanization is an easily weaponizable rubric. In contrast, a policy of banning racism—that is, actual racism against marginalized groups—is not.
Which is why Twitter needs a policy centered on actual values, applied earnestly. That may mean one philosophically attuned to cases of dehumanization but also explicitly geared toward, say, prohibiting racism and discrimination against people of color and people in poverty, an approach rooted in a historical understanding of oppression. That may mean leaving up content that mocks white people, since such language has not historically been used as a means of justifying oppression. It may mean a dedication to prohibiting hate speech against Muslims if the aim appears to be justifying hateful actions against the religious group. It could mean removing Holocaust denialism with the understanding that it’s an act of anti-Semitism.

Sometimes the hate is already in practice, and established hate groups don’t need to publicly dehumanize people to rally their followers, as with the cadre of Proud Boys accounts Twitter correctly booted off the platform before the sequel to the white-supremacist Unite the Right rally last week. If Twitter were clear that it is a place where bigotry on the basis of race, class, religion, gender, and sexual orientation is subject to removal, and that the judgment will be made as swiftly as possible with an eye toward context, it would be a new era for the platform. It could mean taking action against those with large followings first, in order to signal that using Twitter as a platform to promote hate, or untruths that could lead to harm, is not what the service is here for.
It won’t be easy, and some may well label Twitter unfairly censorious or politically biased. But neither conservatives nor liberals should need hate in order to share or express their political leanings, and if someone insists that hate is part of their political thinking, then that thinking may be rooted in racism and may need to find another place on the web to exist. Dehumanization can lead to harm, and Twitter should watch for it, but it’s far more important that the company declare itself against racism and against the promotion of falsehoods that may inspire harm. Admitting that these decisions aren’t clear-cut, that executing them even with a large staff of moderators is difficult, and that high-profile offenders like Alex Jones, who use social media to promote untruths that have already caused violence, should be prioritized is a good way to start. Whatever Twitter does, it should center its policy not on a useful but pliable concept but on clearly articulated values. And then it should have the courage to stand by them.