On Monday, Elon Musk bought Twitter. In a statement from Twitter marking the sale, he said, “Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated.” Indeed, he has repeatedly pointed to free speech as the reason for his investment in and purchase of the social network. His plan seems to be to reduce content moderation as much as possible. On Tuesday, he tweeted, “I am against censorship that goes far beyond the law.”
Let’s set aside that “free speech” is not “the First Amendment” and that different countries regulate speech in different ways. Musk’s refusal to engage in more than minimal content moderation is worrisome—especially to Black people. In response to the purchase, NAACP president Derrick Johnson released a statement saying, “Mr. Musk: free speech is wonderful, hate speech is unacceptable. Disinformation, misinformation and hate speech have NO PLACE on Twitter.”
Johnson’s statement gestures toward an important idea: Disinformation, misinformation, and hate speech all work together to silence people. As Musk talks up his vision of a “free speech” Twitter, it’s worth asking: Whose speech gets to be free?
Twitter has never really been a safe space for the free speech of Black women or other minority groups. Now it’s in the hands of a man who wants to roll back its existing protections and whose electric car company, Tesla, is currently being sued by California’s Department of Fair Employment and Housing for racial discrimination against Black people. It’s in the hands of someone who not only has shown his indifference to the rights of Black people but also thinks that content moderation is censorship.
The outlook for Black Twitter users is bleak, especially when we consider Musk’s history of turning his army of followers on his critics. All of this is why Twitter users—especially those from communities that often face harassment—are upset about Musk purchasing the platform they might not love but have made work for them. Far from protecting free speech, Musk’s purchase means that they will likely now be less able to speak.
For a very long time, Twitter called itself “the free speech wing of the free speech party” and did very little content moderation besides removing illegal speech (like child pornography). The results of that approach were clear to Black women early on, but few people who weren’t affected paid attention.
In 2019, Slate’s Rachelle Hampton wrote about a group of Black online feminists, headed by Shafiqah Hudson and I’Nasah Crockett, who showed how the notion of free speech is weaponized against Black women. Hudson and Crockett led a 2014 investigation into the operation of the hashtag #EndFathersDay on Twitter, which appeared on tweets advocating for the cancellation of the holiday because of the supposed inadequacies of Black men. Hudson and Crockett suspected the (very poorly written) tweets weren’t written by Black people—because their messages about Black life did not match Black reality. The #EndFathersDay campaign was launched a year after a Centers for Disease Control and Prevention study found Black dads were more likely to spend time with their children than any other group. Furthermore, the purportedly Black female personas sharing the hashtag used a series of negative stereotypes.
So Hudson, Crockett, and a group of their friends created the hashtag #YourSlipIsShowing, based on a phrase used in Southern Black communities to describe a situation where something that should be hidden is embarrassingly on display. They looked for messages using the #EndFathersDay hashtag, then commented, shared, or retweeted them with messages using #YourSlipIsShowing to point out these posts were misleading. The result: The women were subjected to online abuse from what they believed to be fake accounts. Hudson and Crockett eventually found that the #EndFathersDay hashtag originated on 4chan, the far-right message board. Although we do not know who started the #EndFathersDay campaign, we know Twitter’s free speech policy allowed bad actors to post messages that encourage the online harassment of Black women.
It’s a particularly notable story, but not a unique one. Mob attacks online are used to silence the voices of people from marginalized groups. In the years since #YourSlipIsShowing, Twitter has become increasingly willing to take down hate speech and harassment, but it hasn’t been able to stop them. Researchers have found that the racism and harassment so common on Twitter are made possible, and made worse, by algorithms. It’s not just that the algorithms can amplify and spread hate speech; it’s that the algorithms ostensibly intended to moderate content end up disproportionately penalizing Black users. A study by a team from the University of Washington showed that tweets written in African American Vernacular English were one and a half times more likely to be flagged as offensive or hateful by algorithms used to detect hate speech. In another study, content moderation algorithms on Twitter were found to have the same issues. While they didn’t look at the Hudson and Crockett case specifically, these studies suggest that A.I.-driven systems would have been more likely to flag the #YourSlipIsShowing corrective as promoting hate speech than the 4chan-driven #EndFathersDay campaign. Hudson and Crockett have been pointing out racial bias in content moderation for years, but their voices have largely gone unheard.
And the racist mob attacks have continued. When three Black members of England’s national soccer team missed their penalty kicks in the Euro 2020 final shootout, they faced torrents of racist threats on social media. Law enforcement had to investigate. Twitter reported that it removed more than 2,000 posts, which might make the platform look responsive—but it was not. In fact, its handling of the case was so poor that U.K. Prime Minister Boris Johnson urged social media companies to do more to stop racist abuse on their platforms.
The problem of Black women being attacked online is so significant that it has become a subject for the human rights community. In 2018, Amnesty International published a report that found Black women were 84 percent more likely than white women to be mentioned in abusive tweets. Another Amnesty International report said that in the run-up to the 2017 British general elections, more than half of all the abusive tweets sent to female members of Parliament were directed at one person: Diane Abbott, a Black woman who uses Twitter to advocate for the needs of her mostly Black and minority ethnic constituents.
What does this have to do with Musk? As all of the above shows, the algorithms that Twitter largely uses to handle content moderation have long been inadequate at protecting Black women. Now, according to the New York Times, Musk’s desire to restrain Twitter’s “overly aggressive content moderation policies” is central to his acquisition strategy. It seems likely that Black women will be the first ones to feel the effects of loosening content restrictions.
However, it’s also worth noting that the decision to weaken content moderation will eventually hurt us all. In 2021, the Center for Countering Digital Hate published a report on the “Disinformation Dozen,” a group of 12 disinformation agents responsible for 65 percent of all misleading COVID information on social media. Imagine if Twitter’s policy teams had listened to Hudson and Crockett’s concerns in 2014 and made it more difficult to create fake online personas or to spread mis- and disinformation. Researchers have linked COVID misinformation to early vaccine hesitancy, so the platform’s decision not to listen to Black women potentially cost people their lives. Should misleading speech be allowed to spread freely on social media platforms? I would say no. Elon Musk, however, would—and will—probably say yes.
We all think free speech is wonderful—but we live in a society where the voices of men are given more latitude than those of women, where the rich are listened to more than the poor, and where Black and other negatively racialized communities are not listened to at all. Despite what Musk says, there is no such thing as “free speech” when Black women and others do not have access to it. We need mechanisms to ensure that everyone has the freedom to speak, not just the people with the most power.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.