At a New York event celebrating the legacy of Martin Luther King Jr. held in the Riverside Church last week, Democratic Rep. Alexandria Ocasio-Cortez sparked a small firestorm when she argued that algorithms reflect human bias.
“Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions,” she said. “They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”
“Socialist Rep. Alexandria Ocasio-Cortez (D-NY) claims that algorithms, which are driven by math, are racist,” replied Daily Wire reporter Ryan Saavedra, kicking off the latest cycle of online conservative handwringing about something AOC has said.
Ocasio-Cortez was right, though, and what she said should not be that controversial. While algorithmic processes are intended to rationalize decision-making by minimizing human bias and fallibility, the data they consume already reflects that bias, and so algorithms continue to perpetuate racial bias, just at greater speed and with the illusion of independence. As machine-learning algorithms replace human decision-making in areas of life including hiring, law enforcement, and pretrial bail, AOC’s point is a crucial one.
Still, this discussion, while vital, misses an important point. Even if the data were somehow clean, we should still worry that algorithmic decision-making perpetuates racial inequality.
Let’s take the use of algorithms in policing as an example. Increasingly, police departments across the country are employing algorithms to predict patterns of criminal behavior based on historical crime and arrest data. These algorithmic predictions inform police tactics, such as how aggressively different neighborhoods are policed. But as researchers have pointed out, past arrest data is already skewed by racial bias: as long as there is racially biased police profiling (which there no doubt is), arrest data is more predictive of police behavior than of criminal behavior.
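To make that feedback loop concrete, here is a minimal, hypothetical sketch — not any real predictive-policing system, and every name and number in it is invented for illustration. Two neighborhoods have identical underlying crime rates, but one starts out patrolled twice as heavily; a simple algorithm then reallocates a fixed patrol budget in proportion to past arrests. Because arrests track patrol intensity rather than crime, the initial disparity never washes out.

```python
# Hypothetical illustration: arrests reflect police presence, not just
# underlying crime, so an algorithm trained on arrest counts locks in
# whatever patrol disparity it starts with.

TRUE_CRIME_RATE = 10.0  # identical in both neighborhoods, by construction


def simulate(rounds=5, initial_patrols=(2.0, 1.0)):
    """Return per-round arrest counts for two neighborhoods.

    Neighborhood A starts with twice the patrol intensity of B, even
    though both have the same true crime rate.
    """
    patrols = list(initial_patrols)
    history = []
    for _ in range(rounds):
        # Arrests scale with patrol intensity, not with crime alone.
        arrests = [TRUE_CRIME_RATE * p for p in patrols]
        history.append(tuple(arrests))
        # The "predictive" step: reallocate the fixed patrol budget
        # in proportion to past arrests.
        total_arrests = sum(arrests)
        budget = sum(patrols)
        patrols = [budget * a / total_arrests for a in arrests]
    return history


if __name__ == "__main__":
    for round_no, (a, b) in enumerate(simulate()):
        print(f"round {round_no}: A={a:.1f} arrests, B={b:.1f} arrests")
```

In every round, neighborhood A records twice the arrests of B despite identical crime, so the arrest data the algorithm consumes describes police behavior, not criminal behavior.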
But suppose we have good math. Suppose that we have data about past crime rates that does not incorporate biased policing. Then would Saavedra be right to be so derisive? Is it absurd to think that algorithmic decision-making driven by bias-free math could be racist?
No, it is not.
Even with our clean data, it is likely that algorithms would recommend increased and targeted policing of neighborhoods whose residents are predominantly socioeconomically disadvantaged and belong to historically marginalized racial groups. This would indeed be a form of “bias-free” or “rational” profiling. But it would in no way take historic racism out of the equation.
The thought would be simple: There’s more crime here, so there should be more policing here. Nothing biased about that. One might argue that such law enforcement decisions are based on hard data; the algorithm merely presents us with the rational course of action. The problem is that this conflates seemingly rational decisions with just decisions. In the case of algorithmic decision-making, a seemingly rational decision can easily be an unjust one.
This is because justice requires not only that political institutions create the formal objective conditions necessary for persons to relate to one another as equals (e.g. equal protection of the law), but also that they foster the subjective conditions necessary for persons to view themselves as equals. In other words, justice demands both that we’re treated as equals, and that we see ourselves as equals.
While often overlooked, this principle of justice has a distinguished pedigree. We can locate it in King’s objection to the practice of racial segregation: “I could never adjust to the separate waiting rooms, separate eating places, separate rest rooms, partly because the separate was always unequal, and partly because the very idea of separation did something to my sense of dignity and self-respect.”
Here, King distinguishes two ways in which segregation was unjust: First, segregation failed to formally accord equal treatment to black people in the South. And second, segregation prevented black people from viewing themselves as possessing equal self-respect and dignity; in other words, it fostered a subjective sense of inferior status.
King’s second objection matches the subjective requirement of justice. The importance of self-respect in a just state is also familiar to liberal philosophy; John Rawls treated “self-respect” as a primary good in his theory of justice. “Self-respect” in the Rawlsian sense is “a person’s sense of his own value, his secure conviction that his conception of the good, his plan of life, is worth carrying out” and “a confidence in one’s ability, so far as it is within one’s power, to fulfill one’s intentions.” Justice requires that the state take steps to ensure that people possess a sense of equal self-respect, that they don’t view themselves as holding an inherently inferior status to others in society.
Rational algorithmic decision-making still violates this principle because it can undermine a person’s sense of equal self-respect. Much like with racial segregation, police targeting of predominantly black neighborhoods can erode a sense of equal status and dignity. As philosopher Adam Hosein put it, people in these neighborhoods “can reasonably interpret profiling as signaling a lack of regard for their interests: either a willingness to rely on racial stereotypes about them or a willingness to use their race as a negative proxy in ways that cause substantial disadvantage.”
In 1967, exactly one year before his assassination, King delivered his famous “Beyond Vietnam” speech at the same Riverside Church where AOC made her recent comments. King offered this prescient advice:
When machines and computers, profit motives and property rights, are considered more important than people, the giant triplets of racism, extreme materialism, and militarism are incapable of being conquered.
As artificial intelligence becomes increasingly involved in the distribution of primary goods, we must keep these prophetic words in mind.