Future Tense

Silicon Valley Pretends That Algorithmic Bias Is Accidental. It’s Not.

Algorithmic bias is a function of who has a seat at the table. Benjamin Child/Unsplash

In late June, the MIT Technology Review reported on the ways that some of the world’s largest job search sites—including LinkedIn, Monster, and ZipRecruiter—have attempted to eliminate bias in their artificial intelligence job-interview software. These remedies came after incidents in which A.I. video-interviewing software was found to discriminate against people with disabilities that affect facial expression and to exhibit bias against candidates identified as women.

When artificial intelligence software produces differential and unequal results for marginalized groups along lines such as race, gender, and socioeconomic status, Silicon Valley rushes to acknowledge the errors, apply technical fixes, and apologize for the differential outcomes. We saw this when Twitter apologized after its image-cropping algorithm was shown to automatically focus on white faces over Black ones, and when TikTok expressed contrition for a technical glitch that suppressed the Black Lives Matter hashtag. These companies claim that such incidents are unintentional moments of unconscious bias or bad training data spilling over into an algorithm—that the bias is a bug, not a feature.

But the fact that these incidents continue to occur across products and companies suggests that discrimination against marginalized groups is actually central to the functioning of technology. It’s time that we see the development of discriminatory technological products as an intentional act by the largely white, male executives of Silicon Valley to uphold the systems of racism, misogyny, ableism, classism, and other axes of oppression that privilege their interests and create extraordinary profits for their companies. And though these technologies are made to appear benevolent and harmless, they are instead emblematic of what Ruha Benjamin, professor of African American Studies at Princeton University and the author of Race After Technology, terms “the New Jim Code”: new technologies that reproduce existing inequities while appearing more progressive than the discriminatory systems of a previous era.

Tech companies have financial and social incentives to create discriminatory products. Take, for example, Amazon Rekognition, a facial recognition service created and sold by the e-commerce giant. Amazon very publicly enacted a moratorium on police use of the software in June 2020 after protests in the wake of the murder of George Floyd. But prior to that, the company developed and sold this product despite mountains of evidence showing that the use of facial recognition by police departments amplifies harm toward Black people. Amazon did so to profit from a criminal justice system that disproportionately targets Black people for surveillance, arrest, and imprisonment, and it stopped only when protests against anti-Black racism brought attention to the company’s practices. Further, the development and sale of this technology help to maintain an anti-Black social hierarchy that allows Jeff Bezos and the white men who occupy Amazon’s highest-paid positions to maintain their privilege in our society.

We should view algorithmic bias as a spillover effect of a tech culture that has persistent racial and gender inequality in hiring and leadership and that has actively discouraged its employees from engaging in political discussions at work. Though the protests in 2020 prompted more explicit conversations about race and identity within tech companies, the culture of avoiding political discussions persists. This was evident when Basecamp CEO Jason Fried published a memo in April in which he banned employee discussions of social and political issues on company Basecamp accounts. In offering his reasoning, Fried wrote that “today’s social and political waters are especially choppy” and that “you shouldn’t have to wonder if staying out of it means you’re complicit, or wading into it means you’re a target.” The memo sparked online backlash and led to a cascade of employee resignations.

What Fried said has long been implicit within tech companies across the country: Discussions of “choppy” issues like racism, transphobia, misogynoir, and ableism are uncomfortable for those with privilege, and tech companies would prefer to avoid them. And though these companies have, in recent years, attempted to have more explicit discussions of race, gender, and bias within their workplaces, the belief that social issues like race are insignificant to technological development pervades the corporate culture of Silicon Valley.

By normalizing the avoidance of explicitly talking about social and political issues, tech companies are maintaining a culture that privileges the perspective of people with dominant identities, from hiring to product development. And the truth is that social and political topics are always being discussed. It’s just that people with racial, gender, and other forms of power see their identities and perspectives as the default and therefore not a part of the social and political landscape.

This is particularly true for white people, a group whose racial identity has long been treated as invisible in the United States. In research I conducted last summer, I found that the statements released by tech companies amid the racial justice protests rarely mentioned whiteness or white people. The choice to exclude white people from these statements—while hyper-focusing on Black people and other people of color—normalizes the idea that white people are raceless, and it absolves white people from their role in maintaining the racial hierarchy. In doing this, white people get to maintain their power within tech companies and avoid the feelings of fear, discomfort, and anger that may accompany discussions of racial inequality—a phenomenon sometimes known as white fragility.

It’s time for us to reject the narrative that Big Tech sells—that incidents of algorithmic bias are the result of unintentionally biased training data or unconscious bias. Instead, we should view these companies in the same way that we view education and the criminal justice system: as institutions that uphold and reinforce structural inequities, regardless of the good intentions or behaviors of the individuals within those organizations. Moving away from viewing algorithmic bias as accidental allows us to implicate the coders, the engineers, the executives, and the CEOs in producing technological systems that are less likely to refer Black patients for care, that may cause disproportionate harm to disabled people, and that discriminate against women in the workforce. When we see algorithmic bias as part of a larger structure, we get to imagine new solutions to the harms caused by tech companies’ algorithms, apply social pressure to force the individuals within these institutions to behave differently, and create a new future in which technology isn’t inevitable but is instead equitable and responsive to our social realities.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
