A human rights lawyer responds to Catherine Lacey’s “Congratulations on Your Loss.”
A few years ago, I attended a meeting for litigators at a digital rights conference. When entering the room, I saw many familiar faces, and a few that were unfamiliar. When I introduced myself to one of the women I had never seen before, a white woman, she reacted in a most offended manner. “Yes, we met this morning at your office,” she snapped at me. Given that I had been nowhere near my office that morning, I was quite sure she was mistaken. In the course of this awkward exchange, it dawned on me that she was confusing me with my boss: also a woman of color, but in no way resembling me otherwise. “Ah, yes, we all look alike,” I sighed, rolling my eyes, and moved on.
Over the years, I have been told I look “exactly like” other women of color ranging from Naomi Campbell to Michelle Obama to someone’s cousin once removed who they swear could have been my twin. Much can be said about these generalizations, but the difficulty of telling apart people whose ethnic background differs from our own is a well-known phenomenon. The “cross-race effect,” or “own-race bias,” is thought to contribute to difficulties in cross-race identification and to implicit racial bias, but there is no consensus among scientists as to its cause.
Technology also has difficulty distinguishing us from one another: Facial recognition software developed in Western countries generally fails to properly recognize faces that are not those of white men, in part due to a lack of diversity in the databases used to train the technology (which tend to reflect the composition of the designer base: mostly white men). In the future where Catherine Lacey’s “Congratulations on Your Loss” takes place, facial recognition is pervasive and has evolved to such a degree that it is trusted to better know our faces than we know ourselves. When Enid receives a fine for jaywalking, her first reflex is to pay it. As she is writing the check, however, it dawns on her that she could not have been where the photo said she was. Lacey writes, “Enid took the photograph to the window to study it more closely, as it was possible she’d failed to recognize herself, but no—this face was not her face, and even though it can be difficult to know your own face, as they so often appear to us in the dullest moments, Enid felt sure this person was not her person. Also—she didn’t own a blue purse.” When Enid shares her doubts with her partner, however, he is convinced that if the photo said it was Enid, it must have been her. “They sent it to you because … it is you,” he says and proceeds to prepare dinner.
While we are not (yet) living in a full panopticon surveillance state, surrounded by facial recognition everywhere we might go, cases of mistaken identity are already occurring. The story of what has been described as the first known wrongful arrest based on a false facial recognition match almost reads like the beginning of a dystopian novel. After being arrested on his front lawn in front of his wife and daughters in 2020, Robert Williams was confronted with an image from a surveillance video showing another Black man. “Is this you?” Williams remembers a detective asking. After holding the photo next to his face, Williams told the New York Times, one of the detectives concluded, “I guess the computer got it wrong.” Nevertheless, Williams was not released until hours later. In total, he spent 30 hours in police custody. The ACLU has since taken up his case.
Though Williams’ case received widespread attention, it almost certainly wasn’t the first time someone was wrongly implicated by facial recognition technology—and it definitely won’t be the last. While reports of other cases have surfaced, predominantly in U.S. news, we know very little about the missteps occurring elsewhere, including in places such as China, where facial recognition is pervasive. Victims of such missteps have been taking legal action to seek redress for false arrest, imprisonment, and, more generally, violations of their civil rights. More systemic challenges to facial recognition are also underway: Recently, the U.K. human rights organization Liberty successfully challenged the use of facial recognition technology by the South Wales Police, an important first step that will hopefully spur many more cases like it around the world.
In the story, Enid decides to contest her jaywalking ticket. Or she tries—but the automated system at the public office makes this impossible. At first, this actually puts her mind at ease, because “Who knew the world better than the cameras, all the cameras aimed on high, cameras without human eyes or human memories to flaw them?” But when she gets yet another fine, and then another, and another, she decides to fight back. She compiles a dossier for her appeal, which gets assessed by a jury of computers “approximating a dozen reasonable human minds. The verdict … would be complete and final, inarguably flawless.”
Defeated by the system, Enid eventually gives up and finds comfort in her apparent interchangeability with other people. “All at once she was certain she wasn’t going to go around being confused anymore, that it is simply true that we’re all mistaken for who we are and who we are not.”
If there is one thing we should take away from the story, it is that we should avoid that mindset at all costs. Enid’s world certainly seems like the logical progression of the way we currently engage with technology: a blind faith in its infallibility, combined with an overall carelessness about assigning it its proper place—namely, as a tool to assist us, not as something whose judgment we automatically value above our own observations and opinions. If we give in to that, we will end up enslaved to systems of our own making. As we continue automating our lives, entrusting important decisions to technology not only in our justice system but also in welfare, health care, and schooling, we need to continually pause and ask ourselves critical questions. One of those questions should be: Even if we can make this technology work nearly perfectly, should we want to? In the case of facial recognition technology, the answer is a definite no.