Future Tense

IBM Says It Will Stop Developing Facial Recognition Tech Due to Racial Bias

[Photo: A man talks on his cellphone while walking in front of a giant IBM logo at the CeBIT trade fair in Hanover, Germany, on March 5, 2008. John Macdougall/Getty Images]

Facial recognition software is nothing if not fallible. In 2019, the National Institute of Standards and Technology demonstrated this with a study on A.I. systems used by police departments to identify alleged criminals. The study found that these algorithms falsely identified Asian and black faces 10 to 100 times more often than Caucasian faces. They misidentified Native Americans at an even higher rate. Findings like these have led activists to call for bans on facial recognition technology and to urge technology companies to stop developing such products. That movement scored a win on Monday, when IBM CEO Arvind Krishna announced in a letter to Congress that the company will no longer develop, research, or sell facial recognition technology.

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna wrote. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

In 2018, research from computer scientists Timnit Gebru and Joy Buolamwini, who have been leaders in discussing racial justice and artificial intelligence, revealed the breadth of bias across facial recognition software, including systems from IBM. That same year, the American Civil Liberties Union revealed that Amazon’s facial recognition program Rekognition mismatched 28 members of Congress to faces from public mugshots.

Yet this sort of technology is already in use. In January, for instance, a company called Clearview AI came under fire for marketing its facial recognition program to law enforcement agencies. The powerful tool allowed Indiana State Police officers to track down a shooting suspect in just 20 minutes based on footage from a bystander’s phone. (The CEO of Clearview AI has said that the company has a First Amendment right to scrape people’s images from publicly available websites.)

On Twitter, critics of facial recognition technology have cheered IBM’s decision.

In his letter, Krishna also argued that Congress should revise policies that prevent people from seeking damages when police violate their constitutional rights. “New federal rules should hold police more accountable for misconduct,” Krishna wrote. The letter ended with a pledge to create more pathways for everyone, particularly people of color, to develop marketable skills. To that end, IBM says it hopes Congress will expand programs such as P-TECH and federal Pell Grants.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.