Is This the End of Facial Recognition?


S1: Hey, everyone, before we get started, I just want to note that today’s episode about facial recognition technology is a developing story. Things are moving quickly and may have changed even since we recorded this. OK, here’s the show.

S2: Hello. Hello.

S1: Hi, Deb. Hey, how are you? On Wednesday night, I called up Deb Raji. Deb works on identifying bias in artificial intelligence, and she audits companies for how their AIs perform. When I called her, Deb was at a loss for words.


S3: Oh, my gosh. Yeah, they were moving so quickly. Oh, my God.

S1: Every few minutes, her phone was lighting up with texts and calls from friends and colleagues because Deb studies bias in facial recognition software.

S4: And this week has been one of the most eventful ones of her career.

S5: IBM said that it is getting rid of their facial recognition programs. The company is also calling for a public debate on law enforcement’s use of it.

S4: On Monday, IBM sent a letter to Congress saying it was getting out of the facial recognition business. The letter also urged lawmakers to look at how technology is being used for mass surveillance and racial profiling.


S6: Amazon says it is temporarily banning police and law enforcement from using its controversial facial recognition software.

S4: Then on Wednesday, Amazon announced that it, too, is backing away from facial recognition, putting a year-long pause on letting law enforcement use its platform. As the week went on, it was like a dam broke. On Thursday, Microsoft said it wouldn’t sell its facial recognition technology to police departments.


S1: People like Deb who’ve been working on this for years and tangling with these companies were blown away.

S3: Amazon is like the most belligerent. Like they’re the meanest company I’ve ever dealt with. To be frank, they’re just so mean. They’re just so aggressive. They put us through literal hell.


S1: “Us” in this case means three computer scientists behind a groundbreaking study on facial recognition: Deb, Timnit Gebru, and Joy Buolamwini. If you’ve heard about how these companies’ facial recognition technology isn’t accurate on darker skin, especially darker-skinned women, it’s probably because of their work.

S2: Their initial response was incredibly defensive and incredibly dismissive of my and Joy’s work, and also of the position of other people that had attempted to audit or to critique their use of facial recognition, or their sale of facial recognition.

S1: So to go from that to a position where they’re actually willing to forfeit the sale of their technology for a year feels like this very important win, which is why my phone is blowing up, because people are very excited about that and they know how involved I’ve been in this story since the beginning. Do you think, and I’m asking you to sort of, you know, step back and look at this from 35,000 feet, and that may be impossible in this moment. Yeah. Is this a turning point in the kind of big conversation about facial recognition?


S3: I definitely think this is the turning point. I think this is, hopefully, you know, the first set of dominoes.

S7: The fact that IBM made that public stance and then Amazon followed is so important. That is already enough to shift the public conversation around this topic and hopefully also shift the perception of policymakers. Today on the show: the fight over facial recognition software, how it’s weaponized against people of color, and why tech companies are hitting the brakes now. I’m Lizzie O’Leary, and this is What Next: TBD, a show about technology, power, and how the future will be determined. Stay with us.


S1: Back in the summer of 2017, before she started her research on algorithmic bias, Deborah Raji was working at a company called Clarifai, a computer vision startup.


S8: And that’s where I got introduced to machine learning, and I kind of entered the research world. And I remember sort of the first time I saw my first face dataset, noticing right away that there was a lack of diversity and representation in the datasets. And I was trying to have this conversation with people in my office.

S2: But also just more broadly, I was trying to say like, hey, I think this is the problem. But the response was always like, it’s so hard to collect data. Why would we think about this extra dimension of representation? This is so hard to do. Like what you’re asking for is something that is so difficult. And like, this is the way it’s done and everyone’s accepted it.


S1: Like, that was sort of the response I was getting. At the same time that Deb was noticing this lack of representation in the datasets, another computer scientist had noticed it, too.

S6: Hello, I’m Joy, a poet of code on a mission to stop an unseen force that’s rising, a force that I called the coded gaze.

S1: My term for algorithmic bias. Joy Buolamwini is a researcher at the MIT Media Lab, and back in 2016, she gave a TED Talk about her work.

S6: Algorithmic bias like human bias results in unfairness. However, algorithms like viruses can spread bias on a massive scale at a rapid pace.

S2: So, yeah, finding Joy’s TED Talk was this important moment of, like, oh my gosh, there is another person that cares about this. I reached out to her sort of exactly in that moment and ended up working with her afterwards.


S1: The project they worked on, along with Timnit Gebru, is called Gender Shades.

S2: And we talk about, like, how it’s probably not a coincidence that our group were all Black women. We’re like, I don’t think it’s a coincidence that we were all the people that noticed this at the time.

S1: The researchers knew that these facial recognition programs with limited data sets were being used by law enforcement agencies around the country. So inaccurate results could have real world consequences. Can you describe the work you did and what it showed?

S2: Gender Shades is a black-box audit of commercial AI products. So these are tools that companies today sell and clients currently use. So nothing that was audited was experimental. Everything was in the wild, in use.


S1: This is mass market stuff.

S2: Yeah, exactly. What if you just tested these products on a benchmark that was representative with respect to gender and race? What would happen?

S9: What would we discover? And what we discovered was that when you test these products on darker females, it performs 30 percent worse than it did on lighter-skinned males. And that was a really big discovery: these products were not actually things that worked well for everybody that they were selling them to. And that, coupled with the reality that they were selling this technology, or pitching this technology, to ICE at the time, to different intelligence agencies, to local police departments, was really alarming. It demonstrated the fact that, you know, facial recognition at this point is disproportionately being used to sort of monitor minority communities as part of this law enforcement pitch, but also not performing as well in those settings, which is obviously a very alarming safety risk. What do these companies do with the police? What are these contracts for?
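[A minimal sketch, in Python, of the kind of black-box subgroup audit Deb describes: run a commercial model on a benchmark labeled by gender and skin type, then compare error rates per group. The field names and the toy records below are illustrative assumptions, not the actual Gender Shades data or code.]

# Illustrative sketch of a black-box demographic audit (hypothetical
# field names; not the actual Gender Shades code or data).
from collections import defaultdict

def subgroup_error_rates(records):
    """records: dicts with 'gender', 'skin_tone', 'true_label',
    'predicted_label' (the label returned by the vendor's API)."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        group = (r["gender"], r["skin_tone"])
        totals[group] += 1
        if r["predicted_label"] != r["true_label"]:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy example with made-up results from querying a vendor's product.
benchmark = [
    {"gender": "female", "skin_tone": "darker",
     "true_label": "female", "predicted_label": "male"},
    {"gender": "male", "skin_tone": "lighter",
     "true_label": "male", "predicted_label": "male"},
]
print(subgroup_error_rates(benchmark))
# A gap on the order of 30 percentage points between darker-skinned women
# and lighter-skinned men is the kind of disparity the audit surfaced.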

S2: In some cases, it’s something reasonable, such as attempting to shortlist a group of suspects for a crime. So they might have security footage and a bunch of faces in the security footage, and they’ll try to identify who in their mug shot database fits or aligns with the faces that they see in the video. So a lot of what they do is this idea of face verification, or matching a face that I have in my dataset to a face that I know is from a suspect in a crime. I have a huge dataset; how do I do that quickly and efficiently? So in the case of, like, shortlisting suspects, it feels not as bad. But in a lot of cases, they might also use it on sketch photos. If they don’t have a picture of the suspect, they’ll ask, you know, victims to describe them to someone that will sketch it out, and then they’ll put in the sketch photo and use that to search through the rest of their database of mug shots.
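[As a rough illustration of the matching step described above: one common pattern is to embed each face as a vector and rank a mug shot gallery by similarity to a probe image. This is a generic sketch under assumed names; embed_face is a hypothetical placeholder for a face-embedding model and does not represent any particular company’s system.]

# Generic sketch of shortlisting candidates from a large gallery by
# embedding similarity; not any specific vendor's implementation.
import numpy as np

def embed_face(image) -> np.ndarray:
    """Hypothetical placeholder: map a face image to a fixed-length vector."""
    raise NotImplementedError("plug in a real face-embedding model here")

def shortlist(probe_vec, gallery_vecs, gallery_ids, k=5):
    """Return the k gallery identities most similar to the probe,
    ranked by cosine similarity."""
    gallery = gallery_vecs / np.linalg.norm(gallery_vecs, axis=1, keepdims=True)
    probe = probe_vec / np.linalg.norm(probe_vec)
    scores = gallery @ probe                # cosine similarity per gallery face
    top = np.argsort(scores)[::-1][:k]      # indices of the best matches
    return [(gallery_ids[i], float(scores[i])) for i in top]

Whether the probe comes from security footage or from a composite sketch, the search step is the same, which is part of why a low-quality probe image can still produce confident-looking matches.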


S1: Yeah, and you can imagine just how many false arrests happened as a result of that. Even though there was research from people like Deb that showed major flaws with the technology, all the big players in the field kept their products on the market. This continued until Monday, when IBM announced they were stopping their program entirely. So this week, IBM says they’re no longer going to offer or develop facial recognition technology. And I guess I wonder, as someone who is immersed in this and studies it, if you look at this announcement differently than a regular person reading the headlines.

S8: So since we’ve been watching these companies effectively for a while, I know a lot more of the backstory leading up to that announcement. It’s not a spontaneous decision. I don’t think it’s as bold as IBM has set it out to look right now. For a lot of outsiders right now, it’s just kind of like, wow, IBM abandoned all of these important big contracts and just spontaneously made this decision. And because it’s happening at a moment of high racial tension in the States, but also just a lot of reckoning with respect to the racial history of the States, it seems as if, like, oh, IBM, you know, had this realization in light of the protests and everything that’s happening. But the reality is they’ve been working towards this position for a long time, and this is the most financially beneficial position for them to take at this moment. How so? IBM was called out in Gender Shades, and they were the quickest to respond: within four months, they had released a new product in response to the revelation that there was this huge disparity in their performance on different demographics. They tried this idea of, like, let’s build a big dataset to fix it. Following that, they were exposed because they had used Flickr images without any consent in order to collect that many faces, and it ended up being this embarrassing situation for them, where the conversation around privacy and consent was completely neglected.


S1: After the privacy scandal, IBM then argued that their facial recognition technology could be used if there were strong regulations around it. But critics weren’t happy with that either.

S8: So to kind of summarize: they started off being called out for their bias, trying to fix the bias, and then being called out on privacy, trying to fix the privacy and noting, like, oh, it’s only restricted to these particular use cases, and then being called out for that restriction. So they’re kind of at a point where, you know, they probably weren’t doing a lot of business. They definitely weren’t a big player, and they needed to pull out.

S1: Amazon, though, is a different story. The company is one of the biggest players in facial recognition technology, and its software, called Rekognition, with a K, is being used by police departments around the country, from Florida to Washington.

S4: Deb says that out of all the big companies, Amazon was the most resistant to reform when it became clear that there were bias problems with its algorithm. And you can see hints of that resistance in the way Amazon announced its moratorium on Wednesday.

S1: There’s a really interesting difference to me when I read these statements from these two companies. IBM is writing this letter to Congress kind of about leaving the market, and they’re kind of offering all this information to members of Congress. Amazon puts up a blog post that basically says, we’re not going to do this for a year and we’re curious what Congress is going to do. It feels like they sort of kicked the ball toward lawmakers and sort of said, okay, your turn.

S3: Yeah, but they’re not kicking the ball. They’re choosing to play a different type of game. So their strategy is really probably to focus on the regulation conversation, because with IBM’s announcement, they’re aware that that conversation is going to shift in favor of some kind of ban or moratorium or restriction. And maybe it could even just be them creating space for themselves to figure out what’s going to happen with respect to regulation before they actually invest more in the product. The reason why I’m still happy that the moratorium is a thing is because this is very premature technology. It’s not something that should really be out there at all. So the fact that it’s not being sold or pitched to police departments is a huge win, because that protects every person that that police department is going to use the tool on.


S1: Well, one thing I’m curious about is, in the wake of the protests that we’ve seen around the country, Amazon’s own employees, and employees of Microsoft as well, have called for their companies to stop working with police forces, on facial recognition and on other contracts. Do you get the sense that Amazon was listening to its employees?

S3: Yeah. So there’s sort of this ongoing campaign with respect to pushing for the restriction of facial recognition use. So Amazon really stood out as this target within this broader campaign against facial recognition. There had been letters by Congress members. There had been a campaign by the ACLU, and an audit by the ACLU. We had done an audit and we had published it as a peer-reviewed paper against Amazon.

S1: Shareholders had attempted to have a vote against the use of facial recognition. Employees had signed a letter. And still they sort of refused to budge. So I feel like the fact that they budged right now is an accumulation of so much advocacy that preceded it. I want to talk through kind of the intersection of racial justice and facial recognition technology, something you have obviously studied a lot. As you said, one of the issues is that the data sets are biased and that they are, at worst, inaccurate. But I guess I wonder, what if they were wholly accurate and very good at identifying Black people in particular? Yeah. Isn’t that also a concern when you’re talking about the sale of these products to, say, police departments?

S2: Yeah, for sure. I think there’s a danger when things don’t work, and there’s a danger when things do work and get weaponized. So, for example, you know, there’s a lot of cases in immigration where false matches happen all the time and the wrong people get deported. It doesn’t just happen there; it happens in the criminal justice system, with mug shots being matched with the wrong person in a security camera.


S8: Those are situations where an individual is in danger because the technology does not work. And then there’s also the case of weaponization. Right. Like, even if you have the perfect facial recognition model and all the data’s perfectly encrypted: there was an apartment complex in Brooklyn, and the landlord had decided to install a facial recognition system in the apartment to, quote unquote, monitor the residents. Like, he explicitly weaponized that tool to sort of harass them and monitor them and, you know, potentially even build his cases for eviction. So in that case, you know, the residents really lobbied hard to get rid of facial recognition, because they understood that whether it works or not, this is not for me. This is not built for my benefit. This is being weaponized against me.

S1: There is a lot of discussion, and I assume after this week there will be more, about banning facial recognition technology altogether. Yes, several municipalities have done it: San Francisco, Oakland, a couple of places in Massachusetts. All of this, and talking to you, makes me wonder, is there a way to use facial recognition that won’t be dangerous to someone?

S2: I think no. Given the current paradigm of how these models are built, it necessitates this massive-scale privacy violation. And this is something that I think sometimes people don’t always realize: your face is the equivalent of a fingerprint as a biometric. I think we would all be alarmed if we found out that there were massive data sets of fingerprints that people were uploading, you know, as part of their social media profiles, just publicly and freely to the Web, just casually uploading fingerprint data. I think it would be, I don’t know, alarming, like, oh my gosh, don’t upload your fingerprint data. That’s so crazy. Why would you do that? But we upload our faces all the time to the Web, and there’s so much data about each and every single one of us available with respect to this identifiable biometric. So for me, I really feel like there’s no use case that can really justify setting up the infrastructure required to build these models.


S1: Amazon said explicitly that they’re going to continue to employ their facial recognition technology for finding missing children and fighting human trafficking. In your eyes, is that a meaningful distinction?

S2: I just can’t imagine a use case where facial recognition is the only answer. Like, there is no way. Why would people say, like, finding missing children? There’s other tools, other methods to do that. You don’t only need facial recognition.

S1: And in fact, there’s some research that people are now undertaking where they’re finding out that facial recognition as a tool for, sort of, like, law enforcement is not actually that effective, for the reasons I mentioned, where people don’t use it properly, where it ends up leading to a lot of false arrests, but not necessarily a more effective law enforcement strategy. And yet when you look at what these big players are doing, it feels like the horses, you know, are already out of the barn. How do you view the role of someone like you in making sure that this stuff is used responsibly? Is it about making sure it works and is trained on a large and diverse data set, or making sure there aren’t privacy violations, or something completely different?

S2: We’ve gone through a period of does it make sense to make these models better or should we just bring it all down?

S10: Right now, we have a bunch of tools that don’t work on multiple axes, that are threats to safety on multiple axes, and that are currently widely deployed, like, today. So can we just scale it back and just stop, in terms of how widely it’s being used? And then, if it is decided upon in the future, like, OK, here’s a version of the technology that we want to use for some particular use case, then we carefully deploy it with limited access. It shouldn’t be this free-for-all that we currently have, where anyone can just take this tool and weaponize it against anyone else.


S11: Deborah. Thank you very much. Thanks for having me.

S12: Deborah Raji is a technology fellow at the AI Now Institute at New York University. That’s our show. TBD is produced by Ethan Brooks and hosted by me, Lizzie O’Leary, and is part of the larger What Next family. TBD is also part of Future Tense, a partnership of Slate, Arizona State University, and New America. If you missed Monday’s episode of What Next, you should definitely go back and listen. Mary Harris talked to Dr. Howard Markel, one of the doctors who came up with the idea of social distancing, about the delicate questions of protesting during a pandemic. Mary will be back in your feed on Monday. Talk to you all next week.