
How Algorithms Can Punish the Poor

An interview with Automating Inequality author Virginia Eubanks.

As Americans look for greater efficiency in government, we increasingly turn to automated systems that use algorithms to determine who is eligible for housing and welfare benefits, who warrants intervention from child protective services, and more.

But what if automation only makes an already imperfect system more efficient, a system that can make life more, not less, difficult for those seeking access to food, shelter, and health care?

Virginia Eubanks, an associate professor of political science at SUNY–Albany, digs into exactly this question in her new book, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. I spoke with Eubanks about her book and the challenges and unintended consequences that arise when we give machines decision-making power about human need, public benefits, and state intervention. This interview has been condensed and edited for clarity.

Amanda Lenhart: In the book, you write about the “digital poorhouse” and discuss three examples of governments using digital tools to manage access to social safety net benefits and services: a system in Indiana that governs welfare benefits, one in Los Angeles that manages access to housing, and one in Allegheny County, Pennsylvania, that flags potential child abuse and neglect. Why did you pick these cases, and what do they illustrate?

Virginia Eubanks: These systems are really important to understand for a number of reasons. One is that when we talk about issues like automated decision-making or artificial intelligence or machine learning or algorithms, we have a tendency to talk about them in a very abstract way. And I think it’s really important to look at the places where these tools are already impacting people’s lives right now. We have a tendency to test these tools in environments where there’s a low expectation that people’s rights will be respected. So domestically, that would be in programs that serve poor and working people or in communities of undocumented immigrants [or communities of color]. Internationally, that would be in war zones. So that’s why I looked at public services.

We [also] have a tendency to talk about these tools as if they’re simple administrative upgrades. But I argue in the book that we actually smuggle all of these political decisions, all of these political controversies, all of these moral assumptions, into those tools. Often they actually act to keep us from engaging the deeper problems—they act as overrides to these deeper concerns. We often believe these digital tools are neutral and unbiased and objective. But just like people, technologies have bias built right into them.

Can you give an example of bias or moral assumptions baked into these platforms?

In Indiana, the governor signed what eventually became a $1.4 billion contract with a coalition of high-tech companies that included IBM and ACS in an attempt to automate and privatize all of the eligibility processes for the state’s welfare programs. If you see it simply as a question of efficiency, I think it makes a lot of sense.

But one of the assumptions built into the system was that the relationships between caseworkers, particularly local public caseworkers, and the families they served were invitations to collusion and fraud, and that part of making the system more efficient lay in breaking those relationships. The system was built to replace a casework-based system with a task-based system, so 1,500 public caseworkers were moved into regional call centers far away from their homes.

And rather than carrying a docket of families that they served, they responded to a list of tasks that dropped into a computerized queue. So nobody saw cases through from beginning to end, and every time a recipient called the call center, they talked to a new person. The result was that on these really complicated forms, which can run anywhere from 20 to 120 pages including supporting documentation, people often made mistakes. The applicant filling out the form might make a mistake. The call center might make a mistake. If any of those things happened, the recipient was denied benefits for the reason “Failure to cooperate in establishing eligibility.” And because this system relied on breaking the relationship between caseworkers and the families they served, no one was there to help people figure out what was wrong with their application. Any fault, any accident, any mistake was [considered] the fault of the applicant rather than the responsibility of the caseworker.

One million applications were denied in the first three years of the program, a 54 percent increase from the three years before that. And these are really horrifying cases—like an African-American woman in Evansville, Indiana, who missed a recertification appointment because she was in the hospital dying of ovarian cancer. She was kicked off Medicaid because she missed that appointment.

So a system that was talked about as just digitizing the process, freeing caseworkers from paperwork so they could really do their jobs, and increasing efficiency actually ended up, because of the political and moral assumptions that were built into it, acting as a huge machine for diversion and for denying people their basic constitutional rights. [Editor’s note: The controversial program ended after three years.]

In your book, you talk a lot about racial and ethnic inequality and income inequality, but there isn’t a lot explicitly about gender. What’s the story there?

Gender is the water that all of this stuff exists in. One of the things that will increase your likelihood of being poor is if you’re caring for other people. And that falls much more heavily on women, although increasingly it is an issue for men as well—particularly men of color.

But one of the invitations of the book is to shift our thinking about state violence from something that only happens at the hands of law enforcement officers to something that happens in many different ways. Policing is a process that happens not just in our interactions with law enforcement, but also in our interactions with public services, with homeless services, with child protective services. The ways we’re talking about state violence right now have a tendency to center the experiences of men. And I believe that is hugely important. What I’m hoping to do is open the conversation in a way that recognizes the kinds of policing that primarily happen to women, who are so often the primary caregivers in poor and working-class families. I think the move to digital incarceration, to electronic shackles rather than physical incarceration, is part of the same system. It is the same as threatening a homeless family with the removal of their children because they lack a home. These are very similar dehumanizing, oppressive forces. The most profound feminist argument in the book is really about how we recognize—or don’t—the experiences of women with policing, and the way that policing operates in all these other agencies beyond law enforcement.

Is there anything good about the use of algorithms around the provision of welfare benefits and the ways we create a social safety net? Can you give some examples of tools that get it right?

Most of the ones that I am aware of come out of civil society. But there are several tools that I’m really interested in. One is called mRelief, as in “mobile relief.” It’s out of Chicago.

One of the most frightening moments of applying for public services is when you don’t know if you’re going to be eligible, and yet you’re going to release just these reams of data about yourself. I remember this from applying for food stamps with my family. We were pretty sure we weren’t eligible, but we were pretty close to the line, and we really wanted to know. One of the things that mRelief does is it allows you to ping state systems without releasing your information. So basically they act as a third party—you release your information to mRelief. Then it anonymizes it and pings the state system and comes back to tell you, “Yeah, it looks like you’re going to be eligible for this,” or “No, looks like you’re not going to be eligible for this.” So you can make that decision about whether you want to release all that data in order to access a benefit.

One of the key insights of the book is that these tools are not disrupters so much as they’re amplifiers. It shouldn’t surprise us when a tool that grows out of our existing public assistance system is primarily punitive. Diversion, moral diagnosis, and punishment are often key goals of our public-service programs. But if you start with a different values orientation—if you start from an orientation that says everyone should get all of the resources they’re eligible for, with a minimum of disruption, and without losing their rights—then you can get a different tool.

Technology doesn’t drive all change, but it doesn’t just respond to society, either. It also shifts the way we understand ourselves. And it’s not neutral. So it’s not like the boot heel of Darth Vader, but it’s not like the magic spaceship that will save us all. [We] have to design from our values and not be surprised at the “unintended” consequences when we don’t design with equity in mind.
