How Automation Bias Encourages the Use of Flawed Algorithms

A protest against ICE raids in New York City on July 12.
Spencer Platt/Getty Images

From 2013 to June 2017, the U.S. Immigration and Customs Enforcement’s New York Field Office determined that about 47 percent of detainees designated as “low risk” should be released while they waited for their immigration cases to be resolved, according to FOIA data obtained by the New York Civil Liberties Union. But something changed in the middle of 2017. From June 2017 to September 2019, that figure fell to 3 percent: Virtually all detainees, the data shows, had to wait weeks or even months in custody before their first hearing, even if they posed little flight risk.

All that time, ICE used the same software to determine a detainee’s fate: the Risk Classification Assessment tool, which is supposed to consider an individual’s history—including criminal history, family ties, and time in the country—to recommend whether that person should be detained or released within 48 hours of arrest. When ICE introduced the algorithm in 2013, the Intercept reported Monday, it offered four possible outputs: detention without bond, detention with the possibility of release on bond, release, or referral to an ICE supervisor. In 2015, the algorithm was edited to remove the bond option. Then, after the 2016 election, it was changed again to remove the release output. According to the NYCLU and Bronx Defenders, the possibility of bond or release has been “all but eliminated.” (ICE personnel can still technically override the tool’s recommendations, which may explain why that 3 percent of low-risk detainees were still released.)
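The mechanics of this change can be sketched in a few lines of code. If a tool recommends the least restrictive outcome it is permitted to emit, then deleting outcomes from its output space changes every recommendation without touching the underlying risk assessment at all. This is a hypothetical illustration, not ICE's actual software; the function name, risk categories, and outcome labels are invented for the example.

```python
# Hypothetical sketch of a risk tool whose output space can be narrowed.
# The risk categories and outcome names are illustrative only.

def recommend(risk, allowed):
    """Return the least restrictive outcome the tool is allowed to emit
    for a given risk level, falling back to a supervisor referral."""
    preference = {
        "low": ["release", "bond", "detain"],
        "medium": ["bond", "detain"],
        "high": ["detain"],
    }
    for outcome in preference[risk]:
        if outcome in allowed:
            return outcome
    return "refer_to_supervisor"

# The 2013-era output space vs. one with bond and release removed.
full = {"release", "bond", "detain", "refer_to_supervisor"}
narrowed = {"detain", "refer_to_supervisor"}

print(recommend("low", full))      # release
print(recommend("low", narrowed))  # detain
```

The same "low risk" input yields release under the full output space and detention under the narrowed one: the assessment is unchanged, but the recommendation is predetermined.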

On Feb. 28, the NYCLU and Bronx Defenders filed a lawsuit alleging that as a result of this change, ICE has illegally detained virtually all the thousands of people its New York Field Office has arrested over the past three years. ICE, the NYCLU argues, has a legal obligation to consider whether release is appropriate within 48 hours of arrest.

The lawsuit aims to restore due process and end what it calls “ICE’s manipulation of the legal process,” by requiring ICE to make individual assessments and abandon its “hijacked algorithm.” “If the New York Field Office were actually conducting individualized determinations pursuant to its stated criteria,” the lawsuit says, according to the Intercept, “the percentage of people released should have actually increased since 2017 because more people arrested qualified for release.”

ICE’s reliance on a risk assessment tool is not unusual, even as it becomes increasingly clear that algorithmic biases tend to affect marginalized communities and people of color. Risk-assessment algorithms in particular have been in use for decades and, more recently, have become an integral part of the criminal justice system, from policing to evidence to sentencing. Algorithms have started to replace bail hearings, for instance, helping determine who goes to jail, and police departments use them to predict future unlawful activity—a practice civil liberties groups say leads to heavier policing of communities of color. In large part, we still don’t know how these algorithms work: Many of them are kept secret from the public, often in the name of protecting intellectual property.

One reason for the dependence on algorithms is “automation bias,” the human tendency to give computer-generated decisions more weight than they sometimes deserve, says Colleen Chien, a professor at Santa Clara University School of Law who researches innovation and the criminal justice system. “There’s been a lot of criticism of risk assessment tools, like this one, particularly in the bail and pre-trial contexts,” Chien said. “But the reality is that human beings like to get information from what they think is an objective source.”

This “veneer of objectivity and certainty,” as Chien puts it, is particularly attractive to government agencies. “The reality is that administrators in governments have to make hard decisions every day,” said Chien, who served in the Obama White House as a senior adviser on intellectual property and innovation. “And they are going to use tools that help them do it more efficiently, do it more consistently, and more accurately.”

Sometimes, as in the case of ICE’s Risk Classification Assessment, these tools can start to feel like an “algorithmic rubber stamp,” Chien said. But algorithms tend to reflect the systems and agencies that use and create them. In this context, the data behind ICE’s tool is perhaps unsurprising, since the agency has escalated its terror tactics and “become an arm of Donald Trump’s nativist agenda” since 2017.

Chien stressed, however, that algorithms also hold potential for making systems more efficient and for upholding the presumption of innocence. For instance, “[t]he criminal justice system is notoriously biased in terms of its history,” Chien said. So “when you look at what’s the impact of the algorithm, you need to take a baseline, and then you need to measure how are we changing from [that] baseline,” she said. Chien advocates for a clean slate policy, which would use algorithms to seal or clear Californians’ criminal records. She also pointed out the potential of facial recognition technologies, which are often criticized, for exonerating suspects.

“I think there’s a lot of ways in which algorithms can be very beneficial in criminal justice,” said Chien. “But I think the reality is, again, that they’re being used whether or not the public thinks they’re beneficial.” (A recent report by the Administrative Conference of the United States details how pervasive AI tools are across federal agencies.) “The stories you read about are just the tip of the iceberg,” said Chien.

Given the ubiquity of these algorithms—and the secrecy in which they’re currently allowed to operate—it only becomes more necessary to hold government agencies accountable for their algorithmic practices. The ICE case is just the latest, and most public, example of how algorithms can be weaponized, under the guise of impartial justice, toward a predetermined end.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.