The Department of Housing and Urban Development is the nation’s chief civil rights enforcer for housing, charged with protecting the most vulnerable against housing discrimination. But a new proposed rule, now open for public comment, would radically reinterpret anti-discrimination law to give landlords and mortgage lenders a recipe to discriminate by algorithm.
Algorithms are used for everything from hiring and lending to deciding whom to release from jail. Housing is no different. Landlords use algorithms to screen tenants by looking at applicants’ credit ratings, and some even mine applicants’ social media accounts to track how often they go to bars and to build personality profiles. But algorithmic predictions are built on incomplete data and on past human decisions, so they reproduce the biases embedded in those decisions. Even a seemingly nondiscriminatory tenant-screening algorithm can wind up discriminating on grounds like race and gender. So a landlord who uses an algorithm to choose tenants may wind up discriminating without realizing it, and a landlord who wants to discriminate has a formula for doing so.
Housing discrimination has a long and vile history. Housing is the greatest store of wealth that Americans have, and for many years, federal law and policy explicitly discriminated against people of color, preventing them from building wealth and passing it on to the next generation. And because wealth is concentrated in home ownership, every other form of discrimination that affects economic life compounds housing inequality in turn. Widespread housing discrimination continues today, albeit often in subtler forms.
The Supreme Court held in 2015 that “disparate impact” discrimination—cases where there is no direct evidence of intentional discrimination, but still a disproportionate effect on the basis of protected classes like race, gender, or disability—is illegal under federal housing law. These cases, which rely on statistical proof of a disparate impact rather than proof of disparate treatment based on protected characteristics, have been a critical part of anti-discrimination law for decades. The HUD proposal is designed to make it harder for plaintiffs to win these cases. For example, under the new proposal, instead of landlords having to defend a tenant-screening policy on the grounds that it was the least discriminatory option, a prospective tenant would have to prove in court that it was not a valid choice in the first place. This is a major change for HUD, which in 2015 urged the Supreme Court to recognize the doctrine’s applicability.
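To make the statistical idea concrete, here is a minimal sketch with entirely made-up approval counts showing the kind of evidence a disparate impact claim rests on: a comparison of outcome rates across groups rather than any proof of intent. The 0.8 benchmark in the comments is the “four-fifths” rule borrowed from employment law, used here only as an illustration, not as HUD’s legal test.

```python
# Illustrative only: quantifying a disparity with hypothetical
# tenant-approval counts, not any legal standard HUD applies.

def selection_rate(approved: int, applied: int) -> float:
    """Share of applicants in a group who were approved."""
    return approved / applied

# Hypothetical outcomes from a tenant-screening tool.
groups = {
    "group_a": {"applied": 200, "approved": 150},  # 75 percent approved
    "group_b": {"applied": 200, "approved": 90},   # 45 percent approved
}

rates = {name: selection_rate(g["approved"], g["applied"]) for name, g in groups.items()}
impact_ratio = min(rates.values()) / max(rates.values())

print("selection rates:", rates)
print(f"impact ratio: {impact_ratio:.2f}")  # 0.45 / 0.75 = 0.60
# A ratio well below 1.0 (here 0.60, under the 0.8 "four-fifths" benchmark
# used in employment contexts) is the statistical showing that disparate
# impact cases rest on, even with no evidence of discriminatory intent.
```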
The proposal, while billed as a mere update to bring regulations in line with the Supreme Court’s 2015 decision, creates entirely new rules for landlords using algorithms. Functionally, it creates two separate defenses. First, the proposal allows landlords to use an algorithm as long as its inputs are not “substitutes or close proxies” for protected characteristics and as long as it is predictive of what it purports to predict—or a “neutral third party” certifies that fact. So if a hypothetical landlord decides to predict the number of noise complaints as a proxy for difficult tenants, using music streaming data they somehow obtained, they might find a correlation between preferred musical genre and how difficult a tenant is. Of course, musical preference is not a substitute or close proxy for race, but an algorithm that equates a preference for hip-hop with noise complaints is probably picking up on race as a factor in how often noise complaints are filed. Under this rule, unlike under existing law, a landlord would be off the hook even where a less discriminatory alternative exists. Second, the landlord is also immunized from any discrimination claim if he uses a tool developed and maintained by a recognized third party. These safe harbors supposedly ensure that a model is legitimate and that the landlord has not himself “caused” the discriminatory outcome.
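A toy simulation can make that hypothetical concrete. Everything in the sketch below is invented: the group labels, the genre and complaint rates, and the screening rule. It shows only how a model trained on a facially neutral input, here music preference, can reject one group at a far higher rate when that input happens to correlate with race.

```python
# A toy simulation (no real data) of the hypothetical landlord above:
# screening on music genre to predict noise complaints reproduces a racial
# disparity even though genre is not a "close proxy" for race.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: a protected-class group label, a genre preference
# correlated with group, and recorded noise complaints that are also correlated
# with group for reasons outside tenants' control (e.g., denser buildings or
# more aggressive complaint reporting in some neighborhoods).
group = rng.integers(0, 2, n)
likes_hiphop = rng.random(n) < np.where(group == 1, 0.7, 0.2)
complaints = rng.random(n) < np.where(group == 1, 0.3, 0.1)

# The "model": reject whichever genre preference has the higher complaint rate.
rate_if_listener = complaints[likes_hiphop].mean()
rate_if_not = complaints[~likes_hiphop].mean()
rejected = likes_hiphop if rate_if_listener > rate_if_not else ~likes_hiphop

for g in (0, 1):
    print(f"group {g}: rejection rate {rejected[group == g].mean():.0%}")
# Genre is not a stand-in for race on its face, but because it is correlated
# with race, screening on it rejects one group at several times the rate of
# the other -- the disparity the proposed safe harbor would never examine.
```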
The first rule misapprehends the problem with algorithmic discrimination. The concern is not only that someone might use a well-known stand-in for a protected class, like ZIP code, as an input when they secretly want to use race. That is a standard disparate impact concern, and it would remain one after this rule takes effect, but it is not a problem unique to algorithms. Algorithms present a more complex problem: Machine learning models rely on interactions between features to find unexpected patterns in the data, which can disproportionately harm people in disadvantaged groups, as in the music example above. For another example, a tool that relies on social media and the frequency of bar visits to build a personality profile might penalize someone who is clinically depressed or has just learned of a serious medical condition. If the landlord makes decisions on that basis, it would be illegal disability discrimination. These discriminatory patterns usually cannot be traced to any single feature in the model.
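A deliberately stylized sketch, with synthetic data and made-up feature names, illustrates why vetting inputs one at a time can miss this. Neither feature below looks anything like a proxy on its own, yet the two in combination reconstruct the protected class exactly.

```python
# Synthetic illustration: each input is statistically unrelated to the
# protected class on its own, but their combination encodes it exactly.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

feature_a = rng.integers(0, 2, n)   # hypothetical input, e.g. "applied on a weekend"
feature_b = rng.integers(0, 2, n)   # hypothetical input, e.g. "uses paperless billing"
protected = feature_a ^ feature_b   # group membership happens to track the combination

def corr(x, y):
    """Pearson correlation between two 0/1 arrays."""
    return float(np.corrcoef(x, y)[0, 1])

print(f"corr(feature_a, protected class) = {corr(feature_a, protected):+.2f}")           # ~0.00
print(f"corr(feature_b, protected class) = {corr(feature_b, protected):+.2f}")           # ~0.00
print(f"corr(a XOR b,   protected class) = {corr(feature_a ^ feature_b, protected):+.2f}")  # 1.00
# Each input would pass a feature-by-feature "close proxy" check, yet a model
# free to combine them can sort applicants by protected class perfectly.
```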
When the proposal’s new rule allows defendants to prove nondiscrimination feature by feature, it misses the whole point. The results of machine learning algorithms are also highly sensitive to how the problem is defined, to the dataset chosen to train the model, and to social context, so despite insinuations to the contrary, a model does not merely reveal some latent truth in the data. Rather, the use of such a model would itself be the cause of the discrimination.
The third-party defense, which is designed to let landlords use industry-standard tools, is no better. Right now, those industry standards don’t exist, though organizations such as the IEEE and the Partnership on AI are working toward that goal. (Disclosure: I am a Data & Society representative in the Partnership on AI.) But there is no guarantee that any nondiscriminatory standard will emerge. Discrimination is so context-dependent that off-the-shelf solutions are dangerous. Even state-of-the-art techniques for correcting algorithmic discrimination rely on demographic data to train the model, and when such a tool is deployed in a different city or town from the one it was trained on, its nondiscrimination guarantees no longer hold, as the sketch below illustrates. Furthermore, many industry-focused “ethical A.I.” efforts rest on overly simplistic assumptions and end in recommendations to build and sell more technology.
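Here is a minimal sketch of that portability problem, under invented assumptions: a screening tool whose per-group score thresholds were tuned to approve both groups at equal rates in the city where it was built is reused, unchanged, in a second city with different score distributions. The cities, scores, and thresholds are all synthetic; the point is only that a parity guarantee calibrated in one place does not travel.

```python
# Toy sketch (synthetic data only): a "debiased" tool calibrated for equal
# approval rates in one city loses that guarantee when reused in another.
import numpy as np

rng = np.random.default_rng(2)

def make_city(n, mean_a, mean_b):
    """Synthetic applicant scores for two demographic groups."""
    group = rng.integers(0, 2, n)
    score = np.where(group == 1, rng.normal(mean_b, 1.0, n), rng.normal(mean_a, 1.0, n))
    return group, score

def approval_rates(group, score, thresholds):
    """Fraction of each group approved under group-specific thresholds."""
    return {g: round(float((score[group == g] >= thresholds[g]).mean()), 2) for g in (0, 1)}

# "Training" city: pick per-group thresholds so both groups are approved at ~50%.
group_a_city, score_a_city = make_city(20_000, mean_a=0.0, mean_b=-0.5)
thresholds = {g: float(np.median(score_a_city[group_a_city == g])) for g in (0, 1)}
print("city A:", approval_rates(group_a_city, score_a_city, thresholds))

# Deployment city: different score distributions, same frozen thresholds.
group_b_city, score_b_city = make_city(20_000, mean_a=0.3, mean_b=-1.0)
print("city B:", approval_rates(group_b_city, score_b_city, thresholds))
# The roughly equal approval rates that held where the tool was calibrated
# (~0.50 each) do not carry over; the disparity reappears in the new city.
```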
While HUD claims that its rule “is not intended to provide a special exemption for parties who use algorithmic models,” the agency’s stated goal is to limit potential disparate impact liability so that it is easier to make “practical business choices and profit-related decisions.” If the new rule stands, HUD will be wildly successful in that goal. Landlords who do not want to discriminate will have peace of mind that they need never be troubled by pesky discrimination lawsuits again, as long as they buy industry-standard software, leaving their own discriminatory choices unexamined. Malicious actors can easily devise algorithms that functionally redline, as long as no single factor is clearly a substitute for a protected class. And, as is too common in housing, people of color, families with small children, and people with disabilities will suffer.
Beginning today, public comments will be accepted for 60 days. Anyone can submit one, following the instructions here. Even if it is unlikely that HUD will change course because of a comment, the agency must at least answer all substantive comments, and citizen comments will matter if the rule is challenged in court. HUD is relying on us not to notice or care, or to be too exhausted to stop its newest assault on civil rights. If you are affected by this or have relevant expertise, you can submit a comment to show the department that, if it cares about civil rights at all, this is the wrong approach.
When it comes to new technology and civil rights, we must be especially careful not to use technology to repeat or accelerate the mistakes of the past. Nowhere is that more true than in housing—a fact that the people working at HUD surely know well. By turning algorithm use into a shield for discriminators, HUD once again seeks to make federal policy the agent of discrimination, not the cure for it.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.