AI Policy & Governance, Privacy & Data

Comments on HUD’s Proposed Implementation of the Fair Housing Act’s Disparate Impact Standard

Office of the General Counsel
Rules Docket Clerk
Department of Housing and Urban Development
451 Seventh Street SW, Room 10276
Washington, DC 20410-0001

Re: Reconsideration of HUD’s Implementation of the Fair Housing Act’s Disparate Impact Standard, Docket No. FR-6111-P-02

Dear Sir or Madam,

The undersigned individuals and organizations have expertise in the fields of computer science, statistics, and digital and civil rights. We write to offer comments in response to the above-docketed notice of proposed rulemaking (“NPRM”) concerning proposed changes to the disparate impact standard as interpreted by the U.S. Department of Housing and Urban Development (“HUD”). The NPRM creates new pleading hurdles that would make it practically impossible for plaintiffs to have their disparate-impact cases heard, especially when the alleged discrimination results from algorithmic models. The NPRM’s algorithmic defenses seriously undermine HUD’s ability to address discrimination, are unjustified in the record, and have no basis in computer or data science. Adopting the NPRM would violate HUD’s obligation to end discriminatory housing practices and to “affirmatively further fair housing.”

Algorithms are being used to make decisions that affect the availability and cost of housing. These decisions include screening rental applicants, underwriting mortgages, determining the cost of insurance, and targeting online housing offers. These models are seldom designed to take protected characteristics into account, yet they can still discriminate against protected classes. The datasets and correlations on which they rely can reflect societal bias in non-obvious ways, and models may reproduce or reinforce that bias. Models may also create or mask discrimination and bias independently of any societal bias in the data. This is precisely the type of discrimination that disparate-impact liability is supposed to address.

The Fair Housing Act (“FHA”) prohibits not only intentional discrimination but also practices that result in discriminatory effects, regardless of intent. HUD and the courts have recognized disparate-impact liability for over 25 years. Under HUD’s existing disparate-impact rule, once a plaintiff makes a prima facie showing of disparate impact, the burden of proof shifts to the defendant to show a business justification for the challenged practice, and the plaintiff can respond by proving that less discriminatory alternatives exist. This “burden-shifting framework” allows courts to examine the specific facts of a case and determine whether a facially neutral practice caused unjustified discrimination. Under the NPRM, most claims challenging algorithmic models that cause cognizable disparate impacts would never reach this fact-specific inquiry.

Under the NPRM, a defendant relying on an algorithmic model could defeat a claim at the prima facie stage by showing that (a) the model’s inputs do not include close proxies for protected classes; (b) a neutral third party determined that the model has predictive value; or (c) a third party created the model. These defenses do nothing to disprove discrimination, and they undermine efforts to address it. There is no method of restricting a model’s inputs that can guarantee the algorithm will not produce discriminatory outcomes, nor are there common industry standards for preventing algorithmic discrimination. In many cases, researchers have discovered bias only after testing a model’s outcomes. Indeed, the discrimination that algorithmic models can create is precisely the type of discrimination that the existing disparate-impact test was designed to uncover.
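
To illustrate why restricting a model’s inputs cannot guarantee non-discriminatory outcomes, consider the minimal sketch below. It is purely hypothetical and not drawn from the NPRM or any real scoring model: the groups, ZIP codes, incomes, and approval rule are invented for illustration. The model never receives the protected characteristic, yet because one of its facially neutral inputs (ZIP code) is correlated with group membership, its approval rates diverge by group, and the disparity becomes visible only when outcomes are tested by group.

```python
# Hypothetical illustration (not from the NPRM or any real scoring model):
# a model that never sees a protected characteristic can still produce
# disparate outcomes when a facially neutral input correlates with it.

import random
from collections import defaultdict

random.seed(0)

# Simulated applicant pool. Residential patterns make ZIP code a proxy:
# group A applicants mostly live in ZIP 11111, group B mostly in ZIP 22222.
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])      # protected class; never given to the model
    if group == "A":
        zip_code = "11111" if random.random() < 0.8 else "22222"
    else:
        zip_code = "22222" if random.random() < 0.8 else "11111"
    income = random.gauss(60_000, 15_000)  # facially neutral input
    applicants.append({"group": group, "zip": zip_code, "income": income})

def model_score(applicant):
    """A 'facially neutral' scoring rule: it uses only income and ZIP code.
    The protected characteristic is not an input, yet ZIP code carries
    group information, so the rule disadvantages one group."""
    score = applicant["income"] / 100_000
    if applicant["zip"] == "22222":        # e.g., a learned neighborhood penalty
        score -= 0.25
    return score

APPROVAL_THRESHOLD = 0.5

# The disparity only becomes visible when outcomes are tallied by group.
approved = defaultdict(int)
total = defaultdict(int)
for a in applicants:
    total[a["group"]] += 1
    if model_score(a) >= APPROVAL_THRESHOLD:
        approved[a["group"]] += 1

for g in sorted(total):
    print(f"Group {g}: approval rate = {approved[g] / total[g]:.1%}")
```

Under these invented parameters, the sketch yields approval rates of roughly 60 percent for one group and 30 percent for the other; the precise numbers are irrelevant, but the gap arises without any protected characteristic or close proxy ever entering the model, and it surfaces only through the kind of outcome testing the burden-shifting framework permits.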

We oppose the NPRM because it would unjustifiably shield users of algorithmic models from liability under the FHA. When a plaintiff alleges a disparate impact caused by an algorithmic model, courts should analyze the model under HUD’s existing burden-shifting framework, which allows for the discovery and fact-finding necessary to determine whether the model causes unjustified discriminatory effects and whether less discriminatory alternatives exist.