AI Policy & Governance, Equity in Civic Technology, Privacy & Data

Taking a Hard Line on AI Bias in Consumer Finance

Financial institutions increasingly rely on AI systems to guide, or even make, decisions such as whether a consumer is offered credit. CDT joined civil rights, consumer, technology policy, and other advocacy organizations in comments to the regulators of these institutions (the “Agencies”), explaining that the Agencies need to protect consumers against discriminatory and opaque financial AI systems.

CDT also separately submitted our own comments to the Agencies, emphasizing a few points on which the Agencies should clarify how financial institutions and third parties must fulfill existing legal obligations when developing and using AI systems. Our recommendations focus on:

  • Nondiscrimination: The Agencies should clarify through guidance and regulation that the use of AI systems that embed bias violates antidiscrimination obligations, and provide guidance about how to identify and mitigate risks arising from different types of training data. 
  • Explainability and Transparency: The Agencies should require financial institutions to clearly explain why and how they use consumers’ data and AI decision-making, bridging information gaps that would otherwise prevent consumers from vindicating their rights when the use of AI systems leads to violations of antidiscrimination laws.
  • Shared responsibility with third parties: The Agencies must make clear that financial institutions bear responsibility to ensure that the third-party AI systems they use do not result in discrimination, such as by enforcing due diligence requirements and providing guidance on how small community institutions with fewer resources can meaningfully scrutinize the AI systems they use.

“Neutral” Data and Disparate Impact

Traditionally, credit and other consumer finance decisions draw on a wide range of data such as credit scores, employment history, and criminal records, and AI systems are often trained on those same types of data. Yet reliance on ostensibly neutral data can entrench or even amplify historical discrimination because, for example, the data reflects previous discriminatory decisions.

Indeed, AI systems can compound discriminatory effects. For example, some consumers who lost their jobs due to the COVID-19 pandemic have been denied unemployment benefits because of flawed algorithmic public benefits systems, which in turn prevented them from paying for rent and other necessities. Falling behind on those payments could hurt their credit scores and result in eviction proceedings. Other AI systems could in turn render adverse decisions for these consumers in the future based on the nonpayment and eviction proceedings that would be added to their credit records.

The inclusion of data that embeds bias is not the only problem. Disparate impact can also occur when consumers are “credit invisible” because their credit records lack enough of the common indicators of creditworthiness. Accessing credit requires having a credit history, but building a credit history requires access to credit. This paradox can preserve historical discrimination. Credit invisibility particularly affects marginalized consumers due to socioeconomic barriers and biases (algorithmic or otherwise) in public benefits, education, employment, and the criminal justice system, as well as in longstanding consumer finance practices themselves.

Consumer Transparency

Consumers struggle to understand what data AI systems use and how they produce their outputs, and the systems’ proprietary nature hinders consumers’ ability to interrogate them. The Federal Reserve has noted that even those who create and deploy AI systems may be unable to explain how their systems generate their results. Consumers often are not told what data will be considered in rendering a decision, so they have no chance to provide supplemental information for the AI decision-making process. Keeping consumers in the dark also enables financial AI services to rely on proxies for protected traits that consumers may not readily recognize as such.

After an adverse decision is made, consumers lack enough information to challenge the data or the process the AI system used, or to meet their disparate impact burden by demonstrating that a less discriminatory but effective alternative was available. This opacity leaves financial institutions and third parties with little incentive to proactively explain how their systems work or to develop less discriminatory designs. The Agencies therefore need to make clear that even as financial AI services evolve, financial institutions’ obligations under the law still stand, and that those obligations encompass transparency about how AI contributes to decision-making, what data it uses, and how AI systems arrive at their decisions or recommendations.

Passing the Buck on Responsible AI

In many cases, financial institutions may rely on AI developed and/or operated by third parties. The Agencies should make explicit that this does not absolve financial institutions of responsibility. Financial institutions are responsible for assessing risks arising from those third-party relationships, conducting due diligence in selecting third-party partners and systems, and overseeing quality control to ensure that those systems do not lead to discrimination or other violations of fair lending laws.

The Agencies should articulate how community institutions that may lack the internal resources to fulfill these obligations can connect with external technical and policy expertise. The Agencies should also specify the factors and inquiries that community institutions should consider when selecting and regularly reviewing the third-party AI systems they use.

***

The complexity of financial AI services, the numerous stakeholders involved, and the lack of enforcement have made it easier for financial institutions and third parties to avoid responsibility for an AI system’s discriminatory outcomes. A recent FICO survey found that the majority of corporate executives do not prioritize implementing principles of Responsible AI. According to the survey, nearly two-thirds of companies cannot explain how their AI systems work, while only 20% actively monitor their AI products for fairness.

Many of these companies do not take any further initiative beyond checking for baseline regulatory compliance, so stronger guidelines and more aggressive enforcement are needed to foster more ethical use of AI. Like the FTC, the Agencies must send a clear message to financial institutions and third parties: if they do not hold themselves accountable, the Agencies will do it for them.

Read CDT’s full comments here.

Read the comments we joined with the National Fair Housing Alliance, including the full list of signatories, here.