
Client Alert

Disparate Impact Lives: Massachusetts Attacks AI in Lending

July 15, 2025

By Jonice M. Gray, Kari Hall, Kris Knabe and Jessica Shannon

On July 10, the Massachusetts attorney general announced a $2.5 million settlement with a Delaware-based student loan company to resolve allegations that the company’s lending practices, including the use of artificial intelligence models, violated consumer protection and fair lending laws. The Massachusetts attorney general alleged that the company failed to mitigate the fair lending risks of disparate impact that its AI underwriting models posed to Black, Hispanic and noncitizen applicants and borrowers.

Specifically, the Massachusetts attorney general alleged the company engaged in unfair and deceptive practices in violation of state and federal laws, including the Equal Credit Opportunity Act (ECOA), for its use of a “Cohort Default Rate” variable, which was an average rate of loan defaults associated with specific higher education institutions, in its AI model. According to the settlement, the company’s use of the Cohort Default Rate variable resulted in a disparate impact in approval rates and loan terms on the basis of race, “with Black and Hispanic applicants more likely to be penalized than White applicants.”

The Massachusetts attorney general further alleged that the company utilized a “Knockout Rule” in its underwriting decisions to automatically deny applications based on immigration status. According to the settlement, if an applicant did not have at least a green card, the company would automatically deny the application. The Massachusetts attorney general alleged that the automatic denial practice based on immigration status created a disparate impact risk against applicants on the basis of national origin in violation of ECOA and Massachusetts state law.

Additionally, the Massachusetts attorney general alleged that the company deployed its AI underwriting models without taking reasonable measures to mitigate fair lending risks, including by failing to test its AI models for disparate impact.

Under the terms of the settlement, the company is prohibited from utilizing AI models or processes that use a school rank variable or Cohort Default Rate variable as inputs, or that automatically knock out noncitizen applications prior to the underwriting process. The terms of the settlement require the company to pay $2.5 million to Massachusetts, as well as to implement a written corporate governance system and written policies that govern the use of AI models and address compliance with consumer protection, antidiscrimination and fair lending laws. The settlement further requires the company to conduct fair lending testing of all AI underwriting models and to establish an internal algorithmic oversight team responsible for that testing.
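
For context, fair lending testing of an underwriting model typically involves comparing outcome rates across demographic groups. The short Python sketch below illustrates one common screen, the adverse impact ratio (sometimes called the four-fifths rule); the group labels, sample data and 0.80 reference threshold are illustrative assumptions and are not drawn from the settlement or the company’s models.

# Minimal illustrative sketch: adverse impact ratio across groups.
# Group labels, data and the 0.80 flag threshold are assumptions,
# not terms of the Massachusetts settlement.
from collections import defaultdict

def adverse_impact_ratios(decisions, reference_group):
    """decisions: iterable of (group, approved) pairs, where approved is a bool."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, is_approved in decisions:
        total[group] += 1
        approved[group] += int(is_approved)
    rates = {g: approved[g] / total[g] for g in total}
    ref_rate = rates[reference_group]
    # Ratio of each group's approval rate to the reference group's rate;
    # values below roughly 0.80 are commonly treated as a flag for further review.
    return {g: rates[g] / ref_rate for g in rates if g != reference_group}

# Hypothetical example: group A approved 80 of 100 applications, group B 55 of 100.
sample = [("A", True)] * 80 + [("A", False)] * 20 \
       + [("B", True)] * 55 + [("B", False)] * 45
print(adverse_impact_ratios(sample, reference_group="A"))  # {'B': 0.6875}

A ratio of roughly 0.69 in this hypothetical would ordinarily prompt closer analysis of the variables driving the disparity.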

Takeaways

  • Despite efforts by the federal government to sunset use of disparate impact, it remains a viable theory of discrimination for state agencies and private plaintiffs. Financial institutions should continue disparate impact monitoring and testing to mitigate risks, with heightened focus on the ways in which evolving technology may lead to unintended consequences.
  • While there has been a significant shift in the enforcement priorities and overall resources of the federal government, we have seen an increase in enforcement activity from state financial regulators and attorneys general related to consumer protections and fair lending.
    • States have a tool kit that is, in many ways, more powerful than that of the federal government. They have broad authority to investigate and bring enforcement actions under their state laws prohibiting discrimination as well as unfair and deceptive acts and practices. In addition, pursuant to the Dodd-Frank Act, they are bringing actions under certain federal laws, including the ECOA.
    • Key states are increasing their firepower on consumer protection issues and expanding their staff, often with newly departed employees of the Consumer Financial Protection Bureau and other federal agencies.
  • States may pursue tried-and-true theories, but they are also demonstrating an appetite for taking on cases that apply these theories in more novel ways. The potential discriminatory impact of using cohort default rates had been a significant focus for the federal government and private plaintiffs, but that focus dissipated in recent years as lenders adjusted their practices. Here, we see the tried-and-true theory being used, but with a novel twist focused on AI.
  • The use of AI in the financial services industry continues to be a hot topic and presents numerous potential risks alongside its many opportunities. Some regulators remain skeptical of algorithms, automated decision-making and other technology-enabled processes used by companies to make decisions about their products and services.
    • In this regard, the press release announcing the settlement called out the Massachusetts attorney general’s April 2024 advisory opinion clarifying that existing state consumer protection and antidiscrimination laws apply to emerging technology, including AI and algorithmic decision-making systems.
    • Companies should ensure that they have policies and procedures in place to appropriately mitigate risks associated with the use of AI and complex models, including, in particular, rigorous review of model inputs and of what the models are learning and applying, so that rapid course correction is possible if needed.
  • While we do not expect equal lending access for non-U.S. citizens to be a focal point for the Trump administration, it will continue to be a priority for some states. Here, as with disparate impact, institutions will have to navigate competing (and in some instances conflicting) regulatory priorities.


Contributors

Jonice M. Gray

Partner, Litigation Department


Kari Hall

Partner, Litigation Department


Kris Knabe

Of Counsel, Litigation Department


Jessica Shannon

Associate, Litigation Department