
Client Alert

Regulators Target AI Tools Used By Financial Services Companies

March 30, 2022

Allyson Baker, Meredith Boylan, Laurel Loomis Rimon, Erin Cass & Michelle Liu

Companies that offer consumer-facing financial products and services have increasingly relied on artificial intelligence (“AI”) and predictive algorithms to make advertising, lending, servicing, investing, underwriting, and insurance decisions. At the same time, regulators have become increasingly interested in how these tools are—perhaps inadvertently—affecting consumers. Algorithms allow consumer financial services providers to innovate more quickly and to scale those innovations more cost-effectively. Consumers ultimately benefit: market innovations provide greater choice and more competition, and algorithms and AI give consumers faster access to innovative and safer products. For example, algorithms allow financial services companies to more quickly and effectively detect and prevent fraudulent activity that could affect consumer accounts.

The proliferation of AI and algorithms, however, has also led to increased regulatory scrutiny. In particular, federal and state regulatory agencies are especially concerned with the discriminatory impact that can stem from algorithms that rely on models trained on potentially biased data. The Consumer Financial Protection Bureau (“CFPB” or the “Bureau”) and the Federal Trade Commission (“FTC”), for example, have issued guidance on how to accurately and appropriately use AI and algorithms, and continue to warn companies about the potential consumer harm stemming from their use, including how these predictive models can inadvertently introduce bias or unfair outcomes.

On March 16, 2022, the CFPB announced changes to its supervision and examination policies and updated its examination manual to “better protect families and communities from illegal discrimination,” noting its intent to focus on discriminatory impact. The Bureau announced that it would use its authority to prohibit unfair, deceptive, or abusive acts or practices (“UDAAP”), and specifically that its unfairness doctrine prohibits discriminatory outcomes. As part of these revised policies, the CFPB noted that it would closely examine financial institutions’ decision-making in advertising, pricing, and other areas to confirm that companies are appropriately testing for, and not engaging in, unlawful discrimination, including disparate impact stemming from the use of AI and algorithms that rely on potentially biased data. CFPB Director Rohit Chopra emphasized the Bureau’s interest in these issues in a March 23, 2022 statement regarding a report by the Interagency Task Force on Property Appraisal and Valuation Equity (“PAVE”) addressing the use of algorithms in discriminatory home valuations. Director Chopra stated that the report “underscores the critical importance of fair and accurate appraisals in residential real estate,” and noted that the Bureau and other federal financial regulators would be “working to ensure that algorithmic valuations are fair and accurate.”

The CFPB also updated its examination manual to state that, during supervisory examinations, the Bureau will routinely review the following:

  • “Documentation regarding the use of models, algorithms, and decision-making processes used in connection with consumer financial products and services.”
  • Input data, including “[i]nformation collected, retained or used regarding customer demographics . . .” and “any demographic research or analysis relating to marketing or advertising of consumer financial products and services.”
  • Policies and procedures to ensure that there are “decision-making processes” for addressing potential UDAAP concerns, particularly discriminatory impact.
  • Marketing and advertising materials to ensure that they “do not improperly target or exclude consumers on a discriminatory basis, including through digital advertising.”

In addition, other agencies have recently issued new guidance addressing financial technology (“Fintech”) companies’ use of algorithms in their decision-making processes and the potential for consumer harm:

  • January 12, 2021: Federal Reserve Governor Lael Brainard discussed the “black-box” problem inherent in AI algorithms and, consequently, the importance of companies building contextual knowledge, which may vary depending on the role of the financial services employee. For example, compliance officers should be able to explain to consumers the result of an AI-based business decision.
  • March 31, 2021: The OCC, Federal Reserve, FDIC, CFPB, and NCUA jointly issued a request for information and comment on financial institutions’ use of artificial intelligence and machine learning, including governance, risk management, and existing controls over these tools.
  • April 19, 2021: The FTC reiterated its longstanding legal authority to police unfair or biased uses of AI under Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act.
  • October 22, 2021: The White House Office of Science and Technology Policy announced the administration’s priority of developing an AI “Bill of Rights” to ensure that individuals have meaningful recourse if they are harmed by the use of an algorithm.
  • February 23, 2022: The CFPB published an outline of proposals aimed at preventing algorithmic bias in home valuations, focusing on automated valuation models (“AVMs”), software tools often used to determine the value of real estate and that serve as the basis for underwriting, lending, and mortgage decisions.
  • March 23, 2022: The Interagency Task Force on Property Appraisal and Valuation Equity, comprising 13 federal agencies and offices, issued a final action plan that outlines steps federal agencies may take to promote equitable home ownership by addressing the role of racism in residential property valuation. One of the Task Force’s proposals is that “agencies participating in AVM rulemaking commit to address potential bias by including a nondiscrimination quality control standard in the proposed rule.” In a press release about the Task Force’s report, the CFPB cited its February 23 outline of proposals on AVM software as one way for the Bureau to take an active leadership role in addressing potential biases in valuation models and ensuring that algorithmic valuations are fair and accurate.

Moreover, legislators are showing a growing appetite for new legislation that would increase accountability for automated decision systems, often with the specific aim of combatting potential bias. These efforts include:

  • Algorithmic Accountability Act of 2022, introduced by Senator Wyden (D-OR), Senator Booker (D-NJ), and Representative Clarke (D-NY), which would require companies to assess and report on the impact of the automated systems they sell and use, and would authorize the FTC to promulgate additional regulations governing how companies should assess and report on AI systems that are critical to business decision-making.[1]
  • Stop Discrimination by Algorithms Act of 2021, introduced to the D.C. Council by D.C. Attorney General Karl Racine, which would ban algorithmic discrimination and address companies’ use of models that rely on data serving as proxies for protected characteristics, including race, gender, sexual orientation, age, disability, source of income, and credit information.
  • Colorado SB 21-169, enacted on July 6, 2021, which bans insurers from using external consumer data and information sources, as well as algorithms and predictive models that use such data, if doing so “has the result of unfairly discriminating based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.”

Additional legislation aimed at preventing discriminatory impacts that could result from algorithmic bias was introduced in California, New Jersey, and Washington during the 2021 legislative cycle.

Scrutiny of algorithmic tools occurs at every stage of their lifecycle, including the input data used to train them, the processes and methodologies used to design them, and the decisions they produce. That scrutiny may come from a variety of sources—rules promulgated by federal agencies, or new legislation from Congress and state or local governments—and may focus on any number of products or industries. Companies that design or build consumer financial products using AI and algorithms should monitor these developments and ensure that they have appropriate policies, procedures, and other controls in place.

 

[1]   This is an updated version of the Algorithmic Accountability Act of 2019, and further clarifies the types of companies and algorithms covered under the proposed legislation.

