Using AI to underwrite? You need to lock your AI

Yolanda D. McGill, ESQ.
July 24, 2023

The big three reasons lenders should avoid unlocked AI

“AI will be as good or as evil as human nature allows.” Earlier this year on 60 Minutes, Scott Pelley spoke at length with tech experts on Google’s leadership team about the future of artificial intelligence and humanity.

Sundar Pichai, Google’s CEO, added that the AI revolution is coming faster than we know. But what does that mean for those who make, train, and distribute AI models specialized in credit underwriting?

Much of AI, including its subset supervised machine learning, seeks to mimic human thinking, using probabilities gleaned from millions of observations to win games and make choices. When AI underwriting models continue learning unchecked, remaining “unlocked,” their outcomes are unsupervised and lack transparency and explainability.

To protect themselves and the communities they serve, lenders need to be able to differentiate between unlocked AI and explainable, AI-automated underwriting models that promote transparency and fairness. So, here are three reasons lenders should refrain from making credit decisions with unlocked AI.

1. Unlocked AI underwriting decisions cannot be tested and validated for fair lending requirements

Zest AI’s mission is to make fair and transparent credit accessible to everyone by using AI to make underwriting smarter, more efficient, and more inclusive. Transparency and compliance are integral to this mission, and locked AI algorithms are a key piece of ushering in a more equitable financial system. Technology that has been thoughtfully developed, supervised during training, and then locked to perform its task under active human monitoring can help lenders make better credit decisions, increase revenue, reduce risk, and even automate compliance.

AI-driven models incorporate hundreds of variables, all of which must be evaluated to exclude those that result in inequitable treatment among potential applicants. For an unlocked algorithm that is constantly adding variables and adapting how it reacts to them, it is difficult to ensure that it never incorporates illegal proxies for protected class status or creates disparate impact by changing how it treats certain borrowers relative to others. An unlocked model also makes the search for less discriminatory variables a moving target and ultimately impracticable, another blow to fairness.
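To make this concrete, here is a minimal sketch of one common screening heuristic for disparate impact, the four-fifths (80%) rule, applied to a locked model’s decisions. The decision counts are invented and real fair lending testing goes well beyond this single ratio; this is an illustration, not Zest AI’s methodology.

```python
# Minimal sketch: four-fifths (80%) rule screen on a locked model's approvals.
# The counts below are hypothetical; group definitions, thresholds, and any
# follow-up analysis are matters for a lender's compliance program.

def adverse_impact_ratio(approved_protected: int, total_protected: int,
                         approved_control: int, total_control: int) -> float:
    """Ratio of the protected group's approval rate to the control group's."""
    rate_protected = approved_protected / total_protected
    rate_control = approved_control / total_control
    return rate_protected / rate_control

# Hypothetical decision counts from one validation run of a locked model
air = adverse_impact_ratio(approved_protected=410, total_protected=1000,
                           approved_control=520, total_control=1000)
print(f"Adverse impact ratio: {air:.2f}")

# A ratio under 0.8 is a common flag for deeper review, including a search
# for less discriminatory variables, which is feasible only if the model holds still.
if air < 0.8:
    print("Flag for fair lending review.")
```

Because a locked model does not change between testing and production, a ratio computed at validation time remains meaningful; an unlocked model can invalidate the test the moment it updates itself.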

2. Unlocked AI underwriting decisions cannot be accurately explained

With some machine learning models, a sort of unexplainable magic trick seems to happen. As if by sleight of hand, the predictive power an unlocked algorithm trains itself into just takes effect and starts making decisions. Some AI developers will call it magic; we’ll call it what it is: unexplainable AI, and a direct violation of compliance standards for lending.

During the 60 Minutes feature, Pichai used the phrase ‘black box’ to describe how models adapt and change their actions in ways their developers do not fully understand. If a model is a ‘black box’ in any way to its developer, it will most certainly be a mystery to the deployer.

In underwriting, a lender that uses an unlocked learning model is not watching a magic trick; it is exposing itself to the risks of a black box. That lender cannot ascertain how the model uses data variables to determine an outcome, which means it cannot know how the algorithm affects its credit policies and cannot identify the factors behind credit decisions for the adverse action notices required under lending laws, including FCRA and ECOA.
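With a locked model, those factors can be surfaced systematically. Here is a minimal sketch of one common pattern (not Zest AI’s product) for turning a locked tree model’s per-applicant explanation into candidate adverse action reasons, assuming the open-source shap library; the feature names, synthetic data, and the premise that a score toward 1 means denial are all illustrative assumptions.

```python
# Minimal sketch: per-applicant explanations from a *locked* model feeding
# adverse action reason selection. Data, features, and labels are synthetic.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
feature_names = ["utilization", "recent_delinquencies", "tenure_months", "inquiries"]
X = rng.normal(size=(1_000, 4))
# Synthetic label: 1 means "deny" in this toy setup
y = (X[:, 0] + X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

# Train once, then treat the model as locked: no further fitting in production
model = xgb.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)
explainer = shap.TreeExplainer(model)  # built once against the frozen model

def adverse_action_candidates(applicant: np.ndarray, k: int = 4) -> list[str]:
    """Top-k features pushing this applicant's score toward denial."""
    contribs = explainer.shap_values(applicant.reshape(1, -1))[0]
    order = np.argsort(contribs)[::-1][:k]  # largest push toward the deny class
    return [feature_names[i] for i in order]

print(adverse_action_candidates(X[0]))
```

The key point is that the explainer is built once against a frozen model; if the model kept retraining itself, yesterday’s explanations and reason codes would no longer describe today’s decisions.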

As Zest AI has pointed out in several articles, federal regulators will not accept ‘complexity’ as an excuse for noncompliance. The CFPB has repeatedly stated that “when the technology used to make credit decisions is too complex, opaque, or new to explain adverse credit decisions, companies cannot claim that same complexity or opaqueness as a defense against violations of the Equal Credit Opportunity Act.” FTC Chair Lina Khan has also made that agency’s position plain: “[t]echnological advances can deliver critical innovation — but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”

Unlocked algorithms can defy explanation at any moment. Advanced explainability and transparency are a must for compliant AI-based underwriting.

3. Unlocked AI underwriting decisions cannot consistently predict credit risk

When moving from manual credit decisions or legacy scoring systems to AI underwriting technology, it’s critical that lenders have a sense of the algorithm’s predictive power when it goes into production. After all, the reason we invest in technology is to improve our processes, right?

An important aspect of an algorithm’s predictive power and accuracy is its stability over time. An unlocked algorithm, one that ingests new information or variables from sources it wasn’t trained on, will keep gathering and using new information as it works on its assigned tasks. Sure, in one sense, you could call the algorithm “smarter.” In credit decisioning, however, those changes fundamentally alter how the algorithm performs its tasks and, therefore, how it delivers credit decisions.

Take an unlocked algorithm that has been tasked with finding creditworthy borrowers but continuously takes in new consumer financial information over time. Because the model is unlocked, it develops its own trends and identifies new patterns that it believes predict an applicant’s risk. The issue is that this unlocked model might begin using data that falls outside actual risk prediction, or that could even be flagged as biased, and start denying applicants based on the new pattern. Continuous learning can change the task in a salient way that compromises the model’s ability to do its original job in accordance with the lender’s needs.
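A locked model, by contrast, can be monitored for exactly this kind of drift. Below is a minimal sketch using the population stability index (PSI), a common drift metric for scoring models; the score distributions are simulated, and the 0.10 and 0.25 thresholds are widely used rules of thumb rather than regulatory standards.

```python
# Minimal sketch: population stability index (PSI) to monitor a locked
# model's score distribution in production against its validation baseline.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, n_bins: int = 10) -> float:
    """PSI between a baseline score distribution and a recent one."""
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf             # catch out-of-range scores
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    expected_pct = np.clip(expected_pct, 1e-6, None)  # avoid log(0)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(1)
baseline = rng.beta(2, 5, size=10_000)   # scores at validation time (simulated)
recent = rng.beta(2.4, 5, size=10_000)   # scores in production (simulated shift)

value = psi(baseline, recent)
print(f"PSI: {value:.3f}")
if value > 0.25:
    print("Major shift: investigate before relying on the model's decisions.")
elif value > 0.10:
    print("Moderate shift: monitor closely and consider revalidation.")
```

Because the locked model itself never changes, any drift the PSI detects is attributable to the incoming population, which a lender can investigate and act on; with an unlocked model, the lender cannot tell whether the population moved or the model did.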

AI is a tool. Lenders should use it wisely

Whether AI will be good or evil is a question that goes far beyond human nature, and it isn’t one we’re ultimately aiming to answer here.

AI is a powerful tool, and there are steps every lender can take to ensure their models are safe, consistent, and compliant, all in the name of fairness. A lender determines how much risk it is willing to tolerate when making credit decisions, but predicting risk with an inconsistent and unexplainable AI model can lead to discriminatory underwriting decisions. Accepting credit decisioning supported by an unlocked algorithm may be a business judgment call, but a lender that makes that call risks violating the law.

___________________________________

Yolanda D. McGill, Esq.


Yolanda is Zest AI’s Head of Policy and Government Affairs. She is a practicing attorney with more than 20 years of experience in financial services. Since 2003, she has worked for mission-driven organizations such as the Lawyers’ Committee for Civil Rights Under Law and the Consumer Financial Protection Bureau, providing highly technical legal expertise with a personable approach to complex topics.
