Let’s add some speed bumps for the bias bus

Yolanda D. McGill, Esq.
February 06, 2024

Beating bias in credit data through AI

We’ve heard rumblings from state houses and the Hill about AI enabling bias in lending. Stacked alongside news cycles about lenders discriminating against protected-class individuals in homebuying, it can make you wonder, “Can equity even be achieved in lending?” Because if technology can only perpetuate bias, and some of the largest lenders in the country can’t solve discrimination issues, we’re sort of out of luck, aren’t we?

Well, fortunately, this isn’t the case.

Good data is the key to doing AI well. And good data is also the key to doing lending well, so why not combine the two? Can good data be the key to doing AI in lending right and increasing economic equity for underrepresented groups? (Hint: it can, and Zest AI has been on this beat for a number of years now.)

So let’s slow down and take some time to look at the facts before we make a judgment call on AI in lending.

What is economic equity when it comes to lending?

Let’s step back a moment and discuss the word “equity.” It has a lot of different meanings to people, but when it comes to lending better and more fairly, equity has a lot to do with how a lender approaches the decision to lend to someone or not.

Obviously, any decision influenced by a data point that indicates an applicant’s race, ethnicity, national origin, religion, sex, or marital status is off limits. But there is more to consider. Traditional credit scores don’t predict risk as accurately as they could, whether due to a lack of data or a point-in-time view of someone’s credit history. Either gap affects every type of borrower but disproportionately affects protected-class borrowers. And how can lenders be sure they’re following fair lending laws as well as they could if they’re not able to accurately evaluate who they’re lending to, for instance through race estimation methods like BISG (Bayesian Improved Surname Geocoding)?
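For readers unfamiliar with BISG: it estimates race and ethnicity probabilities by combining surname-based and geography-based information through Bayes’ rule. A minimal sketch of the idea, with made-up illustrative probabilities (real implementations draw on Census surname tables and block-group demographics):

```python
def bisg_posterior(p_race_given_surname, p_race_given_geo, p_race_national):
    """Combine surname- and geography-based race probabilities via Bayes' rule,
    assuming surname and geography are independent conditional on race."""
    unnormalized = {
        race: p_race_given_surname[race] * p_race_given_geo[race] / p_race_national[race]
        for race in p_race_given_surname
    }
    total = sum(unnormalized.values())
    return {race: p / total for race, p in unnormalized.items()}

# Toy, illustrative inputs -- real values come from Census surname lists
# and neighborhood (block-group) demographics.
surname  = {"white": 0.10, "black": 0.05, "hispanic": 0.80, "other": 0.05}
geo      = {"white": 0.60, "black": 0.20, "hispanic": 0.15, "other": 0.05}
national = {"white": 0.60, "black": 0.13, "hispanic": 0.19, "other": 0.08}

posterior = bisg_posterior(surname, geo, national)
```

The output is a probability distribution over groups, not a label; fair lending testing then uses those probabilities in the aggregate, never for individual decisions.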

Access to equitable lending looks like this: everyone is consistently and efficiently assessed for risk through the use of more accurate data, given their decision (and adverse action notice, if necessary), and priced fairly for their loan. AI can do all of this; it’s all about the kind of AI and the approach to its use.

How the data you input affects AI and machine learning decisioning

AI can do a lot of heavy lifting to increase equity in lending decisions, but the work actually starts with trained individuals reviewing what goes into the model. Good data is key: if you’re working with non-representative or incomplete data, you’re increasing the guesswork for your machine learning model and possibly enabling biased decisions. This goes for any industry.

But what about the lending industry? Why does the data you use matter? This data must be assessed for bias, and AI can do that. Here’s what our thought partners at FinRegLab had to say about how to properly use data in machine learning in their report “Explainability & Fairness in Machine Learning for Credit Underwriting” released in December 2023:

“A conceptual soundness review for a machine learning model begins with an assessment of the suitability of the data for modeling. This step addresses questions such as whether the information is representative of the population the model is expected to evaluate, whether sufficient controls are in place to protect the integrity of the data, and if the information is of sufficient quality and reliability in light of the use case and the architecture of the proposed model.” (pg 27)

Using machine learning and AI to build models that have a shot at fixing bias issues in lending decisions starts with proper fair lending analysis and practices. This includes key steps:

  • Examine each data point for potential disparate treatment issues and improper proxies for protected-class statuses;
  • Assess each data point for its potential for disparate impact; and
  • Evaluate how data points interact with one another for possible disparate impact.
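One widely used screen for the disparate-impact steps above is the “four-fifths rule”: compare each group’s approval rate to the highest group’s rate and flag any ratio below 0.8. A minimal sketch (the group labels and counts are hypothetical, and real analyses go well beyond this single metric):

```python
def adverse_impact_ratios(outcomes, threshold=0.8):
    """outcomes: {group: (approved, total)}.
    Returns each group's approval-rate ratio versus the highest-rate group,
    plus the groups falling below the four-fifths threshold."""
    rates = {g: approved / total for g, (approved, total) in outcomes.items()}
    reference = max(rates.values())
    ratios = {g: rate / reference for g, rate in rates.items()}
    flagged = sorted(g for g, r in ratios.items() if r < threshold)
    return ratios, flagged

# Hypothetical approval counts per group.
outcomes = {"group_a": (80, 100), "group_b": (55, 100), "group_c": (72, 100)}
ratios, flagged = adverse_impact_ratios(outcomes)
print(flagged)  # group_b's ratio is 0.55/0.80 ≈ 0.69, below the 0.8 screen
```

A flag like this is a starting point for investigation, not a verdict; the next step is tracing which data points, and which interactions between them, drive the disparity.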

In another blog post of mine, I identified using good data as an instrumental pillar of fairness, followed by explainable algorithmic decisioning, accurate race estimation, and improved compliance standards. By using AI to analyze each data point and how it interacts with the others, we can root out bias and make accurate, compliant decisions.

When we work with our clients, we start by applying our AI to a large, representative pool of data from the relevant nationwide credit bureau. AI brings the computational power to quickly parse and analyze granular information from tradeline data fields. Applying AI to tradeline credit data with the intent of increasing accuracy and fairness can help rectify the systemic bias in that data.

Alternative data has been a big buzzword in the financial services industry because it has real potential to fill data gaps and sharpen credit risk profiles. But if it is not deployed with abundant caution, and with rigorous fair lending analysis to root out improper proxies and the disparate impact it can carry, lenders using alternative data face substantial risk of fair lending violations. AI applied to traditional credit data addresses those gaps and pulls far more risk insight, with much less fair lending risk than alternative data carries. Still, there may be a place for alternative data when a thin-file consumer, or someone with no credit history, is looking for a loan.

Removing AI from banking isn’t going to fix the bias problem — it could make it worse

Bias has existed in banking, AI or no, for as long as there has been a formal banking infrastructure. Demonizing all AI isn’t the answer to fixing lending discrimination, especially when AI has been shown to help reduce bias in lending. Instead of attacking AI as part of the problem, we need to be smart about using the right AI for lending (safe, locked AI), and we need to put in the work to ensure that it’s making equitable, transparent decisions.


___________________________________

Yolanda D. McGill, Esq.


Yolanda is Zest AI’s Head of Policy and Government Affairs. She is a practicing attorney with more than 20 years of experience in financial services. Since 2003, she has worked for mission-driven organizations such as the Lawyers’ Committee for Civil Rights Under Law and the Consumer Financial Protection Bureau, providing highly technical legal expertise with a personable approach to complex topics.
