Compliance is great but it takes more to excel in fairness

Yolanda D. McGill, ESQ.
August 05, 2024

When it comes to the data and variables we use in building AI-automated underwriting technology, we know that data features with direct ties to disparate treatment are illegal to use. Any data point based on an individual's protected class characteristics, such as race, sex, or ethnicity, is discriminatory. People should receive a loan based on the likelihood that they will repay that loan, not on how they look.

However, there's a bit of a fairness merry-go-round when it comes to judging how much disparate impact should or should not be tolerated in an underwriting model. How do we use technological advancements to promote fairness and accuracy in equal measure?

From Less Discriminatory Alternative to Least Discriminatory Action

In 2022, the CFPB made it plain that lenders must understand how the new AI technology they're investing in works, and that technology providers must offer transparent and explainable models. This was a major step in the right direction, and it set many financial institutions on the path of working with companies whose AI combines an ethical purpose with technological innovation.

If we were to take this idea of AI enhancing fair lending a step further, we would start by providing further clarity to facilitate less discriminatory alternative (LDA) searches and usage. In its 2023 Fair Lending Report, the CFPB reported to Congress that it has directed covered entities to take specific measures on business needs and LDA searches for models; however, we have not seen new guidance released from the Bureau.

We need regulators to encourage lenders not only to search for an LDA model, but also to select an LDA when possible. We can accomplish this when lenders can see the benefit that a less discriminatory alternative can bring to the business. The advent of AI technology that can make models both more accurate and fairer is making these benefits accessible to any lender that is ready to realize the profits of pushing past fair lending compliance into fair lending reality.

LDA models are integral to how AI/ML applied to underwriting can enhance fair lending. Fairer lending cannot gain traction until policymakers expressly accept informed usage of these advanced techniques and, along with that usage, the accuracy, transparency, and equity AI/ML can bring to how we build and use underwriting models.
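In practice, the LDA search described above can be thought of as a model-selection step: among candidate models that still meet the lender's business need for accuracy, choose the one with the least disparate impact. The sketch below is a hypothetical illustration of that idea, using made-up candidate models and the adverse impact ratio (approval rate of the protected group divided by that of the control group) as the fairness metric; none of these names or numbers come from an actual model.

```python
# Hypothetical sketch of a less discriminatory alternative (LDA) search:
# among candidate underwriting models that meet a minimum accuracy bar,
# select the one with the least disparate impact. All figures below are
# illustrative, not real model results.

candidates = [
    # (name, accuracy (e.g. AUC), adverse impact ratio; an AIR closer to 1.0
    #  means approval rates are more equal across groups)
    ("baseline",      0.790, 0.72),
    ("lda_variant_a", 0.788, 0.85),
    ("lda_variant_b", 0.772, 0.93),
]

def lda_search(models, min_accuracy):
    """Return the fairest candidate whose accuracy meets the business need."""
    viable = [m for m in models if m[1] >= min_accuracy]
    if not viable:
        return None  # no alternative satisfies the stated business need
    # Highest adverse impact ratio = least disparate impact.
    return max(viable, key=lambda m: m[2])

chosen = lda_search(candidates, min_accuracy=0.785)
print(chosen[0])  # prints "lda_variant_a": nearly as accurate, notably fairer
```

The point of the sketch is the trade-off the text describes: `lda_variant_a` gives up almost no accuracy while meaningfully narrowing the approval-rate gap, which is exactly the kind of alternative regulators could encourage lenders to adopt.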

“Zest AI has always been ahead of the curve when it comes to fair lending technology, and part of that requires that all credit decisions remain fully transparent for our customers, their consumers, and regulatory partners. Zest FairBoost is our way of taking a stand and saying that finding fairer outcomes will always be a priority and that equal access to credit is a job that is never finished.” – Sean Kamkar, SVP & Head of Data Science at Zest AI

Balancing business and ethics

The data we use in credit decisioning matters, and how that data is deployed and consumed can mean the difference between a low-risk borrower getting a loan or not. AI underwriting technology isn't a person with intentions, but regulators look at the variable choices and selections made by the humans responsible for model development. So we approach data, and use data to inform our underwriting, with meticulous and conscientious protocols.

At a time when industry delinquencies are high, both lenders and regulators approach lending with an elevated desire to avoid risk. Any smart business makes its long-term sustainability a priority, and this is where an LDA can step in and do what it was intended to do: find the right balance between smart business and ethical lending practices.

As I responded recently to a question about the promise of purpose-built AI: “What we really think these tools can do is to take institutions past the status quo that quite frankly hasn’t been working for people. Even though institutions have been “compliant” we still have bias in lending, we still have people who are locked out of reasonable access to reasonable and affordable and responsible credit… .”

Getting closer to fairer outcomes

Meeting regulatory requirements is the bare minimum for fair lending practices. The industry's real challenge lies in actively choosing the fairest and most accurate underwriting tools that minimize discrimination: models that not only comply with regulations but also promote fairer lending by focusing on equity, transparency, and ethical decision-making.

By prioritizing the pursuit of fairer models, lenders can move beyond mere compliance and towards a more inclusive financial system that serves everyone justly. Embracing this approach will not only enhance the integrity of lending practices but also build trust and fairness in the financial sector, ultimately benefiting society as a whole.

 

___________________________________

Yolanda D. McGill, Esq.
Yolanda is Zest AI’s Head of Policy and Government Affairs. She is a practicing attorney with more than 20 years of experience in financial services. Since 2003, she has worked for mission-driven organizations such as the Lawyers’ Committee for Civil Rights Under Law and the Consumer Financial Protection Bureau, providing highly technical legal expertise with a personable approach to complex topics.
