Four pillars of fair lending

Yolanda D. McGill, Esq.
October 11, 2023

Fairness is an essential, non-negotiable part of the conversation about how AI and machine learning can improve lending. Data and technology are helping financial institutions deliver fair treatment to consumers more often and more efficiently, but is it fair lending? Most lenders aim to give each borrower a fair shot at credit while keeping the associated risks low, yet fair and smart lending at scale takes a robust, consistent set of tools to do the job.

AI-automated underwriting provides a faster, more accurate, and more efficient way to assess the risk associated with a borrower. The effort to bring fairness to this process has made great strides, but it takes intentionality and effort to use the tools of AI and machine learning to promote equitable outcomes and build a future of financial inclusion.

To put the “fair” in “fair lending,” financial institutions, consumers, stakeholders, policymakers, and developers must act with intention. Here are four pillars of fairness that provide a roadmap for how these technological advancements can promote fairness in lending:

1. Data: Painting a better, fuller picture

Fairness starts with data. Traditional scoring models rely on variables derived from a small subset of traditional tradeline data, which can only tell so much of a borrower’s story when assessing risk. In an age of machine learning, promoting fairness means pulling more and richer variables from traditional data and leveraging that data to correct for systemic biases. High levels of automation and comprehensive data are each only one piece of promoting fairness; working in tandem, they deliver an accurate and compliant picture.

At first glance, alternative data sources seem an obvious boost for fair lending strategies in AI-automated underwriting. However, in addition to sensitivity to compliance standards under applicable law, such as those set by the Fair Credit Reporting Act, alternative data requires careful evaluation and deliberate implementation to make a real difference in how borrowers are assessed. In pursuit of fairness, financial institutions must understand where their data comes from and how the underlying math determines each variable’s impact in the underwriting process.

Fairness is not a maximalist approach to vast amounts of accessible data. It is considered restraint in how that data is used, an understanding of the limitations of what data can offer, and the use of technology to fill those gaps when possible.

2. Algorithmic decisioning: Crafting a toolkit

Once a lender has the right data, how does it know that data is in the right hands to promote fairness? Algorithms are a key piece in determining how data leads to fair outcomes for consumers. Many financial service providers rely on old tools and old information, where fairness may take a backseat to the relative safety of the status quo. Advanced AI/ML algorithms that promote fairness, especially when compared to the impacts of legacy scoring methods, reveal a new path forward.

When building an underwriting model with fairness in mind, explainability is the name of the game. Technological advancement and automation should not be synonymous with a black box. Financial institutions must have a firm grasp on how their algorithms are trained and deployed, and this understanding is even more important in the search for less discriminatory variables.
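To make explainability concrete, here is a minimal sketch, assuming a tree-based underwriting model and the open-source shap library, of producing per-applicant feature attributions of the kind that can support transparent decisions. The data, features, and model here are hypothetical placeholders, not Zest AI’s actual tooling.

```python
# Hypothetical sketch: per-applicant feature attributions for a tree-based
# underwriting model using the open-source `shap` library. The data and
# feature names are placeholders, not a real lending model.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "utilization", "tenure_months", "recent_inquiries"]
X = rng.random((500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)  # stand-in repayment label

model = XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

# Attribute each applicant's score to individual inputs, so a decision can be
# traced back to specific, explainable features rather than a black box.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:5])

for name, value in zip(feature_names, attributions[0]):
    print(f"{name}: {value:+.3f}")
```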

True implementation of technology with a purpose acknowledges the human aspect of automation. Lenders must bring a new level of transparency and sophistication to their underwriting to bring fairness to consumers. In this sense, fairness goes beyond an algorithm and extends to how these algorithms are created, implemented, and discussed.

3. Race estimation: Understanding the impact

An essential part of fairness is understanding and measuring the effects of an AI/ML model during and after its implementation. To comply with fair lending requirements under the Equal Credit Opportunity Act (ECOA), banks and credit unions must be able to demonstrate that their lending methods do not discriminate against borrowers based on race or other protected statuses.

Artificial intelligence and machine learning can enhance the tools lenders and regulators use to better understand how their underwriting decisions are impacting their customers. For example, the Zest Race Predictor (ZRP) uses machine learning to improve upon dated statistical methods that struggle to accurately measure the distribution of African Americans and other protected classes and to assess how well lending programs are, or are not, reaching them. ZRP helps make impact measurement more accessible and transparent for financial institutions working to ensure that fairness is not just a word but a provable, measurable standard that is attainable with the right data and implementation.
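For context, one of the dated statistical methods in this space is Bayesian Improved Surname Geocoding (BISG), which combines surname-based and geography-based probabilities into a race proxy. The simplified sketch below is purely illustrative, with made-up probability tables; it is not ZRP’s methodology.

```python
# Simplified, illustrative sketch of a BISG-style race proxy: combine surname-
# and geography-based probabilities and renormalize. All probability tables
# here are made-up placeholders; this is not the Zest Race Predictor's method.

GROUPS = ["group_a", "group_b", "group_c"]

# Hypothetical P(group | surname)
P_GROUP_GIVEN_SURNAME = {
    "garcia": {"group_a": 0.10, "group_b": 0.05, "group_c": 0.85},
    "smith":  {"group_a": 0.70, "group_b": 0.25, "group_c": 0.05},
}

# Hypothetical P(group | census tract)
P_GROUP_GIVEN_TRACT = {
    "tract_001": {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20},
    "tract_002": {"group_a": 0.20, "group_b": 0.20, "group_c": 0.60},
}

def proxy_probabilities(surname: str, tract: str) -> dict:
    """Multiply the two conditional distributions and renormalize."""
    s = P_GROUP_GIVEN_SURNAME[surname.lower()]
    g = P_GROUP_GIVEN_TRACT[tract]
    joint = {grp: s[grp] * g[grp] for grp in GROUPS}
    total = sum(joint.values())
    return {grp: joint[grp] / total for grp in GROUPS}

print(proxy_probabilities("Garcia", "tract_002"))
```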

4. Compliance: Raising the bar for fairness

Simply meeting compliance standards roots an institution’s ambitions in the past; going above and beyond them pushes us toward a future where fairness can flourish. Zest AI is dedicated not just to meeting compliance standards but to creating fairness technology that exceeds them. Zest AI raises the bar by working with financial institutions to innovate, create, and maintain AI/ML models that make sense for their organizations; pushing fairness technology forward through ongoing dialogue with our customers and their regulators; and using technology to make compliance easier, such as providing robust less discriminatory alternative (LDA) searches, as sketched below.
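As an illustration of what an LDA search can involve, the hedged sketch below scores hypothetical candidate models on both accuracy and an adverse impact ratio (the approval rate for a protected group divided by the rate for a control group). The models, data, and the 0.8 screening threshold (the common “four-fifths rule”) are assumptions for the example, not a description of Zest AI’s tooling.

```python
# Hypothetical sketch of one step in a less discriminatory alternative (LDA)
# search: compare candidate models on accuracy and adverse impact ratio (AIR).
# Models, data, and thresholds are illustrative assumptions only.
import numpy as np

def adverse_impact_ratio(approved: np.ndarray, protected: np.ndarray) -> float:
    """Approval rate of the protected group divided by the control group's rate."""
    return approved[protected == 1].mean() / approved[protected == 0].mean()

def evaluate_candidates(candidates, X, y, protected, cutoff=0.5):
    """Score each candidate (name, sklearn-style model) pair so that fairer
    alternatives with comparable performance can be surfaced for review."""
    results = []
    for name, model in candidates:
        approved = (model.predict_proba(X)[:, 1] >= cutoff).astype(int)
        results.append({
            "model": name,
            "accuracy": float((approved == y).mean()),
            "air": float(adverse_impact_ratio(approved, protected)),
        })
    # Flag candidates whose AIR falls below the common four-fifths (0.8) screen.
    for r in results:
        r["passes_four_fifths"] = r["air"] >= 0.8
    return results
```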

Putting your money where your mouth is

One theme cuts across all four of these fairness pillars of data, algorithmic decisioning, race estimation, and compliance: eliminating bias.

Systemic bias is the primary factor to consider when approaching fairness in AI-automated underwriting. While the law prohibits discrimination in lending, as outlined above, actual fairness goes further and works to eliminate bias. Fairness is an intentional and holistic approach to data. Fairness is using algorithms to build models that reveal a new way forward in our financial systems. Fairness is using the tools we have, including AI and machine learning, to facilitate outcomes that empower protected classes and give each and every borrower a fair shot.

At the end of the day, technology should allow lenders to put their money where their mouth is by not just hoping for fairness, but by building trustworthy and transparent systems that can provide it.

___________________________________

Yolanda D. McGill, Esq.


Yolanda is Zest AI’s Head of Policy and Government Affairs. She is a practicing attorney with more than 20 years of experience in financial services. Since 2003, she has worked for mission-driven organizations such as the Lawyers’ Committee for Civil Rights Under Law and the Consumer Financial Protection Bureau, providing highly technical legal expertise with a personable approach to complex topics.
