Demystifying three big questions on AI in financial services
Blockbusters like The Terminator and I, Robot notwithstanding, AI-based technology cannot — nor will it ever be able to — take over the world or otherwise plot against mankind.
The reason is that, just like any other computer program, an AI-based program can only do what it’s programmed to do. Granted, through machine learning, an AI-based program will get better and better at performing its assigned task, but it will still only ever do that assigned task. For example, an AI-based loan decisioning engine will only ever make loan decisions. It will never conspire with the smart refrigerator in the lunchroom to take over the building.
Most people understand that. Yet many banks and credit unions remain apprehensive about deploying AI-based systems in their own institutions. Their concerns are many, but in truth, largely unfounded.
Is there a potential for bias?
Given that AI-based decisioning engines typically consider more than a thousand variables when evaluating an applicant, there’s concern that bias against a protected class could sneak into the model.
“Of course, it’s illegal to intentionally build that kind of bias into your model, but if you’re not careful about scrutinizing what goes into the model and what the model’s using to make predictions, you could inadvertently do that,” said Jay Budzik, CTO at Zest AI. “We’ve developed an automated process to screen the variables to make sure that none of them are highly correlated with a protected class.”
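Zest AI hasn’t published the internals of that screening process, but the general shape of such a check is easy to sketch. In the hypothetical Python below, any feature that correlates too strongly with a protected-class indicator gets flagged for human review. The column names, the use of simple Pearson correlation, and the 0.3 threshold are all illustrative assumptions, not Zest’s actual criteria.

```python
import pandas as pd

def flag_proxy_features(X: pd.DataFrame, protected: pd.Series,
                        threshold: float = 0.3) -> list[str]:
    """Flag features whose correlation with a protected attribute
    exceeds `threshold`. Illustrative only: simple Pearson correlation
    and the threshold are assumptions, not Zest AI's published method.
    Assumes numeric feature columns."""
    flagged = []
    for col in X.columns:
        # Correlation between this feature and the protected-class
        # indicator (e.g., 1 = member of the protected class).
        corr = X[col].corr(protected)
        if abs(corr) >= threshold:
            flagged.append(col)
    return flagged

# Hypothetical usage, with made-up column names:
# suspects = flag_proxy_features(applicants[feature_cols],
#                                applicants["protected_flag"])
```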
A related concern is a little subtler: machine learning models can pick up and perpetuate discrimination simply by being trained on credit bureau data. “Most of our models are based on past lending data,” Budzik added. And the unfortunate fact is that, in the U.S., lending has had a history of bias. “By feeding these machine learning models this biased lending data,” said Budzik, “we’re teaching the model to amplify that bias, or to mimic the bias that was already in the data.”
According to Budzik, the other way bias can creep into a model is that a model’s accuracy is directly tied to the amount of historical data it starts with. The more data that goes in, the more accurate the results that come out. “For example, we generally have less data on single women as borrowers,” said Budzik. “That means that, unless we take that into account, our decisions for single women might not be as accurate as they are for a segment on which we have more data.” He added that bias can also show up in regional data, for example when a model is trained on data from an area with a largely white population.
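That accuracy gap is measurable. One generic way to check for it, not specific to Zest AI, is to score a holdout set and compare model accuracy segment by segment; the `segment` and `defaulted` column names below are hypothetical.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def auc_by_segment(holdout: pd.DataFrame, scores: pd.Series,
                   label_col: str = "defaulted",
                   segment_col: str = "segment") -> pd.Series:
    """Compute AUC separately for each segment of a holdout set.
    Column names are illustrative assumptions; each segment must
    contain both outcomes for AUC to be defined."""
    return holdout.groupby(segment_col).apply(
        lambda g: roc_auc_score(g[label_col], scores.loc[g.index])
    )

# A model scoring 0.78 AUC overall but noticeably lower on a thin
# segment is exhibiting exactly the gap Budzik describes.
```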
The good news, of course, is that Zest AI has taken these issues into account by building bias detection and mitigation tools into its software, so that models come out as accurate, reliable, and fair as possible. “When you’re putting the models together,” said Budzik, “it’s very important to make sure that you have representation in the data for those populations that are typically underserved, so that your models don’t just fall apart when faced with, for example, a single woman or a Black loan applicant.”
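One common remedy, and plausibly part of what “representation in the data” means in practice, is to reweight training rows so that thin segments carry proportionate influence. This is a generic technique, not a description of Zest AI’s internal tooling:

```python
import pandas as pd

def balanced_weights(train: pd.DataFrame,
                     segment_col: str = "segment") -> pd.Series:
    """Give each segment equal total weight during training so
    underrepresented groups aren't drowned out. Generic technique;
    the column name is an illustrative assumption."""
    counts = train[segment_col].value_counts()
    # Inverse-frequency weights: a segment with half as many rows
    # gets twice the per-row weight.
    return train[segment_col].map(len(train) / (len(counts) * counts))

# Most learners accept these directly, e.g.:
# model.fit(X_train, y_train, sample_weight=balanced_weights(train))
```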
Can AI help navigate the challenges of outdated data?
The use of outdated data is another potential issue. This can come about in a number of ways.
“Let’s say I’m a credit union in Alabama, and I’m making auto loans using Zest AI technology and now I want to do personal loans or credit cards,” suggested Budzik. “Those are slightly different because auto loans are secured. Usually, you’re considering things like the value of the vehicle when you’re making those decisions. That’s a totally different economic equation than a credit card where someone’s assigned a limit and they can spend up to that limit.”
Data can also become outdated when an institution decides to expand into a new geographic area or reach a new demographic. Zest AI deals with this issue by augmenting a client’s historical loan data with bureau tradelines of unfunded populations (customers the client never saw) in its region. Using these counterfactual scenarios helps ensure that models are as relevant and contextually aware as possible.
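The article doesn’t spell out that augmentation pipeline, but it resembles the reject-inference techniques long used in credit modeling: blend the lender’s funded loans with bureau records for borrowers it never saw, tagging each row’s source. Everything in the sketch below, column names included, is an illustrative assumption:

```python
import pandas as pd

def augment_with_bureau(funded: pd.DataFrame,
                        bureau: pd.DataFrame,
                        region: str) -> pd.DataFrame:
    """Blend a lender's own loan history with bureau tradelines for
    borrowers it never saw in the same region. A sketch of the
    general idea, not Zest AI's pipeline."""
    unseen = bureau[(bureau["region"] == region)
                    & (~bureau["borrower_id"].isin(funded["borrower_id"]))]
    funded = funded.assign(source="funded")
    unseen = unseen.assign(source="bureau")
    # Keep only the columns both datasets share, so the model sees
    # a consistent feature set.
    common = funded.columns.intersection(unseen.columns)
    return pd.concat([funded[common], unseen[common]], ignore_index=True)
```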
Can AI help navigate macroeconomic shifts?
“People worry about what can happen if something exogenous happens in the economy, for example, runaway inflation or a global pandemic,” added Budzik. An AI model is only as good as the data it was fed in the past, and if the future doesn’t reflect the past in a meaningful way, statisticians might say the model has no support for the predictions that it’s making.
That doesn’t mean you have to scrap the model and start afresh. “We give our clients tools that let them analyze when something changes materially in the world that could impact the way the model is scoring,” said Budzik. These so-called model refits are now easier to do than ever with automated model management software from players like Zest. “We offer the ability to run what-if scenarios to say, if we refresh the model, would it be any better? And then deploy those refit models into live scoring.”
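The article doesn’t name the statistics behind those tools, but a standard drift check in credit modeling is the population stability index (PSI), which compares the score distribution a model was developed on against the scores it produces today. The generic sketch below is not Zest AI’s software; by convention, a PSI above roughly 0.25 signals a material shift worth investigating with a refit.

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between the score distribution a model was built on
    (`expected`) and current production scores (`actual`).
    Generic industry metric, not Zest AI's proprietary tooling."""
    # Bin edges come from the development-time distribution.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    # Small floor avoids division by zero in empty bins.
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Rule of thumb: PSI < 0.1 stable, 0.1 to 0.25 watch, > 0.25 refit.
```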
But surprisingly, given the economic upheaval caused by the pandemic, models in production have needed little in the way of refits. “In 10 times out of 10 times so far, the answer has been no, which was a total surprise to me,” said Budzik. “I thought going into COVID, we were going to have to redo all our models, but in fact, we didn’t. They’ve been stable over time. We’ve had models in production now for three years or longer.”
Does AI-based lending have its challenges? Sure. Has Zest AI addressed those challenges? Absolutely.
When you consider that a tailored AI-driven loan decisioning engine offers a cost-effective, scalable way to approve more loans without increasing risk, the only remaining question is: What are you waiting for?