Why AI transparency is so important
Truly transparent AI is going to be a crucial edge for those who have it and a roadblock for those who don’t
As we head further down the road of digitizing the financial services industry, AI is becoming more and more ingrained in every lending product and service. While AI makes for quick, simplified lending decisions, it's important to ensure that those decisions also comply with fair lending laws. The way we do that is by using transparent AI for our lending decisions.
What is transparency in AI?
For the financial services industry, transparency in AI is the ability for lenders to determine how and why an algorithm arrived at its decision about a loan approval or denial.
While many organizations claim to advocate for the fair and transparent use of AI, that commitment is rarely backed by real action or business strategies that support building transparent AI. This leads to a whole host of problems. Without transparent AI, lenders can unwittingly sustain discriminatory lending practices rooted in systemic bias, make decisions of questionable fairness, and breed public mistrust, all of which have drawn increased attention lately when it comes to AI.
Some algorithmic models get a bad rap for being black boxes that are prone to unfair bias and lack transparency. However, if you have the right tools to explain your AI's decisions, your institution can avoid becoming a black-box decision-maker.
Why is it important for institutions to know how AI makes its decisions if those decisions are correct?
Public trust, disclosure, and transparency are essential governing principles for AI technologies.
Businesses using AI models in high-stakes applications (like lending) need to know how their AI systems work, how they reach the decisions they do, and how those decisions impact borrowers. The industry term for documenting this knowledge is "explainability."
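To make that concrete, here's a minimal sketch of what documenting a model's reasoning can look like in practice. It assumes a toy scikit-learn credit model and the open-source shap package for feature attributions; the feature names and data are illustrative placeholders, not anything from a production underwriting system.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy applicant data; real underwriting models use far richer features.
X = pd.DataFrame({
    "credit_utilization": rng.random(500),
    "months_since_delinquency": rng.integers(0, 120, 500),
    "debt_to_income": rng.random(500),
})
y = (X["credit_utilization"] + X["debt_to_income"] < 1.0).astype(int)  # 1 = approve

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes each individual score to the input features,
# which is the kind of per-applicant record explainability requires.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X)

# For one applicant: which features pushed the score up or down, and by how much.
applicant = 0
for feature, value in sorted(
    zip(X.columns, attributions[applicant]),
    key=lambda fv: abs(fv[1]),
    reverse=True,
):
    print(f"{feature}: {value:+.3f}")
```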
AI users need to create rigorous explainability processes and methods. Failure to do so can lead lenders to adopt opaque and flawed AI that could harm consumers, needlessly perpetuate discrimination, and threaten the safety and soundness of financial systems.
Organizations don’t often want to invite additional scrutiny. Why would they want important, potentially sensitive systems, such as AI, to be more transparent? How does it benefit them?
Transparency is critical because algorithms, like the humans who make them, are susceptible to bias. To scrub systemic bias from the algorithms we use in lending decisions, they must be made explainable and transparent. Organizations should strive for AI transparency to reduce risk, increase fairness, and satisfy regulatory and compliance requirements.
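One common fairness check that transparency enables is the "four-fifths" adverse impact ratio: each group's approval rate divided by the most-favored group's rate. The sketch below is a hedged illustration with made-up counts and placeholder group labels, not a complete compliance test.

```python
# Adverse impact ratio ("four-fifths rule"): compare each group's approval
# rate to the most-favored group's rate. Ratios below 0.8 are a common
# screening threshold for potential disparate impact.
def adverse_impact_ratios(approvals):
    """approvals maps group label -> (num_approved, num_applicants)."""
    rates = {group: approved / total for group, (approved, total) in approvals.items()}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Made-up counts for two illustrative groups.
ratios = adverse_impact_ratios({
    "group_a": (480, 800),  # 60% approval rate
    "group_b": (360, 800),  # 45% approval rate
})
for group, ratio in ratios.items():
    flag = "OK" if ratio >= 0.8 else "flag for disparate impact review"
    print(f"{group}: {ratio:.2f} ({flag})")
```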
There are concrete business benefits to transparent AI. Explainable AI means lenders can increase approvals for borrowers who have historically been misevaluated by traditional scoring methods because of bias baked into the system. And with increased transparency, lenders can reduce the risk of regulatory fines, freeing up cash for other parts of their business.
Can you share an example of a case where using AI would be impossible without transparency?
Actually, it is entirely impossible to use AI in credit underwriting without transparency.
Until recently, the market didn’t have the tools to open up AI’s black box. Zest AI has been working on solving the explainability problem for AI loan decisions, and the results are pretty solid, if we do say so ourselves. We’ve helped a handful of lenders expand access to credit for underserved populations, with a 15 percent increase in approval rates, on average.
Providing the ability to understand a model’s reasoning and economic value allows lenders to make credit decisions with confidence while ensuring compliance with regulations on disparate impact and adverse action. Without transparent AI, millions of deserving people would find it nearly impossible to get affordable credit to buy a home, finance a car, or take out a student loan.
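As a rough illustration of the adverse action side, here's how feature attributions (like the shap values sketched earlier) can be mapped to the reason statements a declined applicant receives. The reason text, mapping, and sign convention here are hypothetical examples for the sketch, not regulatory language.

```python
# Hypothetical mapping from model features to adverse action reason statements.
REASON_CODES = {
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "months_since_delinquency": "Time since most recent delinquency is too short",
    "debt_to_income": "Debt-to-income ratio is too high",
}

def adverse_action_reasons(feature_names, attributions, top_n=3):
    """Return reason statements for the features that most lowered the score.

    Assumes negative attribution values pushed the applicant toward denial.
    """
    negative = [(f, v) for f, v in zip(feature_names, attributions) if v < 0]
    negative.sort(key=lambda fv: fv[1])  # most negative first
    return [REASON_CODES.get(f, f) for f, _ in negative[:top_n]]

# Example: two features lowered this applicant's score, one raised it.
print(adverse_action_reasons(
    ["credit_utilization", "months_since_delinquency", "debt_to_income"],
    [-0.42, +0.10, -0.17],
))
```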
Would there be any cases where AI transparency is bad?
Sure. Transparency would be bad if the explanations themselves were wrong, in other words, false positives. But the problem there is not with transparency but with the algorithm itself. In fact, transparency is what allows a lender to catch those false positives and fix them.
As an aside, not all AI needs to be explained in detail, especially if the use case is not regulated. A conversational marketing or customer service chatbot or an image recognition algorithm doesn't require an explanation, since those generally aren't deciding lending outcomes.
Given increasing adoption of AI, where do you see the importance of AI transparency going in the future?
AI transparency is most important in highly regulated areas, with AI credit underwriting being the one we're in today.
But you can imagine a number of other fields where AI would similarly need to satisfy regulation or provide clear model explainability, like healthcare or government services. There are a lot of exciting potential applications just beginning to come into view, so we'll see AI transparency continue to shape critical industries over the next several years.