There’s a fix to the problem of biased algorithms in lending
It’s been a rough few months for algorithms in the lending business.
First, New York regulators announced an investigation into UnitedHealth Group over a study showing that an algorithm the insurer used cut the number of African American patients identified for extra care by more than half. Then Goldman Sachs, the bank behind the new Apple credit card, took heavy fire after prominent men in tech, including Apple co-founder Steve Wozniak, noticed that their wives, who share their finances and file joint tax returns with them, were nonetheless approved for a fraction of the credit their husbands received.
These controversies came as no surprise to those of us who work in the lending industry. The problem isn’t unique to UnitedHealth and Goldman Sachs: nearly all algorithms used in financial services are biased against women and minorities.
Here’s the good news: there is a solution to this problem, using tools that are commercially available today. And by embracing and adopting these tools, financial institutions can increase their profits as well as combat bias.
Right now, the methods most financial institutions rely on to fairness-test their models are primitive. Under these methods, if a model shows clear racial or gender disparities, a lender or insurer will test whether leaving certain data variables out of the model, such as income or types of existing debt, restores fairness. The problem is that models tend to lose their predictive power when variables are removed, forcing firms to choose between accuracy and fairness, and then to justify that choice to regulators.
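To make that trade-off concrete, here is a minimal sketch of the leave-one-out style of testing. It assumes a hypothetical applications dataset with made-up column names (`income`, `debt_to_income`, `utilization`, `tenure`, `minority`, `approved`); it illustrates the general approach described above, not any particular lender’s process.

```python
# Leave-one-out fairness check (illustrative): drop a variable, retrain,
# and compare both accuracy and the approval-rate gap between groups.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def approval_rate_gap(model, X, group):
    """Absolute difference in predicted approval rates between two groups."""
    preds = model.predict(X)
    return abs(preds[group == 1].mean() - preds[group == 0].mean())

def fit_and_evaluate(df, feature_cols, label_col="approved", group_col="minority"):
    X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(
        df[feature_cols], df[label_col], df[group_col],
        test_size=0.3, random_state=0,
    )
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return accuracy_score(y_te, model.predict(X_te)), approval_rate_gap(model, X_te, g_te.values)

df = pd.read_csv("applications.csv")  # hypothetical loan-application data
all_features = ["income", "debt_to_income", "utilization", "tenure"]

acc_full, gap_full = fit_and_evaluate(df, all_features)
acc_loo, gap_loo = fit_and_evaluate(df, [c for c in all_features if c != "income"])

print(f"with income:    accuracy={acc_full:.3f}  approval-rate gap={gap_full:.3f}")
print(f"without income: accuracy={acc_loo:.3f}  approval-rate gap={gap_loo:.3f}")
```

Typically the accuracy figure drops once the variable is removed, while the gap may or may not close, which is exactly the accuracy-versus-fairness dilemma described above.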
These traditional methods of fairness testing leave a lot of bias unaddressed, yet they remain the dominant approach across the lending industry. That means that when a Black or Hispanic consumer applies for a loan, the model determining her creditworthiness has a high chance of denying her even though she’s qualified.
Traditional fairness testing methods (like “leave-one-out”) are flawed when used with simple algorithms, but with AI-based algorithms, they don’t work at all. The reason is that AI uses even more complex math to uncover hard-to-discern correlations among hundreds or thousands of variables. Pulling out a single variable triggers change throughout the entire model and makes it easy to misinterpret what’s going on, creating a recipe for biased results, harmed consumers and angry regulators.
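One way to see why removing a single variable doesn’t remove the information is a proxy check: train an auxiliary model to predict the protected attribute from the variables that remain. The sketch below reuses the hypothetical dataset and column names from the earlier example, including a deliberately proxy-like `zip_code_income` feature; if the auxiliary model does much better than chance, the bias channel is still open.

```python
# Proxy check (illustrative): can the "neutral" variables that remain in the
# model still reconstruct the protected attribute after the obvious one is gone?
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("applications.csv")  # same hypothetical dataset as above
remaining = ["zip_code_income", "debt_to_income", "utilization", "tenure"]

auc = cross_val_score(
    GradientBoostingClassifier(), df[remaining], df["minority"],
    cv=5, scoring="roc_auc",
).mean()

# An AUC well above 0.5 means the protected attribute is still recoverable
# from the remaining features, so leaving one variable out changed very little.
print(f"protected attribute recoverable with AUC {auc:.2f}")
```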
This problem has not gone unnoticed by AI practitioners. At Zest AI, we’ve been working since 2009 on mathematical research and practical tools that help lenders spot and reduce bias. Our studies demonstrate that our explainability techniques accurately, consistently, and quickly render models transparent.
One technique we’re using to vet AI models is called adversarial debiasing. It pits two machine learning models against each other: the first predicts creditworthiness, and the second tries to predict the race, gender, or other protected-class attributes of the applicants scored by the first. The competition drives both models to improve until the second model can no longer infer race or gender from the first model’s scores, leaving a final model that is both accurate and fair.
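For readers who want to see the shape of the idea, here is a generic sketch of adversarial debiasing in PyTorch. It is not Zest AI’s implementation, and it runs on randomly generated stand-in data, but it shows the two-model tug-of-war: a credit scorer that tries to stay accurate on repayment, and an adversary that is rewarded for reading the protected attribute off the scorer’s output.

```python
# Adversarial debiasing (generic sketch, stand-in data): the scorer predicts
# repayment while an adversary tries to recover the protected attribute from
# the scorer's output; the scorer is penalized whenever the adversary succeeds.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 1000, 20
X = torch.randn(n, d)                       # hypothetical application features
y = torch.randint(0, 2, (n, 1)).float()     # did the applicant repay?
a = torch.randint(0, 2, (n, 1)).float()     # protected attribute (e.g. gender)

scorer = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_scorer = torch.optim.Adam(scorer.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0                                   # weight on the fairness penalty

for step in range(2000):
    # 1) Adversary: learn to infer the protected attribute from credit scores.
    scores = scorer(X)
    adv_loss = bce(adversary(scores.detach()), a)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Scorer: stay accurate on repayment, but pay a penalty whenever the
    #    adversary can still read the protected attribute off the scores.
    scores = scorer(X)
    scorer_loss = bce(scores, y) - lam * bce(adversary(scores), a)
    opt_scorer.zero_grad()
    scorer_loss.backward()
    opt_scorer.step()
```

Production systems add refinements, such as letting the adversary also see the true label and tuning the penalty weight, but the structure is the same: training only ends well for the scorer when its output carries no usable signal about race or gender.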
We aren’t the only ones working to build fair machine learning models. Experts like Aaron Roth at the University of Pennsylvania and Sendhil Mullainathan at the University of Chicago have outlined approaches to combating algorithmic bias that lenders can use today.
Meanwhile, the price of ignoring unfairness in predictive models is rising daily. Twenty-two attorneys general recently wrote a letter expressing alarm at evidence of discriminatory and biased algorithms in the lending and insurance industries. The Senate and House have both created committees to explore AI and discrimination, and legislators are already introducing new bills on the issue. In the presidential race, Elizabeth Warren asked federal agencies to scrutinize algorithmic bias, while Bernie Sanders, Cory Booker and Pete Buttigieg all expressed concerns about the matter.
The bottom line: embracing fair algorithms is the ethical and profitable choice, allowing financial institutions to do well and do good. Banks know that the traditional three-digit credit score is an incomplete view of people’s ability to pay back a loan or be a reasonable insurance risk. Over-reliance on credit scores was part of the problem in the 2008 financial crisis. Fair algorithms give financial firms a more nuanced and accurate assessment of an individual’s risk profile, helping more underserved Americans get access to reasonably priced credit and insurance coverage.
Research shows that Americans want their financial institutions to use more information to produce fairer outcomes — but they want it used responsibly. AI can do all of this — but only with the right checks and balances in place. We believe that fair AI in financial services is not only possible but currently available. It’s now on the financial industry to ensure the algorithms they use are fair.