Zest AI comments on the federal guidance for regulating AI
Zest AI recently filed the following comment with the Office of Management and Budget (OMB) in response to the guidance memorandum it issued to federal agencies on regulating artificial intelligence:
Zest AI appreciates the opportunity to comment on the Office of Management and Budget’s Draft Memorandum entitled “Guidance for Regulation of Artificial Intelligence Applications.” Zest AI has spent the last decade developing models for consumer credit decisions that are powered by artificial intelligence (AI). We agree with the OMB that responsible adoption of AI can “drive growth of the United States economy, enhance our economic and national security, and improve our quality of life.” We further believe that in high-stakes domains like consumer lending, it is important to set high standards for the validation and governance of AI models.
Our work has demonstrated AI’s potential to benefit consumers and businesses alike. Lenders using Zest AI’s machine learning tools to improve their credit risk assessment of borrowers have seen approval rates rise 10 percent for credit card applications, 15 percent for auto loans, and 51 percent for personal loans, with no increase in defaults. Even beyond consumer lending, AI and machine learning can enhance the safety and soundness of the banking system, fight fraud, and improve customer service.
We must prioritize public trust, transparency, and non-discrimination in AI
When it comes to ensuring fairness and combating discrimination in consumer lending, AI holds real promise. While regulators and lenders must be vigilant in policing bias in AI models, smart regulation can encourage practitioners to use AI in ways that increase inclusion and expand opportunity for more Americans.
Public trust, disclosure, and transparency must be the governing principles for AI technologies. Businesses using AI models in high-stakes applications need to know how their systems work, how they reach the decisions they do, and how those decisions affect Americans. The industry term for answering these questions is “explainability,” and AI users making high-stakes decisions should be required to establish rigorous explainability processes and methods. Failure to do so can lead businesses to adopt opaque and flawed AI that harms consumers, perpetuates discrimination, and threatens the safety and soundness of financial systems.
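To make “explainability” concrete, here is a minimal sketch of one widely used approach: Shapley-value feature attribution via the open-source shap library. The feature names and data are synthetic, and this generic illustration is not Zest AI’s proprietary method.

    # Sketch: per-decision feature attribution with the open-source `shap`
    # library. Feature names and data are synthetic and purely illustrative.
    import numpy as np
    import pandas as pd
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    features = ["income", "debt_to_income", "credit_age_months", "utilization"]
    X = pd.DataFrame(rng.normal(size=(1000, 4)), columns=features)
    y = (X["debt_to_income"] + X["utilization"] > 0).astype(int)  # synthetic default flag

    model = GradientBoostingClassifier().fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[[0]])  # explain one applicant's score

    # Rank the inputs that most influenced this decision, e.g. to support the
    # adverse-action reason codes lenders must provide under Regulation B.
    for name, value in sorted(zip(features, shap_values[0]), key=lambda p: -abs(p[1])):
        print(f"{name}: {value:+.3f}")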
Zest AI develops AI-automated machine learning underwriting models for credit products including auto finance, consumer, and mortgage loans. Zest AI’s tools allow lenders to approve more creditworthy borrowers while maintaining the institutions’ risk profiles. Lenders also use our technology and modeling capabilities to explain, validate, interpret, and document the reasoning behind their credit decisions, all critical steps to the responsible use of models.
We have also developed methodologies that allow lenders to easily and quickly identify adverse disparities in their models and generate less-discriminatory alternatives that still serve their business interests. These methodologies do not impose material burdens on lenders; rather, they enable lenders to reduce disparities while making, in the words of the Supreme Court, “the practical business choices and profit-related decisions that sustain a vibrant and dynamic free-enterprise system.”
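One simple screen that such methodologies typically start from (a simplified sketch with made-up data, not Zest AI’s actual methodology) is the adverse impact ratio: the approval rate of a protected group divided by that of a control group, flagged when it falls below the familiar four-fifths threshold.

    # Sketch: the adverse impact ratio (AIR), a standard fair lending screen.
    # The 0.8 cutoff reflects the common "four-fifths rule"; data is made up.
    import numpy as np

    def adverse_impact_ratio(approved: np.ndarray, protected: np.ndarray) -> float:
        """Approval rate of the protected group divided by the control group's."""
        return approved[protected].mean() / approved[~protected].mean()

    approved  = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1], dtype=bool)
    protected = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0], dtype=bool)

    air = adverse_impact_ratio(approved, protected)
    if air < 0.8:
        print(f"AIR = {air:.2f}: flag model, search for a less-discriminatory alternative")
    else:
        print(f"AIR = {air:.2f}: no disparity flagged by this screen")

A screen like this is only the first step; the methodologies described above go further by searching among candidate models for less-discriminatory alternatives that preserve predictive performance.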
Machine learning models must be developed responsibly to ensure transparency and prevent discrimination
Machine learning (ML) is a type of AI that uncovers relationships between many variables in a dataset to make better predictions. These models can leverage enormous amounts of data, meaning the models can consider and assess a broader and more diverse set of variables than standard statistical models.
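As a minimal sketch of what this means in practice (synthetic data, illustrative only), a gradient-boosted model can be fit on hundreds of applicant variables, far more than a traditional scorecard would use:

    # Sketch (synthetic data): a gradient-boosted model ingesting hundreds of
    # applicant variables, where a traditional scorecard might use a dozen.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_applicants, n_variables = 5000, 300
    X = rng.normal(size=(n_applicants, n_variables))  # e.g. tradeline, cash-flow, bureau data
    # Synthetic repayment outcome driven by interactions among many variables:
    y = ((X[:, 0] * X[:, 1] + X[:, 2:40].sum(axis=1)) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    print(f"holdout accuracy on 300-variable data: {model.score(X_te, y_te):.2f}")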
AI-automated machine learning models are the future of consumer finance, offering benefits to both lenders and consumers. According to a 2017 survey, 79 percent of bankers agree that AI “will revolutionize the way [banks] gain information from and interact with customers,” and 76 percent believe that “in the next three years, the majority of organizations in the banking industry will deploy [AI] interfaces as their primary point for interacting with customers.”
Perhaps most importantly, ML has demonstrated that it can increase access to credit for more Americans, especially low-income and minority borrowers with thin or no credit files. A major mortgage entity found that Zest AI’s ML modeling techniques could responsibly expand access to mortgages for thousands of American families previously excluded from obtaining loans.
Unfortunately, many existing ML models are opaque and prone to bias. Because of this lack of transparency, such models are often referred to as “black boxes”: even the people who programmed a model may not be able to discern how it reached a given result. ML models also risk absorbing human bias during their creation. Humans make the key decisions about which datasets to use for training, which variables to include, and which assumptions to apply when calibrating the model, and each of these choices presents an opportunity for bias or discrimination to creep in.
In addition, algorithms trained on real-world data can reflect existing discriminatory biases and thus perpetuate prejudice — an effect that we’ve already seen in real-world applications. In one Princeton University study, an AI algorithm learned to link white-sounding names with the category “pleasant” and black-sounding names with the category “unpleasant.” Similarly, people with African American-sounding names are more likely to be served online ads related to arrest records. And the Department of Justice has criticized criminal sentencing algorithms for relying on data that contains racial bias.
These grave risks exist in models used for banking and consumer lending as well. As the Office of the Comptroller of the Currency has explained: “Bank management should be aware of the potential fair lending risk with the use of [Artificial Intelligence] or alternative data in their efforts to increase efficiencies and effectiveness of underwriting…. New technology and systems for evaluating and determining creditworthiness, such as machine learning, may add complexity while limiting transparency. Bank management should be able to explain and defend underwriting and modeling decisions.”
Unless businesses maintain a clear understanding of why a model made the decision it did, bad outcomes will follow. We have seen this impact firsthand. One used-car lender was using a model that weighed two seemingly benign signals: higher-mileage cars tend to yield higher-risk loans, and borrowers from a particular state were a greater credit risk than those from other states. Neither signal, on its own, appeared to be a proxy that would violate fair lending laws. Our ML tools showed, however, that taken together these two signals alone could predict whether a borrower was African-American, increasing the likelihood of denial.
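A simplified version of this kind of proxy check (with synthetic data standing in for the lender’s portfolio; Zest AI’s production tooling is more sophisticated) tests how well the supposedly benign signals, taken together, predict the protected attribute:

    # Sketch: proxy detection with synthetic data. If two "benign" signals,
    # taken together, predict a protected attribute well above chance, they
    # can function as a proxy even though neither looks suspect on its own.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 10_000
    protected = rng.integers(0, 2, size=n)                          # protected-class flag
    mileage = rng.normal(80_000 + 15_000 * protected, 20_000, n)    # vehicle mileage
    in_state = (rng.random(n) < 0.2 + 0.4 * protected).astype(int)  # borrower's state
    X = np.column_stack([mileage, in_state])

    X_tr, X_te, y_tr, y_te = train_test_split(X, protected, random_state=0)
    probe = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])
    print(f"AUC predicting protected class from the two signals: {auc:.2f}")
    # An AUC near 0.5 would mean no proxy power; here the pair is clearly
    # predictive and would warrant fair lending review.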
Creating AI without this kind of multivariate explainability would, in the long run, both erode consumer trust and counteract the federal government’s goal of promoting AI as a tool to grow the U.S. economy.
Zest AI’s tools and processes are fast and efficient, allowing lenders to pick and deploy better, less-discriminatory models without imposing meaningful burdens. They also streamline internal and regulatory reviews of credit lending models, allowing financial institutions to more quickly respond to market conditions and offer new, innovative products. Our products have proven robust enough that Zest AI clients include a Government-Sponsored Enterprise, which uses Zest AI’s fairness tools to meet the letter and spirit of fair credit and lending laws.
The OMB’s guidance must be responsibly implemented in financial services
The American financial services market is the largest and most liquid in the world, directly contributing 7.4 percent of U.S. GDP. Encouraging AI industry participants to embrace rigorous explainability and debiasing techniques as a central part of AI development and implementation in consumer lending will reduce the need for increased regulatory oversight, saving taxpayer money and facilitating American innovation.
The financial services industry has been working well with regulators to create space for innovation while promoting transparency, self-reporting, and the integration of explainability. These conditions provide an environment that fosters consumer trust.
The federal government should continue to work with a wide cross-section of stakeholders to create standards around AI’s performance, measurement, safety, security, privacy, interoperability, robustness, trustworthiness, and governance in consumer finance. Doing so will be an effective way to balance the needs of developing AI in a competitive and increasingly global financial services market while engendering trust and protecting fairness for all Americans.