Scoring Solutions
Is there hope that artificial intelligence and machine learning approaches might soon “square the circle” of delivering superior pattern recognition and prediction while also meeting regulatory requirements? The field of explainable AI (xAI) may hold the answer.
FICO's research team explored this topic in a new paper, “Developing Transparent Credit Risk Scorecards More Effectively: An Explainable Artificial Intelligence Approach”. We presented the paper at the Data Analytics 2018 conference organized by IARIA/ThinkMind, where it won the Best Paper Award, and recently presented it again at the Federal Reserve Bank of Philadelphia.
Explainable AI is relevant because, as financial services firms increasingly embrace AI and machine learning technologies, concerns have emerged about the opacity of some of these models and the resulting lack of trust in them. In her opening keynote at the Federal Reserve Bank of Philadelphia’s Fintech Conference, “What Are We Learning about Artificial Intelligence in Financial Services?”, Governor Brainard from the Board of Governors of the Federal Reserve System discussed opportunities and uncertainties around the use of these technologies.
Among the opportunities, “firms view AI approaches as potentially having superior ability for pattern recognition, such as identifying relationships among variables that are not intuitive or not revealed by more traditional modeling.” Among the related uncertainties, she called out the “proverbial black box—the potential lack of explainability associated with some AI approaches.” This lack of transparency is particularly acute in consumer lending, where regulations require lenders to provide adverse action notices. Complying with these requirements means being able to explain the decisions reached through AI models.
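To make the explanation requirement concrete, here is a minimal sketch of how a traditional points-based scorecard naturally yields adverse action reason codes: for each characteristic, compare the points the applicant earned against the best attainable points, and report the largest shortfalls. The characteristics, bins, and point values below are entirely made up for illustration and do not reflect any actual FICO scorecard.

```python
# Minimal sketch of adverse-action reason codes from a points-based scorecard.
# All scorecard values below are illustrative, not from any real model.

scorecard = {
    # characteristic -> {bin label: points}
    "utilization": {"<30%": 55, "30-70%": 40, ">70%": 15},
    "payment_history": {"no_delinquency": 60, "30d_late": 30, "60d+_late": 10},
    "age_of_file": {">10y": 35, "3-10y": 25, "<3y": 10},
}

def score_and_reasons(applicant, top_n=2):
    """Score an applicant and return the characteristics where they lost
    the most points relative to the best bin -- a common basis for
    adverse-action reason codes."""
    total, shortfalls = 0, []
    for characteristic, bins in scorecard.items():
        points = bins[applicant[characteristic]]
        total += points
        shortfalls.append((max(bins.values()) - points, characteristic))
    shortfalls.sort(reverse=True)
    reasons = [c for loss, c in shortfalls[:top_n] if loss > 0]
    return total, reasons

applicant = {"utilization": ">70%", "payment_history": "30d_late", "age_of_file": "3-10y"}
print(score_and_reasons(applicant))  # (70, ['utilization', 'payment_history'])
```

Because every point assignment is explicit, the same structure that produces the score also produces the explanation. That is precisely the property that is hard to recover from an unconstrained black-box model.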
Research into AI and Machine Learning for Credit Scoring
To obtain deeper insights, FICO Scores R&D has empirically investigated the potential benefits and risks of using some of the latest AI and machine learning approaches for credit scoring. We found the biggest benefit to be the efficiency with which machine learning can train highly predictive models. That benefit, however, came at the price of increased opaqueness. We concluded that unleashing pure (i.e., unconstrained by domain expertise) machine learning models into the broad lending market would likely usher in systemic risk, market confusion, and a lack of transparency for consumers.
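As an illustration of this trade-off, the sketch below (using synthetic data and scikit-learn, not FICO's data or tooling) shows how little effort it takes to train a predictive gradient-boosted model, and how little its ensemble of trees offers by way of a per-applicant explanation.

```python
# Sketch: an unconstrained machine-learning model is quick to train but opaque.
# Synthetic data only; no real credit data or FICO model is implied.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# A few lines of code yield a strong classifier...
model = GradientBoostingClassifier(n_estimators=300).fit(X, y)

# ...but its predictions emerge from hundreds of interacting trees,
# with no per-applicant account of the decision to hand a consumer.
print(f"trained ensemble of {model.n_estimators_} trees")
```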
A solution to this conundrum is to forget about replacing the traditional domain expert-led model development process with a purely data-driven machine learning process. Instead, we recommend focusing on highly effective ways to augment human domain intelligence with machine intelligence, enabling the rapid construction of credit risk scoring models that are both predictive and explainable. One such approach developed by FICO is described in detail in our paper and supported by case study results.
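The sketch below conveys the general flavor of such augmentation; it is not the specific algorithm from our paper. Machine intelligence proposes data-driven bin boundaries for a characteristic, the analyst reviews and adjusts them, and a transparent logistic-regression scorecard is fit on the resulting bins. All data and parameter choices are hypothetical.

```python
# Sketch of machine intelligence assisting scorecard construction:
# a shallow decision tree proposes bin boundaries, an analyst reviews them,
# and a transparent logistic-regression scorecard is fit on the bins.
# Illustrative only; not the algorithm from the FICO paper.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
utilization = rng.uniform(0, 1, 5000)  # synthetic characteristic
y = (rng.uniform(0, 1, 5000) < 0.2 + 0.5 * utilization).astype(int)  # default flag

# 1. Machine intelligence: propose data-driven bin boundaries.
tree = DecisionTreeClassifier(max_leaf_nodes=4).fit(utilization.reshape(-1, 1), y)
thresholds = sorted(t for t in tree.tree_.threshold if t != -2)  # -2 marks leaves
print("proposed cut points:", [round(t, 2) for t in thresholds])

# 2. Human domain intelligence: the analyst reviews and rounds the cut points.
cut_points = [0.3, 0.5, 0.7]  # e.g., adjusted for interpretability

# 3. Transparent model: one-hot encode the bins and fit logistic regression;
#    the coefficients map directly to scorecard weights per bin.
bins = np.digitize(utilization, cut_points).reshape(-1, 1)
X = OneHotEncoder().fit_transform(bins).toarray()
scorecard = LogisticRegression().fit(X, y)
print("per-bin weights:", scorecard.coef_.round(2))
```

The division of labor is the point: the machine does the exhaustive search over candidate boundaries, while the human retains control over the final, explainable model structure.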
At FICO we’re looking forward to more discussions with lenders, domain experts, regulators, and consumer advocates about this approach, as well as other approaches we’re pursuing in explainable AI that apply to consumer credit scoring and beyond. To learn more, see these recent blog posts.