Explainable AI Breaks Out of the Black Box

Over the last 12 months or so there’s been incredible excitement about artificial intelligence and all of the amazing things it can do for us—everything from driving cars to making pizza (super-cool video!). But — and this is a big “but” — artificial intelligence comes with many challenges, including trying to decipher what these models have learned, and thus their decision criteria.
In my last post, I discussed how regulations such as Europe’s General Data Protection Regulation will demand Explainable AI. This is a field of science that attempts to remove the black box and deliver AI performance while also providing an explanation as to the “how” and “why” a model derives its decisions.
DARPA says Explainable AI, or XAI:
- Produces more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
- Enables human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
Three Ways to Open the Box
FICO has been pioneering Explainable AI for over 25 years; one of our recent Explainable AI patent filings replaced a patent awarded back in 1998. In our experience, we've seen various ways to explain AI when used in a risk or regulatory context:
- Scoring algorithms that inject noise and score additional data points around the actual data record being scored, to observe which features are driving the score in that part of decision phase space. This technique is called Local Interpretable Model-agnostic Explanations (LIME), and it involves perturbing data variables in small ways to see what moves the score the most. (FICO's own Explainable AI approach, called "reason reporter," resembles LIME; it is the subject of our 1998 patent that recently expired.)
- Models that are built to express interpretability on top of the inputs of the AI model. Examples here include And-Or Graphs (AOG), which try to associate concepts with deterministic subsets of input values, such that when a deterministic set is expressed, it can provide an evidence-based ranking of how the AI reached its decision. These are most often used, and most easily explained, in making sense of images.
- Models that change the entire form of the AI to make the latent features exposable. This approach allows reasons to be driven into the latent (learned) features internal to the model. It means rethinking how to design an AI model from the ground up, with the view that we will need to explain the latent features that drive outcomes. This is entirely different from how native neural network models learn.

My recent patent application work at FICO has explored this approach, with an architecture called LENNS (Latent Explanations Neural Network Scoring) that exposes more of what's driving the score. Essentially, we are looking at different fundamental algorithms that are more transparent than today's. It's very much an area of research, and we are probably several years away from production-ready Explainable AI of this sort.
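Of the three approaches, the first is the easiest to sketch in code. Below is a minimal, hedged illustration of the LIME-style perturbation idea, not FICO's reason reporter or the reference LIME library: the `black_box_score` function, the noise scale, and the proximity weighting are all illustrative assumptions. It perturbs a record with small noise, scores each perturbation with the black-box model, and fits a locally weighted linear model whose slopes rank which features are moving the score near that record.

```python
import numpy as np

# Hypothetical black-box scorer: we only observe its outputs, not its form.
def black_box_score(X):
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 0.5 * X[:, 1])))

def lime_style_explanation(x0, n_samples=5000, sigma=0.1, seed=0):
    """Perturb x0 with small noise, score each perturbation, and fit a
    locally weighted linear model; its slopes rank local feature influence."""
    rng = np.random.default_rng(seed)
    X = x0 + rng.normal(scale=sigma, size=(n_samples, x0.size))
    y = black_box_score(X)
    # Proximity weights: perturbations nearer to x0 count more.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * sigma ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X])
    Aw = A * w[:, None]
    coef, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)
    return coef[1:]  # one slope per feature = local importance

coefs = lime_style_explanation(np.array([0.2, 0.4]))
print(coefs)
```

In this toy, the recovered slope for feature 0 dominates and is positive, mirroring the black box's larger internal weight on that feature; in a real risk model, the same ranking would feed the reason codes returned with a score.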
If you are into analytics and want to learn more about these concepts, follow me on Twitter @ScottZoldi and send me a DM!