Over the last 12 months or so there’s been incredible excitement about artificial intelligence and all of the amazing things it can do for us—everything from driving cars to making pizza (super-cool video!). But — and this is a big “but” — artificial intelligence comes with many challenges, including trying to decipher what these models have learned, and thus their decision criteria.
In my last post, I discussed how regulations such as Europe’s General Data Protection Regulation will demand Explainable AI. This is a field of science that attempts to remove the black box: to deliver AI performance while also explaining how and why a model reaches its decisions. The goal is AI that:
- Produces more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
- Enables human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
Three Ways to Open the Box
FICO has been pioneering Explainable AI for over 25 years; one of our recent Explainable AI patent filings replaced a patent awarded back in 1998. In our experience, we’ve seen various ways to explain AI when used in a risk or regulatory context:
- Scoring algorithms that inject noise and score additional, synthetic data points around the actual data record being scored, to observe which features are driving the score in that region of decision phase space. This technique is called Local Interpretable Model-agnostic Explanations (LIME); it applies small perturbations to the input variables and observes which ones move the score the most. (FICO’s own Explainable AI technique, called “reason reporter,” resembles LIME. This is our patent from 1998 that recently expired.)
- Models built to express interpretability on top of the AI model’s inputs. Examples include And-Or Graphs (AOG), which try to associate concepts with deterministic subsets of input values, so that when a deterministic set is expressed, it can provide an evidence-based ranking of how the AI reached its decision. These are most often applied, and most easily described, in making sense of images.
- Models that change the entire form of the AI to make its latent features exposable. This approach drives reasons into the latent features (learned features) internal to the model. It means rethinking how to design an AI model from the ground up, with the view that we will need to explain the latent features that drive outcomes, which is entirely different from how native neural network models learn. My recent patent application work at FICO has explored this approach with an architecture called LENNS (Latent Explanations Neural Network Scoring) that exposes more of what’s driving the score. Essentially, we are looking at fundamentally different algorithms that are more transparent than those in use today. This is very much an area of research, and we are probably several years away from production-ready Explainable AI of this sort.
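The perturbation idea behind the first approach can be sketched in a few lines. This is a minimal, hypothetical illustration of a LIME-style local explanation, not FICO’s reason reporter: the `score_fn`, the Gaussian noise scale, and the kernel weighting are all illustrative assumptions. We perturb a single record, score the perturbed points with the black-box model, and fit a proximity-weighted linear surrogate whose coefficients rank local feature influence.

```python
import numpy as np

def local_explanation(score_fn, record, n_samples=500, sigma=0.1, seed=0):
    """LIME-style local explanation: perturb the record, score the
    perturbed points with the black-box model, and fit a weighted
    linear surrogate whose coefficients rank feature influence."""
    rng = np.random.default_rng(seed)
    record = np.asarray(record, dtype=float)
    # Sample synthetic points in a small neighborhood of the record.
    perturbed = record + rng.normal(0.0, sigma, size=(n_samples, record.size))
    scores = np.array([score_fn(x) for x in perturbed])
    # Weight each sample by its proximity to the original record.
    dists = np.linalg.norm(perturbed - record, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * sigma ** 2))
    # Weighted least squares: scores ~ intercept + coeffs . x
    X = np.hstack([np.ones((n_samples, 1)), perturbed])
    w = np.sqrt(weights)
    coeffs, *_ = np.linalg.lstsq(w[:, None] * X, w * scores, rcond=None)
    return coeffs[1:]  # per-feature local influence on the score

# Hypothetical black-box score: feature 0 dominates near this record.
score = lambda x: 3.0 * x[0] + 0.2 * x[1]
influence = local_explanation(score, [1.0, 1.0])
```

For this toy linear score, the surrogate recovers the true local sensitivities, showing that feature 0 drives the score far more than feature 1 in this region of the input space.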
GDPR is just one of a growing number of forces driving Explainable AI. It’s clear that as businesses depend upon AI more and more, explanation is essential, particularly in the way that AI-derived decisions impact customers. Research is underway at FICO on how to adjust and advance our proprietary XAI algorithms and models.
If you are into analytics and want to learn more about these concepts, follow me on Twitter @ScottZoldi and send me a DM!