Explaining analytic models

A posting on the Oracle Data Mining blog made me think about explaining analytics. Analytics need to be explained because telling a customer (or a regulator) that you made a business decision "because the analytics said to" is not going to fly.

Explainability of a predictive model is essentially the ability for someone to understand the behavior of that predictive model. Often this understanding is needed in the context of a specific business decision with real economic consequences, for instance, understanding why a particular applicant was denied credit.

First and foremost, the decision-maker responsible for the decision must sign off on the behavior and performance of that predictive model and must trust that the model is behaving as expected. In some areas, the model must be provided to regulators in a transparent, mathematically precise form to ensure that it conforms with all applicable regulations in that decision area (e.g., the credit-granting regulation that, all other things being equal, an older applicant must not score lower than a younger one). Finally, the model must be fully understandable to the analyst who is creating it.

There are several methods for achieving explainability. Depending on the decision area, the analyst might have to select a model that:

  • Returns a ranked list of reason codes for adverse decisions, to help explain each outcome
  • Permits verification of its behavior by regulators
  • Ensures that the output always moves in a consistent direction as any single input increases, i.e. monotonicity (see the sketch after this list)
  • Captures the relevant factors but is still simple enough for non-technical understanding
  • Has restrictions that ensure that it conforms with relevant regulations
  • Conforms with the expectations of the analyst and the decision maker
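
The monotonicity property is one that can be verified mechanically. Here is a minimal sketch of such a check, assuming a hand-written stand-in scoring function (the model, feature names, and values are all hypothetical): sweep one input over a grid while holding the others fixed, and confirm the score never moves in the wrong direction.

```python
# Minimal sketch of a monotonicity check: sweep one input over a grid while
# holding the others fixed, and verify the score never decreases. The model
# and feature names here are hypothetical stand-ins.

def score(age, income, debt_ratio):
    """Stand-in additive model; a real model would be loaded, not hardcoded."""
    return 0.5 * age + 0.3 * income - 40.0 * debt_ratio

def is_monotone_increasing(model, sweep_values, fixed_kwargs, arg_name):
    """Return True if model output never decreases as arg_name increases."""
    scores = [model(**{**fixed_kwargs, arg_name: v}) for v in sorted(sweep_values)]
    return all(a <= b for a, b in zip(scores, scores[1:]))

# Verify, e.g., that older applicants never score lower, all else being equal.
ages = range(18, 80)
assert is_monotone_increasing(score, ages,
                              {"income": 50.0, "debt_ratio": 0.2}, "age")
```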

Visualization and other graphical methods can also be used to show the results of predictive analytics. The visual representation must make the most important business elements clear, so that those who understand the customers, the business and the regulations can evaluate them.
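
As a minimal sketch of that idea (the factors and point contributions below are invented purely for illustration), one might chart each factor's contribution to a single applicant's score so that a business user can see at a glance what drove the decision:

```python
# Hypothetical sketch: a bar chart of each factor's contribution to one
# applicant's score, sorted by magnitude so the biggest drivers stand out.
import matplotlib.pyplot as plt

contributions = {"Payment history": 35, "Debt ratio": -20,
                 "Length of credit history": 12, "Recent inquiries": -8}

factors, values = zip(*sorted(contributions.items(),
                              key=lambda kv: abs(kv[1]), reverse=True))
plt.barh(factors, values)
plt.axvline(0, color="black", linewidth=0.8)
plt.xlabel("Contribution to score (points)")
plt.title("Why this applicant scored 640")  # hypothetical score
plt.tight_layout()
plt.show()
```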

If you are using predictive scorecards (also known as additive scorecards), explainability can be fairly easy: a given set of input conditions produces a score, which is then compared to a set of thresholds to identify the recommended decision.
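
Here is a minimal sketch of how such a scorecard might work; the characteristics, bins, points, and thresholds are all hypothetical. It also shows how the same additive structure yields a ranked list of reason codes for an adverse decision, as mentioned in the list above.

```python
# Hypothetical additive scorecard: each characteristic maps an applicant's
# value to points via bins; the score is the sum, compared to thresholds.

SCORECARD = {
    "age":        [(25, 10), (40, 20), (200, 30)],   # (upper bound, points)
    "income":     [(30, 5), (60, 15), (10**9, 25)],  # income in thousands
    "debt_ratio": [(0.2, 30), (0.4, 15), (1.0, 0)],
}

def score(applicant):
    total = 0
    for characteristic, bins in SCORECARD.items():
        value = applicant[characteristic]
        for upper, points in bins:
            if value <= upper:
                total += points
                break
    return total

def decide(applicant, approve_at=60, refer_at=45):
    s = score(applicant)
    if s >= approve_at:
        return s, "approve"
    return s, "refer" if s >= refer_at else "decline"

def reason_codes(applicant, top_n=2):
    """Rank characteristics by points lost versus the best possible bin."""
    lost = {}
    for characteristic, bins in SCORECARD.items():
        value = applicant[characteristic]
        earned = next(p for upper, p in bins if value <= upper)
        lost[characteristic] = max(p for _, p in bins) - earned
    return sorted(lost, key=lost.get, reverse=True)[:top_n]

applicant = {"age": 35, "income": 55, "debt_ratio": 0.35}
print(decide(applicant))        # (50, 'refer')
print(reason_codes(applicant))  # ['debt_ratio', 'age']
```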

Using a business rules management system to deploy analytics can also be very effective for explainability. Not only can the rules be readily understood by business users, even if the mechanism for deriving them is too mathematically advanced for those users, but because the exact rules fired can be logged for each customer, it is possible to look at any given transaction and see exactly how the predictive model played out.
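
A minimal sketch of the idea, with hypothetical rules and customer fields: each rule that fires is logged against the customer, so any individual decision can be replayed and explained later.

```python
# Hypothetical sketch: a model's output deployed as plain business rules,
# logging which rule fired for each customer so decisions can be audited.
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("decisions")

# Each rule: (name, condition over the customer record, decision if it fires).
RULES = [
    ("high_debt_ratio", lambda c: c["debt_ratio"] > 0.5, "decline"),
    ("thin_file",       lambda c: c["accounts"] < 2,     "refer"),
    ("default",         lambda c: True,                  "approve"),
]

def decide(customer_id, customer):
    for name, condition, decision in RULES:
        if condition(customer):
            log.info("customer=%s rule=%s decision=%s",
                     customer_id, name, decision)
            return decision

decide("C-1001", {"debt_ratio": 0.6, "accounts": 5})
# logs: customer=C-1001 rule=high_debt_ratio decision=decline
```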

Mathematical models can also be "engineered" so they are robust and respond to changes in the business environment appropriately. One approach is "weights engineering" when developing models: adjusting the contribution of factors to reflect business concerns. For instance, a lower weight might be assigned to a characteristic that is not common across customers, and higher weights to characteristics that are common.
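
A minimal sketch of that adjustment, with invented weights and coverage figures: scale each fitted weight by how commonly the characteristic is populated across customers, damping the rare ones so they cannot dominate the score.

```python
# Hypothetical sketch of weights engineering: scale a fitted model's weights
# by each characteristic's coverage across the customer base.

fitted_weights = {"utility_payment_history": 0.8,   # data for few customers
                  "credit_card_utilization": 0.5}   # data for most customers

coverage = {"utility_payment_history": 0.15,  # fraction of customers with data
            "credit_card_utilization": 0.95}

engineered = {name: w * coverage[name] for name, w in fitted_weights.items()}
print(engineered)  # the rare characteristic now contributes far less
```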

In the end a model has to make intuitive sense as well as statistical sense.

Fair Isaac's Introduction to Predictive Analytics is a great primer on analytics in general, especially the kind useful in operational systems, and my fellow author Rahul Asthana wrote a nice article on "Crossing the analytic chasm" for TDWI. This posting owes much to helpful information from Brendan del Favero in our product management group - thanks Brendan.
