Analytics & Optimization Superior Scorecards at Warp Speed: Algorithmic Learning Meets Domain Expertise

Oct 7, 2013

By Dr. Gerald Fahner

Today, debates are raging in blogs as to whether Big Data and machine learning render domain expertise obsolete. Personally, I don't think it's an either/or decision. I prefer to design effective processes and tools that transfer critical domain expertise into algorithmic models. FICO has developed an innovative approach to building better, business-apt scorecards faster, and we have been invited to present our work as a semifinalist for the INFORMS Innovation Award.

Credit scoring serves as a perfect use case for combining brute-force machine learning algorithms (which are all about fitting models closely to historic data) with domain expertise (all about tuning models to the context into which they are being deployed, respecting business goals and constraints).

But first, what is algorithmic learning, and why is it so powerful? Algorithmic learning comprises a large body of theories and algorithms, and has many subfields beyond the scope of this blog. From the perspective of scoring, well-known, readily available tools and procedures include Classification and Regression Trees (CART), ensemble learning methods such as Random Forests and Stochastic Gradient Boosting, Support Vector Machines, and Neural Nets. The advantages of these procedures over the traditional workhorses of scoring – logistic regression and discriminant analysis (which assume parameterized functional forms of the predictive relation) – are multifold:

  • Make minimal assumptions about consumer behavior, since no parametric model form is imposed (objectivity)
  • Deliver highly flexible approximation of complex predictive relationships (nonlinearities, interactions)
  • Enable advanced procedures that protect well against over-fitting to noise
  • Yield deep insights into predictive relations (“black box” models can be made transparent!)
  • Tend to outperform traditional approaches to scoring (if training data are representative)
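To make the flexibility point concrete, here is a minimal sketch (not from the original work) using scikit-learn on made-up synthetic data: default risk is driven by an XOR-style interaction between two features, which a plain logistic regression cannot represent but a gradient boosting ensemble captures readily.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic portfolio: risk driven by an XOR-style interaction between
# two features -- invisible to a linear score, easy for tree ensembles.
rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

auc_logit = roc_auc_score(
    y_te, LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
auc_boost = roc_auc_score(
    y_te,
    GradientBoostingClassifier(random_state=0)
    .fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
```

On data like this, the logistic model scores near chance (AUC around 0.5) while the boosted trees approach a perfect AUC, illustrating why such learners excel when the predictive relation contains interactions.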

I should qualify the last point further: prediction technology generally tends to matter less than data quality/representativeness and analytic experience of the modeling team. A top-notch model developer could enhance a traditional logistic regression or discriminant analysis model with complex data transformations and determine model segmentation schemes to eventually be competitive with any machine learning procedure. But this comes at a cost. While algorithmic learning works by pushing a button (more or less), developing a competitive scoring system using traditional techniques can take weeks or months of an analytic rock-star’s time.  

Unfortunately for credit scoring applications (this may be less of a problem for other scoring applications), off-the-shelf algorithmic learners also have severe limitations: the modeled relations may be unpalatable, the procedures offer no way to impose domain expertise or constraints on the models, and legal or operational requirements may prevent direct deployment of these models.

Our innovative approach reconciles the prowess of algorithmic learning with practical constraints, to effectively develop superior, palatable scorecards that can be practically deployed. 

The new approach consists of two stages:

  1. Learn and diagnose a purely data-driven, algorithmic model of the data. This is great for gaining objective insight into the empirical relation, and yields a highly accurate fit to the historic data. However, the result may violate certain business constraints.
  2. Transmute the algorithmic model, at minimal loss of information, into a (segmented) scorecard model, whereby constraints on score weights can be imposed, as desired, to satisfy legal and operational requirements.
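The constrained score-weight fitting in stage 2 can be sketched with a generic optimizer; this is an illustration, not FICO's actual procedure, and the synthetic data, true weights, and the 0.5 cap are all made up. A bound on a weight stands in for a business constraint such as a required sign or range for an attribute's score points.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: two score characteristics with known true weights.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 2))
true_w = np.array([1.5, 0.8])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_w)))

def neg_log_lik(w):
    """Negative logistic log-likelihood, in a numerically stable form."""
    z = X @ w
    return np.sum(np.logaddexp(0.0, z) - y * z)

# Unconstrained maximum likelihood recovers roughly the true weights.
w_free = minimize(neg_log_lik, np.zeros(2), method="L-BFGS-B").x

# Illustrative business constraint: cap the second weight at 0.5.
w_capped = minimize(neg_log_lik, np.zeros(2), method="L-BFGS-B",
                    bounds=[(None, None), (None, 0.5)]).x
```

The constrained fit sacrifices a little likelihood in exchange for a scorecard whose weights respect the stated business rule, which is exactly the trade-off stage 2 manages.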

The approach combines several analytic techniques in a new way:

  • Prediction with tree ensembles, in particular stochastic gradient boosting
  • Visualization and diagnostics of these models including interaction detection
  • Automatic recursive partitioning for scorecard segmentation
  • Constrained Maximum Likelihood Estimation for score weight optimization respecting deployment constraints
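The segmentation idea in the third bullet can be sketched as follows, again on made-up synthetic data and using a shallow scikit-learn decision tree as a stand-in segmenter rather than FICO's actual partitioning algorithm: two sub-populations have different risk drivers, the tree proposes the split automatically, and a separate simple model per segment outperforms one pooled model.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic population: two sub-populations whose risk is driven by
# different features (and different base rates).
rng = np.random.default_rng(2)
X = rng.normal(size=(6000, 3))
log_odds = np.where(X[:, 0] > 0, 1.0 + 2.0 * X[:, 1], -1.0 + 2.0 * X[:, 2])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

# A depth-1 tree proposes the segmentation split automatically...
segmenter = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X, y)
seg = segmenter.apply(X)  # leaf id per applicant

# ...and each segment then gets its own scorecard-like model.
pooled_auc = roc_auc_score(
    y, LogisticRegression().fit(X, y).predict_proba(X)[:, 1])
p_seg = np.empty(len(y))
for leaf in np.unique(seg):
    m = seg == leaf
    p_seg[m] = LogisticRegression().fit(X[m], y[m]).predict_proba(X[m])[:, 1]
seg_auc = roc_auc_score(y, p_seg)
```

The segmented system fits each sub-population's distinct risk pattern, which is why automating this search cuts so much time off manual segmentation research.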

We like this new approach because it allows us to build better models faster. We have tested it on several projects, ranging from credit scoring to credit application fraud and insurance fraud, and we frequently see significant lift over traditionally developed models.

And the high degree of automation inherent in the new approach shortens the time to build deployable models, especially by cutting out otherwise laborious manual scorecard segmentation research. It also greatly mitigates the model risk that arises when "black-box" models are accidentally trained on erroneous data elements: the model diagnostics and easy inspection of the final scorecards tend to reveal potential data issues, and scorecard constraints can sometimes rectify minor ones.

Readers interested in a detailed exposition of the new approach may turn here:  http://www.business-school.ed.ac.uk/waf/crc_archive/2013/33.pdf or here: http://events.fico.com/Macine-Learning-and-Human-Expertise

 
