I’ve been blogging about problematic modelling practices that may be creeping into some retail banks under the auspices of Basel II compliance.
This time, I’ll focus on two practices related to sample bias, a problem that often leads to models that don’t perform to expectations:
- Not building models on the population they are going to be used on
Reject inference is the process of estimating how people whose credit applications were rejected would have performed had they been accepted. When building an origination PD model, if you only use applications that were accepted and booked, your development sample will be inherently biased (unless the existing accept rate is very high) and won’t be representative of the overall applicant population. As a result, the developed model is unlikely to work well on the overall applicant population, usually resulting in more high-risk applications being accepted.
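To see why a booked-only sample understates risk, consider a minimal simulation (purely illustrative; the risk distribution, accept rule, and default function are all assumptions, not real bank data). Because the bank accepts only the lower-risk applicants, the default rate measured on the booked sample is well below the rate across the full through-the-door population:

```python
import random

random.seed(42)

# Hypothetical applicant population: each applicant has a latent riskiness,
# the bank accepts the lower-risk 60%, and default probability grows with risk.
applicants = []
for _ in range(100_000):
    risk = random.random()                    # latent riskiness in [0, 1)
    accepted = risk < 0.6                     # assumed accept rule
    defaulted = random.random() < risk * 0.2  # assumed default mechanism
    applicants.append((accepted, defaulted))

def default_rate(pop):
    return sum(d for _, d in pop) / len(pop)

booked = [a for a in applicants if a[0]]
print(f"Default rate, booked sample:  {default_rate(booked):.3%}")
print(f"Default rate, all applicants: {default_rate(applicants):.3%}")
```

Under these assumptions the booked sample shows roughly a 6% default rate against roughly 10% for all applicants, so a model calibrated only on booked accounts systematically understates risk when applied to the whole applicant pool.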
Warning signs that this could be an issue include large “population swap sets” between new and old models, or early delinquency levels being significantly higher than expected.
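A swap set can be quantified directly: score the same applicants with both models and count those whose accept/decline decision flips. This sketch uses illustrative scores and cutoffs (all values here are made up for the example):

```python
# Hypothetical swap-set check: applicants accepted by one model but declined
# by the other form the swap set; a large one relative to the population is
# a warning sign that the development sample may not represent the
# through-the-door population.
def swap_sets(old_scores, new_scores, old_cutoff, new_cutoff):
    swap_in = swap_out = 0
    for old, new in zip(old_scores, new_scores):
        old_accept = old >= old_cutoff
        new_accept = new >= new_cutoff
        if new_accept and not old_accept:
            swap_in += 1    # declined by old model, accepted by new
        elif old_accept and not new_accept:
            swap_out += 1   # accepted by old model, declined by new
    return swap_in, swap_out

old = [610, 700, 655, 590, 720, 640]
new = [680, 690, 600, 650, 710, 630]
print(swap_sets(old, new, old_cutoff=650, new_cutoff=650))  # → (2, 1)
```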
At a minimum, in any model development, ensure that sample bias is accounted for in the sample design or characteristic selection. Best practice is to use performance inference techniques.
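One common performance-inference technique is fuzzy augmentation: each rejected applicant enters the development sample twice, once as a "bad" and once as a "good", weighted by an inferred bad probability. The sketch below is a simplified illustration; in practice `p_bad` would come from a known-good-bad model scored on the rejects, whereas here it is simply supplied per reject:

```python
def fuzzy_augment(accepted, rejects):
    """Build a weighted development sample.

    accepted: list of (features, outcome) pairs, outcome 1 = bad, 0 = good
    rejects:  list of (features, p_bad) pairs, p_bad inferred for each reject
    Returns a list of (features, outcome, weight) rows.
    """
    sample = [(x, y, 1.0) for x, y in accepted]  # booked accounts, full weight
    for x, p_bad in rejects:
        sample.append((x, 1, p_bad))         # counted as bad, weight p_bad
        sample.append((x, 0, 1.0 - p_bad))   # counted as good, weight 1 - p_bad
    return sample

accepted = [({"income": 50}, 0), ({"income": 30}, 1)]
rejects = [({"income": 20}, 0.7)]
augmented = fuzzy_augment(accepted, rejects)
print(len(augmented))  # → 4: two booked rows plus two weighted rows per reject
```

Each reject contributes total weight 1.0, so the augmented sample preserves applicant counts while letting the inferred performance of rejects influence the model.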
- Not using origination PD models at all
This approach can introduce a wide range of sampling and performance biases. While on paper the models may look very predictive, this approach usually has consequences similar to the point above, in that the models will not be representative of the population or decision for which they are being used. It certainly will not work as well as PD models designed specifically for each decision area.
The best practice is to ensure the model design best fits the required use. This includes ensuring models are built on the populations they will be used on.
In my next post, I'll continue sharing questionable practices we've witnessed in the name of Basel II compliance. Stay tuned to the blog!