How can banks leverage their transactional and non-traditional data sources to fight attrition and improve risk prediction? At this week’s Credit Scoring and Credit Control XIV conference, I will be discussing this subject in detail, but I thought I’d give you an overview of my talk.
Transactional and non-traditional data sources show a lot of promise for banks. Using transactional analytics, for example, we can build more predictive behavioral risk models from a combination of Masterfile and transaction data. Such models also detect the risk of default earlier than traditional models, so banks gain twice: they identify more of the future bad cases, and they identify them much earlier. Similar benefits accrue in the case of attrition detection. Working with transaction data alone can also eliminate the need for expensive Masterfile data while preserving the performance gains.
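To make the idea concrete, here is a minimal sketch of building behavioral risk features from raw transaction data and fitting a simple model on them. The data, the feature names, and the default labels are all hypothetical; a real behavior risk model would use far richer variables and observed performance outcomes.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical card transaction records: one row per transaction.
txns = pd.DataFrame({
    "account_id":      [1, 1, 1, 2, 2, 3, 3, 3],
    "amount":          [120.0, 45.0, 300.0, 15.0, 20.0, 500.0, 450.0, 600.0],
    "is_cash_advance": [0, 0, 1, 0, 0, 1, 1, 0],
})

# Behavioral features summarizing each account's transaction stream.
features = txns.groupby("account_id").agg(
    total_spend=("amount", "sum"),
    avg_ticket=("amount", "mean"),
    cash_advance_rate=("is_cash_advance", "mean"),
).reset_index()

# Illustrative default flags; in practice these come from performance data.
features["defaulted"] = [0, 0, 1]

# A simple behavior risk model on the derived variables.
model = LogisticRegression()
model.fit(features[["total_spend", "avg_ticket", "cash_advance_rate"]],
          features["defaulted"])
```

The same pattern extends to trend variables (spend growth, utilization change), which is where the early-warning benefit over static Masterfile snapshots typically comes from.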
With the advent of Big Data technologies, it has become far easier for banks to accumulate and access non-traditional data sources for analytic purposes. A multitude of non-traditional data assets, such as details of a customer’s interactions across the bank’s various product holdings, can be used to lift the predictive power of risk models. Similarly, where other options are not available, clickstream data can provide insight into a customer’s future risk.
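As one hypothetical illustration of turning clickstream data into a risk signal: the page names, the "stress page" taxonomy, and the customers below are all invented for the sketch; the point is only that raw page views can be reduced to a per-customer variable.

```python
import pandas as pd

# Hypothetical clickstream log: one row per page view on the bank's site.
clicks = pd.DataFrame({
    "customer_id": [10, 10, 10, 11, 11],
    "page": ["home", "hardship-help", "hardship-help", "home", "rewards"],
})

# Pages plausibly associated with financial stress (assumed taxonomy).
stress_pages = {"hardship-help", "payment-plan"}
clicks["stress_view"] = clicks["page"].isin(stress_pages).astype(int)

# Per-customer share of stress-related page views as a candidate risk input.
risk_signal = clicks.groupby("customer_id")["stress_view"].mean()
```

A variable like this would then be tested for predictive lift alongside the bank's traditional inputs rather than used on its own.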
Unfortunately, few banks have been able to benefit from these analytically novel data sources and build better predictive models with them. The reasons include the inability to:
- Easily derive predictive variables during model development,
- Deploy the variable generation in production without investing in prohibitively expensive IT, and
- Manage the variables centrally and consistently across all business decisions in the production environment.
Temporal behavior maps can derive thousands of powerful predictive variables from various data sources, including credit card transaction data, banking product data such as checking accounts, and clickstream data, to name a few. Out-of-the-box support for these different types of data sources makes it convenient to start working with them. Temporal behavior maps work in both the modeling and the production environment, so the same variables serve the model discovery process and, once the models are ready, deployment, reducing the time and cost of implementing and operationalizing models. Further, the use of a common underlying variable library allows variables to be managed consistently and regulatory requirements to be met conveniently. With temporal behavior maps, leveraging transactional and non-traditional data sources is not only easy and convenient but also significantly cuts down the time and cost of developing and deploying these models.
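The core mechanic behind variables like these can be sketched as trailing-time-window aggregations that are defined once and reused identically in development and production. The function and column names below are illustrative, not the product's actual API.

```python
import pandas as pd

# Hypothetical timestamped transactions for one data source.
txns = pd.DataFrame({
    "account_id": [1, 1, 1, 2, 2],
    "date": pd.to_datetime(["2024-01-05", "2024-02-20", "2024-03-10",
                            "2024-01-15", "2024-03-25"]),
    "amount": [100.0, 250.0, 80.0, 40.0, 60.0],
})

snapshot = pd.Timestamp("2024-03-31")  # scoring or observation date

def window_vars(df, days):
    """Derive spend variables over a trailing window ending at the snapshot."""
    recent = df[df["date"] >= snapshot - pd.Timedelta(days=days)]
    return recent.groupby("account_id").agg(
        **{f"spend_{days}d": ("amount", "sum"),
           f"txn_count_{days}d": ("amount", "count")})

# One variable set per window; a shared definition library keeps the
# modeling and production environments consistent.
features = window_vars(txns, 90).join(window_vars(txns, 30), how="left").fillna(0)
```

Multiplying a handful of aggregations by many windows, data sources, and transaction categories is how the variable count reaches into the thousands.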