Are your analytics delivering results?
The word “analytics” means different things to different people. Depending on the analytical maturity of your organization, analytics could mean reports on your performance, predictive models, or fully optimized analytic decisions.
No matter where you are on that spectrum, many organizations report that while they have many different analytical systems or models, they don’t know how well those systems are performing. Organizations often implement expert or predictive models expecting enhanced operational performance, but they don’t measure the results or assess whether the model is delivering the business value they needed and expected.
Measuring and tuning models is as important as implementing them. Without ongoing monitoring, models can fail to achieve the desired results. If you, as a leader of an organization, want to assess your analytics, there are a number of steps you can take.
1) Take an Inventory of your Analytics
One of the first challenges may be identifying all of the analytics in place within your organization, or even within a single department. Often there is no well-documented inventory of models. You may find that some of the analytics in use aren’t well understood, and that others you thought were in use are no longer being executed.
As part of this effort, document what each model was intended to do, whether it is still being used, who is using it, whether its performance is being measured, and what its current effectiveness is.
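The documentation step above can be sketched as a simple structured record per model. This is a minimal illustration, not a standard schema; the field names and the example model are assumptions for the sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelRecord:
    name: str
    intended_purpose: str     # what the model was built to do
    in_use: bool              # is it still being executed?
    owner: str                # who is using it
    measured: bool            # is its performance being tracked?
    accuracy: Optional[float] # latest effectiveness score, if known

# A one-entry inventory; the model name and owner are hypothetical.
inventory = [
    ModelRecord("self_resolve_predictor",
                "Predict whether a customer will self-resolve an issue",
                in_use=True, owner="Customer Care",
                measured=False, accuracy=None),
]

# Flag models whose effectiveness has never been measured.
unmeasured = [m.name for m in inventory if not m.measured]
print(unmeasured)
```

Even a flat list like this makes the gaps visible: any record with `measured=False` is a candidate for the accuracy review in the next step.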
2) Measure Analytical Accuracy
Even when an organization has a good inventory of its models, most don’t have an up-to-date measurement of each analytical tool’s effectiveness. For example, if a model is supposed to predict whether a customer will self-resolve an issue, an effectiveness score might show that the model made the correct prediction 89% of the time. Depending on the model, that may show that the model is at peak condition, or it could show that it has degraded from its expected performance.
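An effectiveness score of this kind is just the share of predictions that matched outcomes. A minimal sketch, using made-up data for a binary "will the customer self-resolve?" model:

```python
# Toy data: 1 = customer self-resolved, 0 = did not.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
actuals     = [1, 0, 0, 1, 0, 1, 0, 1, 1, 1]

# Accuracy = fraction of cases where the prediction matched reality.
correct = sum(p == a for p, a in zip(predictions, actuals))
accuracy = correct / len(actuals)
print(f"Model accuracy: {accuracy:.0%}")  # 80% on this toy sample
```

In practice you would compute this over a recent outcome window, not the data the model was trained on, so the score reflects current rather than historical performance.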
Traditional models were built at a point in time based on the data available then. They were static, meaning they did not change, and in fact degraded over time as the underlying business conditions changed. Models were often re-calibrated only infrequently, and in the interim they became less and less effective. In contrast, newer models utilize machine learning: the model uses its own results to self-calibrate and become more accurate over time.
If an organization has models that are more than a few years old and hasn’t measured their current accuracy, it is highly likely that their performance has degraded. That performance could be recovered through a tuning process, or the model could be improved by upgrading to one that self-calibrates using machine learning.
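A simple way to operationalize this check is to compare each model’s current accuracy against its deployment-time accuracy and flag any that have drifted beyond a tolerance. The function and thresholds below are illustrative assumptions, not a standard rule:

```python
def needs_tuning(expected: float, current: float, tolerance: float = 0.05) -> bool:
    """Flag a model whose measured accuracy trails its
    deployment-time accuracy by more than the tolerance."""
    return (expected - current) > tolerance

# Hypothetical numbers: the model shipped at 89% accuracy.
print(needs_tuning(expected=0.89, current=0.81))  # True: degraded by 8 points
print(needs_tuning(expected=0.89, current=0.87))  # False: within tolerance
```

The right tolerance depends on the business cost of a wrong prediction; a collections model and a marketing model would justify very different thresholds.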
3) Are your models being used as intended?
Models are predictors of future events. However, if the predicted event does not match the business process you are executing, the model is unlikely to achieve its designed result.
One good example is when an organization uses a credit score to prioritize its delinquent collection cases. A credit score is built to predict which people are most likely to repay a specific grant of credit (credit card, loan, auto, etc.). It is not designed to predict which individuals within a specific subset, those who have already become delinquent, are most likely to repay their debt. While it would likely be more predictive than a coin flip, it is unlikely to produce the results a purpose-built model could achieve. Similarly, a collections repayment model built for one client or one debt type may not achieve the same level of precision with a different debt pool.
Another example shows how a model can deliver less than the expected results. Say an organization has a model built to predict whether a brand-new collection case will result in payment in full within 60 days. Operationally, if the case management system holds a low-risk case for only 30 days, the model won’t achieve maximum effectiveness. The model identifies cases that will pay during days 31-60 (in addition to days 1-30), but those cases aren’t being given time to resolve themselves. Either the model needs to be adjusted to predict payment within 30 days, or the case management system needs to hold low-risk cases for 60 days. If the two are not in sync, the model predictably under-performs.
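The mismatch above can be made concrete with a few hypothetical numbers: count how many of the model’s predicted payers actually pay while the case is still being held.

```python
def payments_captured(payment_days, holding_period):
    """Count predicted-positive cases that actually pay
    before the case is released from the work queue."""
    return sum(1 for d in payment_days if d <= holding_period)

# Illustrative days-to-payment for cases the 60-day model
# flagged as likely to pay in full.
payment_days = [5, 12, 25, 33, 41, 58]

print(payments_captured(payment_days, 30))  # 3: a 30-day hold misses half
print(payments_captured(payment_days, 60))  # 6: the hold matches the model's window
```

In this toy example, half of the payments the model correctly anticipated arrive after day 30, so a 30-day hold makes a well-calibrated 60-day model look like it is under-performing.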
Many organizations that have made significant investments in analytics view those investments as one-time projects. However, analytics need continual monitoring and tuning. If your organization hasn’t reviewed its analytics recently, a small project to review them and assess their current accuracy and usage will help you assure you are achieving your goals, and will point to areas for improvement.
Where your analytics have degraded over time, you can tune your models to improve their performance, or you can upgrade to models that use machine learning, which feed their own performance back into the model to maximize effectiveness over time. The following graphic shows the FICO approach to continual learning in analytic management.