An old colleague of mine, John Parkinson (hi, John), writes for CIO Insight and recently posted this article on Decision-Support Systems: Lessons from the Military. I remember John telling me this story years ago, and his column on it made me think about the differences between decision-support and decision-automation.
Using the example of a military flight simulator - aimed, as he puts it, at "a tiny proportion of the human ability spectrum and extraordinarily well-educated and capable people" - he shows how you can create situations where "the next piece of information - even if it's useful or even vital - can degrade decision-making". It seems to me that this is often a problem in the process most organizations follow when deciding how to solve a problem using their data. All too often the question asked is "how can I give this person more or better data?" when, as John points out, this may well not help. Instead, I would argue that the right question is "how does the organization make better decisions?" The answer to this broader question might be more data, or data analyzed differently, or it might be a change to the whole decision process.
John goes on to say (among other things) that "recording, analyzing and replaying decisions helps improve decision-making capability". This is, to my mind, not only completely true but also a great driver for using business rules management systems to automate decisions. One of the great features of a business rules management system is that, because rules are atomic and either fire (take the defined action) or do not, each decision instance can log exactly which rules fired. These rule logs are definitive and can be readily analyzed, making improvement of the decision something that can be approached systematically.
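The rule-log idea can be sketched in a few lines of Python. The `Applicant` record and the rule names here are invented for illustration - a real business rules management system provides far richer authoring, execution, and logging tooling - but the core pattern is the same: every rule is atomic, and the decision instance records exactly which rules fired.

```python
# Minimal sketch of atomic rules with a per-decision rule log.
# Applicant and the rule names are illustrative, not any vendor's API.
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    credit_score: int

# Each rule is atomic: given the facts, it either fires (True) or does not.
RULES = {
    "underage": lambda a: a.age < 18,
    "thin_credit_file": lambda a: a.credit_score == 0,
    "low_credit_score": lambda a: 0 < a.credit_score < 600,
}

def decide(applicant):
    """Evaluate every rule and log exactly which ones fired."""
    fired = [name for name, rule in RULES.items() if rule(applicant)]
    decision = "decline" if fired else "approve"
    return decision, fired  # the fired list is the definitive rule log

decision, log = decide(Applicant(age=17, credit_score=550))
# decision is "decline"; log shows both rules that fired, ready for analysis
```

Because the log is just data, replaying and analyzing past decisions - John's point above - becomes a query over rule logs rather than an archaeology exercise.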
John also says that "automation of routine decisions helps almost all the time" and points out that any automated decision needs ways to detect when it is going badly or trending poorly. It is also pretty clear that it is worth having some percentage of cases where the automation does not attempt to reach a conclusion but instead refers the decision for manual review, ideally with strong context as to why it is being referred. One of the ways in which including predictive analytics in rules-based decisioning systems really helps is in addressing ambiguity - cases where there is uncertainty. When I am deciding about an insurance policy, there is uncertainty as to how risky a driver you are. When I am deciding which cross-sell offer to make, there is uncertainty about how you will respond. Building predictive models that assess how likely these outcomes are can reduce the number of cases where you must defer to manual decision-making.
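This pattern - automate the clear cases, refer the ambiguous middle with context - can be sketched as a score with two thresholds. The `score_risk` function and its cut-offs below are invented stand-ins for a real predictive model, not any particular product's scoring:

```python
# Hedged sketch: a predictive score shrinks the manual-review band.
# score_risk and the thresholds are illustrative assumptions.

def score_risk(driver):
    """Stand-in for a predictive model: returns an estimated risk in [0, 1]."""
    base = 0.10
    base += 0.25 if driver["age"] < 25 else 0.0
    base += 0.20 if driver["prior_claims"] > 0 else 0.0
    base += 0.30 if driver.get("dui", False) else 0.0
    return min(base, 1.0)

def decide_policy(driver, auto_accept=0.15, auto_decline=0.60):
    """Automate the clear cases; refer the ambiguous middle for review."""
    p = score_risk(driver)
    if p <= auto_accept:
        return ("accept", p, None)
    if p >= auto_decline:
        return ("decline", p, None)
    # Ambiguous: refer to a person, passing the score as context for why.
    reason = f"risk score {p:.2f} is between {auto_accept} and {auto_decline}"
    return ("refer", p, reason)
```

A better model narrows the middle band: the thresholds can be moved closer together as the score becomes more trustworthy, so fewer cases are deferred to manual decision-making - which is exactly the payoff of pairing predictive analytics with rules.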
Decision-support is different from decision-automation, but the two use many of the same skills and approaches. They are highly complementary - and they blur together for most of the really interesting problems.