Decisions, information, decoupling and automation

Andrew McAfee posted The Great Decoupling last week and Ross Mayfield followed up with Decoupling Decision Rights and Decentralization. These posts discussed the decoupling of information from decision-making. They assert that the decreasing cost of moving information around an organization (and the digital nature of almost all information) means that organizations can have whoever they like make a decision and simply deliver the information to them. While I don't necessarily disagree with this assessment, I think the basic problem I discussed in those who know first win - NOT still applies. It is the decision that matters - as Ross says, "Information has no value until it informs a decision that results in an outcome". Thus I believe organizations should think about how to get decisions (or at least candidate decisions) to people, not just information. Turning information into decisions typically means making predictions and assessments from the information (predictive analytics) and then applying regulations, expert judgment and policies (rules). Automating decisions in this way has a number of benefits.
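The prediction-plus-rules pattern above can be sketched in a few lines. This is a minimal, hypothetical illustration only - the scoring formula, field names and thresholds are all invented for the example, not drawn from any real lending system:

```python
def credit_score(applicant):
    """Toy stand-in for a predictive model producing a risk score."""
    score = 600
    score += min(applicant["years_employed"], 10) * 15  # stability raises the score
    score -= applicant["missed_payments"] * 40          # delinquency lowers it
    return score

def decide(applicant):
    """Apply policy rules to the prediction to produce a decision."""
    score = credit_score(applicant)
    if applicant["missed_payments"] > 6:   # hypothetical policy rule:
        return "refer"                     # route to a human underwriter
    if score >= 700:
        return "approve"
    if score >= 620:
        return "refer"
    return "decline"

print(decide({"years_employed": 8, "missed_payments": 0}))  # approve
```

The point of separating the two functions is the point of the post: the predictive part and the rules part can each be checked, audited and improved independently, while whoever faces the customer simply delivers the answer.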

Firstly, compliance is easier with automation than with a central group (which is in turn easier than with decentralized decision-making). A single automated decision service can be checked for compliance with regulations and policies much more easily than the behavior of a group of people. Secondly, an automated decision means I can decentralize decision delivery, as anyone can deliver the answer: I can empower the most junior, most front-line person to make a decision because I have automated it. Automation also allows for more systematic improvement of decisions through adaptive control and controlled experimentation.
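Controlled experimentation of this kind is often done champion/challenger style: a small share of decisions is routed to an alternative strategy so its outcomes can be measured against the incumbent's. A minimal sketch, with invented strategies and thresholds:

```python
import random

def champion(applicant):
    """Incumbent decision strategy (hypothetical threshold)."""
    return "approve" if applicant["score"] >= 700 else "decline"

def challenger(applicant):
    """Candidate strategy under test (slightly looser threshold)."""
    return "approve" if applicant["score"] >= 680 else "decline"

def decide(applicant, challenger_share=0.1, rng=random):
    """Route a small share of traffic to the challenger, tagging each
    decision with the strategy that made it so outcomes can be compared."""
    if rng.random() < challenger_share:
        return "challenger", challenger(applicant)
    return "champion", champion(applicant)
```

Because every decision is tagged with the strategy that produced it, the organization can later compare default rates or customer outcomes between the two and promote the challenger if it wins - the systematic improvement loop the paragraph above describes.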

Ross expressed a worry in his post:

By taking the social interaction out of the hands of the mortgage officer who is closer to the actual customer and in a position to assess different kinds of risks beyond the FICO score. And perhaps worse in the long term, it dehumanizes the organization's capability to develop a relationship with the customer.

But people's emotional reactions to others can be unhelpful - consider a pre-FICO-score example: a woman of color trying to get a mortgage from a white loan officer relying on his sense of risk. He probably would not make an accurate assessment because of his reaction to her sex and color. Some time ago I reviewed Malcolm Gladwell's Blink and noted an old Fair Isaac campaign, "Good credit doesn't necessarily wear a suit and tie". Relying on a social interaction is not automatically helpful - it is just as likely to mislead and reduce the quality of decision-making as to enhance it.

Unstructured, ad-hoc exception handling might be better done by people, but repeatable business decisions can be automated very effectively. Automation and the delivery of decisions, not just information, need not eliminate all judgment. Done right, it allows the person facing the customer to focus on the relationship, not on running the numbers. It also means that an organization, even one subject to strict regulations and tight risk management policies, can ensure its front-line staff can act to help their customers, not simply refer them up the line (something I discuss in gethuman or not, in the context of whether customers want to talk to systems or people).

