Decision Fundamentals: Making Analytics More Accessible

This blog is the final post in this series covering my FICO World keynote presentation, sharing the five steps organizations can take to improve their operational decision-making.

1. Capturing subject matter expertise
2. Intelligent solution creation
3. Faster insight to execution
4. Building institutional memory
5. Greater analytic accessibility

In this blog I’ll cover the topic of greater analytic accessibility, which seems like it should be easy to achieve these days but remains elusive.

In the age of the customer, Big Data is the key to intimately understanding customers’ desires and translating those desires into products, services and personalized experiences. This kind of differentiation is critical to companies’ achieving a competitive advantage.

The challenge of Big Data is extracting meaningful, timely and actionable information from it at a reasonable cost. The complexities of doing so mean that two-thirds of businesses derive little or no tangible benefit from the information they hold. Only 4% of companies extract the full value from their data. In fact, Gartner predicts 60% of all Big Data projects will fail.

Too many choices make decisions difficult

The key reason for this high failure rate? The lack of clear business objectives. It is too easy for business users, and even data scientists, to get lost in the overwhelming amounts of data at hand, the multitude of different algorithms available, and the possibilities behind every interesting observation that needs further investigation. Ultimately, they can lose sight of the big picture: These new insights are supposed to propel the business forward.

The only practical way to continuously mine value from Big Data and translate that value into business benefits is to adopt a top-down approach: start with the decisions you want to make, and then determine what data is needed. This is the opposite of a bottom-up approach, where you collect the data first and then ask what decisions you can make. Top-down is far more likely to achieve the desired business goals. We must empower business analysts to apply their acumen to the world of Big Data analytics without the bottleneck of limited data science resources, and the top-down methodology enables this empowerment.

Let’s talk about an example: a business analyst working on a decision strategy for onboarding new wireless customers. As I discussed in my first post in this series, Capturing Subject Matter Expertise, this business analyst can leverage Decision Model and Notation (DMN) to build a requirements document specifying all the data, rules and decisions required for the new strategy. In the past they had to work intensively with IT or a data scientist to help build and execute that strategy, but with FICO’s Decision Management Suite we see a better way.

The Spark of creativity

I am going to digress for a minute about one of the most exciting technologies developed in recent years, Apache Spark, before we return to our business analyst. Spark is the most active open source project in data processing. It was developed in response to limitations in the MapReduce cluster computing paradigm, which you may have heard people refer to simply as “Hadoop.” Because Spark can keep data in memory, distributed across the cluster, rather than writing intermediate results to disk between steps, it can execute some workloads up to 100 times faster than MapReduce.
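To see why in-memory processing matters, here is a toy illustration in plain Python (not actual Spark code): an iterative job that makes several passes over the same data. A MapReduce-style pipeline re-reads the data from disk on every pass, while a Spark-style pipeline loads it once and reuses the cached copy, much like calling `.cache()` on an RDD or DataFrame.

```python
import os
import tempfile

def load_from_disk(path):
    """Simulates reading a dataset from HDFS/disk."""
    with open(path) as f:
        return [int(line) for line in f]

# Write a small sample "dataset" to disk.
path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "w") as f:
    f.write("\n".join(str(i) for i in range(10_000)))

def iterative_job_disk(path, passes=3):
    """MapReduce-style: every pass goes back to disk for the data."""
    total = 0
    for _ in range(passes):
        data = load_from_disk(path)   # repeated I/O is the bottleneck
        total += sum(x * x for x in data)
    return total

def iterative_job_cached(path, passes=3):
    """Spark-style: read once, keep the working set resident in memory."""
    data = load_from_disk(path)       # one read, reused across passes
    return sum(sum(x * x for x in data) for _ in range(passes))

# Both approaches compute the same answer; only the I/O pattern differs.
assert iterative_job_disk(path) == iterative_job_cached(path)
```

On a real cluster the disk reads happen over a distributed file system across many machines, so eliminating them per iteration is where the large speedups come from.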

We have been using Spark with our customers for a while. We helped a major US bank cut the processing time on its 20 million prospects from 48 hours to just 4.8 hours. Our cloud Spark implementation not only delivered this 10x speedup, but did so at a fraction of the previous cost of dedicated hardware.

In addition, Spark is especially interesting because it can be used for both analytic discovery on Big Data, and for massively scalable distributed execution of analytics in production.

FICO has built a Spark abstraction layer, enabling two important capabilities:

  • Business users can perform Big Data analysis.
  • More technical business users can run massively scalable analytic executions with little or no intervention from IT.
This is a huge breakthrough for our customers. Going back to our example, the abstraction layer we have built on Spark allows the business analyst to take their decision requirement document, and convert it into an executable (technically a Spark Directed Acyclic Graph). In other words, the business analyst can build the document with user-friendly tools like FICO® DMN Modeler, push those changes to an approval queue, and with the right approvals then execute the decisions related to the wireless onboarding strategy on a massively scalable execution engine.
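To make the wireless onboarding example concrete, here is a hypothetical sketch in Python of the kind of decision-table logic a business analyst might specify in a DMN requirements document. The field names and rule thresholds are invented for illustration; FICO’s tooling compiles such a model into a Spark job rather than evaluating it row by row like this.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    credit_score: int
    months_at_address: int
    has_prior_delinquency: bool

def onboarding_decision(a: Applicant) -> str:
    """Evaluate decision-table rules top to bottom; first match wins
    (DMN's "first" hit policy)."""
    if a.has_prior_delinquency:
        return "REFER"                      # route to manual review
    if a.credit_score >= 700:
        return "APPROVE_POSTPAID"
    if a.credit_score >= 600 and a.months_at_address >= 12:
        return "APPROVE_WITH_DEPOSIT"
    return "OFFER_PREPAID"

# Applied to millions of prospects, the same table is conceptually just
# a map over a distributed dataset: prospects.map(onboarding_decision)
decision = onboarding_decision(Applicant(720, 6, False))
```

The point of the abstraction layer is that the analyst authors only the table above, in business terms, and the platform handles turning it into a distributed execution plan.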

FICO is putting together the best of man and the best of machine to deliver advanced analytics developed in conjunction with you. The combination of FICO® Decision Management Suite 2.0 and Spark brings the power of world-class decision management and Big Data together, and puts it at the fingertips of business users…delivering greater analytic accessibility.

You can learn more about the new capabilities of Decision Management Suite 2.0, and how it supports Apache Spark for lightning-fast Big Data processing, here.
