As advanced analytics permeated nearly every industry in 2018, FICO’s thought leaders continued to push the discipline into new areas. Here were the top 5 posts in the Analytics & Optimization category last year.
Mathematical optimization, or prescriptive analytics, applies robust science to decision strategies in order to reach the best outcome. Horia Tipi noted a breakthrough for operations researchers and data scientists worldwide:
Last week we announced that FICO Xpress Mosel, the leading optimization modeling, analytic orchestration, and programming language, is now open and available to everyone free of charge. From the boardroom to the classroom, anyone can now create optimization models to solve problems more efficiently and make better business decisions based on data.
FICO Xpress Mosel is available by downloading the FICO Xpress Community License.
With FICO Xpress Mosel, organizations can create optimization models that can solve bigger problems more efficiently, design solutions faster, and make better decisions in virtually any business scenario. In addition to its modeling, solving and programming features, FICO Xpress Mosel also supports the orchestration and execution of analytic models built in virtually any tool.
Whether a problem needs solving in milliseconds, requires a vast array of cloud computing resources, or has to solve for hundreds of millions of decision variables, Mosel is there to meet the challenge. For example, Southwest Airlines has been using FICO Xpress, including Xpress Mosel, for years to handle some of its biggest, most critical business problems.
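To make "prescriptive analytics" concrete, here is a toy production-planning decision solved by brute force in plain Python. This is only an illustrative sketch with invented numbers, not Xpress Mosel code; a real solver handles millions of variables, where enumeration like this would be hopeless.

```python
from itertools import product

# Hypothetical decision: how many units of products A and B (0..50 each)
# to make, maximizing profit under a shared machine-hour capacity.
# Product A: profit 40, uses 2 hours; Product B: profit 30, uses 1 hour.
CAPACITY = 100  # total machine-hours available

def best_plan():
    """Enumerate every feasible plan and keep the most profitable one."""
    best = (0, (0, 0))
    for a, b in product(range(51), repeat=2):
        if 2 * a + 1 * b <= CAPACITY:          # capacity constraint
            profit = 40 * a + 30 * b
            if profit > best[0]:
                best = (profit, (a, b))
    return best

profit, (units_a, units_b) = best_plan()
```

Note that the optimum fills capacity with the product earning more per machine-hour first, which is exactly the kind of trade-off an optimization model formalizes.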
How are analytics helping banks accelerate their digital transformation, while keeping customers at the center of their strategies? Manish Pathak explored a roadmap for success.
Financial institutions understand the need to tailor experiences to individual needs and personalize their interactions. In fact, more than half (55%) of bankers plan to increase spending on customer experience initiatives [CSI], and nearly 80% consider it important to deliver guidance to customers in real-time [The Financial Brand]. Currently, though, only about 20% of financial institutions are delivering more than basic personalization [Digital Banking Report/Everage]; clearly, there is still a significant gap to fill.
We’ve identified three key imperatives financial institutions need to address to deliver personalized, real-time experiences.
#1 Focus on Data Gathering and Consolidation
All of these multichannel, “always on” interactions that we’ve referenced generate large amounts of data, which is typically captured in various sources such as CRM applications, transactional data stores, disparate account management data stores, etc. Each data source provides a fragmented image of a consumer—a glimpse into one of the personas. Financial institutions need to bring these data sources together to create a comprehensive profile of a consumer, rather than a series of disconnected account holders across platforms and lines of business. By connecting the digital clues and gaining a single customer view, it’s then possible to interpret and anticipate future needs. Currently, according to Forrester, only 0.5% of all generated data is analyzed. Richard Joyce, a Senior Analyst at Forrester, says, “Just a 10% increase in data accessibility will result in more than $65 million additional net income for a typical Fortune 1000 company.” These statistics are striking, and they point to a clear-cut target at which the industry must aim.
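The single-customer-view idea can be sketched as a simple merge of per-system records keyed by customer ID. All field names and values here are invented for illustration; real consolidation involves identity resolution across inconsistent keys, which this sketch ignores.

```python
# Fragmented views of one customer across hypothetical systems.
crm = {"C001": {"name": "Ana Diaz", "segment": "retail"}}
transactions = {"C001": {"last_txn_amount": 84.50, "txn_count_30d": 12}}
accounts = {"C001": {"products": ["checking", "credit_card"]}}

def unified_profile(customer_id, *sources):
    """Merge each system's fragment into one comprehensive profile."""
    profile = {"customer_id": customer_id}
    for source in sources:
        profile.update(source.get(customer_id, {}))
    return profile

view = unified_profile("C001", crm, transactions, accounts)
```

The merged profile lets downstream analytics see one consumer instead of three disconnected account holders.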
#2 Build Powerful Analytic Engines that Predict and Prescribe
Personalization is not a matter of simply gathering data, but also acting on that data. And the hard truth is that anticipating customer needs requires powerful analytics engines that were once thought of as “nice-to-have.” However, only about 55% of organizations were expected to increase budgets for data analytics in 2017 [The Financial Brand]. Machine learning and predictive analytics are necessary to optimize how financial institutions market to digital consumers and should no longer be considered “optional.”
#3 Deliver Data-Driven, Highly Personalized Customer Experiences
Having covered the need to gather and analyze data, our third imperative is about putting it all together into a cohesive system. Because digital consumers demand tailored, contextualized, interactive dialogues, marketing workflows need to synthesize and coordinate inbound requests for information with the appropriate outbound messaging. While certain modular tools can provide short-term fixes to these challenges, financial institutions will ultimately need to undergo organizational changes that break down silos and connect systems, thereby improving data flow and knowledge sharing.
With machine learning and AI the focus of many firms, FICO Chief Analytics Officer Scott Zoldi went deep into how these technologies work. Here’s an excerpt:
Most of the well-known applications of machine learning and computational AI involve supervised learning. The modeler amasses a vast set of existing data (e.g., financial transactions, internet photographs, or the texts of tweets) and a base-level “ground truth” outcome that is already known, perhaps in retrospect or by expensive human investigation.
Equipped with any number of computational algorithms, the scientist becomes the “supervisor” whose code trains the model to reproduce, in the lab, the known outcomes with a low probability of error. The models are then deployed to live a happy life scoring credit risk and fraud likelihood, finding pictures of Chihuahuas and muffins, or flagging insulting tweets. Technically, each model computes a probabilistically weighted predicted outcome that we believe to be like those outcomes from the training examples. The state of the art for supervised learning is now well established; you can choose from dozens of comprehensive predictive analytics and neural network packages.
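As a minimal illustration of supervised learning as described above, the sketch below "trains" a one-feature threshold classifier to reproduce known fraud/non-fraud tags with the fewest errors. The data and feature are invented; real models use thousands of features and far richer algorithms.

```python
# Tagged training examples: (transaction amount, tag), tag 1 = fraud.
data = [(20, 0), (35, 0), (50, 0), (400, 1), (650, 1), (900, 1)]

def train_threshold(examples):
    """Pick the cutoff that misclassifies the fewest tagged examples —
    the 'supervisor' optimizing against known ground-truth outcomes."""
    candidates = sorted(x for x, _ in examples)
    best_t, best_err = None, len(examples) + 1
    for t in candidates:
        errors = sum((x >= t) != bool(y) for x, y in examples)
        if errors < best_err:
            best_t, best_err = t, errors
    return best_t, best_err

threshold, training_errors = train_threshold(data)
```

Once trained, the threshold is deployed to score new, unseen transactions, exactly the lab-to-production pattern the excerpt describes.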
But what if there is no set of “true outcomes” known, or the ones at hand are restricted in quality or quantity? What can machine learning do for us then? This is the domain of the far trickier unsupervised learning, which draws inferences in the absence of outcomes.
Good unsupervised learning requires more care, judgment and experience than supervised learning, because there is no clear, mathematically representable goal for the computer to blindly optimize without understanding the underlying domain.
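A classic unsupervised technique is clustering: inferring structure from data that carries no outcome labels at all. The sketch below is a minimal one-dimensional k-means with two clusters on invented data, chosen only to show the idea of drawing inferences without ground truth.

```python
# Unlabeled observations; no "true outcomes" are provided.
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]

def kmeans2(xs, iters=10):
    """Alternate between assigning points to the nearest center and
    recomputing each center as its group's mean (k = 2)."""
    c1, c2 = min(xs), max(xs)          # simple initialization
    for _ in range(iters):
        g1 = [x for x in xs if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in xs if abs(x - c1) > abs(x - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted([c1, c2])

centers = kmeans2(points)
```

The algorithm discovers the two natural groupings on its own; the practitioner's judgment enters in choosing k, the features, and the distance measure.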
It’s not just banks or commercial operations that can benefit from advanced analytics. Ted London discussed how analytics can be applied in the public sector.
There are significant opportunities for government and higher education institutions to reduce their procurement and travel expenses using predictive analytics. Historically, building analytical models had been a challenge due to the complexities of analyzing data across the entire procure-to-pay cycle. Data is often disjointed across ERP, Procurement, Travel, and P-Card systems. Even when data is available, it is often spread across multiple tables within complex databases. Also, once data is extracted, it is stored in different formats, and it can require significant manual manipulation.
However, new tools are in place that can now automatically consolidate this data, and analytics can provide valuable insights to reduce costs and risk to organizations. Through this risk modeling, waste, fraud and abuse can be found and corrected before any financial outlays take place, saving millions of dollars per year. Here are five ways to save money every government department should know about.
1) Use Scores To Measure the Risk
2) Use Analytics to Reduce PO Leakage
3) Redirect staff to higher value work
4) Prevent Social Engineering Fraud
5) Reduce Duplicate Invoice Payments
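As one concrete illustration of the last item, a minimal duplicate-payment check might simply flag invoices that repeat the same vendor, invoice number, and amount. The schema and values below are hypothetical, and real systems also catch near-duplicates (transposed digits, reissued invoice numbers), which this sketch does not.

```python
from collections import Counter

# Hypothetical invoice records: (vendor, invoice number, amount).
invoices = [
    ("VEND-01", "INV-1001", 2500.00),
    ("VEND-02", "INV-2001", 310.75),
    ("VEND-01", "INV-1001", 2500.00),   # exact repeat: likely double payment
]

def duplicate_payments(rows):
    """Return each exactly repeated invoice and how often it appears."""
    counts = Counter(rows)
    return [(row, n) for row, n in counts.items() if n > 1]

dupes = duplicate_payments(invoices)
```

Catching such repeats before disbursement is where the savings come from: the payment is blocked rather than clawed back.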
Author TJ Horan, FICO vice president for fraud solutions, wrote a five-part series on the keys to using AI and machine learning in fraud detection. In the first post, TJ discussed the use of supervised and unsupervised models.
Because organized crime schemes are so sophisticated and quick to adapt, defense strategies based on any single, one-size-fits-all analytic technique will produce sub-par results. Each use case should be supported by expertly crafted anomaly detection techniques that are optimal for the problem at hand. As a result, both supervised and unsupervised models play important roles in fraud detection and must be woven into comprehensive, next-generation fraud strategies.
A supervised model, the most common form of machine learning across all disciplines, is trained on a rich set of properly “tagged” transactions. Each transaction is tagged as either fraud or non-fraud. The models are trained by ingesting massive amounts of tagged transaction details in order to learn patterns that separate fraudulent from legitimate behavior. When developing a supervised model, the amount of clean, relevant training data is directly correlated with model accuracy.
Unsupervised models are designed to spot anomalous behavior in cases where tagged transaction data is relatively thin or non-existent. In these cases, a form of self-learning must be employed to surface patterns in the data that are invisible to other forms of analytics.
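One simple form of such self-learning is a statistical anomaly rule: flag transactions whose amounts deviate sharply from the population, with no fraud tags required. The data and the z-score cutoff below are illustrative assumptions; production unsupervised models are far more sophisticated than a single-feature outlier test.

```python
from statistics import mean, stdev

# Unlabeled transaction amounts; no fraud/non-fraud tags exist.
amounts = [21.0, 34.0, 29.0, 25.0, 31.0, 27.0, 980.0]

def anomalies(xs, z_cut=2.0):
    """Flag values more than z_cut standard deviations from the mean."""
    m, s = mean(xs), stdev(xs)
    return [x for x in xs if abs(x - m) / s > z_cut]

flagged = anomalies(amounts)
```

The model "learns" what normal looks like directly from the data stream and surfaces whatever fails to fit it.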
Follow this blog to see where the analytics industry is heading in 2019.