Explainability in Fraud Management — What to Focus On Now

While it may seem like an esoteric topic, explainability in fraud management is important – and it’s especially relevant when it comes to using AI and machine learning in fraud detection and prevention.

At the core, explainability in fraud management means understanding the methods used, how the systems work, and how data points inform decisioning. In other words, a fraud management system isn’t a black box. There are several ways to think about explainability in fraud management.

My Take on Explainability

For financial and fraud professionals, probably the easiest way to wrap our heads around explainability is at the golf course level—that is, when I’m out on the golf course and playing with someone new, and they ask me what I do (for a living, not when knee-deep in a bunker).

In this scenario I have about a minute to explain my job to someone outside the payments industry, otherwise their inquisitiveness is quickly replaced with a glazed look. To communicate effectively, I’ve got to understand how the payments industry works, how fraud is fought within it, and the impact I try to make. The key is really the elevator pitch of what I do; I understand how I fit into the ecosystem and what my professional purpose is. Clearly, there is more to my professional responsibilities than what I can explain in a few simple sentences, but it all starts with having an understanding of what I’m doing and why.  

Explaining Explainability

Stepping back from the golf course bunker-side chat, the task of explaining explainability in fraud management reminds me of presentations I gave in the late 1990s about demystifying neural networks. I had to be able to explain what neural networks do and how they work, and to make sure people understood how and where machine learning technology is applied.

This is the essence of explainability in fraud prevention; we need to understand how fraud systems work, how their decisioning models are trained, and how they are fueled by the right data to provide real-time context. It’s a topic I addressed in my recent conversation with Siddhartha Dash, Research Director at Chartis Research.


Effective Fraud Management Requires Explainable Technology

To kick off the topic, Sid and I considered the essential questions of any fraud and risk management operation: Which technologies, techniques and methods will best work for the business? Proprietary or shared fraud detection models? AI systems that include models with supervised or unsupervised learning? We then talked through what companies need to be looking for in detection technology and analytics, and explored two related elements: flexibility and explainability.

Flexibility is the ability to use the right pieces of the fraud management solution (models, data ingestion abilities, extensibility and scalability) to support the specific line of business and its unique problem set. For example, real-time card transaction monitoring requires one set of capabilities; application fraud detection and prevention requires another.
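As a loose sketch of what that flexibility can look like in practice, consider a per-line-of-business profile that selects the models, data feeds and latency budget each problem needs. The capability names and values below are purely illustrative assumptions, not a description of any particular product configuration.

```python
# Illustrative capability profiles per line of business (hypothetical names and values).
FRAUD_PROFILES = {
    "card_transactions": {
        "models": ["realtime_card_model"],
        "data_feeds": ["authorization_stream", "device_signals"],
        "latency_budget_ms": 50,     # decision must fit inside the authorization flow
    },
    "account_applications": {
        "models": ["application_fraud_model", "identity_check_model"],
        "data_feeds": ["bureau_data", "application_form"],
        "latency_budget_ms": 2000,   # near-real-time is acceptable for applications
    },
}

def profile_for(line_of_business: str) -> dict:
    """Return the detection profile configured for a line of business."""
    return FRAUD_PROFILES[line_of_business]

print(profile_for("card_transactions")["latency_budget_ms"])
```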

Of explainability, Sid said, “Explainability is really the ability to understand. The fraud detection solution needs to give a framework that will let you extract information on how the fraud system is working. Is it a ‘closed box’ system, or can you figure out what it is doing? You need to be able to understand the data. That’s a simplistic way of describing explainability. There are many parameters around that—but fundamentally can you understand it?”

He continued, “With an enterprise fraud management system, the first facet of explainability is being able to understand and explain it to others, both internally and externally. The second facet is being able to understand why and how the systems are doing what they do, so that you can respond to the problems encountered across different channels and experiences.”
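To make that second facet concrete, here is a minimal, generic sketch of how a scoring system can attach human-readable reason codes to a fraud score. This is an illustration only, not FICO's implementation; the feature names, weights and values are made-up assumptions.

```python
import math

# Illustrative weights for a simple logistic fraud scorer
# (purely hypothetical values, not a real model).
WEIGHTS = {
    "amount_zscore": 1.8,       # how unusual the amount is for this cardholder
    "new_merchant": 0.9,        # first time this merchant is seen for the card
    "foreign_country": 1.2,     # transaction outside the home country
    "txn_velocity_1h": 1.5,     # transactions in the last hour vs. baseline
}
BIAS = -4.0

def score_with_reasons(features: dict, top_n: int = 2):
    """Return a fraud score in [0, 1] plus the top contributing features.

    Each feature's contribution is weight * value; the largest positive
    contributions become human-readable reason codes.
    """
    contributions = {name: WEIGHTS[name] * features.get(name, 0.0)
                     for name in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    score = 1.0 / (1.0 + math.exp(-logit))   # logistic squashing to [0, 1]
    reasons = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    return score, [name for name, contrib in reasons if contrib > 0]

# Example: an unusually large, high-velocity foreign transaction.
score, reasons = score_with_reasons({
    "amount_zscore": 2.5,
    "new_merchant": 1.0,
    "foreign_country": 1.0,
    "txn_velocity_1h": 1.4,
})
print(f"fraud score = {score:.2f}, reasons = {reasons}")
```

The point is simply that the same calculation that produces the score also tells you which inputs drove it, which is what makes the decision explainable to an analyst, an auditor or a regulator.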

Changing Fraud Patterns Require an Explainable Solution

For example, in some instances there may be individual high-dollar transactions from a customer that are potentially valuable to the organization. Those may call for a different level of detection, scrutiny and trade-offs than assessing a high volume of low-dollar transactions from another customer. “Obviously fraud is fraud, and consumer experience is consumer experience,” I said, “but some of those different transactions in different siloed areas will have different sets of metrics and analytic techniques behind them due to the nature of the transaction type, and the risk behind each problem.”

Sid then illustrated, “This is why a fraud management system needs to deliver both facets of explainability — when people outside of your part of the organization ask how you are going to address the problems you’re chartered to take control over,” you need to have a high-level answer. At a more granular and operational level, financial and payments professionals need to be able to understand how the fraud management system works in order to apply the appropriate capabilities to different fraud types.

“Fraud changes all the time — it’s a given, along with death and taxes,” Sid concluded. “Can your fraud management shift along with those changes?” That’s the ultimate test of flexibility and explainability.
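To put some numbers behind the earlier point about high-dollar versus low-dollar portfolios, here is a hedged sketch with made-up figures of how the headline metric can differ: a portfolio dominated by a few large frauds is often judged by value detection rate, while a high-volume, low-dollar stream is more naturally judged by transaction detection rate and false-positive ratio.

```python
# Hypothetical labelled transactions: (amount, is_fraud, flagged_by_system)
transactions = [
    (9500.0, True,  True),   # high-dollar fraud, caught
    (8200.0, True,  False),  # high-dollar fraud, missed
    (35.0,   True,  True),   # low-dollar fraud, caught
    (20.0,   True,  True),
    (15.0,   False, True),   # legitimate but flagged (false positive)
    (60.0,   False, False),
]

fraud = [t for t in transactions if t[1]]
caught = [t for t in fraud if t[2]]

# Transaction detection rate: share of fraudulent transactions caught.
tdr = len(caught) / len(fraud)

# Value detection rate: share of fraudulent *dollars* caught.
vdr = sum(t[0] for t in caught) / sum(t[0] for t in fraud)

# False-positive ratio: legitimate alerts per fraudulent alert.
alerts = [t for t in transactions if t[2]]
fp_ratio = sum(1 for t in alerts if not t[1]) / max(1, sum(1 for t in alerts if t[1]))

print(f"transaction detection rate: {tdr:.0%}")
print(f"value detection rate:       {vdr:.0%}")
print(f"false-positive ratio:       {fp_ratio:.2f}:1")
```

In this toy example the system catches three of four fraudulent transactions but only about half of the fraudulent dollars, which is exactly the kind of trade-off that looks different depending on which line of business you are protecting.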

FICO Solutions Incorporate Patented Explainability Features

FICO’s Chief Analytics Officer, Dr. Scott Zoldi, has been a relentless advocate not just for why companies need Explainable Artificial Intelligence, but also for the data science of how to actually achieve it.

FICO’s commitment to explainable technology starts at the top of our organization. Dr. Zoldi has “a belief that’s unorthodox in the data science world: explainability first, predictive power second, a notion that is more important than ever for companies implementing AI… AI that is explainable should make it easy for humans to find the answers to important questions including:

  • Was the model built properly?
  • What are the risks of using the model?
  • When does the model degrade?”

All of these questions should factor into any discussion about real-world explainability requirements of fraud management systems for financial services.
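The third question, when does the model degrade, lends itself to ongoing monitoring. As a generic illustration rather than a description of FICO's tooling, one common approach is to compare the distribution of production scores against the distribution seen at model development time, for example with a population stability index (PSI). The score bands, counts and alert thresholds below are illustrative assumptions.

```python
import math

def psi(expected_counts, actual_counts):
    """Population Stability Index between two binned score distributions.

    expected_counts: bin counts from the development/reference sample.
    actual_counts:   bin counts from recent production traffic.
    """
    eps = 1e-6  # guard against empty bins
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    value = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, eps)
        a_pct = max(a / a_total, eps)
        value += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return value

# Illustrative score-band counts: development sample vs. recent production.
dev_bins = [500, 300, 120, 60, 20]
prod_bins = [300, 280, 200, 140, 80]

drift = psi(dev_bins, prod_bins)
# Common rule of thumb: < 0.10 stable, 0.10-0.25 worth watching, > 0.25 investigate.
print(f"PSI = {drift:.3f}")
```

A rising PSI does not by itself say the model is wrong, but it does say the population has shifted enough that the "was it built properly" and "what are the risks" questions deserve a fresh look.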

Finally, in addition to pioneering AI governance models that build explainability into the model development process, last year Scott was awarded an important explainability patent:

Explaining Machine Learning Models by Tracked Behavioral Latent Features. This invention by Scott Zoldi is a system and method for explaining ML model behavior; it can benefit not only those seeking to meet regulatory requirements when using models, but can also guide model users in assessing and strengthening robustness as part of model governance processes.

As you can see, explainability and innovation are in FICO’s DNA. If you’d like to put explainable AI to work for you, please reach out.

Keep up with our latest news and developments by following FICO on Twitter and LinkedIn, and my musings on fraud and golf course conversations on Twitter @FraudBird.

How FICO’s Explainable AI can Help Your Organization Fight Fraud
