Bank of England Validates Need for Explainable AI

The BoE's new report says explainable AI is necessary for machine learning to be applied in credit management

Last month, the Bank of England joined the ranks of institutions that have expressed interest in machine learning while also voicing reservations about its potential pitfalls. In Staff Working Paper No. 816, Machine learning explainability in finance: an application to default risk analysis, the authors make the following observations:

  • ML and AI suffer from a black box problem: The sheer size and complexity of these models make it difficult to explain their operating processes to people.
  • Increasing demand for model quality assurance: Stakeholders will want to ensure the right steps have been taken for model quality assurance and, depending on the application, may seek advice on what these steps are.
  • Explainable AI (xAI) tools can improve the quality assurance of AI models: xAI capabilities are an important addition to the data science toolkit, as they allow for better quality assurance of these otherwise black-box models. xAI can complement other aspects of quality assurance, such as model performance testing, understanding dataset properties and domain knowledge (one way this can look in practice is sketched below).
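
To make that last point concrete, here is a minimal sketch of how a Shapley-value explainer can be layered on top of a credit default model as a quality-assurance check. This is not the BoE paper's exact setup – the model, feature names and data below are purely illustrative, and the example assumes the open-source scikit-learn and shap packages.

```python
# A minimal, hypothetical sketch (not the BoE paper's method): fit a gradient-boosted
# default-risk model on synthetic data, then use Shapley-value attributions to check
# which features drive its predictions as part of quality assurance.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical loan-level features: loan-to-value, debt-to-income, months in arrears.
X = rng.uniform(size=(1000, 3))
# Synthetic default flag that depends mostly on the first two features.
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.normal(size=1000) > 0.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Quality-assurance check: average absolute contribution per feature should
# broadly match domain expectations (here, loan-to-value should dominate).
feature_names = ["loan_to_value", "debt_to_income", "months_in_arrears"]
for name, importance in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {importance:.3f}")
```

If the attributions disagree with domain knowledge – say, months in arrears dominating when it should barely matter – that is a prompt to revisit the data and the model before either goes anywhere near production.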

At FICO, we’ve been pioneering explainable AI and machine learning for more than 25 years, tackling the very challenges the BoE paper raises – in fact, our first xAI-related patent was filed back in 1996! Since then, we’ve seen new methods rise and falter in their effectiveness.

I have blogged frequently about explainable AI and machine learning trends and some of the groundbreaking work we’ve been doing at FICO. Here are some posts that may be of interest if you’re looking to explore the topic.

Deep Dive: How to Make “Black Box” Neural Networks Explainable

Interpretable Latent Features are a new way of making AI explainable. This post explores this xAI method and shows how the future of xAI will involve new machine learning architectures that are inherently interpretable.
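
To give a flavour of what “inherently interpretable” can mean, here is a hypothetical sketch – not FICO’s actual architecture – in which each hidden node of a neural network is masked so it only sees a small, named subset of inputs. The sizes, mask and PyTorch implementation below are purely illustrative.

```python
# A hypothetical sketch of the general idea behind interpretable latent features:
# each hidden node is restricted to a small, fixed subset of inputs, so the latent
# feature it learns can be named and inspected. Requires PyTorch.
import torch
import torch.nn as nn

class MaskedLatentNet(nn.Module):
    def __init__(self, mask: torch.Tensor):
        super().__init__()
        self.mask = mask                      # shape (n_latent, n_inputs), 0/1 entries
        self.hidden = nn.Linear(mask.shape[1], mask.shape[0])
        self.out = nn.Linear(mask.shape[0], 1)

    def forward(self, x):
        # Zero out connections outside each latent feature's allowed inputs, so
        # every hidden activation depends on only a handful of named inputs.
        masked_weight = self.hidden.weight * self.mask
        latent = torch.relu(nn.functional.linear(x, masked_weight, self.hidden.bias))
        return torch.sigmoid(self.out(latent))

# Example: 4 inputs, 2 latent features; the first sees inputs 0-1, the second 2-3.
mask = torch.tensor([[1., 1., 0., 0.],
                     [0., 0., 1., 1.]])
model = MaskedLatentNet(mask)
scores = model(torch.rand(5, 4))              # 5 hypothetical records
print(scores.shape)                           # torch.Size([5, 1])
```

Because each latent feature depends on only a couple of inputs, it can be given a human-readable name and examined directly – which is the heart of building models that are interpretable by design rather than explained after the fact.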

Explainable AI Breaks Out of the Black Box

This post shows three ways of explaining the decisions made by AI, based on FICO’s more than 25 years of experience pioneering xAI.

GDPR and Other Regulations Demand Explainable AI

New legislation such as the General Data Protection Regulation (GDPR) is turning the deployment of xAI into a question of how, not when. This post discusses why xAI is an imperative next step for business.

Explainable AI in Fraud Detection – A Back to the Future Story

To help financial institutions act on transactions that are most likely to be fraudulent, FICO introduced a neural network-based, real-time fraud detection system in 1992. It came with a built-in xAI engine that provides reasons for the scores it produces. Updated versions of both the detection system and its xAI engine are still in use today, and this post shows how.
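
As a rough illustration of the reason-code idea – and emphatically not FICO’s production engine – one simple approach is to re-score a transaction with each input replaced by a typical baseline value and report the inputs whose removal lowers the score the most. Everything below (the stand-in model, feature names and baseline) is hypothetical.

```python
# A simplified, hypothetical sketch of generating reasons alongside a fraud score:
# neutralise one input at a time and rank how much each one pushed the score up.
import numpy as np

def score(x):
    # Stand-in fraud model: a logistic score over three hypothetical features.
    w = np.array([2.0, 1.5, 0.5])             # weights: amount, velocity, geo-mismatch
    return 1.0 / (1.0 + np.exp(-(x @ w - 2.0)))

feature_names = ["unusual_amount", "txn_velocity", "geo_mismatch"]
baseline = np.array([0.1, 0.2, 0.0])          # "typical" cardholder behaviour
txn = np.array([0.9, 0.8, 1.0])               # the transaction being scored

full_score = score(txn)
contributions = {}
for i, name in enumerate(feature_names):
    reduced = txn.copy()
    reduced[i] = baseline[i]                  # neutralise one feature at a time
    contributions[name] = full_score - score(reduced)

# Top reasons: the features whose presence pushed the score up the most.
reasons = sorted(contributions, key=contributions.get, reverse=True)[:2]
print(f"score={full_score:.2f}, reasons={reasons}")
```

The analyst then sees not just a high score but the handful of behaviours that drove it there, which is what makes the score actionable.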

How to Make Artificial Intelligence Explainable

FICO’s Analytics Workbench xAI Toolkit has been designed to help data scientists and business users alike better understand the machine learning models behind AI-derived decisions. Analytics Workbench distils decades of FICO’s research and IP into a solution all data scientists can use to build and deploy models.

Moving forward, we see the concerns that gave rise to xAI expanding into a need for ethical AI – AI that makes decisions without bias. Check out my latest post on ethical AI to keep ahead of the curve.

Follow me on Twitter @ScottZoldi to see what’s happening in the world of explainable AI.

