What Is Responsible AI? Four Important Principles
Responsible AI includes four principles for ensuring that AI is safe, trustworthy and unbiased: it should be robust, explainable, ethical and auditable.

Today, artificial intelligence (AI) technology underpins decisions that profoundly affect everyone. This brings great opportunities but also creates risks. That has given greater focus to the practice of Responsible AI. Responsible AI ensures that AI systems and machine learning (ML) models are robust, explainable, ethical and auditable.
Responsible AI means following a set of principles and corporate AI model development standards to operationalize AI deployments that deliver high-impact business results within important ethical and legal boundaries.
AI has become widely used to inform and shape strategies and services across a multitude of industries, from health care to retail, and even played a role in the battle against COVID-19. But the mass adoption of AI and the increasing volume of digitally generated data are creating new challenges for businesses and governments, and make Responsible AI a vital consideration for ensuring not just accuracy but fairness.
Principle 1: Robust AI
Robust AI requires a well-defined development methodology; proper use of historical, training and testing data; a solid performance definition; careful model architecture selection; and processes for model stability testing, simulation and governance. Importantly, all of these factors must be adhered to by the entire data science organization and enforced as an AI standard.
Principle 2: Explainable AI
Neural networks and other machine learning methods can find complex nonlinear relationships in data, leading to strong predictive power, a key component of AI. But while the mathematical equations of “black box” machine learning algorithms are often straightforward, deriving a human-understandable interpretation of the solution they create is often difficult. Model explainability should be the primary goal of Responsible AI deployments.
Model explainability focuses on human-understandable interpretation of the latent features learned by machine learning models, both during development and at scoring time, when customers are impacted by machine learning within the overall AI decisioning system.
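To make this concrete, here is a minimal sketch using permutation importance from scikit-learn. It is a stand-in for illustration only, not FICO’s interpretable latent features technique, and the variable names are hypothetical:

```python
# A minimal explainability sketch using permutation importance from
# scikit-learn. This is a stand-in, NOT FICO's interpretable latent
# features technique; the variable names are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 4))                      # stand-in model inputs
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)  # nonlinear ground truth

model = GradientBoostingClassifier().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

feature_names = ["income", "utilization", "tenure", "inquiries"]  # hypothetical
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name:>12}: {importance:.3f}")
```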
AI that is explainable should make it easy for humans to find the answers to important questions, including:
- Was the AI model built properly?
- Could a relationship impute bias?
- What are the risks of using the AI model?
- When or under what circumstances does the AI model degrade?
The latter question illustrates the related concept of humble AI, in which data scientists determine the suitability of a model’s performance in different situations, including situations in which it won’t work because of low-density examples in the historical training data. We need to understand AI models better because when we use the scores a model produces, we assume the score is equally valid for all customers and all scoring scenarios. Often this is not the case, which can easily lead to all manner of important decisions being made based on very imperfect information coverage in AI models. Explainability is everything.
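As a rough sketch of humble AI, assuming a nearest-neighbor density check is an acceptable proxy for information coverage (the data and the 99th-percentile cutoff below are invented for illustration), one can flag scoring requests that fall outside the dense support of the training data:

```python
# A rough "humble AI" guardrail (illustrative assumption, not FICO's method):
# flag scoring requests that fall in low-density regions of the training
# data, where the model's score may not be trustworthy.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)
X_train = rng.normal(size=(5000, 8))  # stand-in for historical training data

# Index the training data and record typical average-neighbor distances.
knn = NearestNeighbors(n_neighbors=10).fit(X_train)
train_dist, _ = knn.kneighbors(X_train)
threshold = np.quantile(train_dist.mean(axis=1), 0.99)  # assumed cutoff

def low_support(x: np.ndarray) -> bool:
    """True when x lies outside the dense support of the training data."""
    dist, _ = knn.kneighbors(x.reshape(1, -1))
    return float(dist.mean()) > threshold

x_new = rng.normal(size=8) * 5  # an atypical scoring request
if low_support(x_new):
    print("Low-density region: route to a fallback strategy or human review.")
```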
In changing environments especially, latent features should be continually checked for bias. At FICO, we’ve developed a machine learning technique called interpretable latent features to help overcome this challenge, increasing transparency and accountability. Using AI responsibly includes knowing when a model is not effective, or could even be detrimental.
Principle 3: Ethical AI
Machine learning discovers relationships in data to fit a particular objective function (or goal). It will often form proxies for excluded inputs, and these proxies can exhibit bias. From a data scientist’s point of view, Ethical AI is achieved by taking precautions to expose what the underlying machine learning model has learned as latent features, and to test whether they could impute bias.
A rigorous development process, coupled with visibility into latent features, helps ensure that analytic models function ethically.
Ethical AI models must be tested and bias must be removed. Interpretable machine learning architectures allow extraction of the nonlinear relationships that are typically hidden in the inner workings of most machine learning models. A human-in-the-loop process ensures oversight of how these latent features function: specific bias testing of the latent features across groups, and a methodology to prohibit discovered imputed biases and re-spin the machine learning models. One must constantly keep front of mind that the data on which an AI model is trained is all too often implicitly full of societal biases.
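A minimal sketch of such a bias test, assuming a two-sample Kolmogorov-Smirnov test on a single latent feature across two groups (the data and significance level are illustrative, not FICO’s methodology):

```python
# An illustrative latent-feature bias test (not FICO's methodology):
# compare one latent feature's distribution across two protected groups
# with a two-sample Kolmogorov-Smirnov test. A significant shift suggests
# the feature may act as a proxy and should be reviewed or removed.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
latent = rng.normal(size=10_000)         # stand-in for one learned latent feature
group = rng.integers(0, 2, size=10_000)  # hypothetical protected attribute

stat, p_value = ks_2samp(latent[group == 0], latent[group == 1])
if p_value < 0.01:  # assumed 1% significance level
    print(f"Feature differs across groups (KS={stat:.3f}); investigate for imputed bias.")
else:
    print("No significant distributional difference detected for this feature.")
```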
Consider these important questions:
- How is your company achieving Ethical AI?
- Which AI technologies are allowed for use in your organization, and how will they be tested to ensure their appropriateness for the market?
- Is there monitoring in place today for each AI model and, if so, what’s being monitored?
- What are the preset thresholds indicating when an AI model should no longer be used?
- Is your organization uniformly ethical with its AI?
- Is your company placing some models under the Responsible AI umbrella (due to being regulated and therefore high risk) while others are simply not built to the Responsible AI standard? How are those dividing lines set?
- Is it ever OK not to be responsible in the development of AI? If so, when?
In creating Ethical AI models, bias and discrimination must be tested for and removed, and models should be continually re-evaluated while in operation, with latent features checked for bias drift.
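One common way to codify such monitoring is a population stability index (PSI) check on the score distribution; the sketch below assumes the industry rule-of-thumb cutoff of 0.25 rather than any FICO-prescribed threshold:

```python
# A minimal population stability index (PSI) monitor. The 0.25 cutoff is a
# common industry rule of thumb, assumed here rather than a FICO-prescribed value.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline score distribution and the one seen in production."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]  # inner bin edges
    e_pct = np.bincount(np.searchsorted(edges, expected), minlength=bins) / len(expected) + 1e-6
    a_pct = np.bincount(np.searchsorted(edges, actual), minlength=bins) / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 50_000)  # scores at validation time
current = rng.normal(0.3, 1.2, 50_000)   # scores observed in production

if psi(baseline, current) > 0.25:  # assumed retirement/retrain threshold
    print("Significant score drift: review the model before continuing to use it.")
```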
Principle 4: Auditable AI
Auditable AI means “building it right the first time,” according to corporately defined AI model development standards whose adherence can be demonstrated. Models must be built according to a company-wide model development standard, with shared code repositories, approved model architectures, sanctioned variables, and established bias testing and stability standards for models. This dramatically reduces errors in model development that would otherwise be exposed in production, cutting into anticipated business value and negatively impacting customers.
When conditions change, Auditable AI allows data scientists to determine how operations will respond and whether the AI is still unbiased and trustworthy, or whether strategies using the model should be adjusted. Auditable AI is enforced and codified through an AI model development governance blockchain, built up during the actual AI model build, persisting every detail about the model and available immediately down the road as data environments change. Auditable AI is not a set of “good intentions” but an immutable record of adherence to the AI model development standard, allowing organizations to build models right according to the standard, provide immutable proof that the standard was followed, and produce assets that meet governance and regulatory requirements.
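As a toy illustration of the immutability idea only (FICO’s governance blockchain is a production system, and the steps and fields below are invented), each development step can be hashed together with the previous entry so that any later tampering breaks the chain:

```python
# A toy hash-chained ledger showing the immutability idea only; FICO's
# governance blockchain is a production system, and these steps/fields
# are invented for illustration.
import hashlib
import json
import time

ledger = []

def record(step: str, detail: dict) -> None:
    """Append a development-step entry chained to the previous entry's hash."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"step": step, "detail": detail, "ts": time.time(), "prev": prev}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)

record("data_approval", {"dataset": "train_v3", "bias_tested": True})
record("architecture", {"model": "interpretable_nn", "approved_variables": 42})
record("stability_test", {"psi": 0.04, "passed": True})

# Verify: recompute every hash and prev-link; one altered entry breaks the chain.
for i, e in enumerate(ledger):
    body = {k: v for k, v in e.items() if k != "hash"}
    assert e["hash"] == hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    assert e["prev"] == (ledger[i - 1]["hash"] if i else "0" * 64)
print(f"Audit trail verified: {len(ledger)} entries intact.")
```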
As the mainstream business world moves from the theoretical use of AI to production-scale decisioning, Auditable AI is essential. Auditable AI emphasizes laying down (and using) a clearly prescribed AI model development standard, and enforcing that no model is released to production without meeting every aspect of that standard and its requirements.
Auditable AI makes Responsible AI real by creating an immutable audit trail of a company’s documented development governance standard during the production of the model. This avoids haphazard probing after model development is complete. There are additional benefits: by detecting as early as possible when a model goes off the rails, companies can fail fast and save themselves untold agony, avoiding the reputational damage and lawsuits that occur when AI goes bad outside the data science lab.
A Playbook for Responsible AI
It’s clear that the business community is committed to driving transformation through AI-powered automation. However, senior leaders and boards of directors need to be aware of the risks associated with the technology and the best practices to proactively mitigate them. Decisions made by AI algorithms can appear callous and sometimes even careless as the use of AI pushes the decision-making process further away from those the decisions affect.
FICO’s AI team has decades of experience in developing analytic innovation in a highly regulated environment. To help our clients, we developed a playbook for Responsible AI that explores:
- Proper use of historical, training and testing data
- Well-defined metrics for acceptable performance
- Careful model architecture selection
- Processes for model stability testing, interpretation, bias removal, and governance
This AI playbook gives you an overview of eight critical steps for achieving Responsible AI.

Explore FICO and Responsible AI
- Learn more at FICO® Responsible AI
- Download our AI Playbook: A Step-by-Step Guide for Achieving Responsible AI
- Review The State of Responsible AI in Financial Services
- Read my Q&A with TechTarget’s WhatIs.com where I answered questions about Responsible AI
- Follow me on Twitter @ScottZoldi
This is an update of a post from 2021.