I recently did a Q&A for TechTarget’s WhatIs.com where I answered questions about Responsible AI. After that exchange I thought our FICO Blog readers might like to read the Q&A, and I’m sharing my definition of Responsible AI below.
What is Responsible AI?
Artificial intelligence (AI) technology underpins decisions that profoundly affect everyone; responsible AI is a standard for ensuring that AI is safe, trustworthy and unbiased. Responsible AI ensures that AI and machine learning (ML) models are Robust, Explainable, Ethical and Efficient.
Robust AI requires a well-defined development methodology; proper use of historical, training and testing data; a solid performance definition; careful model architecture selection; and processes for model stability testing, simulation and governance. Importantly, the entire data science organization must adhere to all of these factors, enforced as a standard.
Neural networks can find complex nonlinear relationships in data, giving them strong predictive power, a key component of AI. But while the mathematical equations of “black box” machine learning algorithms are often straightforward, deriving a human-understandable interpretation of them is often difficult. In Responsible AI deployments, model explainability should be the primary goal, with predictive power secondary.
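One generic, model-agnostic way to probe a black-box model is permutation importance: shuffle one input at a time and measure how much accuracy drops. A minimal sketch follows; the toy `predict` function stands in for any trained model and is purely an illustrative assumption, not a FICO technique.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Mean drop in accuracy when each column is shuffled.

    A larger drop means the model leans more heavily on that input.
    """
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)  # accuracy on unshuffled data
    scores = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy the information in column j
            drops.append(base - np.mean(predict(Xp) == y))
        scores.append(float(np.mean(drops)))
    return scores

# Toy "black box": the label depends only on column 0 (assumption for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda X_: (X_[:, 0] > 0).astype(int)  # stands in for a trained model
imp = permutation_importance(predict, X, y)
# imp[0] is large; imp[1] and imp[2] are near zero
```

Techniques like this explain *behavior* rather than the model's internal equations, which is often the practical path to human-understandable interpretation.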
Machine learning discovers relationships in data to fit a particular objective function (or goal). It will often form proxies for avoided inputs, and these proxies can encode bias. From a data scientist’s point of view, Ethical AI is achieved by taking precautions to expose what the underlying machine learning model has learned as latent features, and to test whether those features could introduce bias.
A rigorous development process, coupled with visibility into latent features, helps ensure that analytics models function ethically. Latent features should continually be checked for bias in changing environments.
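One simple check along these lines is to score each latent feature on how well it predicts a protected attribute; a strong relationship suggests the feature is acting as a proxy. The sketch below uses absolute correlation as the proxy score. The function names and the 0.3 threshold are illustrative assumptions, not a prescribed standard.

```python
import numpy as np

def proxy_score(latent_feature, protected_attr):
    """Absolute Pearson correlation between a learned latent feature
    and a protected attribute; high values suggest a bias proxy."""
    lf = np.asarray(latent_feature, dtype=float)
    pa = np.asarray(protected_attr, dtype=float)
    return abs(np.corrcoef(lf, pa)[0, 1])

def flag_proxies(latent_matrix, protected_attr, threshold=0.3):
    """Return indices of latent features whose proxy score exceeds threshold."""
    return [j for j in range(latent_matrix.shape[1])
            if proxy_score(latent_matrix[:, j], protected_attr) > threshold]

# Simulated example: latent feature 0 is a near-copy of the protected attribute
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=500)
latents = rng.normal(size=(500, 3))
latents[:, 0] = protected + rng.normal(scale=0.1, size=500)
flagged = flag_proxies(latents, protected)  # feature 0 gets flagged
```

In practice this check would be rerun on fresh production data, since a feature that is benign at development time can become a proxy as the environment shifts.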
Efficient AI means “building it right the first time.” Models must be designed from inception to run within an operational environment that will change. To achieve Efficient AI, models must be built according to a company-wide model development standard, with shared code repositories, approved model architectures, sanctioned variables, and established bias testing and stability standards for models. This dramatically reduces errors in model development that would otherwise surface in production, cutting into anticipated business value and harming customers.
When conditions change, Efficient AI lets data scientists determine how the model will respond, what it will be sensitive to, whether it remains unbiased and trustworthy, and whether strategies built on the model should be adjusted. Being efficient means having those answers codified in a model development governance blockchain, built up during the actual model build, that persists every detail about the model and makes it immediately available down the road as data environments change.
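Detecting that the data environment has changed is the first step in answering those questions. A common measure for this in scoring models is the population stability index (PSI), which compares the distribution of an input (or score) at development time with its distribution in production. A minimal sketch, where the bin count and the conventional 0.25 alert level are illustrative assumptions:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (expected) and a
    current (actual) distribution of one model input or score."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero and log(0) in sparse bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Simulated example
rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 10_000)  # development-time distribution
stable = rng.normal(0.0, 1.0, 10_000)    # production sample, no drift
shifted = rng.normal(1.0, 1.0, 10_000)   # production sample after drift
# psi(baseline, stable) stays near zero; psi(baseline, shifted) is large,
# signaling that the model's sensitivities should be re-examined
```

A PSI above roughly 0.25 is often treated as a signal that the input population has shifted enough to re-test the model's stability and bias before continuing to trust its decisions.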
Follow me on Twitter @ScottZoldi