The CIO’s Role in Achieving Responsible AI: Conduit in Chief

The word “conduit” can mean a lot of things. If you’re an electrician, it’s a tube that protects wires and cables, giving electricity a safe and organized path to travel. If you’re a diplomat, a conduit can act as a channel of information and influence connecting two or more parties. And if you’re a Minecraft player, a conduit provides superpowers, restoring oxygen to players underwater, giving them night vision, and increasing their mining speed by 16.7%. (Who knew?)
At FICO, I find myself functioning as “conduit in chief.” As the chief information officer (CIO) spearheading the strategic use of technology and data at an AI decisioning and software company, I’m the conduit to operationalizing Responsible AI.
First, I’m supporting a platform-based, end-to-end approach to artificial intelligence (AI) and analytic model development, providing a safe and organized path for decisioning innovation.
Second, through this process I’m acting as a channel of information and influence between FICO’s AI leader, Chief Analytics Officer (CAO) Dr. Scott Zoldi; our software development organization; and end users, keeping multiple corners of the company aligned.
Third, through automation and a platform development approach, I’m giving users superpowers: the ability to produce decisioning models that meet Responsible AI standards for bias mitigation, performance monitoring, secure data handling, and auditability.
A Conduit to Unlocking the Value of AI Investments
Why does the CIO-as-conduit role matter? Because, as revealed in the new FICO and Corinium report, “State of Responsible AI in Financial Services: Unlocking Business Value at Scale,” CIO and CTO respondents report that only 12% of organizations have fully integrated AI operational standards. This gap represents a big opportunity for improvement across the financial services industry, with a critical emphasis on implementing Responsible AI standards: the key to building AI decisioning systems that can be trusted, thus unlocking their business value.
Furthermore, a platform is a premier vehicle for achieving Responsible AI standards. Over 75% of the 252 business and IT leaders who participated in the study believe that collaboration between business and IT leaders, combined with a shared AI platform, could drive ROI gains of 50% or more.

Ideally, a platform breaks down functional silos across the different teams and roles that build, test, and monitor AI decisioning models. By codifying a corporate AI development standard and automatically enforcing it, a platform can speed time to deployment while greatly improving model performance, reducing risk, and ensuring accountability.
Here's a quick blueprint for how CIOs can do it.
Make Responsible AI a Platform Strategy, Not a Project
Many enterprises are saturated with point solutions that address various steps of the analytic model development process, from data wrangling to model performance testing. Unfortunately, a disconnected approach will not work over time or deliver lasting success, because there are no contiguous controls to ensure that models are robust, explainable, ethical, and auditable: the four cornerstones of Responsible AI. An end-to-end platform that encompasses every aspect of model development, deployment, and monitoring is the only way to get there.
Additionally, financial services organizations are extremely aware of the need for auditability; when a regulator asks how a decision was rendered, a complete response must be produced. Ideally, this audit log should be immutable and populated automatically throughout the model development lifecycle. Persisting each granular development decision to an immutable record, such as that provided by blockchain, smooths the audit process by delivering an irrefutable record of events.
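To make that concrete, here is a minimal sketch of a hash-chained audit log, in which every development decision carries the hash of the previous entry so later tampering is detectable at audit time. It is only an illustration under assumed names and metrics, not FICO’s implementation; a production platform would persist entries to a blockchain or ledger service as described above.

```python
import hashlib
import json
import time


class ModelAuditLog:
    """Append-only, hash-chained log of model development decisions.

    Each entry stores the hash of the previous entry, so any later
    alteration breaks the chain and is detectable during an audit.
    """

    def __init__(self):
        self.entries = []

    def record(self, actor: str, action: str, details: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "GENESIS"
        payload = {
            "timestamp": time.time(),
            "actor": actor,
            "action": action,
            "details": details,
            "prev_hash": prev_hash,
        }
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(payload)
        return payload

    def verify(self) -> bool:
        """Recompute every hash and confirm the chain is unbroken."""
        prev_hash = "GENESIS"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True


# Hypothetical example: log granular development decisions as they happen.
log = ModelAuditLog()
log.record("data_scientist_42", "feature_selected",
           {"feature": "utilization_ratio", "reason": "top importance score"})
log.record("platform", "bias_test_passed",
           {"metric": "demographic_parity_ratio", "value": 0.93})
assert log.verify()
```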
Automate and Hardwire Responsible AI Standards through MLOps
Machine learning is at the heart of any AI decisioning model. Correctly implemented, machine learning operations (MLOps) is the key to operationalizing Responsible AI development and governance practices by automating and hardwiring them. A platform happens to be the best place to do it.
MLOps borrows concepts from DevOps, which is a combination of cultural principles, processes, and tools to accelerate software development, delivery, and operations. DevOps is nearly ubiquitous in software development and provides numerous principles that are mirrored in MLOps, including:
Pipelines: Continuous integration/continuous deployment (CI/CD) pipelines are the basis of the MLOps mindset. Controls, test cases, test scenarios, validations, and more are embedded in the pipelining process; these mechanisms serve as the framework for automating and hardwiring Responsible AI practices into all ML and AI model development.
Shift left: Like an old-fashioned automotive assembly line, traditional model development practices wait to test and validate analytic models until after they’re complete. (This is also almost always the case in enterprises using a constellation of point solutions to build AI models.) Often, testing that reveals bias in the model’s decisions or myriad other issues leads to a major cleanup effort, adding costs and delaying deployment.
“Shift left” moves validations, checks, and tests from the end of model development to earlier stages. By running these processes automatically, in real time as users proceed through development activities, mistakes can be corrected in the moment. By avoiding major cleanup efforts after the fact, a “shift left” can dramatically improve ML development quality and time-to-value.
Policy as code: Similarly, testing to determine whether model elements are robust, ethical, explainable, auditable, and unbiased should be encoded into the MLOps pipeline so that it executes automatically during every aspect and iteration of development. This is what hardwiring is about: Responsible AI concepts are codified as policies, tests, and validations throughout the pipeline. Because users get feedback in real time, they learn how to incorporate these concepts into their daily development activities, spurring continuous improvement. (A simple sketch of the idea follows below.)
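For illustration, here is a minimal policy-as-code sketch: Responsible AI requirements expressed as executable checks that a pipeline can run on every model iteration, which is also what makes the “shift left” automatic. The policy names, metrics, and thresholds are hypothetical assumptions for this example; in practice they would come from the governance framework the CAO prescribes.

```python
# A minimal, illustrative sketch of "policy as code": Responsible AI
# requirements written as executable checks that a CI/CD pipeline can run
# automatically on every model iteration, long before deployment.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Policy:
    name: str
    description: str
    check: Callable[[Dict], bool]  # returns True if the candidate complies


# Metrics are assumed to be produced by earlier pipeline stages (training,
# explainability, bias testing) and passed in as a dictionary.
POLICIES: List[Policy] = [
    Policy(
        "bias.demographic_parity",
        "Approval-rate ratio across protected groups must stay within tolerance.",
        lambda m: m["demographic_parity_ratio"] >= 0.90,
    ),
    Policy(
        "explainability.reason_codes",
        "Every score must be explainable with at least four reason codes.",
        lambda m: m["reason_codes_per_decision"] >= 4,
    ),
    Policy(
        "robustness.stability",
        "Population stability index between train and validation must be low.",
        lambda m: m["population_stability_index"] <= 0.10,
    ),
]


def enforce_policies(model_metrics: Dict) -> None:
    """Fail the pipeline stage immediately if any policy is violated."""
    failures = [p for p in POLICIES if not p.check(model_metrics)]
    if failures:
        details = "; ".join(f"{p.name}: {p.description}" for p in failures)
        raise RuntimeError(f"Responsible AI policy violations: {details}")


# Hypothetical example: run as a gate in every development iteration,
# so problems surface while they are still cheap to fix.
candidate_metrics = {
    "demographic_parity_ratio": 0.93,
    "reason_codes_per_decision": 5,
    "population_stability_index": 0.07,
}
enforce_policies(candidate_metrics)  # a violation would stop the build here
```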
Collaborate across the Organization
As you can see, building Responsible AI systems involves many moving, human parts. This is ultimately where a CIO can play a pivotal role as a “conduit in chief,” by collaborating with the multiple stakeholders who must have a voice in customizing a platform to meet an organization’s specific needs.
The CAO and team will prescribe the corporate AI governance framework and associated requirements for data selection and management, model development techniques, explainability requirements, bias testing, model monitoring, and auditability.
The CTO will be charged with implementing appropriate controls, tests, an immutable audit log, and monitoring in a platform.
Finally, the platform will need to be tuned to meet the functional needs of users from across the organization.
Operationalizing the development of Responsible AI is not an easy task, but it’s certainly worth the time and effort for CIOs to unlock the value of the enormous investments their financial institutions have poured into AI.
How FICO Can Help You Advance in AI
Read FICO’s new State of Responsible AI for Financial Services report
Explore FICO’s posts on Responsible AI
Download FICO's AI Playbook: A Step-by-Step Guide for Achieving Responsible AI