The State of Responsible AI in Financial Services

The third annual State of Responsible AI in Financial Services report, released today, calls on the industry to come together and self-regulate its use of AI

Today, FICO released its highly anticipated third annual State of Responsible Artificial Intelligence (AI) in Financial Services report, developed in collaboration with market intelligence firm Corinium. More than ever, I believe this year’s report sounds the alarm bell for financial services firms, signaling that the industry must come together to self-regulate its use of AI. Why? Because while 52% of respondents say that demand for AI products and tools is on the rise, the vast majority (71%) have not implemented ethical and Responsible AI in their core strategies.

With only 8% of respondents reporting that their AI strategies are fully mature, with model development standards consistently scaled, it’s clear to me that the industry is on a bullet train to serious financial, legal and reputational fallout from misguided use of this powerful technology. In the absence of definitive U.S. government regulation at this moment, self-regulation will keep financial services firms “on the rails” of ethical AI use, mitigating bias and embedding responsibility.

Industry Leadership in Artificial Intelligence Already Exists

Having worked closely with business leaders in banking and financial services for decades, I have a deep respect for their rigor in achieving regulatory compliance in a multitude of areas. That’s why it’s frankly shocking to see, several years into the full-bore AI revolution, that only 8% of the survey respondents have codified their AI model development standards. That jibes with another survey finding: 43% of organizations say they struggle to establish Responsible AI governance structures that meet regulatory requirements – likely because AI regulation in the U.S. remains unspecified.

However, we are seeing some promise in these numbers. The 8% figure above means that eight of the survey’s 100 respondents – C-level analytics executives in banking – have implemented AI model development standards, a cornerstone of regulation-ready Responsible AI.

The Need to Come Together, Right Now

Regulation of AI is almost certainly on the way. In pondering the regulatory and compliance horizon for 2023, the American Bankers Association noted:

  • The [CFPB and other agencies] in March [2022] issued a joint request for information seeking input on financial institutions’ use of AI-based models and tools for various purposes, as well as whether it would be helpful to provide additional clarification on using AI when providing services to customers.

I also believe that the White House’s Blueprint for an AI Bill of Rights is a precursor to industry-specific Responsible AI regulations for financial services firms – further turning up the heat on the need for their self-regulation of AI systems.

As for my part, I’m volunteering to provide the data science backbone for Responsible AI self-regulation. In issuing this call for financial services firms to come together to define how, as an industry, they will develop and use AI responsibly, I fervently believe that these cross-industry discussions must start immediately.

It’s All about Trust

It’s clear that perceptions about responsible use of AI are shifting. This year’s study found that the number one benefit of Responsible AI was improving the customer experience, with “Delivering better experiences for customers” (74%) cited as the top benefit. This is in stark contrast to one of the most troublesome findings from FICO’s second State of Responsible AI report, which noted that “almost half (43%) of respondents say they have no responsibilities beyond meeting regulatory compliance to ethically manage AI systems whose decisions may indirectly affect people's livelihoods – i.e., audience segmentation models, facial recognition models, recommendation systems.”

In today’s increasingly AI-fueled world, trust is at a premium. And so is the use of responsible AI. "We are in an era of volatility and uncertainty where trust is extremely hard to come by,” says Cortnie Abercrombie, CEO of AI Truth. “Organizations who want to maintain deep relationships with clients while developing artificial intelligence and intelligent automation products for them are going to have to provide clients with assurances about transparency, explainability, and accountability.”

Keep up with my latest thoughts on Responsible AI by following me on LinkedIn and Twitter @ScottZoldi. Send me a DM on either to join the growing movement within the financial services industry to develop and deploy artificial intelligence responsibly.
