What Are the AI Challenges in Banking? Views from the FT Global Banking Summit
International banking leaders meeting at the FT Global Banking Summit found common challenges in adopting GenAI and managing its risks

The promise of AI in financial services comes with significant challenges. Two weeks ago I joined a group of leaders from top European and American banks in a roundtable discussion at the FT Global Banking Summit to explore the challenges and imperatives of integrating AI, particularly Generative AI, into the highly regulated banking sector.
Key themes included:
- The struggle to move beyond the proof-of-concept stage ("pilotitis")
- The critical need for top-down leadership and cultural transformation
- The establishment of robust yet agile governance frameworks
We agreed that while the potential of AI is transformative, its successful implementation hinges on solving foundational issues related to data quality, risk management, and reimagining business processes rather than simply automating existing ones.
In keeping with the Chatham House Rule, here are the key takeaways from our enlightening discussion.
Strategic Imperative: Top-Down Leadership and Cultural Transformation
The group unanimously agreed that successful AI adoption is contingent on strong leadership and a profound cultural shift. This top-down vision is essential to move the organization beyond using AI for simple cost-cutting ("quicker, better, cheaper") and toward reimagining fundamental business processes. The goal is to ask transformative questions, such as using AI to prevent customer complaints from ever arising, rather than just processing them faster. Without this leadership-driven cultural change, data scientists and tech teams would struggle to create impactful solutions.
3 Key Challenges in AI Adoption in Banking
Key Challenge 1: Moving from Proof-of-Concept to Enterprise-Scale Production
A major recurring theme was the difficulty of moving AI initiatives from small-scale pilots to fully operationalized, enterprise-class systems, a phenomenon one participant dubbed "pilotitis."
Many firms successfully create proofs-of-concept that demonstrate value for a small group, but struggle with the subsequent steps. The challenges of operationalization include achieving scale, ensuring ultra-low latency and high performance, and integrating the solution into the complex, legacy environments of large banks. Participants agreed that without a clear path to production and a focus on real business outcomes and KPIs, many promising AI projects fail to deliver a meaningful return on investment. The discussion highlighted the need to be selective with pilots and ensure they are tied to a strategic business problem that justifies the significant effort of enterprise-wide deployment.
Key Challenge 2: Data Quality and Foundational Management
The adage "garbage in, garbage out" was cited as a fundamental truth in the AI era. AI is not a magical solution for poor data. Trying to apply advanced AI models to a foundation of "crap data" will only lead to poor outcomes. This has led to a renewed focus on data fundamentals. The discussion underscored that before organizations can fully leverage AI, they must invest in comprehensive data management programs to ensure their data is clean, accessible, and useful.
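The "garbage in, garbage out" point lends itself to a simple illustration. The sketch below (a hypothetical example; the field names and thresholds are invented, not any bank's actual pipeline) shows how even basic automated checks can quarantine unusable records before they reach a model:

```python
# Illustrative data-quality gate: reject records that would degrade a model.
# All field names and thresholds here are hypothetical examples.

def check_record(record: dict) -> list[str]:
    """Return a list of data-quality problems found in one customer record."""
    problems = []
    required = ["customer_id", "account_age_months", "monthly_income"]
    for field in required:
        if record.get(field) is None:
            problems.append(f"missing {field}")
    income = record.get("monthly_income")
    if income is not None and income < 0:
        problems.append("negative income")
    age = record.get("account_age_months")
    if age is not None and not (0 <= age <= 1200):
        problems.append("implausible account age")
    return problems

def split_clean_dirty(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Route clean records to the model; quarantine the rest for remediation."""
    clean, dirty = [], []
    for r in records:
        (dirty if check_record(r) else clean).append(r)
    return clean, dirty
```

In a real data management program these rules would be far richer (lineage, timeliness, cross-field consistency), but the principle is the same: bad records are caught and remediated upstream rather than silently absorbed by the model.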
Key Challenge 3: Governance, Risk, and "Shadow AI"
The risk of “Shadow AI” — employees using unapproved, external AI tools and potentially leaking sensitive corporate data — is a big concern. The availability of powerful consumer-grade AI tools means that if companies don't provide a controlled, in-house alternative, employees will find their own solutions, creating immense risk.
The rollout of powerful tools like Microsoft Copilot can inadvertently expose sensitive information if a company's internal access controls are not perfect. For example, Copilot could surface data from a poorly secured SharePoint site that an employee had access to but was unaware of. We concluded that the answer is not to block technology but to fix the underlying issues, such as strengthening access control policies. This highlights the fact that AI often exacerbates existing risks rather than creating entirely new ones.
Opportunity: Large vs. Small Language Models (LLMs vs. SLMs)
A nuanced technical discussion explored the strategic use of different types of language models. While large language models (LLMs) from major providers offer broad capabilities, there is a growing interest in small language models (SLMs). SLMs are not generalists but are specialized for particular use cases, trained on expert data, and validated by subject matter experts. This specialization leads to a higher degree of trust and accuracy for specific tasks.
AI Governance: The Four Pillars of Responsible AI
AI governance was a hot topic during our meeting. I shared FICO’s model for Responsible AI built on four key pillars:
1. Explainable: Understanding the data and logic behind an AI model's decision. This involves transparency into how the model was trained and what data it relies on.
2. Auditable: The ability to trace and understand the decision-making process. FICO uses blockchain technology to create an immutable, time-stamped history of how a model reached a decision, providing a clear chain of custody for audit purposes.
3. Ethical: Ensuring fairness and mitigating bias in the data used for training. This is directly linked to data quality and the strategic management of data as an asset to prevent discriminatory outcomes.
4. Robust: This pillar covers the operational aspects of the AI system, including its availability, resilience, scalability, and cost-effectiveness. The importance of robustness varies by use case; an internal knowledge management tool has lower requirements than a customer-facing system where downtime could halt business operations.
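To make the "auditable" pillar concrete: an immutable, time-stamped decision history can be sketched as a hash chain, where each entry commits to the previous entry's hash so that tampering with any past record is detectable. The following is an illustrative toy in Python, not FICO's actual blockchain implementation:

```python
import hashlib
import json
import time

class DecisionAuditLog:
    """Append-only log: each entry's hash covers the previous entry's hash,
    so altering any past record invalidates every later hash."""

    def __init__(self):
        self.entries = []

    def record(self, model_id: str, inputs: dict, decision: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "timestamp": time.time(),   # time-stamped history
            "model_id": model_id,       # which model made the decision
            "inputs": inputs,           # data the decision relied on
            "decision": decision,
            "prev_hash": prev_hash,     # link to the previous entry
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True
```

Because each hash depends on everything before it, an auditor can verify the full chain of custody for any decision without trusting the team that produced it.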
Balancing Speed vs. Risk
A critical tension identified by several participants is the mismatch between the slow, deliberate pace of traditional bank governance and the rapid, iterative nature of AI development. The key takeaway was that governance must not suffocate opportunity. Organizations that can develop slick, proportionate, and agile governance processes — balancing speed with risk — will be the ones that succeed in the AI race.
Customer Experience and the Human Element
The impact of AI on the customer journey was another key topic, particularly the generational shift in communication preferences: while some older customers prefer talking to a human, younger generations are often more comfortable with text messages or chatbots.
AI is currently being applied heavily in onboarding and KYC processes, but the group recognized that the technology enables a far wider array of new touchpoints. This is why FICO has launched “focused sequence models,” which incorporate a time-series element to create a 360-degree motion picture of a customer, moving beyond static data snapshots so the bank can understand a customer's evolution and guide decision-making.
Future Perspective: The Evolution of AI and Regulation
Looking ahead 7 to 10 years, we speculated on the long-term trajectory of AI and its regulation. Clearly, the industry is still in the very early stages of a massive technological shift. Today’s technology will seem primitive in the near future. And along with the industry, regulators are currently in a learning phase.
One participant suggested that meaningful regulation will only emerge once regulators fully understand the technology. Future regulations might not focus on the models themselves (as with current model risk management rules like SR 11-7) but rather on customer outcomes and impact. In the interim, firms must effectively self-regulate by establishing strong internal governance and ethical frameworks.
I was honored to be part of this discussion, and thank the other roundtable members for their candid views. While each participant brought their own concerns and innovations to the table, it was clear that the challenges are felt across the board. These challenges are driving FICO’s innovation in bringing Responsible AI to market.
How FICO Can Help You Adopt Responsible AI
- Read how the FICO® Focused Foundation Model for Financial Services provides superior accuracy in decisioning and trust when deploying GenAI
- Read FICO’s new State of Responsible AI for Financial Services report
- Download FICO's AI Playbook: A Step-by-Step Guide for Achieving Responsible AI
- Explore the analytics capabilities in FICO Platform