Responsible AI: Are European Firms Ready for the Regulators?

As the EU works on an AI Act, FICO research suggests that banks are expanding their use of AI but not necessarily focused on responsible AI

This year, governments and corporations are expected to spend more than $500 billion on AI globally. In North American banking and financial services, our research shows that AI is an even higher priority now than it was 12 months ago for 52% of organisations. But alongside the rise of artificial intelligence, there have been growing concerns around ensuring its ethical use. Significant changes are coming in Europe that put responsible AI use front and centre, with the banking and financial services sector among those that will be most affected by increasing regulation.

The EU Sets Out the First Legal Framework for AI Regulation

In April 2021, the European Commission and EU member states kicked off a coordinated plan to place Europe on a path to becoming a global leader in cutting-edge, trustworthy artificial intelligence. With regulation focused on ensuring AI and machine learning innovation thrives from lab to market, the Commission is set to invest €1 billion per year through the Horizon Europe and Digital Europe programmes. This funding will help mobilise additional investment from the private sector and member states to reach €20 billion annually over the course of the next decade, while the Recovery and Resilience Facility makes a further €134 billion available for key digital initiatives.

A key element of the plan is the coordinated approach to the human and ethical implications of artificial intelligence. Despite AI's widely recognised value, there has been growing evidence of discrimination from the use of black box AI applications, where imputed identity-related relationships created proxies for gender and ethnicity. The most cited examples point to experimental AI-based hiring tools that turned out to be biased against women. At the same time, examples of financial institutions denying credit applications because of bias unwittingly loaded into their AI engines' algorithms have also eroded trust in AI for credit decisions.

Organisations have a responsibility to foster trust in the technology they use to make decisions. They must be able to assess whether unfair discriminations are taking place. AI-driven decisions and predictions need to be transparent and explainable, which means that ethical AI must be built into the fabric of algorithms from the outset, tested, and monitored.

Europe has ambitious plans for AI, and ethical AI is a big driver. The EU's coordinated plan is underpinned by the EU AI Act, which aims to classify certain AI systems as high-risk. This classification would affect approaches currently used to address business problems, pushing many organisations to focus deeply on their responsible AI frameworks.

Systems used in banking and financial services are firmly in scope, and the Act could be enforced as early as the second half of 2024. The sector will need to have well-defined model development and responsible AI standards ready.

The EU AI Act could become a global standard, just as the EU GDPR has, determining to what extent AI has a positive rather than negative effect on individuals wherever they may be. The EU's AI regulation is already making waves internationally, and other countries are beginning to follow suit. One such country is Brazil, where Congress has recently passed a bill creating a legal framework for artificial intelligence.


Ethical and Responsible AI Still Not a Core Part of Organisational Strategy for Many

Two years on, and with the rules expected to be finalised next year, many banking and financial services firms have yet to make responsible AI a core part of their business strategy. In fact, our research in North America shows that less than a third (29%) have actually done so. The majority (71%) still see responsible and ethical AI as a strategic element to be prioritised in the future rather than today. Indeed, a sizable proportion – one in four – don't see it becoming a core element of their strategy for another three to five years.

Some 27% of organisations in this sector haven't even started defining their ethical and responsible AI capabilities, and only 8% have scaled their responsible AI model development standards consistently across their organisations.

Under the EU AI Act, the financial consequences of non-compliance could be as high as €30 million (US$33 million) or 6% of global revenue – far more severe than the penalties for non-compliance with GDPR.

Barriers to Ethical and Responsible AI

A recent Corinium survey reveals that despite surging demand for AI solutions and strong support for it at the highest levels, the same can’t be said for ethical AI. Less than 20% of AI budgets are being spent on it.

There are wide gaps in understanding the importance of AI ethics and AI governance among key business stakeholders in this sector. In our survey, 43% of respondents in North America said they struggle with AI governance structures to meet regulatory requirements, and only 44% have sufficiently defined standards for responsible AI at the board level.

In fact, ethical and responsible AI is no more of a priority among C-suite stakeholders today than it was 12 months ago, according to 69% of banking and financial services respondents in North America.

Ethical and Responsible AI Is Maturing and Needs to be Front of Mind

As the regulation moves through the EU legislative process, lenders know that highly complex compliance challenges are coming down the line, adding to an already dense regulatory environment in Europe. They need to assess their AI systems now to understand which AI applications will be considered high-risk under the AI Act. Credit decisions are currently classed as high-risk in the proposal.

Understandably, the past two years have created some all-consuming challenges — post-pandemic recovery, rampant inflation, supply chain disruption, and more. But given the breadth and complexity of the proposed requirements, it is time to focus on your strategy for responsible and trustworthy AI.

How FICO Can Help You Win with AI and Decision Management Platforms

We have a host of resources available to help you learn more about the issues and the potential solutions available.
