Last month, the European Commission issued a new legal framework for AI, designed to make the European Union a centre of excellence for trustworthy AI. The new rules divide AI systems into different categories of risk (based on the impact of an error in the AI system) and impose restrictions and requirements based on that risk.
The majority of AI systems (including AI-enabled video games or spam filters on your computer) fall into the low-risk category, requiring very little oversight. But more complex and powerful AI solutions, such as a self-driving car, will have to demonstrate high standards in the creation and implementation process, from the data sample used to build and calibrate the AI model, to clear and appropriate human oversight of the results of the AI.
Trustworthy AI has been a concern for FICO for a number of years. With increased computing power allowing data scientists to process more data in a rapid analytic development environment, the potential to introduce bias into models has increased exponentially. The capacity for error exists all the way through the AI process: errors or bias in the original sample used to build the model, too naïve a reliance on automated machine learning model production (without experienced oversight and consideration of the results), or an inadequate execution framework that compromises the decision made by the AI system. The need to ensure that an AI-driven decision is ethical and responsible has never been greater, and it should cover aspects such as:
- Building a model robustly
- Having a documented model development governance process
- Ensuring models have enough data to support their degrees of freedom, preventing overtraining
- Continuous monitoring (which many organizations ignore)
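Continuous monitoring is often the neglected item on that list. As an open, illustrative sketch (not a FICO product feature), the population stability index (PSI) is one widely used way to detect when the scores a model produces in production have drifted away from the distribution it was developed on:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a development-time score distribution (expected) with one
    observed in production (actual). A PSI above roughly 0.25 is a common
    rule of thumb for significant drift worth investigating."""
    # Bin edges taken from deciles of the development-time distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Small floor avoids log(0) / division by zero in empty bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

Run regularly against fresh production scores, a check like this turns "continuous monitoring" from a slogan into an alert that fires before drifting data quietly degrades decisions.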
FICO Chief Analytics Officer Dr. Scott Zoldi wrote at the start of 2021 about what Responsible AI looks like, and his list of focus areas highlights some of the potential gaps in the EU's updated co-ordinated plan on Artificial Intelligence. Whilst the EU has rightly focused on encouraging the use of AI and developing data science strategic leadership, issues like explainability are treated as optional at best. For our clients, explainable AI is one of the key concerns, not just in highly regulated industries like banking, where decisions need to be justified, but also in areas like retail, logistics, telco and manufacturing.
Explainability is a core issue of trustworthy AI; going further, transparent model architectures are becoming essential to building ethical, unbiased models. From drawing out key factors in fraud scoring to helping understand the drivers of a decision in predictive maintenance, explainability and transparency are the golden key that unlocks the power of advanced analytics.
Here at FICO we continue to see explainability as one of the core drivers of trustworthy AI, so much so that we package our xAI Python libraries as a standard option in our Analytics Workbench credit risk toolkit. We're actively exploring how key aspects of explainability operate across the AI spectrum, from the imposition of monotonicity constraints in neural networks to discussions about the good, the bad and the ugly of xAI algorithms and the need for standards and transparency.
We'll always take a practical view: sometimes you get the best result by running machine learning models first and then feeding the insights they surface into your traditional scorecard build. Either way, the ability to explain your results correctly is one of the key business drivers around AI, and one the EU needs to move up its priority list if it wants to speak powerfully to the concerns of its member states' businesses and citizens.
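The "machine learning first" workflow above can be sketched in open-source terms (this is an illustrative stand-in, not FICO's scorecard tooling): let a tree ensemble surface the strongest predictors on synthetic data, then build the final, explainable model, here a plain logistic regression standing in for a traditional scorecard, on just those features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic credit-style data: 20 candidate features, 5 truly informative
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, random_state=0)

# Step 1: a flexible ML model ranks the candidate predictors
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[::-1][:5]

# Step 2: the final, explainable model uses only the strongest features
scorecard = LogisticRegression(max_iter=1000).fit(X[:, top], y)
```

The ensemble does the exploratory heavy lifting, while the deployed model stays simple enough that every coefficient can be justified to a regulator or a customer.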