Where Are We Now? 2022 Data Science and AI Predictions Revisited

Among my AI predictions for the year, interpretable machine learning (ML) models have surprised me by garnering greater awareness among data science leaders

One of my favorite things to think about is my annual data science and artificial intelligence (AI) predictions blog. In these blogs, I review the industry trends and developments in technology set to impact businesses. (Links to all of my predictions blogs since 2017 are at the end of this blog.) In the past couple of weeks, vacationing off the grid in Alaska, I’ve had a chance to think about how prescient my 2022 predictions for AI and data science may or may not be. So, in the spirit of the late Ed Koch, one of New York City’s more colorful mayors, who famously used to ask, “How’m I doin’?”, let’s review where these predictions and AI trends stand at mid-year.

Overall, I predicted that in 2022 Auditable AI and Humble AI would join Explainable AI and Ethical AI under the umbrella of Responsible Artificial Intelligence.

  • Auditable AI is artificial intelligence technology with the capability to produce an audit trail of every detail about itself, including data, variables, transformations, latent features, bias testing, machine learning, algorithm design and model logic.
  • Humble AI is artificial intelligence technology that knows when it is not sure of the right answer. Humble AI addresses the uncertainty of AI decisioning, and uses uncertainty measures (such as a numeric uncertainty score) to quantify the model’s confidence in its own decisioning (a minimal code sketch of such a score follows this list).
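
To make the idea of a numeric uncertainty score concrete, here is a minimal sketch in Python. It is illustrative only, not FICO’s method: it assumes a binary classifier that outputs a probability, uses binary entropy as the uncertainty measure, and picks an arbitrary deferral threshold.

```python
import math

def uncertainty_score(p: float) -> float:
    """Binary entropy of the predicted probability p: 0 when the model is
    certain (p near 0 or 1), 1 when it is maximally unsure (p = 0.5)."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def humble_decision(p: float, max_uncertainty: float = 0.65) -> str:
    """Return the model's decision only when it is confident enough;
    otherwise defer so a fallback process or a human can decide."""
    if uncertainty_score(p) > max_uncertainty:
        return "defer"  # the model admits it is not sure
    return "approve" if p >= 0.5 else "decline"

print(humble_decision(0.97))  # approve: the model is confident
print(humble_decision(0.55))  # defer: too close to the decision boundary
```

The design point is the “defer” outcome: instead of always returning a score, a Humble AI system can hand low-confidence cases to a fallback model or a human reviewer.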

Prediction: We’ll See More Use of Auditable AI Techniques

Where we’re at: I am pleasantly surprised to see greater awareness of interpretable machine learning (ML) models, an important instantiation of Auditable AI. Typically, only a small minority of data science practitioners talk about interpretable ML, but in recent months I’ve heard it brought up by data science leaders.

In the credit lending space, interpretable AI – and, by extension, interpretable ML – has become an increasingly widely used term. I’ve heard both frequently sprinkled throughout conversations on how to address transparency and fairness in credit risk. I’m pleased to see interpretability gaining traction, as models in the credit risk area are considered by many to be high-risk applications of AI.

So, in sum, in 2022 I’ve seen increased recognition of the benefits of interpretable ML, which meets the performance of unconstrained ML while providing the interpretability needed to understand what drives scores and the explanations for those scores. Frankly, I’ve been surprised to see the uptake by so many practitioners, even open source devotees. Data scientists of all stripes are taking new responsibility for what a model is learning, defending it, and understanding what needs to be monitored for shifts in performance or bias behaviors.

That’s a big win for Auditable AI. Interpretable models can pull bias into broad daylight, recording and monitoring their behaviors both in development and in production.

Prediction: Data Scientists Will Lead the Way in Embracing Humble AI

Where we’re at: A Google engineer recently made waves by claiming that the company’s AI chatbot is sentient (able to feel or perceive) and similar to “a kid that happened to know physics.” I’m not sure that qualifies as Humble AI, but the engineer’s proclamation certainly brought attention to the idea that an artificial intelligence system should be able to recognize its own limits.

In more practical terms, I believe that data scientists are indeed leading the way toward operationalizing Humble AI. As a big step in that direction, Machine Learning Model Operationalization Management (MLOps) aims to provide “an end-to-end machine learning development process to design, build and manage reproducible, testable, and evolvable ML-powered software.” (Related: my LinkedIn Live webcast with MLOps expert Shreya Shankar, “Blockchain: How to Build Accountability and Auditability into AI Models.”)

Unpacking that long list of adjectives: MLOps – together with proactive, continuous, and highly focused model monitoring (typically of Auditable AI assets) – erases the differentiation between the development and production use of a machine learning model. In the past these have usually been two distinct phases, often weeks or months apart. In the current regulatory environment, however, there is strong demand to intertwine the two phases in a continuous lifecycle approach.

So what is continuous model monitoring, and what is its relationship to Humble AI? Simply put, continuous model monitoring is the view that once a model is put into production, it’s your responsibility to monitor that it behaves properly and as expected. Continuous model monitoring in a production environment rings early warning bells when the assumptions on which the model is built are violated. This can come down to specific types of data on which the model becomes uncertain, allowing it to defer a decision – in contrast to the user blindly and naively accepting whatever score the AI produces.

Model monitoring often gets confused with monitoring model performance – these are two related yet separate things. Model monitoring involves identifying the latent features that drive the model, understanding their distributional shifts in production, and recognizing how latent features that are activated differently may be precursors of a model performance issue or an ethical AI issue.
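
The blog doesn’t prescribe a metric for these distributional shifts. One common industry choice, shown in this hedged Python sketch, is the population stability index (PSI), which compares a feature’s (or latent feature’s) production distribution against its development baseline; the quantile bins, synthetic data, and 0.25 alert threshold are conventional but illustrative assumptions.

```python
import numpy as np

def psi(baseline: np.ndarray, production: np.ndarray, n_bins: int = 10) -> float:
    """PSI = sum((prod% - base%) * ln(prod% / base%)) over quantile bins
    defined on the development (baseline) sample."""
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range production values
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    base_pct = np.clip(base_pct, 1e-6, None)   # avoid division by zero
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

rng = np.random.default_rng(0)
dev = rng.normal(0.0, 1.0, 10_000)    # a feature at model development time
prod = rng.normal(0.4, 1.2, 10_000)   # the same feature in production, drifted
score = psi(dev, prod)
print(f"PSI = {score:.3f}")
if score > 0.25:                      # a common rule-of-thumb alert threshold
    print("Alert: feature distribution has shifted; investigate the model.")
```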

With model performance monitoring, you find out whether the model is good or bad long after the fact (the decision). Model monitoring lets you understand when features are shifting and require attention to remediate them, or when you need to drop down to a Humble AI level instead, using an alternate model or decision process. It’s a proactive, more ethical approach to monitoring the tolerances under which the model should be trusted in use.
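
Here is an illustrative sketch, under my own assumptions rather than FICO’s implementation, of what “dropping down to a Humble AI level” could look like: when the monitoring layer (such as the PSI check above) flags a shift in a feature the primary model depends on, scoring routes to a simpler, more conservative fallback model or rule.

```python
from typing import Callable, Dict, Set

Record = Dict[str, float]

def route_decision(record: Record,
                   primary: Callable[[Record], float],
                   fallback: Callable[[Record], float],
                   primary_inputs: Set[str],
                   shifted_features: Set[str]) -> float:
    """Use the primary model only while its inputs behave as expected."""
    if primary_inputs & shifted_features:
        return fallback(record)  # Humble AI: the alternate decision process
    return primary(record)

# Hypothetical models for the sketch.
primary_model = lambda r: 0.9 * r["utilization"] + 0.05 * r["tenure"]
fallback_rule = lambda r: 0.6 * r["utilization"]  # simpler, shift-robust rule

applicant = {"utilization": 0.3, "tenure": 4.0}
# Monitoring has flagged "tenure" as shifted, so the fallback rule is used.
print(route_decision(applicant, primary_model, fallback_rule,
                     primary_inputs={"utilization", "tenure"},
                     shifted_features={"tenure"}))
```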

In other words, you can’t “set it and forget it” with this type of model automation. Model monitoring gives advance warning on issues that may affect model performance or ethics, and should be continuous. It’s like sensors in a car; they’re always checking tires, oil, engine and a multitude of other mechanicals to warn the driver if something’s about to become a problem and prevent far worse outcomes.

Prediction: AI Transparency and Ethics by Design Is the Only Way Forward

Where we’re at: In recent blogs I’ve talked quite a bit about the guidance given in the IEEE 7000 standard, which essentially says, “build models that you can faithfully talk to and provide explanations around, with sufficient transparency.” This notion is gaining serious traction as the regulation of AI becomes increasingly assertive.

For example, The Brookings Institution says that the European Union’s AI Act (AIA) “aspires to establish the first comprehensive regulatory scheme for artificial intelligence, but its impact will not stop at the EU’s borders. In fact, some EU policymakers believe it is a critical goal of the AIA to set a worldwide standard, so much so that some refer to a race to regulate AI.”

In my mind, high-risk AI is not yet widely deployed, and the EU’s AI regulation suggests over-reaction. The intent is good, but we must be clear that not all AI carries an equal risk of being high-risk; interpretable ML in particular has the ability to specify what drives outcomes. I think it’s not unreasonable for every data scientist who is building AI to be asked, “What’s in your AI?” and to give a clear and certain answer. It’s like going out to eat at a restaurant; if you have a health condition, it’s perfectly reasonable to ask what’s in a dish you might consume.

Hopefully, regulatory over-reaction to AI will moderate in the balance of 2022. My approach is to find the middle path, by carefully choosing an algorithm that has been crafted ethically by design. And to always ask questions because, in the words of statistician hero George Box, “Essentially all models are wrong, but some are useful.”

Follow me on LinkedIn and on Twitter @ScottZoldi, and check out my past AI predictions blogs for 2021, 2020, 2019, 2018 and 2017. Thanks!
