“Crossing the chasm” is every technologist’s most fervent wish—that their invention leaps the gulf between early adoption and mainstream embrace. Last year I predicted that artificial intelligence (AI) would grow up, with major infrastructure developing around Responsible AI, AI advocacy and AI governance. These and other milestones have come to pass, pushing enterprise-scale AI solidly into the “early adoption” phase, as defined by “Chasm” author Geoffrey Moore.
In 2021, I believe AI will cross the chasm and become a reliable, safe, mainstream business technology, though perhaps not in the way, or for the reasons, you might expect.
COVID Is the Mother of AI Maturity
Like everything else in 2020, AI was shaped by COVID-19: demand for AI, data, and digital tools soared as the pandemic put an unexpected, protracted strain on many enterprises. This theme was a central finding in “Building AI-Driven Enterprises in a Disrupted Environment,” a report based on research conducted by Corinium and sponsored by FICO. More than 100 C-level analytic and data executives were interviewed for the report, to understand how organizations are developing and deploying AI capabilities. Here are a few key findings that set the stage for my 2021 predictions:
- Uncertainties caused by the pandemic have forced many organizations to adopt a more committed, disciplined approach to becoming an AI-driven enterprise, with 57% of the chief data and analytics officers interviewed saying that COVID-19 has increased demand for AI, digital products and tools.
- However, more than 93% of respondents said that ethical considerations represent a barrier to AI adoption within their organizations. As the report points out, "ensuring AI is used responsibly and ethically in business context is a huge, but critical task." This is despite half of the survey respondents reporting that they have strong model governance and management rules in place to support ethical AI usage.
- So, more work is needed to enforce and audit ethical AI usage, as 67% of AI leaders don't monitor their models to ensure continued accuracy and ethical treatment.
With urgent demand for AI ignited by COVID, and increased worldwide focus on responsible and ethical use of AI, here’s what I predict will happen in 2021 to push AI “across the chasm” to become a mainstream, trusted enterprise technology.
Prediction #1: AI Will Be Governed at an Algorithmic Level
Throughout 2020, most of the guidance I’ve read about managing AI development to a corporate standard has been based on amorphous recommendations, such as having an AI Ethics Committee and a general agreement to “do no evil.” This platitudinous approach to ethics doesn’t translate well in the data science labs where AI algorithms are actually developed; witness the major AI gaffes that somehow saw the light of day in 2020, despite best intentions.
In 2021, organizations will stop their handwringing over AI and get down to brass tacks. Under pressure for production-quality algorithms, they will take a lifecycle approach to building Responsible AI models that can be audited, monitored and governed. I predict AI will be governed through a blockchain model development framework, to ensure that model development standards for explainability and fairness are applied at the algorithmic level, consistently, across the entire organization — without margin for data scientists’ artistry or other interpretation.
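To make the idea concrete, here is a minimal sketch (my own illustration, not any actual product) of how a blockchain-style ledger could chain model-development decisions together so they can be audited but not silently rewritten. The class and field names are hypothetical:

```python
import hashlib
import json

def entry_hash(entry: dict, prev_hash: str) -> str:
    """Hash a development record together with the previous entry's hash,
    chaining records so no step can be altered retroactively."""
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class ModelDevLedger:
    """Append-only ledger of model-development decisions (illustrative)."""

    def __init__(self):
        self.chain = []  # list of (entry, hash) pairs

    def record(self, entry: dict) -> str:
        prev = self.chain[-1][1] if self.chain else "genesis"
        digest = entry_hash(entry, prev)
        self.chain.append((entry, digest))
        return digest

    def verify(self) -> bool:
        """Recompute every hash; any tampered entry breaks the chain."""
        prev = "genesis"
        for entry, digest in self.chain:
            if entry_hash(entry, prev) != digest:
                return False
            prev = digest
        return True

# Hypothetical governance entries an organization might record:
ledger = ModelDevLedger()
ledger.record({"step": "feature-selection", "dropped": ["zip_code"],
               "reason": "potential proxy for a protected class"})
ledger.record({"step": "fairness-test", "metric": "demographic parity",
               "passed": True})
```

Because each record's hash folds in its predecessor's, an auditor can re-verify the whole development history; altering any earlier decision invalidates every hash that follows it.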
Prediction #2: AI as a Service Will Take Off
Over the past decade we’ve seen a trend, in areas like networking and storage, to decouple software from its proprietary hardware and sell the software separately. In 2021, I predict that a similar decoupling will begin in some AI software applications; algorithms and models will be pulled out and sold separately as analytic microservices. These microservices are the beginning of a larger industry, AI as a Service (AIaaS).
By definition, analytic microservices offer succinct functionality, with well-defined application programming interfaces (APIs) for easy integration into enterprise applications and an execution engine, along with:
- A strongly enforced API
- Data quality monitoring
- Governance documentation
- An explainability component
Microservices address several important issues, by:
- Enabling flexible solution design, particularly since the AI inside each individual microservice can be quite complex, requiring its own monitoring, maintenance and retraining.
- Speeding Explainable AI capabilities, and consequently ethical use, into production.
- Abstracting users away from the complexity of the data science at the heart of decisioning, while allowing experts to develop AI applications that embody “security by design” and controls around responsible use.
- Enabling the rapid ingestion, testing and utilization of domain-specific machine learning microservices, allowing the necessary fast response to our world’s rapid digitization.
As for the last point, domain-specific machine learning microservices provide instant access to machine learning technology that is pre-built for a specific problem. Rather than building your own machine learning solution with open source tools, or settling for a generic, loosely fitting application, domain-specific microservices will speed trusted, hardened decisioning technology into production.
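As an illustration, a hypothetical fraud-scoring microservice handler might return governance and explainability metadata alongside every score. The scoring rules, thresholds and field names below are invented for the sketch:

```python
def score_transaction(txn: dict) -> dict:
    """Hypothetical analytic-microservice handler: every response carries
    the score plus explainability and governance metadata."""
    amount = txn.get("amount", 0)
    # Toy scoring logic, for illustration only.
    score = min(999, int(amount * 0.5) + (200 if txn.get("foreign") else 0))
    reasons = []
    if amount > 1000:
        reasons.append("HIGH_AMOUNT")      # explainability component
    if txn.get("foreign"):
        reasons.append("FOREIGN_MERCHANT")
    return {
        "score": score,
        "reason_codes": reasons,
        "model_version": "fraud-v1.2",     # governance documentation
        "api_version": "2021-01",          # strongly enforced API contract
    }

response = score_transaction({"amount": 1500, "foreign": True})
```

The point of the design is that explainability and governance are not bolted on afterward: they travel in the same response envelope as the score, so every consumer of the microservice receives them by default.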
Prediction #3: Consumers Will Actively Manage Their Data
With control of their own data imminent, as part of the Open Banking movement, in 2021 we will see consumers increasingly provide consent for specific, prescribed and constrained uses of their transaction data. In light of the rapid changes we have seen in consumer spending during the pandemic, the contribution of additional data (such as on-time bill payment and of inflow/outflow of funds) will become ever-more important in fraud detection, risk management and marketing. AI and machine learning technologies will be critical in delivering personalized experiences, which will require rapid, real-time indications of where the customer is on their financial journey.
Thus, in a post-pandemic environment, the recency and frequency of transaction data will become much more important than ever before. In 2021, I predict we’ll see a higher standard for customer knowledge, consent and participation in the decision-making process, with respect to goods and services.
Prediction #4: “Security by Design” Will Help Fend Off AI Attacks
I first blogged about adversarial AI in 2017 — and in today’s challenging global economy, more criminals are looking to make a buck by building adversarial AI systems designed to manipulate legitimate algorithms.
For example, fraudsters know that credit card fraud models are very effective at detecting high-dollar fraud. But, they might wonder, what is the most effective way to steal $500? An adversarial AI system would attack a fraud model with many attempted transactions, in a wide range of dollar amounts and frequencies, to determine the order and distribution of transactions least likely to trigger a fraud alert. Armed with this information, a fraudster can automate their attacks to extract far greater illicit gains from stolen cards.
To combat adversarial AI, data scientists are adopting “security by design” — “A proactive, pragmatic and strategic approach that considers risk from the very onset, and not as an afterthought” — to build models that are constantly “self-introspecting” to determine whether they are under attack. While I have a patent application filed for detecting adversarial AI in scoring transactions in fraud detection, I predict that in 2021, AI self-introspection against adversarial AI will be required to get AI models into production across domains outside of financial crime.
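One simple form of such self-introspection (a sketch I'm inventing here, not the patented approach) is to watch the model's own input stream for the signature of boundary probing: an unusually dense sweep of distinct dollar amounts on one card within a short window of activity:

```python
from collections import defaultdict, deque

class ProbeDetector:
    """Illustrative self-introspection monitor: flags a card when a short
    window of activity contains many distinct amounts, the signature of an
    adversary sweeping amounts to map the model's decision boundary."""

    def __init__(self, window: int = 50, distinct_threshold: int = 20):
        self.distinct_threshold = distinct_threshold
        # Per-card rolling window of recent transaction amounts.
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe(self, card_id: str, amount: float) -> bool:
        """Record one transaction; return True if probing is suspected."""
        recent = self.history[card_id]
        recent.append(round(amount, 2))
        return len(set(recent)) >= self.distinct_threshold

detector = ProbeDetector()
probe_flagged = False
for i in range(30):  # an attacker sweeping $400 to $545 in $5 steps
    probe_flagged = detector.observe("card-123", 400 + 5.0 * i) or probe_flagged
```

A production system would of course look at far richer signals than amount diversity, but the principle is the same: the model's guardian watches the query stream itself, not just individual transactions.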
Prediction #5: Humble AI Will Let Us Know When to Use a Model, or Not
Using AI responsibly includes knowing when a model is not effective, or could even be detrimental. As organizations move toward specialized model execution (my Prediction #2) within increasingly regulated environments, they will choose machine learning model architectures specifically for each problem area. They will want AI technology that is “explainable first, predictive second.” This capability is key to understanding which latent features are driving model outcomes, and when the model should not be used to make decisions, whether for certain groups of people or at all, because in production those latent features may shift to combinations of values on which the model wasn’t trained, or may become biased.
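A bare-bones version of this "know when to abstain" behavior, assuming a simple per-feature z-score test against the training distribution (my illustration, not a FICO implementation), might look like:

```python
import statistics

class HumbleModel:
    """Illustrative 'humble' wrapper: abstains whenever an input lies far
    outside the feature distribution the model saw in training."""

    def __init__(self, model_fn, training_rows, z_limit: float = 3.0):
        self.model_fn = model_fn
        self.z_limit = z_limit
        columns = list(zip(*training_rows))
        self.means = [statistics.mean(col) for col in columns]
        # Guard against zero spread with a fallback stdev of 1.0.
        self.stdevs = [statistics.stdev(col) or 1.0 for col in columns]

    def predict(self, x):
        for value, mean, stdev in zip(x, self.means, self.stdevs):
            if abs(value - mean) / stdev > self.z_limit:
                return None  # abstain: route to a fallback rule or a human
        return self.model_fn(x)

# Toy training data and a stand-in model (sum of features):
rows = [(1.0, 2.0), (1.1, 2.1), (0.9, 1.9), (1.0, 2.1)]
humble = HumbleModel(lambda x: sum(x), rows)
```

Real Humble AI goes well beyond a z-score check on raw inputs, monitoring latent features and subpopulation behavior, but the essential contract is the one shown: the model returns a decision only when the input resembles what it was trained on, and otherwise defers.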
In 2021, I predict that Humble AI will be an essential moderating force between the demand for production-ready AI, and regulatory pressure to prove fair, safe and unbiased decisioning practices.
So that’s it! While COVID-19 ramped up demand for AI, and gave our world challenges we’d never dreamed of, I hope that 2021 brings you happiness, health and chasm-crossing mainstream AI. Follow me on Twitter @ScottZoldi and, if you’re so inclined, check out my past AI predictions blogs for 2020, 2019, 2018 and 2017. Thanks!