Accelerated AI adoption may turn out to be one of the pandemic's few beneficial side effects. This "golden age of AI" is one of the topics explored by the German business magazine GI Geldinstitute in its interview with Will Lansing, FICO's CEO. For the benefit of English speakers, I am sharing a translation of an excerpt, in which Will discusses the rise of responsible AI and other subjects.
To ensure that the AI in use is not a scary black box, you rely on the so-called "3 Es" of AI: AI must be ethical, explainable and efficient. Can you elaborate on that a bit?
The “3 Es of AI” is how our chief analytics officer, Dr. Scott Zoldi, describes the criteria for responsible AI, which is now top of mind for analytics-driven businesses. Explainable AI ensures that we know how the model operates, and we can provide reasons and explanations as to why a decision was made at an individual level. Ethical AI allows us to understand if a model is being biased toward any protected group of individuals. Finally, AI models must be efficient in enabling actionable monitoring. This monitoring ensures that biases don’t creep in over time in the production environment, and can even indicate whether, over time, the model should be modified or abandoned. This last principle is sometimes called Humble AI — it means using AI when we’re sure it works, and using something else when we’re not sure.
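The "efficient" monitoring Will describes can be made concrete with a standard drift metric from credit scoring, the Population Stability Index (PSI), which compares today's score distribution against the one the model was trained on. The sketch below is a minimal illustration of the Humble AI idea, not FICO's implementation; the function names and the 0.25 threshold (a common rule of thumb) are assumptions for the example.

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned score distributions.

    `expected` and `actual` are lists of bin proportions that each sum to 1.
    Rule of thumb: < 0.1 stable, 0.1-0.25 drifting, > 0.25 significant shift.
    """
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against log(0)
        total += (a - e) * math.log(a / e)
    return total

def choose_strategy(psi_value, threshold=0.25):
    """Humble AI in one line: use the model only while the production
    population still resembles the one it was trained on; otherwise
    fall back to a simpler, better-understood decision strategy."""
    return "use_model" if psi_value < threshold else "fallback_rules"
```

For example, a baseline of four equal score bins `[0.25, 0.25, 0.25, 0.25]` against a pandemic-shifted population `[0.05, 0.15, 0.30, 0.50]` yields a PSI well above 0.25, triggering the fallback.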
AI and Big Data have spawned two new job titles: chief analytics officer (CAO) and chief data officer (CDO). What differentiates them, and where do they fit in the enterprise?
In general, CAOs are experts in analytics and AI, and in how to leverage these technologies to solve a business problem. CDOs are focused on data assets – provenance, usage rights, quality and consistency, recency, and so on. CDOs often report to the chief information officer, in the IT organization. CAOs often report to the chief technology officer, typically driving AI integration with software assets, many times with a dotted line to the CEO.
How much awareness of the need for these two employees is already present in financial institutions?
In my experience, most organizations, including financial institutions, do not have both of these roles. There is more awareness of the CDO role because financial firms have been collecting customer data for many years and their use of that data is highly regulated. However, CAO positions are growing due to increased awareness of the importance of analytics governance and frameworks. Bad models can harm consumers even when the data they use is in order and properly consented, and they can drive bad business decisions and significant reputational losses.
On the FICO Blog, you quote the Bloomberg news service as saying that the coronavirus could usher in a golden age for AI. Why this connection?
As the global pandemic unfolded, customers embraced all things digital, and companies that didn’t have a good digital strategy rushed in to get things in order. The compressed timeframe for digital adoption raised awareness of the need for AI technology maturity. Additionally, in the tumultuous pandemic business environments, companies have become highly reliant on AI to help them pivot and adjust to quick changes. Together, these conditions and requirements will spur more AI adoption and increased AI reliance across industries.
Discussion of AI manipulation mostly deals with tampering with AI training, not with minimal input changes - e.g., changing a single pixel in an image so that the AI classifies a truck as a giraffe. What recommendations do you have here?
This is why explainable AI is critical — it can identify any model sensitivity that causes a disproportionate impact, for example, changes to pixel 47 causing the entire image to be incorrectly classified. Additionally, companies ideally should have programs for detecting adversarial AI — that is, when a hostile, external AI system is attacking the classification model in question, for example to make fraudulent transactions appear genuine. AI algorithms can be deployed alongside the AI model to detect manipulation and attack, and inform those who depend on those models.
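The "pixel 47" sensitivity Will mentions can be probed directly: perturb one input feature at a time and see how far the model's score moves. This is a rough, model-agnostic sketch, not FICO's method; `model` is a stand-in for any callable scoring function, and the function and threshold names are hypothetical.

```python
def single_feature_sensitivity(model, x, delta=1.0):
    """Perturb each input feature of `x` by `delta` in turn, holding the
    rest fixed, and record how far the model's score moves. A feature
    whose perturbation shifts the score disproportionately (the
    "pixel 47" case) is a candidate weak point for adversarial inputs."""
    base = model(x)
    return [abs(model(x[:i] + [x[i] + delta] + x[i + 1:]) - base)
            for i in range(len(x))]

def flag_fragile_features(sensitivities, threshold):
    """Indices of features whose single-step influence exceeds `threshold`."""
    return [i for i, s in enumerate(sensitivities) if s > threshold]
```

Run against a toy linear score whose third weight is disproportionately large, the check flags index 2 as the fragile input — exactly the kind of disproportionate impact an explainability review should surface before deployment.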
The year is still young - what predictions do you have on the subject of AI for 2021?
I have three predictions: first, I believe that AI will be governed at an algorithmic level, rather than by a vague “we will do no evil” promise. Because companies are under pressure to deliver production-quality algorithms, they will take a lifecycle approach to building Explainable, Ethical and Efficient AI models that can be audited, monitored and governed. Second, I predict the rise of AI microservices; in some AI software applications, algorithms and models will be pulled out and sold separately. These microservices are the beginning of a larger industry, AI as a Service (AIaaS). And third, in 2021 I believe consumers will increasingly provide consent for specific, prescribed and constrained uses of their data, which will become ever more important in fraud detection, risk management and marketing.