Kasparov on AI: Why Is Explainable AI Important? (Video)

Why Is Explainable AI Important? Do you want decisions about you made by algorithms that you cannot understand, and that perhaps even their creators don’t understand? While many are not worried about the rise of Skynet (the dangerous AI system in the Terminator films), some are concerned about the rise of machine learning algorithms that are not used responsibly.

Explainability is paramount to the responsible use of AI and machine learning, and fortunately, algorithms for explaining machine learning go back more than 30 years. Now is the time to implement them broadly, before we see the spread of unregulated algorithms.

Explainable AI is one of the most urgent areas for research right now, including at FICO. In fact, we have just released a new version of FICO Analytic Workbench with an xAI Toolkit.

At FICO World 2018, I sat down with Chess Grandmaster Garry Kasparov to discuss the hot topics in the world of AI, and asked: Why is explainable AI so important?

Watch the video below, and for more of my conversation with Garry Kasparov on AI go to, where you can see other excerpts as well as our full discussion.

I write quite often about the topic of explainable AI. Read my own thoughts on AI and related analytic topics on Twitter @ScottZoldi.
