The Future of AI: It’s Good, But Should We Trust It?


To commemorate the silver jubilee of FICO’s use of artificial intelligence and machine learning, we asked FICO employees a question: What does the future of AI look like? The post below is one of the thought-provoking responses, from Imran Ali, a senior engineer in our Tools Department, working in Bangalore.

“I propose to consider the question, ‘Can machines think?’”

Artificial intelligence has come a long way since A. M. Turing opened his seminal 1950 paper, Computing Machinery and Intelligence, with that question. AI has become popular because it has evolved from a subject of theoretical interest into something that creates real value in business.

To build on this success, AI must answer a question every business faces. Consumers will ask, “You are good at what you do, but should I trust you?”

Can I trust you with my money to trade with? Can I trust you with my knee surgery? Can I trust you with my car? One bad car accident, one failed knee operation or one big loss on a bad stock trade can create a chasm of suspicion and distrust.

Will AI be able to recover from such failures over time and regain consumers’ trust? Will human beings be as forgiving of artificial intelligence as they are of fellow humans?

The challenge corporations will face in the next few years will be to convince consumers that they can trust AI even if it fails at times. Those corporations that most convincingly accomplish this will be more successful in using AI in their businesses.

Why People Need Trust

Trust is a complex emotion, and it cannot be evoked just by presenting data or showcasing complex technology. How, then, can we build trust in AI with the average consumer?

“It was an algorithmic mistake” will no longer be a good enough explanation for a bad trade on the stock market. Corporations will have to build a version of AI that can explain its failures to people in terms they can understand. AI systems have to be transparent not only when they fail but also when they succeed, because these systems act on behalf of people: They drive on their behalf, they trade stocks on their behalf, they run their businesses on their behalf.

It is very important that people know what ethics, values and principles a system represents if that system is going to work on their behalf. No matter how expert a person or organization is in any given field, consumers will trust them only when they believe they share the same values and principles.

Data-driven AI systems may be good at their jobs, but today they broadly fail to clearly enunciate the principles, values and ethics they follow. When this changes, consumers will be able to trust such a system, knowing it will never do what they themselves would never do in a given situation.

How to Build Trust

How do we embed values and principles in AI so that consumers may start trusting it? AI will have to undergo a paradigm shift, similar to what it underwent in the last decade to change from logic-driven systems to data-driven ones.

AI’s recent, astounding success came when the engineering community shifted its focus from developing logic-driven systems to applying simple statistical methods to vast amounts of data. Simple statistical techniques, given enough data, can do things that were very difficult to achieve with hand-written logic.

While such methods have met with great commercial success, they lack a clear internal representation of their underlying logic, or their logic cannot be easily explained. Corporations will have to develop AI systems that leverage the power of data while also exposing logic that can be explained to ordinary consumers. Data-driven AI technology should evolve within the parameters of simple if-then-else style logical rules, which act as boundaries the data-driven learnings cannot cross, no matter how hard the data may push in that direction. Because the learnings stay within the boundaries of these guiding principles, it will also be difficult for malicious or dubious data to drive AI systems in an ill-intended direction.
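The idea of rule boundaries around a data-driven model can be sketched in a few lines of code. This is a minimal illustration, not FICO’s implementation: the application fields, thresholds and scoring function below are all hypothetical stand-ins, chosen only to show how explicit if-then-else guardrails can constrain and explain a learned score.

```python
def model_score(application):
    """Stand-in for a data-driven model; returns a score in [0, 1].

    In practice this would be a trained statistical model whose internal
    logic is hard to explain on its own.
    """
    return 0.92 if application.get("income", 0) > 50_000 else 0.40


def decide(application):
    """Apply hard rule boundaries before deferring to the learned score."""
    # Guardrail 1: never approve an applicant below the legal age,
    # no matter how confident the model is.
    if application.get("age", 0) < 18:
        return "decline", "applicant under legal age"
    # Guardrail 2: never exceed a fixed exposure limit.
    if application.get("amount", 0) > 100_000:
        return "decline", "requested amount exceeds exposure limit"
    # Inside the boundaries, defer to the data-driven score,
    # and report the reason in terms a consumer can understand.
    score = model_score(application)
    if score >= 0.7:
        return "approve", f"model score {score:.2f} meets threshold"
    return "decline", f"model score {score:.2f} below threshold"


# The guardrail overrides a high model score for an underage applicant:
decision, reason = decide({"age": 17, "income": 80_000, "amount": 5_000})
```

Each decision comes back with a plain-language reason, which is the transparency argument above in miniature: the rules define what the system will never do, and the model operates only inside them.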

As Marie Curie said, “Nothing in life is to be feared, it is only to be understood.” Consumers will want to understand AI in order to dispel their fears about it. Given how successfully AI has broken out of the laboratory and become pervasive in business and in our daily lives, I expect it will also shed its reputation as an esoteric science and become more understandable to people. In the end, I believe consumers will learn to trust AI and accept it as a friend, rather than just an inhuman algorithm.

See other FICO posts on artificial intelligence.
