The Future of AI: Less "A" and More "I"

To commemorate the silver jubilee of FICO’s use of artificial intelligence and machine learning, we asked FICO employees a question: What does the future of AI look like? The post below is one of the thought-provoking responses, from Andrew Fernandes, a lead scientist in FICO’s Product and Technology Organization, working in San Diego.

Why do we call artificial intelligence "artificial"? Why don't we just call it intelligence? Because even though AI often does exactly what we want extremely well, it sometimes gets it horribly wrong.

In the future, AI won't get it wrong. Even when we think it's wrong, we'll end up discovering that we are the ones who are wrong, not it. It will be that Intelligent. In other words, the future of AI is less "A" and more "I".

For a time, Google harnessed the awesome power of its deep learning AI to conclude that I was a teenage girl because once, while preparing for a talk on "Best Practices for Software Development", I spent time looking up background on the book (and movie) 50 Shades of Grey to use as a humorous metaphor.

In reality, I'm a middle-aged man.

It was odd, too, because Google knows everything about me. Google's services have run both the personal and professional lives of everyone in my household for over a decade.

Similarly, after fighting Apple for years about who owns my photo library and where that library might live, I finally gave up and shot everything up into the cloud. The first thing Apple did was produce a stunning slideshow of the wildflowers of Anza-Borrego from a recent family trip, far better than I could have done myself. Clearly Apple's AI knows infinitely more about photos and slideshows than I do.

So in my experience, today's AI either gets it really right... or horribly wrong.

I think we've all had these kinds of experiences in our digital lives:

"Why, yes, YouTube, I do want to see that cat video!"

"No, Siri, I did not want you to auto-correct that to 'truck'."

"Wow, Google, I am looking for a new car! But not one of those..."

I posit that current AI is wallowing in the uncanny valley, the place where human-like replicas appear almost, but not exactly, like real human beings, thereby eliciting feelings of eeriness and revulsion.

Current AI is so good that it's tremendously jarring when it misses the mark completely.

Do We Listen to Crowds or Outliers?

Part of the problem is that there really isn't agreement on a good definition of intelligence.

Take, for example, the well-known wisdom of crowds. The unfortunate fact is that, while great minds often do think alike, fools seldom differ. So which does the crowd represent, the great minds or the fools?

In statistical parlance, we are asking which is the better thing to do: follow the crowd and infer that most people will do or think the same, or look at the outliers, the misfits, the rarities to see where true wisdom lies? What's more important, the mean or the variance?

A fascinating story from APM's Marketplace shows the difficulty of the task. In "False information on the internet is hiding the truth about onions", author Tom Scocca tried to publish one factual, evidence-based article on how long it actually takes to caramelize onions. He was on a personal mission to correct the public misapprehension, repeated time after time on innumerable websites, that onions can be caramelized quickly.

What actually happened is that the Google AI became a little bit confused. It knew page after page after page on the Internet claimed that onions could be caramelized quickly (the "wisdom of the crowd"). It also knew that Tom's single web page discussing the caramelization of onions was inordinately popular (the "outlier"). Lastly, the Googlebot knew that Tom mentioned somewhere in his article something about "caramelize" and "quickly", missing completely that Tom was saying what not to do.

The result? A top-ranked "search results" page claiming that "Tom said you can caramelize onions quickly!" — the completely wrong conclusion for all the right reasons, falsely attributed to the one person trying desperately to change everybody's mind.

Even if AI just follows the crowd, it can be easily stymied by what is termed the Flaw of Averages. Take 4,063 fighter pilots. Design a cockpit that fits the body measurements of the "average" pilot. Discover that you've designed what amounts to a near-death-trap, because not a single pilot is "average" in all of their body measurements. Short arms? Long torso. Long legs? Short neck. Skinny waist? Broad shoulders. Averages, or really any measure of "central tendency", are probably not the best way to make predictions!
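The cockpit result is easy to reproduce in a quick simulation. The sketch below is hypothetical, not the original Air Force data: it draws independent standard-normal "measurements" for 4,063 pilots across 10 body dimensions and counts how many land in the middle 30% of the range on every dimension at once. With 10 independent dimensions, the chance is roughly 0.3^10, so the count is almost always zero.

```python
import random

random.seed(0)

N_PILOTS = 4063      # size of the cockpit study mentioned above
N_DIMS = 10          # hypothetical body measurements per pilot
Z_CUTOFF = 0.385     # middle ~30% of a standard normal is roughly |z| < 0.385

# Hypothetical data: each pilot is a vector of independent standard-normal measurements.
pilots = [[random.gauss(0, 1) for _ in range(N_DIMS)] for _ in range(N_PILOTS)]

# A pilot is "average" only if EVERY measurement falls in the middle band.
average_in_all = sum(1 for p in pilots if all(abs(z) < Z_CUTOFF for z in p))

print(f"Pilots 'average' in all {N_DIMS} dimensions: {average_in_all} of {N_PILOTS}")
```

The expected count is about 4063 × 0.3^10 ≈ 0.02, which is why "design for the average pilot" fits almost nobody.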

So does intelligence come from sifting out the hidden gems of rare events? Claude Shannon thought so, and founded modern communication theory on that premise: the rarer the event, the more informative it is. Of course, rare events can also be rather meaningless, such as winning the lottery. "But you can't win if you don't play!" And that doesn't even bring survivorship bias into it...
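Shannon's premise can be stated in one line: an event with probability p carries −log₂ p bits of information, so the rarer the event, the more informative it is. A minimal sketch:

```python
import math

def self_information(p):
    """Shannon self-information in bits: rarer events carry more information."""
    return -math.log2(p)

# A common, crowd-consensus event carries little information...
print(self_information(0.5))       # 1.0 bit
# ...while a rare, outlier event carries far more.
print(self_information(1 / 1024))  # 10.0 bits
```

Note that rarity alone says nothing about usefulness: a lottery win is maximally "informative" in this sense and still tells you nothing worth acting on.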

It’s a good thing AI will be less "artificial" and more about "intelligence" in the future, because it will know the right answer to some of these challenges — even if we ourselves do not.

See other FICO posts on artificial intelligence.
