Artificial Intelligence Can Beat Humans — And Criminals

There has been a technical and moral debate raging over whether the evolution of artificial intelligence (AI) should be regulated or controlled. The fear is that machine learning could reach the "singularity": the point at which smart machines can build other smart machines and come to present a threat to humanity.

Many films, most famously the Terminator franchise, have explored the dark world of machines rising up against the human race. But today artificial intelligence gives pause for thought not just to filmgoers but also to some of the world's most eminent technologists and sociologists.

Even those sounding the warning — such as Stephen Hawking — admit that so far AI has been a boon.

When Siri was first introduced as a "virtual assistant" on the iPhone, many saw it as a gimmick, but the idea took hold, and Google and Microsoft now have equivalents in active competitive deployment and regular use. I even use Siri to make phone calls via Bluetooth from my car: it is certainly far safer than trying to navigate the dashboard menu.

In my own business field, arguably the most productive and effective use of artificial intelligence in a commercial context is in the FICO Falcon Fraud Platform, which protects two-thirds of the world's credit cards. Falcon evaluates billions of payment records and makes sound, risk-evaluated judgements about the legitimacy of payment activity to an astounding level of accuracy, in a matter of milliseconds. No human, nor any straightforward "dumb" automation, could hope to match this volume, speed and accuracy.

Restricting or removing this technology would only serve to make criminals' lives easier while eroding consumer convenience and safety.

In a recent presentation, my boss remarked on the sensation caused when IBM's Deep Blue beat Grandmaster Garry Kasparov at chess in 1997. This marked an important evolutionary step in the contest between computer and human. When human skills and attributes, even at the highest level of capability, can be matched and exceeded by a machine, it calls into question what AI cannot eventually do better than humans. No small feat!

What is clear to me is that we need to operate AI within boundaries. We should embrace technology that can, and does, outstrip human capabilities, but ensure that there are pre-defined limits on how far we rely on it.

Falcon helps limit the number of occasions where a manual check with the customer is needed, and optimises the chances of finding confirmed fraud within a far smaller population. But even Falcon cannot definitively confirm whether a transaction is fraud, only whether it represents a higher likelihood of fraud. In a third-party fraud context, the only person who can truly confirm whether something is fraud or not is the consumer themselves (or, occasionally, the fraudster, if they are silly enough to admit it!).
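
To make the triage idea concrete, here is a minimal Python sketch of how a fraud score can be used to route only a small, high-likelihood population for customer confirmation while the rest flows through untouched. This is purely illustrative and is not Falcon's actual logic; the field names, scores and threshold are hypothetical.

```python
# Minimal sketch of score-based triage; not FICO Falcon's actual implementation.
# All field names, scores and the threshold below are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Transaction:
    transaction_id: str
    amount: float
    fraud_score: float  # model output in [0, 1]; higher means riskier

# Illustrative cut-off; in practice this is tuned to balance fraud losses
# against the friction of contacting genuine customers.
REVIEW_THRESHOLD = 0.90

def triage(transactions):
    """Split transactions into those needing a customer check and those approved automatically."""
    needs_review, auto_approved = [], []
    for txn in transactions:
        if txn.fraud_score >= REVIEW_THRESHOLD:
            needs_review.append(txn)    # higher likelihood of fraud: confirm with the customer
        else:
            auto_approved.append(txn)   # lower likelihood: no manual intervention needed
    return needs_review, auto_approved

if __name__ == "__main__":
    batch = [
        Transaction("T1", 42.50, 0.03),
        Transaction("T2", 980.00, 0.97),
        Transaction("T3", 12.99, 0.55),
    ]
    review, approved = triage(batch)
    print(f"{len(review)} of {len(batch)} transactions routed for customer confirmation")
```

The point of the sketch is the division of labour: the model narrows billions of payments down to a small, high-likelihood population, and the customer provides the final confirmation that no score can.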

Just like the replicants in Blade Runner, criminals may be virtually indistinguishable from the genuine article, but a combination of technology and human interaction can spot the difference. Neither technology nor humans can do this effectively and efficiently alone.
