If you haven’t watched the TV series Breaking Bad but it’s on your holiday (or birthday) gift list, you may want to avoid reading the next two paragraphs.
There’s an episode in that seminal cable series where leading character Walter White, who is recovering from lung cancer treatment, is getting a PET scan at the hospital. A family member, who happens to be a medical technician, mentions that Walter will have to wait a week for the results – and comments that he could discern the results himself with a quick glance. After his scan, Walter follows her advice, glimpsing a reflection of the resulting image, which shows a very scary-looking (read: deadly) white mass.
With Walter believing his days are now seriously numbered, he adopts urgent measures to ensure his family is taken care of after he dies – leading to near-disastrous circumstances in the process. But everything is turned upside down at the end of the episode when the white mass turns out to be, well, less critical than first envisioned. And I’ll cut the spoiler short right there.
That, folks, is what we call a “false positive.” It’s one of those terms that used to be known primarily in the medical industry – where you might, for example, get treated for something you didn’t have. In today’s business environment, false positives are most often associated with potentially fraudulent transactions, where a customer might be declined for a perfectly valid transaction due to atypical transaction location, high cost of purchase, or other suspicious factors.
So imagine the razor-thin line between protecting your business (and consumers) from fraud – and creating that magical moment where your customer decides after yet another false positive, “I just got declined for no reason – I quit this credit card.” In the recent past, many organizations were willing to live with these false positives if the tradeoff meant better protection. But when customers start to exit – and then tell their friends and other listening ears on social media – the potential lost revenue means too many false positives can’t be a good thing.
As one of the largest regional banks in the US, Cleveland-based KeyBank faced the same conundrum – “How do we reduce the overall volume of alerts to a manageable number without missing any actual cases of fraud?” Specifically, they zeroed in on wire transfer fraud, which entails monitoring multiple layers of wire activity—branch and online, domestic and international, retail and commercial.
Their existing fraud platform allowed real-time scoring, alerting and workflow functionality, but still couldn’t prevent a large fraud attack. The team tried to compensate by writing more rules, based on judgment and experience rather than data, leading to even more excessive alerts – ultimately straining both investigative bank resources and customer relationships. They turned to FICO for help.
The bank’s wire fraud data was analyzed using FICO® Analytic Modeler Decision Tree Professional, a strategy design tool with powerful visualization capabilities that make it easy to analyze existing strategies and test new ones. Among other discoveries, the analysis showed that KeyBank’s “one size fits all” fraud strategy needed to be reworked based on different types of segmentation (e.g., consumer vs. commercial, online vs. branch). With a new set of rules, front-line analysts and investigators could focus on identifying genuine fraud attempts that were triggering alerts. After implementation, non-value alerts (false positives) declined by 70 to 80 percent.
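To make the segmentation idea concrete, here is a minimal sketch of how segment-aware alerting differs from a one-size-fits-all rule. The segments, channels, and dollar thresholds below are invented for illustration – they are not KeyBank’s actual rules or anything produced by the FICO tool.

```python
# Illustrative only: segment/channel thresholds are hypothetical,
# not KeyBank's actual fraud strategy.

def should_alert(amount, segment, channel):
    """Flag a wire transfer for review using segment-aware thresholds
    rather than a single cutoff applied to every customer."""
    # Hypothetical review thresholds per (segment, channel) pair.
    thresholds = {
        ("consumer", "online"): 5_000,
        ("consumer", "branch"): 10_000,
        ("commercial", "online"): 50_000,
        ("commercial", "branch"): 100_000,
    }
    limit = thresholds.get((segment, channel))
    if limit is None:
        return True  # unknown segment: alert conservatively
    return amount > limit

# The same $7,500 wire is routine for a consumer at a branch,
# but worth a look when sent online.
print(should_alert(7_500, "consumer", "branch"))  # False
print(should_alert(7_500, "consumer", "online"))  # True
```

The point of the segmentation is visible in the last two lines: a single flat threshold would treat both transfers identically, either generating a non-value alert on the branch wire or missing the riskier online one.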
But, as with the Breaking Bad episode, I don’t want to give too much more away. If you want to read how KeyBank went “back to the lab” to cook up a better solution for fraud protection, check out this case study (login required).