The tremendous interest in AI and machine learning drove the readership on the Fraud & Security blog in 2018. Here are the five posts with the most views.
Author TJ Horan, FICO vice president for fraud solutions, wrote a five-part series on the keys to using AI and machine learning in fraud detection. In the first post, TJ discussed the use of supervised and unsupervised models.
Because organized crime schemes are so sophisticated and quick to adapt, defense strategies based on any single, one-size-fits-all analytic technique will produce sub-par results. Each use case should be supported by expertly crafted anomaly detection techniques that are optimal for the problem at hand. As a result, both supervised and unsupervised models play important roles in fraud detection and must be woven into comprehensive, next-generation fraud strategies.

Read the full post
A supervised model, the most common form of machine learning across all disciplines, is a model that is trained on a rich set of properly “tagged” transactions. Each transaction is tagged as either fraud or non-fraud. The models are trained by ingesting massive amounts of tagged transaction details in order to learn patterns that best reflect legitimate behaviors. When developing a supervised model, the amount of clean, relevant training data is directly correlated with model accuracy.
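To make the idea concrete, here is a minimal sketch of supervised training on tagged transactions. It uses a toy logistic-regression scorer written from scratch; the feature names, data, and hyperparameters are illustrative assumptions, not FICO's actual models.

```python
import math

def train_logistic(rows, tags, lr=0.1, epochs=200):
    """Fit a tiny logistic-regression fraud scorer on tagged transactions.

    rows: feature vectors (e.g. [normalized amount, foreign-merchant flag])
    tags: 1 for confirmed fraud, 0 for non-fraud
    """
    n = len(rows[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, tags):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted fraud probability
            err = p - y                      # gradient of the log loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err

    def score(x):
        z = b + sum(wi * xi for wi, xi in zip(w, x))
        return 1.0 / (1.0 + math.exp(-z))
    return score

# Toy tagged data: [normalized amount, foreign-merchant flag]
rows = [[0.1, 0], [0.2, 0], [0.9, 1], [0.8, 1]]
tags = [0, 0, 1, 1]
score = train_logistic(rows, tags)
```

The key property the post describes shows up directly: the more clean, correctly tagged rows you feed in, the better the learned weights separate fraud from non-fraud.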
Unsupervised models are designed to spot anomalous behavior in cases where tagged transaction data is relatively thin or non-existent. In these cases, a form of self-learning must be employed to surface patterns in the data that are invisible to other forms of analytics.
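By contrast, an unsupervised detector needs no fraud tags at all. The sketch below, an assumed and deliberately simple approach, flags transactions whose features sit far from the population norm using z-scores; production systems use far richer self-learning techniques.

```python
from statistics import mean, stdev

def fit_anomaly_detector(rows, threshold=3.0):
    """Unsupervised sketch: learn per-feature means and spreads from
    untagged transactions, then flag outliers. No fraud tags required."""
    cols = list(zip(*rows))
    stats = [(mean(c), stdev(c) or 1.0) for c in cols]  # guard zero spread

    def anomaly_score(x):
        # Largest absolute z-score across features
        return max(abs(v - m) / s for v, (m, s) in zip(x, stats))

    def is_anomalous(x):
        return anomaly_score(x) > threshold

    return anomaly_score, is_anomalous

# Untagged historical transactions: [amount]
history = [[10.0], [12.0], [11.0], [9.0], [10.0], [8.0]]
anomaly_score, is_anomalous = fit_anomaly_detector(history)
```

A transaction of 1000 against this history scores hundreds of standard deviations out and is flagged, while typical amounts pass quietly.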
Given the sophistication and speed of organized fraud rings, behavioral profiles must be updated with each transaction. This is a key component of helping financial institutions anticipate individual behaviors and execute fraud detection strategies, at scale, that distinguish legitimate behavior changes from illicit ones. A sample of specific profile categories that are critical for effective fraud detection includes:

Read the full post
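A per-transaction profile update can be sketched as an exponentially decayed average, so recent behavior dominates the norm each new transaction is judged against. The profile fields and decay rate here are illustrative assumptions, not FICO's profiling technology.

```python
def update_profile(profile, amount, alpha=0.05):
    """Update a cardholder's behavioral profile with each transaction.

    alpha controls decay: higher values let recent behavior dominate.
    """
    avg = profile.get("avg_amount")
    profile["avg_amount"] = (
        amount if avg is None else (1 - alpha) * avg + alpha * amount
    )
    profile["txn_count"] = profile.get("txn_count", 0) + 1
    return profile

def amount_risk(profile, amount):
    """Ratio of this transaction to the profiled norm; >> 1 means unusual."""
    return amount / max(profile.get("avg_amount", amount), 1e-9)

# Two typical transactions establish the norm
profile = {}
update_profile(profile, 100.0)
update_profile(profile, 100.0)
```

Because the profile is updated in place on every transaction, the norm keeps pace with genuine behavior changes instead of comparing customers to stale monthly aggregates.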
Adaptive analytics technologies automatically adapt to recent confirmed case disposition, resulting in a more precise separation between frauds and non-frauds. When an analyst investigates a transaction, the outcome — whether the transaction is confirmed as legitimate or fraudulent — is fed back into the system to accurately reflect the fraud environment that analysts are facing, including new tactics and subtle fraud patterns that have been dormant for some time. This adaptive modeling technique automatically modifies the weights of predictive features within the underlying fraud models. It is a powerful tool that improves fraud detection performance on the margins and stops new types of fraud attacks.
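The feedback loop described above can be sketched as a single online-learning step: when an analyst confirms a case, the outcome nudges the weights of the features present in that case. This is a hypothetical minimal mechanism, assuming a linear score; it stands in for, and is far simpler than, FICO's adaptive modeling.

```python
import math

def apply_disposition(weights, features, confirmed_fraud, lr=0.05):
    """Adaptive-analytics sketch: fold an analyst's confirmed case
    disposition back into the model's feature weights."""
    z = sum(w * x for w, x in zip(weights, features))
    p = 1.0 / (1.0 + math.exp(-z))               # current fraud score
    err = (1.0 if confirmed_fraud else 0.0) - p  # outcome vs. score
    # Strengthen features seen in confirmed fraud, dampen them otherwise
    return [w + lr * err * x for w, x in zip(weights, features)]

# An analyst confirms a case as fraud: its features gain weight
w_fraud = apply_disposition([0.0, 0.0], [1.0, 1.0], confirmed_fraud=True)
# The same case confirmed legitimate would push weights the other way
w_legit = apply_disposition([0.0, 0.0], [1.0, 1.0], confirmed_fraud=False)
```

Each disposition shifts the score in the direction the analyst confirmed, which is how the model tracks new tactics without a full retrain.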
In a preview of her FICO World presentation, Liz Lasher discussed how adaptive analytics, supervised and unsupervised models, and other AI and machine learning techniques are being applied to catch application fraud. In this realm, she argued, explainable AI is critical:
Many machine learning algorithms are considered “black box” models that do not give fraud analysts, consumers, or regulators the appropriate insights into decisioning logic, e.g., “Why am I being declined for credit?”

Read the full post
This is why explainable artificial intelligence is so important: to impart the necessary transparency to pass regulatory muster, while maintaining accuracy of prediction.
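One common route to that transparency is reason codes: ranking the features that contributed most to a score and reporting them in plain language. The sketch below assumes a linear score; the feature names and weights are invented for illustration and are not FICO's reason-code methodology.

```python
def reason_codes(weights, features, names, top=2):
    """Explainability sketch: rank features by their contribution to a
    linear fraud score and return the top drivers as reason codes."""
    contributions = sorted(
        zip(names, (w * x for w, x in zip(weights, features))),
        key=lambda pair: pair[1],
        reverse=True,
    )
    # Keep only features that pushed the score toward "fraud"
    return [name for name, c in contributions[:top] if c > 0]

# Hypothetical scored application
names = ["txn_velocity", "amount_vs_profile", "account_tenure"]
weights = [2.0, 0.1, -1.0]
codes = reason_codes(weights, [1.0, 1.0, 1.0], names)
```

An analyst, consumer, or regulator asking “why was this declined?” gets back the named drivers rather than an opaque score.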
FICO is very cognizant of the impact of regulations on our business and on our clients. In fact, we pride ourselves on leveraging mathematical innovation to solve problems in the real world. In the area of account originations, our credit risk and fraud scores are designed to be a tool to assist lenders with compliance with applicable fair lending laws such as the Fair Credit Reporting Act, Regulation B, and the Equal Credit Opportunity Act (ECOA).
Real-time payments are opening up new avenues for fraud, wrote Sarah Rutherford. The speed of these payments makes it more difficult to trace the proceeds of crime, while also making it easier for criminals to move and extract funds. She noted three areas of concern as examples:
Account takeover fraud: A criminal can take over an account and use it to ‘hop’ money through, thereby making it more difficult for the authorities to follow the money. In some instances, the legitimate account holder may not even spot that it’s happening, particularly if it’s an account that they don’t regularly access themselves.
Use of money mules: People who are otherwise upstanding citizens can be persuaded to let criminals transfer money through their accounts. Again, this helps criminals hide the source of their funds, and with real-time payments the money can be moved across multiple accounts extremely quickly. In some cases, people allow their accounts to be used as mules for altruistic reasons (they’ve been conned into thinking they are helping someone in genuine need); in other cases, the mule account holder receives a payment. In the UK, the widespread use of real-time payments has seen certain groups, students in particular, targeted by criminal gangs to act as money mules.
Application fraud: Another way criminals can gain access to an account is to open one using a stolen or synthetic identity. With such an account, the criminal can not only move money through it but also extract it. As discussed earlier, in the case of authorized push payment fraud, the payee’s bank may not be held liable for losses from the fraud. However, where the fraudulent payment has been sent to an account opened using a stolen or synthetic identity, the receiving bank has been pushed to make restitution to the victims for having opened an account for a fraudster.
Follow this blog for our 2019 insights into fraud, financial crime, cybersecurity and AI and machine learning.