Last month the President of the San Francisco Federal Reserve Bank addressed the LendIt USA conference here in San Francisco. He talked about “unintended consequences” and the role regulators play in protecting wider economic interests.
“I am excited for the changes to come, and I see the potency of the possible,” said John C. Williams. “But for fintech’s potential to be met, we need to make sure we don’t reinvent or exacerbate shortcomings that have plagued our financial system thus far. In that regard, well-designed regulation that protects consumers, fosters inclusionary rather than exclusionary practices, and enhances the fairness and resilience of the financial system should help rather than hinder fintech’s contribution.”
Since that speech, trouble with marketplace lenders such as Lending Club has led many in the industry to say the fintech bubble is bursting. Whether or not that turns out to be the case, it’s worth exploring this issue: What are the unintended consequences that could arise from new ways of assessing risk?
A large body of regulation has built up in the US that provides protections to consumers around use of data, underwriting criteria and marketing practices. Fintech business models that evade this regulation could well have unintended consequences — negative impacts on consumers and the overall health of the financial system.
While fintech compliance with regulations is important, poorly designed or misapplied risk models can have significant unintended consequences of their own. In the lending industry, poor risk models lead to poor decisions, which lead to poor business results — and poor treatment of customers. Is there something more at work than selection bias when we see delinquency rates for P2P lenders in China in the 25% range, against card delinquency rates at the banks in the 3-5% range? At the very least, it indicates the need for new risk tools to guide underwriting policy, because losses of this sort are not sustainable.
When you have a decision to make about whether to lend money, getting that decision right is important for the borrower, the lender and its shareholders or investors, as well as for the economy at large. The decision must be sound, and so must the risk model guiding it. Data scientists have ways of ensuring that the models they build have integrity. What does that mean? Integrity means it was “done right”. In the analytics world, we often use the words stability and robustness, meaning the model was built right so it performs well over time. Something that needs a lot of rebuilding lacks integrity – just like a bridge!
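One common way data scientists check that a model is "built right" and stays stable over time is the Population Stability Index (PSI), which compares the score distribution at model build time with a later snapshot. The sketch below is a minimal illustration, not a production implementation; the thresholds in the comment are a widely cited rule of thumb, and all the sample data is synthetic.

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """Compare the development-time score distribution (expected)
    with a later snapshot (actual).
    Common rule of thumb: <0.1 stable, 0.1-0.25 watch, >0.25 shifted."""
    # Bin edges come from the development (expected) distribution
    edges = np.percentile(expected, np.linspace(0, 100, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert to proportions, flooring to avoid divide-by-zero
    exp_pct = np.maximum(exp_counts / len(expected), 1e-6)
    act_pct = np.maximum(act_counts / len(actual), 1e-6)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(42)
dev_scores = rng.normal(680, 50, 10_000)   # scores at model build time
cur_scores = rng.normal(680, 50, 10_000)   # similar population -> low PSI
shifted    = rng.normal(640, 50, 10_000)   # shifted population -> high PSI
print(population_stability_index(dev_scores, cur_scores))
print(population_stability_index(dev_scores, shifted))
```

A model whose PSI drifts high soon after deployment is exactly the kind that "needs a lot of rebuilding" — a sign it lacks the integrity described above.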
Ideally, models need five key attributes: predictive power, coverage, fairness, transparency and the ability for the consumer to dispute the answer. (This is to avoid the classic “computer says no” situation familiar to anyone who watched Little Britain!) It isn’t always possible to achieve all these things at once, or to the same degree as is possible with conventional application and bureau data.
It will be doubly hard to achieve this standard if your models rely heavily on variables that are opaque, easily gamed or otherwise non-robust (necessitating frequent model fixes), or if the models are so complex that you can’t explain them or diagnose what has gone wrong. And that is the danger with many of the scoring models being touted by fintech companies.
Transparency is a little harder to address, because that really does become part of the regulatory framework as well as the maturing of the market. Here in the US we have long had rules that enforce transparency in credit scoring, beginning with score reason codes, and we have a vibrant direct-to-consumer market that has raised people’s awareness even further. At FICO we launched the FICO Score Open Access program that gives 150 million consumers access to their FICO Score through their billing statement or online banking site.
Transparency is something we should strive for in most credit markets, although we have to recognize that there are new challenges in that regard as we venture into data sources beyond the typical. A reason code like “Balances too high” is clear and actionable, even if it is hard to achieve. But if, for example, a score is lowered because of a person’s social network, what can we do — tell that person to get more reliable friends? The key is to design variables and models from these new data sources that still focus on uncovering the old traits of consistency, stability and attention to detail.
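The mechanics behind a reason code like “Balances too high” can be illustrated with a toy points-based scorecard. Everything in this sketch — the attribute bins, point values and reason texts — is hypothetical, chosen only to show how reasons fall out of the attributes where an applicant lost the most points:

```python
# Toy scorecard: (upper bound, points) per attribute bin. All values hypothetical.
SCORECARD = {
    "utilization": [(0.30, 55), (0.70, 35), (1.01, 10)],
    "months_on_file": [(24, 15), (60, 30), (600, 45)],
    "recent_delinquencies": [(0, 50), (1, 25), (99, 5)],
}
REASONS = {
    "utilization": "Balances too high relative to credit limits",
    "months_on_file": "Length of credit history is too short",
    "recent_delinquencies": "Recent delinquency on one or more accounts",
}

def score_with_reasons(applicant, top_n=2):
    """Score an applicant and return the top reason codes.

    A reason code is driven by how far the applicant fell below the
    best attainable points for each attribute."""
    total, shortfalls = 0, []
    for attr, bins in SCORECARD.items():
        max_points = max(p for _, p in bins)
        points = next(p for bound, p in bins if applicant[attr] <= bound)
        total += points
        shortfalls.append((max_points - points, REASONS[attr]))
    shortfalls.sort(reverse=True)  # biggest point loss first
    reasons = [text for gap, text in shortfalls[:top_n] if gap > 0]
    return total, reasons

score, reasons = score_with_reasons(
    {"utilization": 0.85, "months_on_file": 18, "recent_delinquencies": 0}
)
print(score, reasons)  # high utilization and thin file drive the reasons
```

Because every point of score loss traces back to a named attribute, the consumer gets an actionable explanation — the property that becomes hard to preserve when, say, a social-network feature lowers a score.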
It doesn’t matter if you are a bank or a fintech lender — credit decisions affect people’s lives and the broader economy, and we should make that process work for all participants. We should embrace innovations that spur a healthy marketplace, but avoid the unintended consequences of poorly designed risk models.