Model Management Best Practices: Part 8
Welcome to the latest Model Management Monday. This is the eighth post in my blog series on model management; each post highlights a best practice that supports both compliance and improved performance.
Best Practice #8: Monitor Overrides
Anytime you override a score, regulators will require that you document and monitor that decision carefully. Your overrides should be based on clear and consistent guidelines.
Regulators will ask questions such as: What is your policy for allowing an override? What authority level do you require for override approval? How many overrides are you doing every month? How do you determine that each type of override is appropriate?
Every override should be assigned a reason code so you can track and evaluate underwriters' decisions. Use codes specific enough to support efficient, effective analysis, and strive to eliminate vague codes such as "general" or "miscellaneous."
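To make the reason-code point concrete, here is a minimal sketch of tallying override reason codes and flagging vague ones for cleanup. The record layout, code names, and the `VAGUE_CODES` set are all hypothetical, not from any FICO system:

```python
from collections import Counter

# Hypothetical override log: (underwriter id, reason code)
overrides = [
    ("uw_01", "DTI_EXCEEDS_POLICY"),
    ("uw_01", "THIN_FILE"),
    ("uw_02", "MISC"),
    ("uw_02", "DTI_EXCEEDS_POLICY"),
    ("uw_03", "MISC"),
]

# Codes too vague to support meaningful analysis
VAGUE_CODES = {"MISC", "GENERAL"}

def reason_code_report(records):
    """Count overrides by reason code and flag vague codes."""
    counts = Counter(code for _, code in records)
    flagged = {c: n for c, n in counts.items() if c in VAGUE_CODES}
    return counts, flagged

counts, flagged = reason_code_report(overrides)
# flagged tells you how much override volume is hiding behind
# codes that can't be analyzed, e.g. {"MISC": 2}
```

A report like this, run monthly, gives you a direct answer to the regulator's "how many overrides, and why" questions.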
Reasons for high-side overrides (accounts that score above the cutoff but are declined) should be examined carefully to make sure you are not turning away potentially good customers and that no disparate impact is evident. If there is a large volume of such overrides, consider approving a small portion to capture subsequent performance and determine if these overrides are warranted.
You should also analyze reasons for low-side overrides (accounts that scored below the cutoff but are approved). Since these accounts are approved, review their subsequent performance and revise the override policy as necessary.
If an underwriter is frequently overriding the score, find out why. Does the underwriter properly understand how scores work? Has the model deteriorated to the point where the underwriter no longer trusts its accuracy, or is the underwriter basing decisions on information the model does not consider? Overrides that increase over time can indicate reduced confidence in the model, and a model redevelopment may be warranted.
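One simple way to spot the trend described above is to compute each underwriter's monthly override rate from the decision log. This is a hedged sketch; the record layout and underwriter IDs are invented for illustration:

```python
# Hypothetical decision log: (month, underwriter id, was this an override?)
decisions = [
    ("2024-01", "uw_01", False), ("2024-01", "uw_01", True),
    ("2024-02", "uw_01", True),  ("2024-02", "uw_01", True),
]

def monthly_override_rate(records, underwriter):
    """Fraction of one underwriter's decisions that were overrides, by month."""
    by_month = {}
    for month, uw, overridden in records:
        if uw != underwriter:
            continue
        total, hits = by_month.get(month, (0, 0))
        by_month[month] = (total + 1, hits + overridden)
    return {m: hits / total for m, (total, hits) in sorted(by_month.items())}

rates = monthly_override_rate(decisions, "uw_01")
# A rate that climbs month over month is the signal worth investigating
```

Plotting these rates per underwriter over a year makes "overrides increasing with time" a measurable fact rather than an impression.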
When tracking the business performance of overrides, a general rule of thumb is that accounts booked as a result of a low-side override should perform no worse than the group of accounts falling just above the score cutoff. As you may recall from part 2 of this series, high-side overrides, if done effectively, can leave the group of accounts just above cutoff performing better than it would have with no overrides.
For more details on this and other best practices, download the FICO Insights white paper, "Comply and Compete: Model Management Best Practices" or Martin Butler’s paper on Model Management and Governance. And check our blog next Monday for my final post in this series.