Hitting the Bull’s Eye: A Practitioner’s Note on Forecasting Retail Credit Losses

Author | Vinay Bhaskar

Accurate and reliable forecasting of credit losses has long been a subject of critical focus in retail portfolio risk management, in the literature and in banks’ practices alike. The topic gained renewed attention in the aftermath of the sub-prime crisis, when regulators and international accounting standards boards challenged everything: the adequacy of the capital that cushions unexpected losses, the way financial institutions (FIs) provision for expected credit losses, and the methods for assessing how FIs would perform under stress conditions. These challenges have been effective in developing sound risk management principles that aim to give regulators, investors, customers, and management confidence in the smooth operation of the banking system under any macro-economic scenario.

While regulators have published broad principles and standard frameworks for measuring and forecasting credit losses, implementation is left to the FIs, and the practices an FI actually deploys determine the accuracy and reliability of its forecasts. With machine learning (ML) algorithms to power the forecasting engine, and given the quality of data FIs now store, accurate forecasts should no longer be out of reach. At Scienaptic, in a recent deployment for a premier bank, the long-range loss forecasts achieved 99% accuracy. Several factors played a key role in that result; some of them are discussed below.

While short-run forecasts (0-6 months) are better produced with the roll-rate method, longer-term forecasts (1-3 years) require more sophisticated model configuration and training. Various ML and statistical techniques can be deployed, such as SARIMAX, Long Short-Term Memory (LSTM) networks, Age-Period-Cohort (APC) models, Generalized Linear Models (GLM), and Cox proportional hazards, to name a few. While the chosen technique depends on the granularity at which forecasts are made (portfolio vs. account level vs. segmented), a champion-challenger framework should be created to produce a range of possible outcomes: no single technique fits all types and lengths of data. LSTM, for example, may give better results with longer histories, while APC works well even when data is limited.
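To make the short-run end of this concrete, the roll-rate idea can be sketched as follows. The bucket structure, roll rates, and balances below are illustrative assumptions, not data from any actual portfolio; real implementations track more buckets and estimate roll rates from history.

```python
# Hypothetical sketch of a roll-rate forecast for the short run (0-6 months).
# All bucket names, balances, and roll rates are illustrative assumptions.

def roll_rate_forecast(balances, roll_rates, months):
    """Project delinquency-bucket balances forward `months` periods.

    balances:   balance per bucket, e.g. [current, 30dpd, 60dpd, 90dpd+]
    roll_rates: roll_rates[i] = share of bucket i that rolls into bucket i+1
                each month (the remainder is assumed to cure back to current).
    Returns the cumulative balance rolled past the last bucket (charge-off).
    """
    charged_off = 0.0
    for _ in range(months):
        new_balances = [0.0] * len(balances)
        for i, bal in enumerate(balances):
            rolled = bal * roll_rates[i]
            if i + 1 < len(balances):
                new_balances[i + 1] += rolled   # roll forward to next bucket
            else:
                charged_off += rolled           # last bucket rolls to charge-off
            new_balances[0] += bal - rolled     # simplification: rest cures
        balances = new_balances
    return charged_off

# Illustrative portfolio: $1,000 current, $50 at 30dpd, $20 at 60dpd, $10 at 90+
loss_6m = roll_rate_forecast([1000.0, 50.0, 20.0, 10.0],
                             [0.02, 0.30, 0.50, 0.80], months=6)
```

The same function applied over 1-3 years illustrates why roll rates degrade at longer horizons: the projection compounds fixed transition rates and ignores macro and cohort effects, which is where the richer techniques above come in.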

Another critical element of forecasting is the level of segmentation deployed. Along the continuum from no segmentation, which provides a robust time series at the cost of averaging away segment-level idiosyncrasies, to account-level modeling, which provides insight into individual behavior at the cost of excessive noise, an optimal segmentation design could include risk type, revolve behavior, activity segments, and vintages. The key is to maximize the information-value difference between segments without creating so many segments that the loss forecasts overfit.
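A minimal sketch of such a segmentation, assuming a simple account record with risk type, revolve behavior, and vintage (the field names and figures are hypothetical):

```python
# Hypothetical sketch: group accounts into segments and compute a loss rate
# per segment. Records and values below are illustrative, not real data.
from collections import defaultdict

def segment_loss_rates(accounts):
    """Group accounts by (risk_type, revolve, vintage); return loss/balance
    per segment. Useful segments show materially different rates."""
    totals = defaultdict(lambda: [0.0, 0.0])   # segment -> [balance, losses]
    for acct in accounts:
        key = (acct["risk_type"], acct["revolve"], acct["vintage"])
        totals[key][0] += acct["balance"]
        totals[key][1] += acct["loss"]
    return {k: (loss / bal if bal else 0.0) for k, (bal, loss) in totals.items()}

accounts = [
    {"risk_type": "prime",    "revolve": "transactor", "vintage": "2021", "balance": 500.0, "loss": 2.0},
    {"risk_type": "prime",    "revolve": "revolver",   "vintage": "2021", "balance": 400.0, "loss": 8.0},
    {"risk_type": "subprime", "revolve": "revolver",   "vintage": "2022", "balance": 300.0, "loss": 30.0},
]
rates = segment_loss_rates(accounts)
```

If two segments end up with nearly identical loss rates, merging them keeps the series robust; if a segment's history is too thin to support a stable rate, it has been cut too fine.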

Most current practice at banks focuses only on the endogenous dimensions that drive BAU credit losses, such as portfolio quality and cohort. Losses, however, are also significantly affected by macro-economic conditions, especially when portfolio quality is near- or sub-prime and when forecasts span a longer horizon. A comprehensive modeling framework must therefore include exogenous dimensions as well; FIs can leverage the baseline macro scenarios that regulators publish for stress-testing purposes, or develop their own scenarios for forecasting. One critical point to note while modeling: ensure that model results are stable not only under scenario conditions but also on the actual macro-economic variables observed during the modeling period. This keeps the forecasts stable and weeds out any macro-economic dimensions that create a high degree of variability between actual and scenario results.
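As a toy illustration of that stability check, one can fit a loss rate against a single macro driver, verify the fit on the observed history, and only then apply a stress scenario. The unemployment and loss-rate series below are made-up numbers chosen for illustration, and a real model would use several drivers and proper validation.

```python
# Hypothetical sketch: regress a loss rate on one macro variable, check
# in-sample stability on observed history, then apply a stress scenario.
# All data points are illustrative assumptions.

def fit_line(x, y):
    """Ordinary least squares for y = a + b * x (one macro variable)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

unemployment = [4.0, 4.5, 5.0, 6.0, 7.0]   # observed history (%)
loss_rate    = [1.0, 1.2, 1.4, 1.8, 2.2]   # observed loss rate (%)
a, b = fit_line(unemployment, loss_rate)

# Stability check: errors on the observed macro history should be small
# before the model is trusted under a stress scenario.
in_sample_err = max(abs(a + b * u - l)
                    for u, l in zip(unemployment, loss_rate))

scenario_loss_rate = a + b * 9.0           # stressed unemployment of 9%
```

A macro variable that fits the scenario path but produces large errors against the observed history is exactly the kind of dimension this check is meant to weed out.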

It is also imperative to highlight that forecasting losses in a silo, without due attention to future business growth and quality, leads to sub-optimal outcomes. The modeling framework should concurrently estimate business growth under the same portfolio assumptions and macro-economic scenario used for the loss forecasts; ultimately, the level of losses depends on the volume of business! Lastly, all forecasts are the losses an FI may expect to incur “on average” over a period of time. In practice there is always a range of outcomes around those expectations, so, as a best practice, one should build confidence boundaries around the estimates to enable effective intervention if actual losses move outside those boundaries.
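One simple way to build such boundaries, assuming a history of actual-minus-forecast errors and a normal approximation (both assumptions of this sketch, not a prescription), is:

```python
# Hypothetical sketch: confidence boundaries around an expected-loss forecast
# from the spread of historical forecast errors (normal approximation).
import math

def confidence_band(expected_loss, past_errors, z=1.96):
    """Return (lower, upper) ~95% boundaries using the sample std dev of
    historical actual-minus-forecast errors."""
    n = len(past_errors)
    mean = sum(past_errors) / n
    var = sum((e - mean) ** 2 for e in past_errors) / (n - 1)
    half_width = z * math.sqrt(var)
    return expected_loss - half_width, expected_loss + half_width

# Illustrative: $10m expected loss; past errors in $m
lower, upper = confidence_band(10.0, [-0.5, 0.3, -0.2, 0.4, 0.1])
# Actual losses landing outside (lower, upper) would trigger a review.
```

Empirical quantiles or a bootstrap over the error history are common alternatives when the normal approximation is doubtful.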

With the above key practices in mind, the loss-forecasting challenges FIs face today can be significantly addressed. The key is to invest in such ML practices and to constantly learn and re-learn!


