As mentioned in this thread, the predictive power of Application PDs decays with time. Thinking in cross-sectional Basel mode, we look at all the accounts in a portfolio as of this month. If an App PD is available for an account, it will have more predictive power if it is recent (i.e. the account has low MOB) than if it is old (i.e. the account has high MOB).

The connection with MOB is not absolute since, for certain products, there can be a re-assessment of application information at some later point in the account history, such as an application for a limit increase. The real point is “how old is the application information and the assessment of the PD?”

The decay of predictive power merely reflects the fact that older information is often less relevant than recent information.

Some years ago for NNB I studied the decay of the predictive power (measured by Gini) by backtesting on many years of data for various portfolios. The essential output was a graph showing the profile of Gini plotted against MOB. IIRC this showed Gini decaying fairly gently from its maximum in the early months towards lower levels, but still retaining some predictive use even after 3 years. Exact patterns varied between products.
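The backtest described above can be sketched roughly as follows. The account fields, the exact pairwise AUC computation, and the 6-month bucketing are my own illustrative choices, not details of the NNB study:

```python
# Sketch: measure how App PD Gini decays with months-on-book (MOB).
# Each account is a dict with illustrative keys 'mob', 'app_pd', 'defaulted'.

def auc(scores_bad, scores_good):
    """Probability a random bad scores above a random good (ties count half)."""
    wins = ties = 0
    for b in scores_bad:
        for g in scores_good:
            if b > g:
                wins += 1
            elif b == g:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_bad) * len(scores_good))

def gini_by_mob(accounts, bucket=6):
    """Gini (= 2*AUC - 1) of the App PD within each MOB bucket."""
    buckets = {}
    for acct in accounts:
        buckets.setdefault(acct["mob"] // bucket, []).append(acct)
    profile = {}
    for key, grp in sorted(buckets.items()):
        bads = [a["app_pd"] for a in grp if a["defaulted"]]
        goods = [a["app_pd"] for a in grp if not a["defaulted"]]
        if bads and goods:  # Gini undefined for a one-class bucket
            profile[key * bucket] = 2 * auc(bads, goods) - 1
    return profile
```

Plotting the returned profile against MOB would give the kind of decay curve described, with each product backtested separately.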

NNB had also developed behavioural prediction models, so I did the same exercise for those Beh PDs. These models concentrated on pure behavioural predictors (dynamic information about recent account performance) rather than the static “application” type predictors. Naturally, with Beh PDs the trend is the opposite: they start with low predictive power at MOB=1 and ramp up as the behavioural information accumulates. IIRC the ramp-up was fast, with the models reaching close to full power within 6-9 MOB. Also, this full power was substantially higher than the full power of the App PDs.

Hence the natural “transition” idea to make the best use of all the information for Basel purposes was implemented as follows:

  • calibrate the App PD for a 12-month OW – because this is the Basel context
  • Beh PDs are built with a 12-month OW and so need no calibration
  • form the Basel transition PD as a weighted average of the App PD and the Beh PD
  • i.e. Basel PD = w * App PD + ( 1 – w ) * Beh PD
  • figure the weight w by consulting the previously determined “App decay” and “Beh ramp up” profiles
  • the details of figuring w are not important here, but naturally w starts out close to 1 for accounts that have just opened (MOB=1) and drops fairly rapidly, with equal weight (w=0.5) reached after only a few MOB and most of the weight (w=0.2) passed to the Beh PD by 9-12 MOB
  • one wouldn’t need to be so scientific: a simple straight-line schedule transitioning from App to Beh over a fixed number of months would be good enough for most purposes.
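The simpler straight-line schedule from the last bullet can be sketched as below; the 12-month ramp length is an assumed parameter for illustration, not a recommendation:

```python
# Sketch of a straight-line App-to-Beh transition for the Basel PD.

def transition_weight(mob, ramp_months=12):
    """Weight w on the App PD: 1 at account opening, 0 from ramp_months onwards."""
    if mob <= 0:
        return 1.0
    if mob >= ramp_months:
        return 0.0
    return 1.0 - mob / ramp_months

def basel_pd(app_pd, beh_pd, mob, ramp_months=12):
    """Basel PD = w * App PD + (1 - w) * Beh PD."""
    w = transition_weight(mob, ramp_months)
    return w * app_pd + (1.0 - w) * beh_pd
```

A more “scientific” version would replace `transition_weight` with a lookup into the empirically determined decay and ramp-up profiles.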

Technical readers will note that a problem mentioned previously remains: that the 12-month OW for the App PD is longitudinal rather than cross-sectional. Thus it models default for OW=[1,12]MOB rather than, say, OW=[9,20]MOB. However, this is of diminishing importance because by the time it would make a major difference, say OW=[25,36]MOB, the weight will have mostly transferred away from the App PD and onto the Beh PD.

One clean aspect of this weighted approach to transitioning is that validation of the final PD is a consequence of validating its components, the App PD and the Beh PD. As long as those two are accurate (unbiased), the weighted PD is mathematically guaranteed to be accurate too. In practice, this theoretical nicety may not pan out so easily, because the App PDs of all vintages would all need to be accurate.
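A small numeric check of that guarantee: if the App PD and the Beh PD are each unbiased (average predicted PD equals the observed default rate), then any weighted average of them is automatically unbiased as well, because the bias is linear in the components. The tiny portfolio below is invented purely to illustrate the arithmetic:

```python
# Check: a convex blend of two unbiased PDs is itself unbiased.

def portfolio_bias(pds, defaults):
    """Average predicted PD minus observed default rate (0 means unbiased)."""
    n = len(pds)
    return sum(pds) / n - sum(defaults) / n

defaults = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]                        # observed rate 0.2
app_pds = [0.3, 0.1, 0.2, 0.2, 0.4, 0.1, 0.2, 0.2, 0.1, 0.2]     # averages to 0.2
beh_pds = [0.5, 0.1, 0.1, 0.1, 0.3, 0.2, 0.2, 0.2, 0.2, 0.1]     # averages to 0.2

w = 0.4
blend = [w * a + (1 - w) * b for a, b in zip(app_pds, beh_pds)]

# Two unbiased inputs give an unbiased blend, whatever the weight.
assert abs(portfolio_bias(blend, defaults)) < 1e-12
```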

Alternative approaches to the transition issue are discussed below.

One suggestion I have heard but don’t like is to include the App PD as a predictor in the build of the Beh model. This doesn’t have the desired effect, because merely including the App PD as a main effect doesn’t allow the mechanics of regression to downweight the App PD if it is an old one, and vice versa if it is a young one. The regression doesn’t know about MOB, and one can’t fix this by including MOB as another main effect. (If you like getting technical, you might get close by designing appropriate interaction effects.)

Rather, a simple and effective approach in this direction (which can be found in an early post by coldies) would be to segment the Beh PD model build: have one model for accounts with MOB<6 (say), which includes the App PD as a main effect, and another model for older accounts that ignores the App PD.
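The segmented build can be sketched as below. The two stand-in linear scores, their coefficients, and the behavioural field names are all invented for illustration; in practice each segment’s model would be fitted on its own data:

```python
# Sketch: segmented Beh PD, per the coldies suggestion.
# Young accounts (MOB < cutoff) use the App PD as a predictor; old accounts don't.

def beh_pd_young(app_pd, recent_arrears):
    # Illustrative coefficients only; a real model would be estimated.
    return min(1.0, 0.5 * app_pd + 0.3 * recent_arrears)

def beh_pd_old(recent_arrears, utilisation):
    # Pure behavioural score; the App PD is deliberately absent.
    return min(1.0, 0.25 * recent_arrears + 0.05 * utilisation)

def segmented_beh_pd(account, mob_cutoff=6):
    """Route an account to the young-book or old-book model by MOB."""
    if account["mob"] < mob_cutoff:
        return beh_pd_young(account["app_pd"], account["recent_arrears"])
    return beh_pd_old(account["recent_arrears"], account["utilisation"])
```

The key property is that the App PD influences the score only in the young-book segment, which is the segmentation’s crude but effective stand-in for downweighting stale application information.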

Also note that some application predictors don’t decay with time, e.g. Gender or a Secured vs Unsecured flag. Any such predictors could be used as main effects in the Beh model without problem.