
The credit risk world likes to work with ‘odds’ and related quantities so these are covered today.

You could just do everything in terms of probability, i.e. PD, which is unambiguous. PD lies in [0,1] and a small number (like 0.002) is a better customer than a bigger number (like 0.013). In typical modelling situations (in Australia, in the good times..), a lot of PDs would have one or two or even three leading zeroes and these numbers are not handy for transcription or to quickly convey which zones they lie in.

It goes without saying that it is often more palatable to format a PD as a percentage, e.g. PD = 0.013 as PD = 1.3%.

‘Odds’ have a special status because they are intimately linked with logistic regression, the main PD-modelling statistical tool. Odds can be worked out from the PD, and vice versa, as follows:

  • odds = 1/PD – 1
  • PD    = 1/(1 + odds)

For example, odds = 8 means exactly the same thing as PD = 1/9 = 0.1111.. 

Odds are generally taken to be the Good:Bad odds; thus a bigger number for odds is a better situation. I have seen analysts using Odds the other way up i.e. the Bad:Good odds. You can come out alive but it will confuse your colleagues; +/- changes of sign will cascade through and graphs will tilt the opposite way.

One step closer to the logistic zone is to transform to “log_odds”.

  • log_odds = ln(odds)
  • odds        = exp(log_odds)

‘ln’ means natural logs, i.e. to the base ‘e’. Actually, mathematicians always mean natural logs when they say log and as a matter of pride would never mention the base, or contemplate a base other than ‘e’ unless it was a neat way to summarise a problem that had structure particular to integral bases. Ambiguity can arise: computer systems that are tech-oriented, like SAS or MATLAB, assume ‘log’ means ln, whereas those that are business-oriented, like MS/Excel, assume that ‘log’ means log_to_base_10. It also doesn’t help that ‘ln’ is not comfortable in speech.

By ‘log’ I always mean natural log, and I use log10 or log2 to mean logs to base 10 or 2. For the meantime, the terminology ‘log_odds’ will be used, which is easy in speech, but if anyone can suggest better nomenclature they are welcome to put it forward.

If we’ve made the right choices so far, a bigger number for log_odds is a better situation. Note that log_odds can be negative (when odds < 1, which is when PD > 0.5).

To make the numbers more convenient to handle, it is common practice to convert the log_odds to a ‘score’ on a user-friendly scale that wouldn’t involve negatives or decimal places. For the first time in this chain of transformation, arbitrary scaling constants are involved in this choice: one for location and one for scale (spread). A typical approach is illustrated below:

  • for location: bang a stake in the ground at the point that will represent odds of 1 (== log_odds of zero == PD of 0.5): so, for example, choose a score of 500 to represent this point (which BTW would be a lousy customer)
  • for scale: this is normally done by specifying how many points it takes to double the odds (PDO). A comfortable choice would be PDO=20, which says that a score of 520 <=> odds=2, 540 <=> odds=4, 560 <=> odds=8 etc.

Because log_odds is a logarithmic scale, the above choices work out and amount to a linear transformation of log_odds to score. The two scaling parameters, and hence the transformations from log_odds to score and back, will depend on these fairly arbitrary choices.
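
As a worked illustration, here is a minimal Python sketch of the full chain PD → odds → log_odds → score and back, using the example constants above (anchor score 500 at odds of 1, PDO = 20). The constant and function names are mine, purely for illustration.

```python
import math

BASE_SCORE = 500.0   # score anchored at odds = 1, i.e. log_odds = 0, PD = 0.5 (as per the example)
PDO = 20.0           # points to double the odds

def pd_to_score(pd):
    """PD -> Good:Bad odds -> log_odds -> score."""
    odds = 1.0 / pd - 1.0
    log_odds = math.log(odds)            # natural log
    return BASE_SCORE + (PDO / math.log(2.0)) * log_odds

def score_to_pd(score):
    """score -> log_odds -> odds -> PD."""
    log_odds = (score - BASE_SCORE) * math.log(2.0) / PDO
    odds = math.exp(log_odds)
    return 1.0 / (1.0 + odds)

print(round(pd_to_score(1.0 / 9.0)))     # 560, since PD = 1/9 <=> odds = 8 = 2**3, i.e. three doublings
print(round(score_to_pd(520.0), 4))      # 0.3333, since score 520 <=> odds = 2 <=> PD = 1/3
```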

PDO=20 gives a nice granularity to the scores, which will mostly land in the 500-800 zone and you won’t feel the need to use decimal points i.e. whole-number scores suffice. As long as PDO is chosen to be positive, it will still be the case that a bigger score is a better situation.   

All the above transformations are absolute arithmetic ones that always apply, irrespective of context such as outcome window, default definition, calibration, closed goods in/out, etc. If you find you disagree with someone via these calcs, it means you started from different contexts and therein lies the entire explanation for your disagreement.

Following the comments on indeterminate, it is timely to introduce some shorthand notation that will help in discussions that follow.

The issue is the point-in-time default definition. Your default definition should in the first instance produce a decision at every point in time as to whether the account is in default or not. The set of possible points in time is determined by the time granularity of your data systems. A typical situation would be monthly data for CC with default flagged for >=90DPD assuming the outstanding balance exceeds some materiality parameter(s).

But why point-in-time default definition? Because this is not the final default story; the re-ageing logic still needs to be superimposed. Re-ageing involves an extension of the point-in-time default definition to the concept of a default episode, which has temporal extent i.e. it is a time window having a start date and an end date. Today’s post, however, covers only the point-in-time default issue, and the qualifier “point-in-time” will be left out to avoid clutter.

Default history for any particular account can be summarised by the string of consecutive default statuses: for example GGGGGGIIIBBG shows the account was ‘good’ for the first 6 months, ‘indeterminate’ for the following three months and then ‘bad’ for two months but then ‘good’ again in the 12th month. These 12 months could be the first 12 months since the account opened, if you are doing longitudinal analysis, or it could be the 12 months of a cross sectional analysis, in which case it might represent something like MOB 33-44.
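
As an aside, such a status string can be summarised mechanically into its runs; a trivial Python sketch, using the example string above:

```python
from itertools import groupby

history = "GGGGGGIIIBBG"   # the 12-month example above
runs = [(status, len(list(group))) for status, group in groupby(history)]
print(runs)   # [('G', 6), ('I', 3), ('B', 2), ('G', 1)]
```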

The definition details of ‘bad’ and ‘good’ will be particular to each institution and product, but status codes that I have found useful include the following (a small sketch of how these might be assigned follows the list):

  • B = Bad, i.e. point-in-time in default
  • I = Indeterminate. An optional status; not all situations require distinguishing these from G and B, i.e. ask the question: how does ‘I’ differ from ‘G’?
  • R = in recoveries
  • C = in collections
  • W = has been written off
  • G = Good, i.e. neither bad nor any other status of higher precedence
  • U = Undrawn. This can apply to loan accounts that have been set up and are open on the books, but where the capital has not yet been drawn down. For HLs there can be a few months’ delay if there are hold-ups in transfer. In the meantime, they appear as accounts with zero balance outstanding. This only applies when this situation happens at the beginning of an account’s history, i.e. not for zero balance accounts that can occur later. Undrawn does not apply to some products such as CC because they are only activated when the first transaction is made.
  • D = Dormant. It may be useful to identify accounts that appear to be dormant, i.e. have returned to a zero balance and there is no customer initiated activity for a long time. Because of Basel treatment, but also for commercial reasons, the bank may want to identify these and do something about them. 
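
To make the precedence idea concrete, here is a minimal sketch of how one month's record might be mapped to a status code. The thresholds (90/60 DPD, a $100 materiality floor) and the exact precedence order are assumptions for illustration only – each institution and product will have its own – and D (dormant) is omitted for brevity.

```python
def point_in_time_status(dpd, balance, written_off=False, in_recoveries=False,
                         in_collections=False, ever_drawn=True,
                         materiality=100.0, bad_dpd=90, indeterminate_dpd=60):
    """Assign a point-in-time status code to one month's record (illustrative precedence)."""
    if not ever_drawn and balance == 0:
        return "U"                      # undrawn at the start of the account's history
    if written_off:
        return "W"
    if in_recoveries:
        return "R"
    if in_collections:
        return "C"
    if dpd >= bad_dpd and balance > materiality:
        return "B"                      # point-in-time default
    if dpd >= indeterminate_dpd:
        return "I"                      # optional indeterminate status
    return "G"

# e.g. a CC record 95 days past due on a $2,400 balance
print(point_in_time_status(dpd=95, balance=2400.0))   # 'B'
```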

I am going to break my (self-imposed) silence and make a post on this. Following on from an earlier post on the situation in the UK it looks like the legal situation is the same here – or in Victoria at least. Today’s Crikey has a story on how one of their contributors took Citibank to the small claims tribunal over a $40 fee – and won, with costs, after Citi simply failed to turn up. Even better, Citi had paid out the claim even before it hit the tribunal.

The Crikey piece notes that this does not set a binding precedent, but

the fact that a full-time VCAT member provided a judgment noting that the bank-fee charged was unenforceable and amounted to an unfair term in the contract is an indictment on the conduct of a financial institution. While Citigroup did not defend the matter, the VCAT member would have been within his rights to dismiss the application if he was of the opinion that it was without merit.

So, this one is just waiting for a test case. We have an interesting possibility here. If you believe the fees you are paying are excessive then – claim them all back. All of them. Just find a lawyer willing to take on your bank.

Surprisingly, Citi’s newsroom says nothing on the topic of this court loss.

Thanks to The Sheet for pointing me at this.

In the PD model building world, “indeterminate” seems to have more than one meaning. If any readers feel they could give a balanced view of common usage in Australia (or elsewhere), please do so and this blog will record it and adopt it.

Meaning #1: a status of an account at a point in time which is not “in default” but is some way down the track to being considered “in default”. For example, if the CC default definition requires 90DPD, accounts might be called “indeterminate” if they are 60DPD – or whatever other “not completely good” status your default definition might permit.

Meaning #1.1: a meaning derived from #1 can then be evolved for the status of an account across a time window such as an OW for modelling purposes. “Indeterminate” might now mean “ever went indeterminate during the OW without ever going into default during the OW”. However, one also sees composite definitions of Indeterminate across an OW such as “ever went 60DPD or went 30DPD on two occasions”.

The idea behind the above definitions of Indeterminate is that the account, whilst known not to be “Bad”, is also known not to be completely “Good”. IIUC these above meanings are the most common in the banking industry but your corrections will be tallied and recorded here.

It will also be handy to adopt the likewise common terminology “Bad” for “in default”, along with “Good” and “Indeterminate” and their abbreviations “B,G,I”. These are used in textbooks for technical formulas like odds ratios and information values AWML. In today’s post this usage remains casual and by “G” might be meant “not B” or perhaps in another context “not I nor B”.

Meaning #1.1.1: A special situation related to #1.1 deserves noting. As it stands, #1.1 means that the account was known not to have gone bad during the OW. It isn’t a situation of doubt as to the outcome (contrary to what the English word “indeterminate” connotes). However, one particular case does involve reasonable doubt: when an account has reached the penultimate stage at the end of the OW – for example, a CC has gone 60DPD in the last month of the OW where the default definition is 90DPD. Unlike other Indeterminates that may have gone 60DPD and then rehabilitated, with this “horizoned” account one doesn’t know which of the categories G, I or B it should really belong to. OK, it belongs to “I”, but not in the same sense as an account that reached 60DPD and then rehabilitated during the OW.    

Meaning #2: More like the natural English usage, this meaning covers situations where one isn’t sure about assigning “G” or “B”. For example, consider application modelling with an OW of 24 months:

  • An account closes good after only 2 MOB. Is this a “Good” account? Not in the same sense as one that was exposed to risk for the full 24 months. One might call it indeterminate in the sense that one doesn’t know whether it would have been G or B if it had hung around for 24 months. I prefer the more specific term “closed good” for this situation. 
  • (Similar to above) Only the first few MOB are known because the account opened recently. I prefer the more specific terms “out of sample” or “out of time” for these situations.
  • For whatever other reasons, such as incomplete data, one doesn’t know the exact outcome of some account at some point in time or across some time window.

This post is only about the nomenclature, and is not even definitive on that point! As to what you do or don’t use “indeterminate” for in the modelling world, that subject is too long for this week.

“Churn” is used here to refer to accounts closing ahead of schedule for reasons not related to default. Perhaps this is not ideal terminology – I tend to use it because it is short and specific – but other suggestions for common usage would be welcome.

One variation encountered is “closed good”, which will be used later in discussions of “closed goods in” versus “closed goods out” as bases of analysis. This nomenclature is more comfortable than “churneds in/out” would be.

Meaning varies amongst products. For CC, there is no fixed product schedule and churn would normally have the marketing meaning of customers taking their business elsewhere – e.g. “balance transfer” to another CC issuer. This has been a particular concern with aggressive marketing by competitors offering low or zero interest for an introductory period.

For term loans with a fixed principal & interest amortisation schedule, churn could come about from re-financing of a HL or PL with another lender. A similar issue is the early paying down of the loan balance on products that allow this. “Churn” is not a descriptive word for this behaviour – the account may remain open and active but have a much lower loan balance than the bank was expecting. Lower funds at risk means lower earnings for the bank, affecting the profitability model for the product cycle. What would be of particular concern, and likely in practice, would be the correlation between early payment and low PD, i.e. the lowest-risk customers shrinking as a proportion of funds at risk.

As regards default analytics and PD models, churn is a countervailing force to default. If a portfolio has high churn, it will make the default experience look better (if analysed on a “closed goods in” basis AWML). To make a clear analysis of a portfolio it is better to analyse the effects of churn and default separately from each other. For profitability studies, each plays a role.

This post is as close as I get to a “rant”.

Some parts of Basel formulas have unnecessary complexity, which involves not just inefficiency but also potential pitfalls.

The specific example is the formula for asset correlation which appears in Basel paragraph [283] and which includes a term 0.12 x (1-EXP(-50 x PD)) / (1-EXP(-50)). There are similar terms elsewhere in Basel formulas, but for focus let’s look at just this case. Surely, this term should be given as simply 0.12 x (1-EXP(-50 x PD)).

Presumably the casters of the formula felt a need to normalise the term to handle PD over its full range of [0,1]. This may satisfy academic neatness but, as I maintain below, it comes at significant risk of causing error or wasting resources.

The materiality of the normalising denominator is as close to nil as any banker could imagine. The term EXP(-50) evaluates as 2 x 10**-22, which means 0.0000000000000000000002. When one subtracts this from 1 it makes no difference and you still end up with 1. In the old days, this used to cause a computer error known as underflow, whereby the floating-point arithmetic processors, rearranging numbers for calculation, would discover that during this process one of the quantities had disappeared – not automatically a fatal error, but probably something you wanted to know about. In the case of the above Basel term it’s not an error, but it is frivolous formulaic complexity; in practical terms the denominator equals 1 and the term should be simplified to 0.12 x (1-EXP(-50 x PD)).

OTOH, to make a pedantic point: if Basel wants the answer to come out exactly the same, they could change the multiplier from 0.12 to 0.1200000000000000000000024. Or, add a sentence in the doc saying that, whilst a normalising denominator was academically desirable, it was omitted on materiality grounds.

Whatever, the materiality of the denominator term is less than a thousandth of a cent even when multiplied against a capital figure of $100 billion.
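
A quick numerical check (a Python sketch of just the one term under discussion, not the full [283] formula) makes the point:

```python
import math

def correlation_term(pd, normalise=True):
    """0.12 x (1 - EXP(-50 x PD)), with or without the (1 - EXP(-50)) denominator."""
    term = 0.12 * (1.0 - math.exp(-50.0 * pd))
    return term / (1.0 - math.exp(-50.0)) if normalise else term

print(math.exp(-50.0))                    # ~1.93e-22
print(1.0 - math.exp(-50.0) == 1.0)       # True: in double precision the denominator is exactly 1
print(correlation_term(0.013) == correlation_term(0.013, normalise=False))   # True: no difference at all
```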

What makes this a non-trivial rant is that there is significant cost to extra complexity. In my experience, only the most adept of the technical team would be able to transcribe such a formula without error. Others not directly familiar with the context, such as managers or computer programmers, are prone to transcription errors. But, most perversely, if I were to see such a formula presented by an intermediary – say, for example, as part of a computer program – I would assume strongly that a transcription error had been made because the logic of the formula fails the “sanity test”.

Much as I admire maths, a Basel implementation is fraught with thousands of small hurdles (OK and big ones), and we owe it to the business community to adopt pragmatic standards.    

An important basic concept in default analytics is “exposed to risk”, by which we mean risk of going into default unless otherwise specified (one might otherwise be studying risk/propensity of churn, cross-sell, etc.).

Abbreviated ETR in this note but AFAIK this isn’t common so won’t be added to the abbreviations list.

Often probabilities are estimated by dividing the number of events that did happen by the number of events that could have happened, and ETR is basically that “could have happened” part, i.e. the denominator of the fraction. The ‘hazards’ and risk PDs of default analytics are just special cases of this situation.

A typical setting is when building an Application PD model: the modelling mart will have some number of accounts that started out at open date (MOB=0), and a certain target OW of (say) 24 months; at the simplest level all the accounts are ETR of going into default within the OW.

However, if account #1 opened only 18 months ago and is still not in default, then although it has been ETR for 18 months, it hasn’t been ETR for 24 months and is not quite the same unit of modelling information as an older account #2 that did survive 24 months. Account #1 has reached the horizon and is said to have been censored. Model builders wouldn’t normally be dealing with these out-of-time (OOT) cases because, knowing that 24 months OW was the target, they would have chosen a sample window (SW) that was at least 24 months before the horizon in its entirety.

But what about account #3 that opened 30 months ago but closed good, i.e. without ever going into default, at MOB=18? Account #3, like account #1, was only ETR for 18 months and is not quite like account #2. There was no way it could have contributed a default event for MOB=19-24 as it was not ETR for 19-24.

That segues into the closed good in vs closed good out discussion AWML but meanwhile opinions and contributions would be welcome from those who have views on the issues. People who study mortality risk have similar issues whereby, for example, they study all individuals for a certain time window. People may emigrate and so be ETR for only a portion of the TW, because one can’t reliably trace their subsequent mortality (survive or die?) in another country. But, you don’t assume they survive (or die); rather you use their information appropriately with respect to their lesser overall ETR.
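
To tie the concept down, here is a minimal life-table style sketch: for each MOB, the defaults that did happen divided by the accounts that were still ETR at that MOB. The account records are hypothetical, and the handling of the exit month is deliberately simplified.

```python
# Each record: (months of exposure observed, True if that exposure ended in default)
accounts = [(18, False), (24, False), (18, False), (7, True), (24, False), (13, True)]

OW = 24
for m in range(1, OW + 1):
    at_risk = sum(1 for months, _ in accounts if months >= m)             # ETR at MOB=m
    defaults = sum(1 for months, bad in accounts if bad and months == m)  # defaulted at MOB=m
    if at_risk:
        print(f"MOB {m:2d}: {defaults}/{at_risk}  hazard = {defaults / at_risk:.3f}")
```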

Because application modelling is longitudinal, the focus is on the first default, so ETR is mostly a matter of the account still being open and not ever having previously been in default. For behavioural modelling which is essentially cross-sectional, there is the additional issue of whether an account is ETR of fresh default or whether it is still included in some previous default episode – link to the re-ageing issue AWML.

There may be subtleties in the ETR concept, such as deceased account holders, dormant accounts – are these ETR? Or in a product like reverse mortgage, is there a default risk at all?

As mentioned in this thread, the predictive power of Application PDs decays with time. Thinking in a cross-sectional Basel mode, we look at all the accounts in a portfolio as of this month. If an App PD is available for an account, it will have more predictive power if it is a recent one (i.e. account has low MOB) than if it is an old one (i.e. account has high MOB).

The connection with MOB is not absolute as, for certain products, there can be a re-assessment of application information at some later time in the account history, such as an application for a limit increase; i.e. the real point is “how old is the application information and the assessment of the PD”.

The reasons for decay of predictive power merely reflect the fact that older information is often less relevant than recent information.

Some years ago for NNB I studied the decay of the predictive power (measured by Gini) by backtesting on many years of data for various portfolios. The essential output was a graph showing the profile of Gini plotted against MOB. IIRC this showed Gini decaying fairly gently from its maximum in the early months towards lower levels, but still retaining some predictive use even after 3 years. Exact patterns varied between products.

NNB had also developed behavioural prediction models, so I did the same exercise for those Beh PDs. These models concentrated on pure behavioural predictors (dynamic information about recent account performance) rather than the static “application” type predictors. Naturally, with Beh PDs the trend is the opposite, in that they start with low predictive power at MOB=1 and ramp up as the behavioural information accumulates. IIRC the ramp-up was fast, with the models reaching close to full power within 6-9 MOB. Also, this full power was substantially higher than the full power of the App PDs.

Hence the natural “transition” idea to make the best use of all the information for Basel purposes was implemented as follows:

  • calibrate the App PD for a 12-month OW – because this is the Basel context
  • Beh PDs are built with 12-month OW and so need no calibration
  • form the Basel transition PD as a weighted average of the App PD and the Beh PD
  • i.e. Basel PD = w * App PD + ( 1 – w ) * Beh PD
  • figure the weight w by consulting the previously determined “App decay” and “Beh ramp up” profiles
  • The details of figuring w are not important here, but w naturally starts out close to 1 for accounts that have just opened (MOB=1), drops fairly rapidly, with equal weight (w=0.5) reached after only a few MOB, and most of the weight (w=0.2) passed to the Beh PD by 9-12 MOB.
  • One wouldn’t need to be so scientific, and a simple straight-line schedule transitioning from App to Beh over a fixed number of months would be good enough for most purposes; a minimal sketch of such a schedule follows below.
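
For concreteness, a minimal sketch of such a straight-line schedule and the resulting blended PD; the 12-month full-transition point and the example PDs are assumptions for illustration only.

```python
def app_weight(mob, full_transition_mob=12):
    """Straight-line weight: w = 1 at MOB = 0, falling to w = 0 by full_transition_mob."""
    return max(0.0, 1.0 - mob / full_transition_mob)

def basel_pd(app_pd, beh_pd, mob):
    """Weighted-average transition PD: w * App PD + (1 - w) * Beh PD."""
    w = app_weight(mob)
    return w * app_pd + (1.0 - w) * beh_pd

# e.g. an account at MOB=6 with App PD = 1.5% and Beh PD = 0.8%
print(round(basel_pd(0.015, 0.008, mob=6), 4))   # 0.0115, i.e. equal weight at the halfway point
```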

Technical readers will note that a problem mentioned previously remains: that the 12-month OW for the App PD is longitudinal rather than cross-sectional. Thus it models default for OW=[1,12]MOB rather than, say, OW=[9,20]MOB. However, this is of diminishing importance because by the time it would make a major difference, say OW=[25,36]MOB, the weight will have mostly transferred away from the App PD and onto the Beh PD.

One clean aspect of this weighted approach to transitioning is that validation of the final PD is a consequence of validating its components App PD and Beh PD. As long as those two are accurate (unbiased), the weighted PD is mathematically sure to also be accurate. In practice, this theoretical nicety may not pan out so easily because the App PDs of all vintages would need to all be accurate.

Alternative approaches to the transition issue are discussed below.

One suggestion I have heard but don’t like is to include the App PD as a predictor into the build of the Beh model. This doesn’t have the desired effect because merely including App PD as a main effect doesn’t allow the mechanics of regression to downweight the App PD if it is an old one and vice versa if it is a young one. The regression doesn’t know about MOB and one can’t fix this by including MOB as another main effect. (If you like getting technical, you might get close by designing appropriate interaction effects.)

Rather, a simple and effective approach in this direction (which can be found in an early post by coldies) would be to segment the Beh PD model build: have one model for accounts with MOB<6 (say), which would include the App PD as a main effect, and another model for older accounts that ignored the App PD.
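
A rough sketch of that segmented build, in Python; everything here – the MOB<6 split, the stand-in predictors, the placeholder target – is illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical mart: months on book, App PD, some behavioural predictors, and a bad flag.
rng = np.random.default_rng(1)
n = 4000
mob = rng.integers(1, 36, size=n)
app_pd = rng.uniform(0.001, 0.05, size=n)
beh_x = rng.normal(size=(n, 3))            # stand-in behavioural predictors
bad = (rng.random(n) < 0.1).astype(int)    # placeholder target, purely for illustration

young = mob < 6
# Segment 1: young accounts, with the App PD included as a main effect
model_young = LogisticRegression().fit(np.column_stack([app_pd[young], beh_x[young]]), bad[young])
# Segment 2: older accounts, App PD ignored
model_older = LogisticRegression().fit(beh_x[~young], bad[~young])
```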

Also note that some application predictors don’t decay with time e.g. Gender; Secured vs Unsecured flag. Any such predictors could be used as main effects in the Beh model without problem.

The App PD thread noted that App models need not have been built on the 12-month OW which is the Basel platform.  

Picking any sample of accounts and following them longitudinally from their open date, the number of defaults naturally builds up cumulatively as one progresses along the MOB axis. Thus default rate @24MOB will be a bigger number than default rate @12MOB. The graph of the cumulative emergence of defaults against MOB is a particularly useful analytical tool that visually characterises the default profile of this sample (which may be a portfolio, cohort, segment or whatever). There are subtleties AWML to do with treatment of accounts that churn.

One use of this ‘emergence’ graph is to form a rough idea of the relativities between default rates at different MOB; for example, cumulative defaults @24MOB would not typically be exactly double the figure @12MOB – it could be more, or less, depending on the product.

Illustrating a slightly more scientific approach: modellers may have already built a model predicting a target of “bad @24MOB” and may wish to calibrate this same model to alternatively predict “bad @12MOB”. As long as the original modelling mart is still available, it should not be too difficult to build an additional column (field) for the “bad @12MOB” flag, which can then be used as the dependent (target) variable in a regression against the original model’s score. This would provide a calibration of the model to a 12MOB basis without going to the trouble of building a whole new model for this different default target. Implicitly the hope is that the drivers (predictors) of default by 12MOB are the same as those for default by 24MOB. One can imagine objections to this assumption: it might be that certain variables are better at predicting early defaults.
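
By way of illustration, here is a minimal sketch of such a recalibration – a one-predictor logistic regression of the new bad@12MOB flag on the existing score. The mart is simulated, and the “only ~60% of bads have emerged by 12 months” factor is an assumption purely for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated stand-in for the original modelling mart: the existing model's score,
# plus a newly-derived bad@12MOB flag.
rng = np.random.default_rng(0)
score = rng.uniform(450, 750, size=5000)
log_odds_24 = (score - 500) * np.log(2) / 20      # pretend the score is calibrated to a 24MOB basis
pd_12 = 0.6 / (1 + np.exp(log_odds_24))           # assumed: only ~60% of the 24MOB bads emerge by 12MOB
bad12 = (rng.random(5000) < pd_12).astype(int)

# Recalibrate: regress the 12MOB bad flag (the target) on the existing score (single predictor).
calib = LogisticRegression()
calib.fit(score.reshape(-1, 1), bad12)

# PD on the 12MOB basis for a score of 600, without rebuilding the model.
print(calib.predict_proba(np.array([[600.0]]))[0, 1])
```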

But in any case, as mentioned in the earlier post, calibrating to 12MOB is still a longitudinal concept which does not closely match the Basel need to predict default in the next 12 calendar months. Hence the incorporation of Application PDs for Basel purposes needs to be more subtle AWML.

A related issue is that the predictive power of Application PDs decays AWML.

Basel systems are likely to make use of the application PD of each account, but this is not a comfortable fit because the Basel requirements are cross sectional whereas the App PD is longitudinal and not generally related to the same outcome window (OW).

The App PD is primarily for the purpose of decisioning: does the bank want to accept the application (made by some individual for some retail credit product).

At the front end the PD is usually presented as a score – merely a mathematical transformation of PD that is easier for general staff to handle – PDs can be a bit painful to look at because of decimal points, counting the leading zeroes, their inherent skewness, and potential confusion between decimal and percentage formats. Scores, by contrast, are chosen to span comfortable three-digit ranges, and are arranged such that high score = good applicant ( = low PD). This is done by linear transformation of the log(odds) which, if you are interested in it, you probably know all about.

So, in its simplest form, the computer knows that the cut-off score is (say) 567 and the applicant is declined if their score works out to be below this. Otherwise, referral, accept, etc.

This decisioning purpose is different from the Basel purpose of PDs. Originally, there would have been a business case for this product, modelling profitability of this line of business based on revenues, costs, and credit losses. This profitability model would ideally analyse a full product life cycle, but depending on the product, life cycles can be variable due to early closure, early repayment, refinancing and the like, which I will call “churn” below (although suggestions for a better term are welcome). A key input would be the default profile to be expected – how many defaults and at what stage (longitudinal MOB) in the account’s life cycle. The estimation of default profiles and churn profiles is not difficult given sufficient amounts of relevant data and the assumption that the future will be like the past (!?).

A common simplistic approach for building an application model is to settle on some fixed OW – such as 24 months – and do the modelling on the basis of predicting this “bad rate @24 months”.

There is no reason for such an OW to equal the Basel OW of 12 months. Its purpose is to help the business make the best decisions on new applications. Presumably, for some products such as HLs, this would require a profitability model that looked well beyond the first 12 MOB of the account. In my experience, defaults on HLs arise more in later years. If this were a technical discussion, we would now pause to sketch hazard graphs AWML.

So, quite likely, a bank’s App PDs are built on a different OW than 12 months, and are therefore not immediately commensurate with Basel needs. Furthermore, App PDs are longitudinal not cross-sectional, so even if it were a 12-month OW, it would be referring to the first 12 MOB for that account, which wouldn’t be the coming 12 months unless that account opened this month. A typical account this month may be 29 MOB, so for Basel purposes one would want to know the conditional probability, given that the account is not in default at MOB=29, that it would go into default during MOB=30 through 41 inclusive. Whilst this calculation could be done using the hazard curve, I don’t think many analysts go to this level of detail.
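
For the record, the conditional calculation itself is simple if one has the cumulative default (emergence/hazard) curve; a sketch with purely illustrative numbers, ignoring churn and censoring:

```python
# F(m) = cumulative probability of default by MOB m for this segment (illustrative numbers only)
F = {29: 0.020, 41: 0.031}

# P(default during MOB 30..41 | not in default at MOB 29)
conditional_pd = (F[41] - F[29]) / (1.0 - F[29])
print(round(conditional_pd, 4))   # 0.0112
```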

Rather, there are several simpler potential ways that the App PD can be incorporated for Basel purposes. Follow-ups to come but see also an earlier ozrisk discussion.  
