In one of my few forays into fact checking (I do not pretend to be a professional journalist), I have spoken to a few people in and around BankWest over the last few days. They have all been confident that there is no sale process in the wind. A couple of them are in areas that would be crucial in the event of any movement on this front, so I can be fairly confident in saying it isn’t going to happen – at least, it is not in the works now.
The last piece I did on this highlighted the issues with it. In my opinion, short of the relevant Act being amended, it is unlikely that BankWest will be sold to anyone other than another (non-Australian) bank. Operationally, the purchase by an Australian bank just makes no sense.
I also cannot see the WA state government agreeing to amend the legislation any time soon – so BankWest will remain.
Paraphrasing an emailed question from Dominik (who IIUC is not from Australia): is there information out there about the credit risks associated with different categories of business? This is outside my zone; my experience is mostly retail credit, i.e. lending to individuals.
Dominik asks: “I need to set up (for a loan granting purposes) a kind of a rating matrix for different unconnected types of business such as a poultry business or shipyard.”
IIRC in Australia there exists a well codified hierarchical classification of business types, starting at super categories (like agriculture, mining, etc.) and moving through a couple of layers down to very specific categories (like “coffin maker”). Analysts concerned with non-retail credit risk would probably have some experience or information about the credit risk characteristics of these hierarchies but, as Andrew has commented elsewhere, they would be reluctant to share this knowledge as it would be part of the bank’s competitive advantage. However, without sharing the content, perhaps some readers would share some analytical or modelling tips?
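To make the idea of such a hierarchy concrete, here is a purely illustrative sketch: the category names below are made up for the example (they are not the real Australian classification codes), and the lookup simply walks from a fine category back up to its ancestors.

```python
# Illustrative only: a toy two-level industry hierarchy with a lookup
# from a fine category to its (super category, sub category) ancestors.
# The names are invented, not real classification codes.

HIERARCHY = {
    "agriculture": {
        "livestock": ["poultry farming", "dairy farming"],
    },
    "manufacturing": {
        "wood products": ["coffin maker", "furniture maker"],
    },
}

def ancestors(leaf):
    """Return (super_category, sub_category) for a fine category, or None."""
    for top, subs in HIERARCHY.items():
        for sub, leaves in subs.items():
            if leaf in leaves:
                return top, sub
    return None

print(ancestors("coffin maker"))   # ('manufacturing', 'wood products')
```

In practice a real scheme would have more layers and numeric codes, but the analytical point is the same: risk characteristics can be attached at any level of the tree and inherited downwards where fine-level data is sparse.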
From very slight involvement I seem to recall that factors like size of the business, turnover, nature of assets, and (especially) recent financial performance could be more important than fine classifications of business type. Some of these in turn (like the assets) may be more relevant to LGD than to PD.
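As a toy illustration of scoring on those firm-level factors rather than on a fine industry classification, here is a minimal logistic-style PD sketch. The functional form and every coefficient are assumptions invented for the example, not anything from actual practice.

```python
# Toy PD scorecard: larger turnover and better margins lower the PD,
# higher gearing raises it. Coefficients are invented for illustration.
import math

def toy_pd(log_turnover, profit_margin, gearing):
    """Toy 12-month probability of default via a logistic transform."""
    z = -2.0 - 0.3 * log_turnover - 4.0 * profit_margin + 2.5 * gearing
    return 1.0 / (1.0 + math.exp(-z))

# A highly geared, low-margin firm scores a much higher PD than a
# larger, profitable, conservatively geared one.
print(toy_pd(log_turnover=3.0, profit_margin=0.02, gearing=0.9) >
      toy_pd(log_turnover=5.0, profit_margin=0.15, gearing=0.3))   # True
```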
Dominik further: “I thought about comparing data from different stock exchanges considering some parameters like a market cycle etc”
This wouldn’t be an easy route, given that listed companies are a very select sample of all the medium to large businesses out there. However, there is plenty of received wisdom (and analysis) about cyclical versus non-cyclical sectors of any stock exchange and/or country. Poultry and coffin makers: non-cyclical! But credit risk – as some recent ASX cases illustrate – will depend heavily on capital structure (gearing) and the management of that company.
Even with a poultry business, if the management borrows to the hilt and pursues an aggressive acquisition strategy, at the same time trying to challenge the purchasing power of the big retailers – they could easily end up with egg on their faces (sorry).
Any advice from those who work in the non-retail area would be a significant improvement on the above and would be appreciated.
While on the subject of “validation” – it can have a range of meanings when applied to credit risk models.
At the most general level it means review by an external authority. This could cover a wider scope than merely reviewing the models themselves. All aspects of how the modelling methodology was chosen, executed, implemented, and integrated with the business might be considered. Naturally an external technical review of the models may be a valuable subtask.
Validation using data is a more concrete approach. Widest scope is achieved by having a sample of the bank’s exposures scored by a relevant external agency with similar models for comparison with the bank’s own results. Whilst this covers the most bases, it is hard to do it well in practice because of the difficulty of reproducing the same data environment – for example categorical predictors may need to be ‘mapped’.
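The ‘mapping’ problem can be sketched very simply: before an external agency’s models can score the bank’s exposures, the bank’s categorical predictor values must be translated into the agency’s categories, and gaps in the mapping must be flagged rather than silently dropped. All the category names below are invented for illustration.

```python
# Hedged sketch of recoding a categorical predictor from the bank's
# data environment into an external agency's categories. The mapping
# and category names are invented for illustration.

BANK_TO_AGENCY = {
    "OWNER_OCCUPIED": "RESIDENTIAL",
    "INVESTMENT_PROPERTY": "RESIDENTIAL",
    "SMALL_BUSINESS": "SME",
}

def map_exposure(exposure, mapping=BANK_TO_AGENCY):
    """Return a copy of the exposure with the predictor recoded; flag gaps."""
    out = dict(exposure)
    out["segment"] = mapping.get(exposure["segment"], "UNMAPPED")
    return out

print(map_exposure({"id": 1, "segment": "SMALL_BUSINESS"}))
# {'id': 1, 'segment': 'SME'}
```

Even this trivial version shows why the exercise is hard in practice: the mapping is many-to-one, so information is lost, and any 'UNMAPPED' residue has to be investigated before the comparison of results means anything.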
Validation using the bank’s own data is the easiest and perhaps most familiar context. Various more specific technical terms apply. Some examples:
- during the model building phase it is good practice to hold out a ‘validation’ sample as a protection against over-fitting. This is a simple form of cross-validation. The validation sample used is randomly selected from the modelling mart to guarantee neutrality with respect to all data effects.
- a proposed new model can be run on ‘out of time’ data – cohorts that are before (‘backtesting’) or after the sample window represented in the modelling mart. This is likely to be instructive and reassuring but does not carry the guarantee that pure cross-validation does.
- the routine monitoring of the performance of models once they have been implemented may also be considered to be ongoing ‘validation’ and is the first line of defence.
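The holdout idea in the first point above can be sketched in a few lines: randomly partition the modelling mart so the held-out rows play no part in fitting, then compare performance on both parts. The function and variable names are illustrative.

```python
# Minimal sketch of a random holdout ('validation') split of a
# modelling mart. Names and the 30% holdout fraction are illustrative.
import random

def split_mart(rows, holdout_fraction=0.3, seed=42):
    """Randomly split rows into (build, validation) samples."""
    rng = random.Random(seed)
    shuffled = rows[:]            # copy so the mart itself is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_fraction))
    return shuffled[:cut], shuffled[cut:]

mart = list(range(1000))          # stand-in for the modelling mart
build, validation = split_mart(mart)
print(len(build), len(validation))   # 700 300
```

The random selection is the point: an ‘out of time’ sample, by contrast, differs from the modelling mart systematically, which is exactly why it is instructive but carries no such guarantee.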
The simplest setting is validation of an individual component, especially PD. Last week’s post touched on the more difficult context of validating that the chain of models PD-EAD-LGD work together correctly.
Aren’t there some aspects of Basel – like long term cycle issues – that defy validation? Or rather, rely on judgement rather than analysis?
Nothing to do with airlines: we speak here of validating expected loss against actual loss.
A point made by Bruce M in recent comments is that there needs to be consistency in the modelling methodology behind the suite of models for the risk components PD, EAD and LGD. One task that should bring this point to the fore is the validation of EL against AL.
The PD (and EAD) models can be easily validated because their predicted outcomes become certain after 12 months. LGD is hard because
- the observation period starts later: if an account defaults in the 11th month of the 12-month outcome window, observation of the actual LGD outcome (i.e. actual loss) can only begin at that point, which is already 11 months later than the sample cohort.
- the observation period may be long
- ideally one needs to wait for the longest AL to resolve, but one can’t know in advance how long this will be
This means that ELs can only be reliably validated against ALs if the sample cohorts are quite far back in time – perhaps 2-3 years depending on product.
Nevertheless an adequate job can be done on more recent cohorts, since even there at least some of the ALs will be known. I recommend a graphic approach showing EL vs AL for many quarterly cohorts simultaneously, with certain ALs in a bold colour, and as-yet-unresolved defaults shown on a possible – probable – worst case basis via suitable graphic cues (e.g. colours, hatching, error bars). Such a display will show a ‘fan’ effect: older cohorts have a more certain EL-AL reconciliation, whereas for more recent cohorts the zone for AL fans out. (EL is a historic fact and is always known exactly.)
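The possible – probable – worst case range for a cohort’s AL can be sketched as follows. The dataclass and field names are invented for illustration, and the ‘probable’ case here simply applies the model LGD to unresolved exposures, which is one plausible convention among several.

```python
# Sketch: per-cohort AL range for EL-AL reconciliation.
#   possible = only the resolved losses (unresolved defaults recover fully)
#   probable = resolved losses + model LGD applied to unresolved EAD
#   worst    = resolved losses + the whole unresolved EAD is lost
# All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Default:
    resolved: bool
    actual_loss: float = 0.0      # known only once the default resolves
    ead: float = 0.0              # exposure at default
    expected_lgd: float = 0.0     # model LGD, used for the 'probable' case

def al_range(defaults):
    """Return (possible, probable, worst) actual loss for one cohort."""
    certain = sum(d.actual_loss for d in defaults if d.resolved)
    unresolved = [d for d in defaults if not d.resolved]
    possible = certain
    probable = certain + sum(d.ead * d.expected_lgd for d in unresolved)
    worst = certain + sum(d.ead for d in unresolved)
    return possible, probable, worst

cohort = [
    Default(resolved=True, actual_loss=40.0),
    Default(resolved=False, ead=100.0, expected_lgd=0.3),
]
print(al_range(cohort))   # (40.0, 70.0, 140.0)
```

Plotting these three values per quarterly cohort against the (exactly known) EL produces the fan described above: old cohorts have no unresolved defaults, so the three values coincide, while recent cohorts show a wide possible-to-worst band.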
Carrying out an EL-AL validation is a good way to review the consistency of model approaches and to detect those situations that fall between the cracks.