You are currently browsing the monthly archive for June 2008.

Very good piece in today’s “The Sheet” about the upcoming replacement of CBA’s core banking system. Correctly, it is entitled “Adventures with core systems: part I”. The belief that most banks in Australia need to replace their core systems reasonably soon is a strong one – and in most cases justified. The problem, of course, is also well known.

Core system replacement is expensive, risky and time-consuming. It is a huge change management task, with most of the bank’s staff well trained on the old system. For example, I dropped into my bank to close an account a few days ago. When I sat down with my “client adviser”, she opened a web browser to check my balances and see what I wanted to do. She kept the browser open for the parts she would need to show me. As soon as she wanted to actually do anything, though, she opened up a terminal emulator. I peered around and asked why she had not done it in the browser.

Her response was simple – the browser allowed her to do it, but she was much faster on the terminal. Essentially, although the terminal emulator was lousy to look at, it was effective and fast.

At that bank, and at almost every one I have ever been into, the story is the same – bank staff are comfortable with the old systems. Despite being built on technology that was outdated 20 years ago, they still work, and staff are familiar with them. Anyone seeking to replace a system has to make it work not only from a technological viewpoint but also in the organisation.

In comments, feel free to add in your favourite banking core system replacement story. Ones from Westpac are particularly invited – the one that was particularly successful a few years ago sounds good. Operational risk events can also be pretty funny – if you are not in the middle of one.

Give “The Sheet” a read too. If you are interested in banking activity in Australia, it is worth it.

Drawing together several themes, today’s post recommends how to assemble modelling marts that will be representative for use in a Basel context.

The Basel context is a cross-sectional one: at some point in time, such as the most recent calendar month end, the bank must assess the risk components (PD, EAD, LGD and hence [or otherwise?] the expected loss EL) for the exposure over the next 12 months. As the point in time is fixed and the coverage is all at-risk exposures, accounts will be encountered at all stages of credit status (and at any MOB): G, I, point-in-time bad B, episodic bad E, plus whatever collections and recoveries statuses may obtain.

PD models for this context would primarily be behavioural models, built to predict over a 12-month OW. (BTW, earlier posts discuss the transitional use of application PDs for this purpose.) EAD and LGD models are also needed, so several modelling marts are required. How many, and how should they be assembled so as to be representative when put to work together on Basel duty?

My suggestions below are open to discussion & debate – tell us if you have alternative views or practices.

  1. The underlying sampling frame is to pick a point in time and observe all accounts at that point in time. Because of the need for a 12-month OW, this point in time will be at least 12 months before the data horizon (current time).
  2. This sample frame can be overlaid to enlarge the modelling mart: e.g. take several points in time, a month or a quarter apart. Naturally, the additional information is correlated, but that presents no great problem as long as one doesn’t treat it as independent. A limitation is that as one reaches further back into history, the models become less relevant to the future.
  3. Plan to segment fairly extensively; a cross-section will include many diverse animals, better handled in their own (albeit small) cages than with one cover-all model. “Segmentation” is the popular word, but you could also call this a decision tree, CART, etc.
  4. Each segment = separate mart = completely separate model 
  5. Segmenting the PD behavioural mart: at minimum, E needs to be segmented from G. Recall that E is an account that is not point-in-time bad but is episodic bad, i.e. has not yet re-aged. Further subsegmentation is likely to be sensible, into, say, the various levels of I (Indeterminate). Naturally, no PD model is required for status B, or for C, R, etc.
  6. Target variable for PD: whether the start of a new bad episode is encountered during the following 12 months. A definitional issue arises (AWML) as to how to handle segment E.
  7. Segmenting LGD: may leave this for another day  …
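Points 1–6 can be sketched in code. This is a minimal illustration only: the field names, toy statuses and the `build_pd_marts` helper are my own inventions, not any bank’s actual system, and it assumes a pre-built lookup of bad-episode start months per account.

```python
# Sketch of the cross-sectional PD sampling frame: observe all accounts
# at a chosen point in time, drop statuses that need no PD model (B,
# collections, etc. -- point 5), segment on observation-date status
# (points 3-4), and set the target to "a new bad episode starts within
# the next 12 months" (point 6). All names here are illustrative.
from collections import defaultdict

def build_pd_marts(snapshots, episode_starts, obs_month, ow=12):
    """snapshots: {account: status at obs_month}, statuses G/I/E/B/...
    episode_starts: {account: list of bad-episode start months}.
    Returns {segment: [(account, target), ...]} -- one mart per segment."""
    marts = defaultdict(list)
    for acct, status in snapshots.items():
        if status not in ('G', 'I', 'E'):
            continue  # no PD model for point-in-time bad etc.
        # Target: does a new bad episode begin inside the OW?
        target = any(obs_month < s <= obs_month + ow
                     for s in episode_starts.get(acct, []))
        marts[status].append((acct, int(target)))
    return dict(marts)
```

Overlaying several observation points (point 2) is then just a union of calls with different `obs_month` values, remembering that the stacked rows are correlated.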

Default episodes have varying lengths. This can lead to a bias related to the statistical issue known as “length-biased sampling”.

For building an LGD modelling mart, a typical approach would be to collect all the bad episodes that impinge on a certain time window. However, this introduces a length-based bias, because the longer episodes have more chance of being represented. Longer episodes are, in turn, quite likely to be correlated with non-average losses.

To get unbiased sampling for building the mart, specify a sample window and only include bad episodes that started during that window. This will exclude accounts that are already in the middle of a bad episode at the start of the window.
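The contrast between the two sampling frames can be sketched as below. Episodes are hypothetical `(start, end)` month pairs and the window bounds are arbitrary; the point is only that inclusion under the first rule depends on episode length, while under the second it does not.

```python
# Contrast the biased and unbiased frames for collecting bad episodes.
# Episodes are (start, end) month pairs, end exclusive; window is [w0, w1).

def overlaps_window(ep, w0, w1):
    """Biased frame: any episode that impinges on the window at all.
    Longer episodes have more chance of overlapping -- length bias."""
    start, end = ep
    return start < w1 and end > w0

def starts_in_window(ep, w0, w1):
    """Unbiased frame: only episodes that *begin* inside the window,
    so an episode's inclusion chance does not depend on its length."""
    start, _ = ep
    return w0 <= start < w1

# Toy data: one long episode, three short ones.
episodes = [(0, 30), (10, 13), (14, 16), (25, 27)]
w0, w1 = 12, 24
biased = [e for e in episodes if overlaps_window(e, w0, w1)]
unbiased = [e for e in episodes if starts_in_window(e, w0, w1)]
```

Note the long episode `(0, 30)` is swept into the biased frame purely because it straddles the window, which is exactly the over-representation described above.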

Continuing the re-aging thread, a note circulated by APRA had a clear grip of the issue, and proposed:

“APRA’s proposed solution is to only allow the recording of a second default event after the loan has been in the non-default status for a period of at least 12 months”

‘Fraid I can’t give a direct reference as I only have an undated photocopy to hand, entitled “Multiple defaults in the retail portfolio” – it would have been about 2004. Please post to the blog any update on these issues that you may know of.

APRA’s concern was to “require the number of observations in bank’s PD and LGD databases to be equal”, because of the traps of otherwise having mis-matched bases for PD and LGD. My preferred way of describing this – via “bad episodes” – is semantically different but hopefully faithful to the essence of the problem; it also lends itself to handling other difficulties that will be met.

Re-capping points from the last couple of posts:

  • recognise that default definition starts with a point-in-time definition but also has a derived episodic dimension: every transition from good to bad at a point in time begins a bad episode which is a relatively long interval of time.
  • the rule which specifies when the bad episode can end is an integral part of the default definition and is called the re-aging rule.
  • these bad episodes will then be relatively few in number and will be the basic units of modelling.

‘Relatively long’ and ‘relatively few’ represent implicit recommendations to choose a re-aging rule that produces few, long, congealed bad episodes rather than the opposite. Technically, you could get out alive with a rule that makes many sporadic episodes, but you will get a lot of unnecessary headaches: multiple non-independent episodes, large numbers of zero-loss LGD points, multiplicities within a year and, in general, a dilution of modelling power through not aligning model constructs with a sensible grip on reality.

With this understanding, the APRA proposal says that the re-aging rule should allow a bad episode to end after 12 continuous non-bad months have elapsed. This seems a good choice and will produce well-congealed bad episodes. A particular merit is that it is never possible for any particular account to have two bad episodes within any 12-month period. This is helpful because a lot of modelling (e.g. behavioural) has a 12-month OW, and the chance of any multiplicity would be a nuisance.

Thinking in database terms, one would have only one source of default information: a table of default episodes, keyed by account and start date. Of course, bad episodes are well-behaved constructs, being distinct for any account and non-overlapping. Depending on how one implements the rule, there can be a slight wobbly about whether a new episode can begin immediately after the previous one ends – imagine an account with B, then 12 G, then B again. You decide how you like to treat this case; it’s not a showstopper.
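A minimal sketch of deriving that episode table from monthly point-in-time statuses, under the APRA-style rule that an episode ends only after 12 consecutive non-bad months. The function name and the simplification to G/B statuses only are my own; the 12-month parameter follows the proposal quoted above. In this sketch the new episode is allowed to begin immediately after the previous one closes – the “wobbly” case noted above.

```python
REAGE_MONTHS = 12  # consecutive non-bad months required to end an episode

def episodes(statuses, reage=REAGE_MONTHS):
    """statuses: list of 'G' or 'B', one per month (index = month number).
    Returns a list of (start, end) index pairs, end exclusive, where the
    final month of each episode is the one that completes re-aging."""
    eps = []
    start = None   # start month of the currently open episode, if any
    good_run = 0   # consecutive non-bad months inside the open episode
    for i, s in enumerate(statuses):
        if start is None:
            if s == 'B':
                start, good_run = i, 0   # a new bad episode begins
        else:
            if s == 'B':
                good_run = 0             # bad again: re-aging restarts
            else:
                good_run += 1
                if good_run == reage:
                    eps.append((start, i + 1))  # episode re-ages and closes
                    start = None
    if start is not None:
        eps.append((start, len(statuses)))  # still open at the data horizon
    return eps
```

With this table as the single source, looking for the first default is just looking for the first episode start, as the next paragraph notes.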

For any longitudinal modelling, looking for the first default is equivalent to looking for the first start of a bad episode.

APRA’s concern that the number of observations should be equal is trivially met, because the table of default episodes is the common data source for both PD and LGD modelling.

So does that solve everything? Not quite, just clears some problems so that we can face the more subtle ones standing in the shadows behind, AWML.

PS any corrections or updates on APRA or other regulatory opinions would be most welcome. 

Continuing the re-aging theme: a clear episodic definition of default is important as the basis for LGD modelling.

Thinking of this issue in terms of the re-aging rule, or in terms of default episodes, amounts to two sides of the same coin: re-aging is the rule that determines when the episode ends, and the default episode is the period of time from the initial triggering of the (point-in-time) default definition until that end point. I find it easier to talk in terms of the default episodes (a.k.a. “bad episodes”) because those are the indivisible modelling units.

One has to be able to clearly identify, enumerate and isolate the separate default episodes. If your default definition doesn’t produce this level of clarity, there will be some ugly problems in the LGD modelling phase.

The ideal is a fairly heavily “congealed” approach, one that tends to produce few, long, well-separated episodes rather than many potentially short and frequent ones. The motivation is that each episode becomes a modelling unit for LGD. Common sense and business knowledge suggest that LGD modelling would be more coherent with a more congealed approach – otherwise one might end up with a larger mart of bad episodes, many of them short and ending in no loss, and many of them correlated and to some extent duplicating each other.

Also the re-aging rule should be invariant to time granularity – it wouldn’t accord with intuition if a change from monthly to weekly data (for example) could substantially change the number and extent of the default episodes. Hence a rule referring to a re-aging period in absolute time units (e.g. X months) is sensible.

These issues were identified and addressed in an APRA note some years ago AWML.  

Today’s piece in the Herald Sun was interesting. HBOS have put a lot of time and effort into their international expansion over the last decade, so I would have to be skeptical that they would be looking to sell. That said, if they were looking to raise a fair amount of fresh capital, selling all, or only part, of BWA would make some sense.

If we assume that St. George will be sold to one of the big four (whether Westpac or not is irrelevant), then BankWest would be the fifth biggest bank in the country – and the only way for another of the big four to increase scale without actually having to do the hard work of increasing business gradually, through processes like increasing customer satisfaction, building the brand and so on.

It would also be the only way (except perhaps through a purchase of the ANZ) that a large international bank could gain scale quickly.

If HBOS were looking to sell they could expect a very full price as a result.

The only problem, of course, would be the Bank of Western Australia Act, 1995, and in particular section 23, which mandates that the bank has to be headquartered in WA and, effectively, run from Perth. For one of the big 4 banks this would mean that they would have to accept a subsidiary headquartered in Perth that they cannot fully integrate. This can be got around in some ways (for example the powers of the Managing Director are not specified – a bank teller could carry the title) but it would be tricky and could expose them to legal issues.

This means that the WA government has at least a partial veto over such a change of ownership – and one they can be expected to wield if required. This would reduce the benefits of an Australian bank buying it – and therefore reduce the chances of this occurring. I would be interested, though, to see if (WA Premier) Alan Carpenter has any meetings with senior members of the management of any of the big four over the next few weeks.

My favourite option, then, would be (if it were on sale) a foreign bank buying it – but it would have to be well cashed up, as BWA has always been a bit weaker on the deposit side, although that has been partially addressed recently with the help of HBOS.

Personally, I think the ANZ is the most likely to be bought – but the new federal treasurer may have other ideas.

Chris has long been one of my favourite bloggers on banking – the problem has always been working out where his blog posts are appearing. This one, though, is a pearler and he is blogging on one of my favourite themes:

… regulators do not make the markets safer. If anything, regulators make financial markets less safe.

Give it a read.

Maybe time to bring up a subject that contains more difficulties than one would expect: re-aging. When an account has gone into default – at some point in time – how long must it be before the account can again be considered ‘good’, and under what circumstances?

Re-aging needs to be part and parcel of the default definition. The default definitions in a typical context are really point-in-time definitions, easy to relate to if one imagines an account running along longitudinally in good status and then, at some first point in time, triggering the default definition, whatever that is (something like 90DPD on an amount of at least $100).

But the difficulty is: what happens next? Suppose the customer makes some partial or full payment, such that in the next grain of time (e.g. the next month) their point-in-time status is not in default. Perhaps they are fully current (= zero DPD), or perhaps their partial payment has pulled them back to a 30DPD or 60DPD status. How does this affect modelling and other activities?

It does not affect application PD modelling, which is longitudinal from the start of the account (MOB=0), and the modelling target is “went bad ever within a certain OW”; as soon as any account first triggers default, it has established its target status as “went bad” and what happens beyond doesn’t matter for the PD model.

It’s a more complicated story for the LGD model AWML.

The first step is to recognise that besides the point-in-time aspect of default, there is also an episodic aspect, which is the interval of time until the account can be considered good again. Why is this episodic definition needed? Can’t we manage just with applying the point-in-time definition at each successive point in time? The problem is that, depending on the granularity of time (e.g. monthly), it would then be possible to have many separate bad episodes for an account within a fairly short time window such as a 12-month window. An account’s status might go something like GGBGGBBGGGB. This patchy pattern then causes headaches for any cross-sectional analyses, and particularly for the basis of the LGD modelling.

The common-sense feeling is that the above pattern represents one extended bad episode, not three separate bad points (months) separated by good points. In banking language, there needs to be a re-aging rule that says the account can’t be considered G as soon as the point-in-time default conditions cease to hold. Instead, there is a new status: “not in default but still in a re-aging period”.

My preferred terminology is to call this situation “not bad but still in a bad episode”, and to use “E” as the code for any such time grains. Thus the above pattern would become GGBEEBBEEEB (if there is a re-aging rule that says an account must be good for several successive months before it can be fully G again).
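The recoding can be sketched as follows. This is illustrative only: the `recode` helper is my own invention, the re-aging period is a parameter (the post deliberately doesn’t fix it here), and only G/B inputs are considered.

```python
# Recode monthly point-in-time statuses into the episodic view: months
# that are not bad but fall inside an unfinished bad episode become 'E'.
# The month that completes re-aging is the last month of the episode.

def recode(statuses, reage=12):
    out = list(statuses)
    in_episode = False   # are we inside an unfinished bad episode?
    good_run = 0         # consecutive good months since the last B
    for i, s in enumerate(statuses):
        if s == 'B':
            in_episode, good_run = True, 0
        elif in_episode:
            good_run += 1
            out[i] = 'E'             # good, but still in a bad episode
            if good_run == reage:
                in_episode = False   # re-aged: subsequent months are G
    return ''.join(out)
```

With a long enough re-aging period, the GGBGGBBGGGB pattern from above comes out as GGBEEBBEEEB, i.e. one extended bad episode still open at the data horizon.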



