Credit risk analysis needs to encompass two modes of analysis: longitudinal and cross-sectional.

Longitudinal means following each individual account along its own timeline, typically expressed as “months on books”. If your x-axis is MOB, you are doing longitudinal analysis.

Cross-sectional means looking across the whole portfolio at a particular point in time. If your x-axis is Date, you are doing cross-sectional analysis.
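
To make the two modes concrete, here is a minimal sketch in Python (pandas), using a toy account-month table with made-up column names. The same default indicator is grouped once by MOB (longitudinal) and once by date (cross-sectional):

```python
import pandas as pd

# Toy account-month table; column names are hypothetical.
df = pd.DataFrame({
    "account_id":    [1, 1, 1, 2, 2, 3],
    "open_date":     pd.to_datetime(["2007-10-31"] * 3
                                    + ["2007-11-30"] * 2 + ["2007-12-31"]),
    "snapshot_date": pd.to_datetime(["2007-11-30", "2007-12-31", "2008-01-31",
                                     "2007-12-31", "2008-01-31", "2008-01-31"]),
    "in_default":    [0, 0, 1, 0, 1, 0],
})

# Months on books: whole months from open date to snapshot date.
df["mob"] = ((df.snapshot_date.dt.year - df.open_date.dt.year) * 12
             + (df.snapshot_date.dt.month - df.open_date.dt.month))

# Longitudinal: default rate with MOB on the x-axis.
print(df.groupby("mob")["in_default"].mean())

# Cross-sectional: default rate with date on the x-axis.
print(df.groupby("snapshot_date")["in_default"].mean())
```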

The modelling of application scorecards is longitudinal; a set of application records is selected, and the default performance of each individual account is analysed to establish (for example) whether it defaulted before 24 MOB. True, the selection of the records may happen to have a date component, such as selecting all the accounts that opened in 2005Q1. OTOH one may select or segment on other criteria. 

The “24 MOB” above is an example of an “outcome window”, AWML.

The Basel calculation of risk components for each exposure (account) is a cross-sectional activity: every account that exists at 31/3/2008 must be assessed for, inter alia, its probability of default within the following 12 months, i.e. for the time window 1/4/2008 to 31/3/2009. This applies irrespective of what MOB each account has reached, and indeed a wide distribution of MOBs will exist in a portfolio, depending on its nature.
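
A small sketch of that cross-sectional window, again with made-up fields. The point is that every account on the books at the reporting date gets the same forward 12-month window, whatever its MOB:

```python
import pandas as pd

reporting_date = pd.Timestamp("2008-03-31")
window_end = reporting_date + pd.DateOffset(months=12)  # 31/3/2009

# Hypothetical portfolio extract: note the wide spread of MOBs.
accounts = pd.DataFrame({
    "account_id":   [1, 2, 3],
    "mob_at_date":  [3, 27, 84],
    "default_date": pd.to_datetime(["2008-09-15", None, "2009-06-01"]),
})

# Defaulted within the window 1/4/2008 to 31/3/2009?
accounts["defaulted_in_window"] = (
    accounts.default_date.notna()
    & (accounts.default_date > reporting_date)
    & (accounts.default_date <= window_end)
)
print(accounts)
```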

Regular monitoring of credit risk statistics would need to reflect both modes of analysis, and typical graphical or tabular displays have both dimensions (MOB and date) in evidence somehow.

Monitoring that was only cross-sectional would tend to be too crude to be usefully interpreted: for example, the simple cross-sectional fact that as of 31/3/2008 a certain portfolio has 10,000 accounts, of which 100 are in default, doesn’t convey much without a detailed understanding of the composition of the portfolio and certain other issues (like collections and write-off procedures). It might be a very alarming result if this were a new product, introduced within the last year, with a portfolio that was growing fast and still dominated by very young accounts. Conversely, it might be a good result if the portfolio were well “seasoned” and collections activities were lengthy.

Monitoring that was only longitudinal would show the overall default profile but not indicate how that profile was changing over time.

Hence good monitoring incorporates both longitudinal and cross-sectional modes, typically showing how the default MOB-profile is evolving as real time (date) advances.
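
The standard display of this kind is the vintage table: MOB down one axis, opening cohort (and hence calendar date) across the other. A minimal pandas sketch with toy numbers:

```python
import pandas as pd

# Toy cohort-level default rates; values are made up.
perf = pd.DataFrame({
    "cohort":       ["2007-10", "2007-10", "2007-11", "2007-11", "2007-12"],
    "mob":          [1, 2, 1, 2, 1],
    "default_rate": [0.00, 0.01, 0.00, 0.02, 0.01],
})

# Read down a column for one cohort's MOB profile (longitudinal);
# read across a row to see how the profile shifts by vintage (date).
vintage = perf.pivot_table(index="mob", columns="cohort", values="default_rate")
print(vintage)
```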

One issue that can make a big difference to discussions about default analytics is the granularity of the time-based data that analysts are working from. Modellers often work with monthly data, but depending on context other credit risk analysts might be working with daily, or perhaps annual, data. Given this range, some issues that are difficult for one analyst may be trivial or non-existent for another.

One extreme of the time granularity spectrum is probably annual data. This might apply to non-retail exposures, where meaningful updates to the risk information (for building, say, behavioural models) might arise only from annual financial statements. This would be a data-poor environment, placing more weight on banking expertise and less on credit risk analytics.

For retail banking, monthly granularity is the common warehousing level for data that will be the prime source for credit risk analytics. For HLs & PLs, this might take the form of monthly snapshots, being summaries compiled at month-end of relevant data fields. CCs, however, have specific statement and repayment cycles not based on calendar month ends. So, although CCs basically have monthly granularity, they might not fit comfortably in a month-end snapshot warehousing approach, but one way or another will have some monthly data summarisation and warehousing.

For the AIRB purposes of using a few years of history to build (and use) retail banking models, monthly granularity is the typical basis and will be the assumption unless stated otherwise. Readers are asked to provide examples from their own environments, as these details can often make a big difference to the discussions.

At the shortest extreme, naturally much credit risk monitoring happens at daily granularity, but IIUC not many modellers would be analysing substantial extracts from raw data sources at daily granularity.

Intra-grain: what happens within the month? The warehouse would no doubt record risk variables at the end of month, but perhaps also for the worst level reached during the month. If not, there might be an account recorded as 111 DPD at 31 March and as 0 DPD at 30 April, and no way to tell whether or not the account reached 120 DPD during April. Then, a default flag built on this data would be a series of calendar-month-end default tests, rather than a “bad-ever” flag. This is not so much a problem as a difference – OK as long as you know what you are dealing with. IIUC in the old days, when computers were only mainframes, this kind of intra-grain issue could be substantial and even apply to an entire outcome window, such that an account was only determinable as good or bad at window end. These examples illustrate how time granularity plays a role in credit risk data.
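
A toy illustration of the point, with made-up fields. With only month-end DPD available, the “bad-ever” outcome is invisible:

```python
import pandas as pd

# One account's history: it reaches 120 DPD during April but has
# cured by the April month-end snapshot.
hist = pd.DataFrame({
    "month_end":          pd.to_datetime(["2008-03-31", "2008-04-30"]),
    "dpd_month_end":      [111, 0],
    "dpd_worst_in_month": [111, 120],  # only kept if the warehouse records it
})

bad_month_end = (hist.dpd_month_end >= 120).any()       # False
bad_ever      = (hist.dpd_worst_in_month >= 120).any()  # True
print(bad_month_end, bad_ever)
```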

Intra-grain: exposure for less than a month? Accounts open randomly throughout a month, so if credit risk data is summarised and warehoused on any regular schedule (like month-end), accounts will be exposed to risk for only half a month on average during their first “month on books”. Does anyone out there worry about this kind of issue? It may not sound important, but it might mean that your 12-month outcome window is really an 11.5-month outcome window. Perhaps data based on payment cycles, rather than regular snapshots, can avoid this issue, although IIUC it will pop up in other ways. BTW the issue in this paragraph applies to application modelling, rather than behavioural modelling, because of the intrusion of the account open date.
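
A back-of-envelope version of the arithmetic, under the stated assumption that open dates are uniform within the month:

```python
# First "month on books" carries about half a month of exposure on average.
days_in_month = 30
avg_days_exposed_first_mob = days_in_month / 2  # ~15 days

# So a nominal 12-month outcome window measured from the first snapshot
# averages out at roughly 11.5 months of actual exposure to risk.
window_months = 12
effective_window = (window_months - 1) + avg_days_exposed_first_mob / days_in_month
print(effective_window)  # 11.5
```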

Nomenclature can be a minor stumbling block for credit risk workers with different backgrounds and environments. A typical example would be those graphs and tables that go by various short-hand names, meaningful internally but maybe not to outsiders.

For the most technical stuff I prefer the language of statistics, which is likely to be the most universal. Most are comfortable with “probability” and “odds”, but moving on to the “Gini coefficient”, there turn out to be several alternative names already, plus variations in the technical details. Where these issues seem relevant to Ozrisk, we will tidy them up.
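
As one illustration of the naming issue: the “Gini coefficient” reported for scorecards is the same quantity as the accuracy ratio, and can be computed from the ROC area as Gini = 2*AUC - 1. A minimal sketch with toy scores, using scikit-learn’s roc_auc_score (other conventions, such as Somers’ D, reach the same number by other routes):

```python
from sklearn.metrics import roc_auc_score

# Toy data: 1 = bad; here a higher score means higher risk.
bad_flag = [0, 0, 0, 1, 0, 1, 1]
score    = [10, 20, 35, 40, 55, 60, 90]

auc = roc_auc_score(bad_flag, score)
gini = 2 * auc - 1
print(round(gini, 3))  # 0.833 for this toy data
```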

Of particular use is the nomenclature of the statistical area called “survival analysis”. This has useful technical concepts like “longitudinal” (and “cross-sectional”) and “hazard”, which we will cover in due course, linking them at the same time with the various banking industry terms that might apply. However, no banking term does quite the job of “hazard”, so it would be good Ozrisk practice to stay with the universal statistical nomenclature in this case. This would keep our discourse as accessible as possible to fellow professionals such as actuaries. BTW, actuaries and demographers use survival analysis for the estimation and study of mortality (or sickness) hazard and the composition of populations of people, work which is closely related to default analytics.
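
For the retail setting, the discrete-time version of the hazard is simple: the hazard at MOB m is the proportion of accounts still at risk entering month m that default during month m. A toy sketch:

```python
# Made-up counts: accounts entering each MOB still at risk, and the
# defaults occurring during that MOB.
at_risk  = [1000, 990, 975]
defaults = [10, 15, 12]

hazard = [d / n for d, n in zip(defaults, at_risk)]
print([round(h, 4) for h in hazard])  # [0.01, 0.0152, 0.0123]
```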

On the other hand, nomenclature like “roll rates” seems to be so widespread in the banking industry that one would prefer it to statistical equivalents (“transition probabilities”).
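
A minimal sketch of a roll-rate matrix, assuming made-up month-to-month delinquency states per account; a statistician would call the result a matrix of transition probabilities:

```python
import pandas as pd

# Hypothetical pairs of consecutive month-end delinquency states.
moves = pd.DataFrame({
    "state_this_month": ["0DPD", "0DPD", "0DPD", "30DPD", "30DPD", "60DPD"],
    "state_next_month": ["0DPD", "0DPD", "30DPD", "0DPD", "60DPD", "90DPD"],
})

# Each row of the matrix sums to 1: the roll rates out of that state.
roll = pd.crosstab(moves.state_this_month, moves.state_next_month,
                   normalize="index")
print(roll)
```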

In discussions that follow, please advise in cases where you know of alternative nomenclature, so that we can keep discussions as inclusive as possible.

For credit risk threads in Ozrisk, let’s develop a list of abbreviations.

The aim is to keep the list manageably short: a compromise between efficiency and being readily understood by Ozrisk readers.

It is not the aim to provide definitions here.

Please consider this a work in progress and make suggestions. The list below is enough to be getting on with as regards material I plan to cover, but I’d be pleased to hear, and conform, if there are some broad based conventions in other environments.

A select few Basel favourites:

  • AIRB: Advanced Internal Ratings Based
  • PD, EAD, LGD: The risk components Probability of Default, Exposure at Default, Loss Given Default
  • EL: Expected Loss

Default analytics:

  • NDA, NMA: Number of days (months) in arrears
  • DPD: Days past due
  • PDO: Number of scorecard points to double the odds (see the sketch after this list)
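
Since PDO will recur, here is a minimal sketch of the standard score-scaling arithmetic it belongs to. The anchor values (600 points at 30:1 odds, PDO of 20) are purely illustrative:

```python
import math

# Hypothetical scaling: 600 points pinned to odds of 30:1, PDO of 20.
pdo, anchor_score, anchor_odds = 20, 600, 30.0
factor = pdo / math.log(2)
offset = anchor_score - factor * math.log(anchor_odds)

def score(odds):
    """Scorecard points for given good:bad odds."""
    return offset + factor * math.log(odds)

print(round(score(30.0)))  # 600 at the anchor odds
print(round(score(60.0)))  # 620: doubling the odds adds PDO points
```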

Modelling:

  • NTU: Not taken up
  • TTD: Through-the-door population
  • KGB: Known good or bad modelling datamart
  • SW, OW: Sample window, outcome window
  • OOS, OOT: Out-of-sample, out-of-time
  • K-S: The Kolmogorov-Smirnov test (statistic) (see the sketch after this list)
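
And, as promised above, a minimal sketch of the K-S statistic in scorecard terms: the maximum gap between the cumulative score distributions of the goods and the bads (toy numbers; here a higher score means higher risk):

```python
import numpy as np

# Toy sorted score samples for goods and bads.
good_scores = np.array([10, 20, 35, 55, 70])
bad_scores  = np.array([40, 60, 90])

# Empirical CDFs of each group, evaluated at every observed score.
cuts = np.union1d(good_scores, bad_scores)
cdf_good = np.searchsorted(good_scores, cuts, side="right") / good_scores.size
cdf_bad  = np.searchsorted(bad_scores,  cuts, side="right") / bad_scores.size

ks = np.abs(cdf_good - cdf_bad).max()
print(round(ks, 3))  # 0.6 for this toy data
```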

Miscellaneous:

  • HL: Home Loan
  • PL: Personal Loan
  • CC: Credit Card
  • SAS: The global software vendor
  • ETL: Extract-transform-load, generically the process of going from raw data sources to a modelling mart

Usenet conventions:

OK to save some typing? We don’t want alphabet soup, but a few are very useful:

  • BTW: By the way
  • IIRC, IIUC: If I remember (understand) correctly
  • AWML: About which more later

The names of well-known financial institutions need no listing, but for the situation of referring to a particular bank without wishing to identify it:

  • NNB: No-name Bank

A big thanks to Andrew for starting Ozrisk and being the mainstay – I hope you will still be in the wings and willing to share your top-level views of risk issues.

I have volunteered to contribute, although not to follow in Andrew’s footsteps, for my banking experience is not deep. Instead, the turf I intend to plough is a specific corner of the park (to mix metaphors), drawing on my technical input to a ground-up Basel II AIRB project across the years 2003-2007. As the Bio warns, this may get pedantic and detail-oriented in places.

An anecdote: early in the project, in a meeting about scorecard cut-offs, the GM happened to ask what our ‘bad rate’ for Home Loans was, and I was temporarily flummoxed, as one could imagine a dozen different contexts within which the question could be answered. However, that was not the forum to start getting technical about variations of the default definition, materialities, modelling marts versus backtesting, monitoring, re-aging, exclusions, cohorts and time windows, closed goods in versus out, marginal versus average, and all those other details that the actual data hackers and model builders have to wade through before delivering the superficially simple answers.

Since that early flummox, it has become clear that many “default analytics” issues are subtle and can lead to imperfect communication, and lack of reconciliation, amongst different departments and committees even within the same bank, let alone with regulators and other external parties. So, I started collating mostly mental notes on topics that deserved a clearer exposition. These notes could fall under the general heading of “Default Analytics”, with sub-headings as below:

  • Default definition
  • Default metrics
  • Default prediction (building models)
  • Default monitoring (actual performance of accepts)

The imagined audience is credit risk analysts, and the people who use their outputs. Many silent readers of Ozrisk are expert in this space, and I hope your role will be to correct or embellish the material as you see fit.

My plan, then, is to proceed with some expository posts along the above lines (the first couple of posts may be waffly and administrative). No encouragement is required, as I am likely to carry on for a while (unless Andrew asks me to stop!). Time will tell which threads strike useful chords, and with your contributions we can hopefully create a helpful resource for this corner of the risk industry. Maybe this is the forum to get technical about those items listed above!

I am happy to say we have picked up a new author, so I am unsuspending Ozrisk.

Following on from my post below, I have one volunteer to act as administrator for the site, but no new authors. As a result, I will be putting Ozrisk into hibernation. If you would like to join as an author, please contact me on the email address on the “Authors” page and we can have a chat.

I will leave the contents of the blog up for an indefinite period in the hope that someone wants to take it over and run with it. The current 200+ visitors a day also seem to be finding the content useful.

Otherwise, thanks to you all for reading Ozrisk over the last (more than) two years, and farewell.
