One issue that can make a big difference to discussions about default analytics is the granularity of the time-based data that analysts are working from. Modellers often work with monthly data, but depending on context other credit risk analysts might be working with daily, or perhaps annual, data. Given this range, issues that are difficult for one analyst may be trivial or non-existent for another.

One extreme of the time granularity spectrum is probably annual data. This might apply to non-retail exposures, where meaningful updates to the risk information (for building, say, behavioural models) might arise only from annual financial statements. This would be a data-poor environment, placing more weight on banking expertise and less on credit risk analytics.

For retail banking, monthly granularity is the common warehousing level for the data that will be the prime source for credit risk analytics. For HLs & PLs, this might take the form of monthly snapshots: summaries of the relevant data fields compiled at month-end. CCs, however, have their own statement and repayment cycles, which are not tied to calendar month-ends. So, although CCs essentially have monthly granularity, they might not fit comfortably into a month-end snapshot warehousing approach, but one way or another they will have some monthly data summarisation and warehousing.
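
To make the snapshot idea concrete, here is a minimal pandas sketch that collapses a hypothetical daily account-level table to one row per account per calendar month, keeping the month-end values. The column names and the tiny dataset are invented for illustration, not any particular warehouse schema.

```python
import pandas as pd

# Hypothetical daily account-level data; column names are illustrative only.
daily = pd.DataFrame({
    "account_id": [1, 1, 1, 2, 2],
    "date": pd.to_datetime(["2024-03-30", "2024-03-31", "2024-04-30",
                            "2024-03-31", "2024-04-30"]),
    "balance": [1000.0, 990.0, 950.0, 5000.0, 5100.0],
    "dpd": [0, 0, 31, 111, 0],
})

# Collapse to a month-end snapshot: keep the last observation per account per month.
daily["month"] = daily["date"].dt.to_period("M")
snapshot = (daily.sort_values("date")
                 .groupby(["account_id", "month"], as_index=False)
                 .last())
print(snapshot)
```

Anything not observed on the chosen snapshot date (intra-month movements, statement-cycle events) is simply absent from the summarised table, which is what the intra-grain discussion below turns on.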

For the AIRB purposes of using a few years of history to build (and use) retail banking models, monthly granularity is the typical basis and will be assumed here unless stated otherwise. Readers are encouraged to share examples from their own environments, as these details can often make a big difference to the discussions.

At the shortest extreme, much credit risk monitoring naturally happens at daily granularity, but IIUC not many modellers would be analysing substantial extracts from raw data sources at that level of detail.

Intra-grain: what happens within the month? The warehouse would no doubt record risk variables as at end-of-month, but perhaps also the worst level reached during the month. If not, an account might be recorded as 111 DPD at 31 March and as 0 DPD at 30 April, with no way to tell whether or not it reached 120 DPD during April. A default flag built on this data would then be a series of calendar-month-end default tests, rather than a “bad-ever” flag. This is not so much a problem as a difference – OK as long as you know what you are dealing with. IIUC in the old days, when computers were only mainframes, this kind of intra-grain issue could be substantial and could even apply to an entire outcome window, such that an account was only determinable as good or bad at window end. These examples illustrate how time granularity shapes the credit risk data available for analysis.
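
As a small sketch of the difference, the following contrasts a default flag built from month-end DPD with one built from the worst DPD reached within the month. The DPD path is invented to mirror the 111 DPD / 0 DPD example above, and 120 DPD is used as the threshold purely because this paragraph does.

```python
import pandas as pd

# Hypothetical daily DPD path for one account over March-April; illustrative only.
dates = pd.date_range("2024-03-01", "2024-04-30", freq="D")
dpd = pd.Series(range(81, 81 + len(dates)), index=dates)  # rises by 1 DPD per day
dpd.loc["2024-04-15":] = 0                                # cures mid-April

daily = dpd.to_frame("dpd")
daily["month"] = daily.index.to_period("M")

by_month = daily.groupby("month")["dpd"].agg(month_end="last", worst="max")

THRESHOLD = 120  # the threshold used in the example above
by_month["default_month_end"] = by_month["month_end"] >= THRESHOLD
by_month["default_bad_ever"] = by_month["worst"] >= THRESHOLD
print(by_month)
# The month-end flag never fires (111 DPD at 31 March, 0 DPD at 30 April),
# but the worst-in-month flag does: the account passed 120 DPD during April
# before curing.
```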

Intra-grain: exposure for less than a month? Accounts open randomly throughout a month, so if credit risk data is summarised and warehoused on any regular schedule (like month-end), accounts will have been exposed to risk for only half a month, on average, during their first “month on books”. Does anyone out there worry about this kind of issue? It may not sound important, but it might mean that your 12-month outcome window is really an 11.5-month outcome window. Perhaps data based on payment cycles, rather than regular snapshots, can avoid this issue, although IIUC it will pop up in other ways. BTW the issue in this paragraph applies to application modelling, rather than behavioural modelling, because of the intrusion of the account open date.
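
As a back-of-envelope check on the half-month claim, here is a small sketch under the simplifying assumptions of a 30-day month, open dates uniform across the month, and a first snapshot taken at that month-end (all assumptions for illustration, not a description of any particular warehouse):

```python
import numpy as np

rng = np.random.default_rng(0)

# Day of the month on which each account opens (uniform over a 30-day month).
open_day = rng.integers(1, 31, size=100_000)

# Days at risk before the first month-end snapshot.
first_month_exposure = 30 - open_day + 1

print(first_month_exposure.mean() / 30)       # ~0.5 of a month, on average
# A "12-month" outcome window that includes the (partial) open month therefore
# covers roughly 11 + 0.5 = 11.5 months of actual exposure, on average.
print(11 + first_month_exposure.mean() / 30)  # ~11.5 months
```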