Something that might be appreciated by the ozrisk community is a series of book reviews. Amazon reveals a couple of dozen books particularly relevant to credit risk analytics and Basel. Would any of you readers out there like to offer a review, or at least an opinion, on books you have used?
I don’t currently have any books to hand, but recall favourable impressions of Lyn Thomas’s book and Naeem Siddiqi’s work in the shape of SAS training materials.
Credit Scoring and Its Applications
by Lyn C. Thomas, David B. Edelman, Jonathan N. Crook
Credit Risk Scorecards: Developing and Implementing Intelligent Credit Scoring
by Naeem Siddiqi
6 comments
28 May, 2008 at 20:11
ZK
Clive,
I really like the idea of having a thread for book reviews. On the other hand, is there any chance of a thread for reviews of good journal articles or working papers? Perhaps I could start with a few classical papers for that thread. Ta.
ZK
29 May, 2008 at 00:05
Clive
Hi ZK,
Yes, indeed it might work better for an article because it would be easier to get hold of the content.
I don’t plan to buy the books just in order to review them!
I welcome your offer to start a thread. Please choose a paper to begin with, preferably one available online.
Cheers,
Clive
29 May, 2008 at 05:34
JonasR
The best book ever:
David Hand: Principles of Data Mining
A must-have, really. Not directly about scorecard development, but I am absolutely sure it is the most useful one for scorecard development, or any kind of modelling :D
I have read the second one; it is on the shelf… Our “Internal Scorecard Development Regulation”, which I wrote, has much the same table of contents. :)
The book is not extremely useful; it covers the basics. It gives a good overview, or concept, of scorecard development and a clue about the procedure, but nothing special.
Prof. Hand’s works help more, I think.
29 May, 2008 at 13:25
ZK
I have separated a few classical papers into different eras. It would be a painstaking task to list every good article in this thread, so I will just start with a few and keep the ball rolling. As this thread may be too long for most people in the ozrisk community to read (given our busy working lives), I will limit the reviews in this thread to papers from before 1990. The next thread will extend to papers from around 1991 to 2000. The last ones (possibly split into a few threads) will cover papers published after 2000.
The reviews are just my personal opinions, and I would definitely welcome any constructive criticism. Moreover, I would like to hear others’ views on the papers I have listed.
Here are the classical papers:
Before 1990:
Some of the papers I have listed for the pre-1990 era are not really about risk management rules, policies, strategies or decisions, but understanding how to estimate volatility may be one of the stepping stones towards the information needed to make sound risk management decisions.
1) Mandelbrot, B. (1963) “The Variation of Certain Speculative Prices” Journal of Business, XXXVI, 392-417
One of the papers I enjoyed the most when I was at uni. The idea may contradict much of the finance literature, which tends to assume that second-order moments of the distribution of financial returns exist. However, the paper suggests that the second-order moment might not be an ideal (or even feasible) measure of volatility. It introduced me to the concept of using a density forecast (i.e. estimating the low-high range for each density interval, or in short the shape of the probability distribution curve), such as the high-low range of the 95% interval.
This is the first paper I studied closely. After reading it, I started challenging what I had learnt at uni: what I learnt there may be a means of understanding but not the end of understanding. A lot of what I had learnt and used rested on the assumption of a normal distribution. Reading between the lines of this paper, there may be better ways to estimate volatility, such as the range of a 95%, 90% or any other interval as a measure of risk (see the small sketch after this review). Another idea I took from the paper is that we really need to understand the data’s structure, behaviour and characteristics before building a model and making decisions with a potentially misleading one.
I remember Sir Ronald Aylmer Fisher saying something like: statistical modelling is just like using a cannon to shoot a bird …… but it misses (this is just from my memory, correct me if it is not from Fisher). The purpose of using the normal distribution is to believe that we are “roughly correct” (statistically consistent), but when the assumption that the second-order moment exists is violated, we are not really “roughly correct” at all (the cannon misses the bird).
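To make the quantile-range idea concrete, here is a minimal Python sketch comparing the sample standard deviation with the width of a central 95% interval on a simulated heavy-tailed series. The Student-t(3) sample and all the numbers are purely illustrative assumptions, not anything from Mandelbrot’s paper; the point is only that a quantile-based range is far less dominated by extreme observations than a moment-based measure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Purely illustrative heavy-tailed "returns": Student-t with 3 degrees of freedom.
# (Mandelbrot argued for stable distributions with infinite variance; a t(3)
# sample is only a convenient stand-in to show the sensitivity of the std dev.)
returns = rng.standard_t(df=3, size=10_000)

std_dev = returns.std()                        # moment-based volatility measure
lo, hi = np.quantile(returns, [0.025, 0.975])  # distribution-free 95% interval
interval_width = hi - lo

print(f"sample standard deviation : {std_dev:.3f}")
print(f"central 95% interval width: {interval_width:.3f}")
```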
2) Garman, M; and Klass, M. (1980) “On the Estimation of Security Price Volatility from Historical Data” Journal of Business 53(1), 67-78.
This is one of the first volatility-measure papers I encountered when I was at uni. My understanding is that it is an improved version of Parkinson (1980). The approach of using the High, Low, Open and Close to estimate volatility is statistically consistent and much more (statistically) efficient than the Close-to-Close or High-Low approaches (a sketch of the estimator follows after this review).
Reading this paper really reminds me of looking at a baby photo of myself. Many of the foundations in this classical paper (like the facial structure in my baby photo) may have set the direction of the volatility literature. It could be one of the sparks for the search for different, and better, volatility estimation approaches.
Like most of the classical papers, even an undergraduate student (as I was then) just getting their hands into volatility will find it reasonably easy to grasp all the essential ideas.
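For readers who want to see the estimator itself, here is a minimal Python sketch of the commonly quoted “practical” Garman-Klass form, 0.5·(ln(H/L))² − (2 ln 2 − 1)·(ln(C/O))² per bar. The OHLC numbers and the 252-day annualisation are illustrative assumptions, not data from the paper.

```python
import numpy as np

def garman_klass_variance(open_, high, low, close):
    """Per-period variance estimate from Open/High/Low/Close prices
    (commonly quoted form of the Garman-Klass 1980 estimator)."""
    log_hl = np.log(high / low)
    log_co = np.log(close / open_)
    return 0.5 * log_hl ** 2 - (2.0 * np.log(2.0) - 1.0) * log_co ** 2

# Hypothetical daily OHLC bars, for illustration only
op = np.array([100.0, 101.5, 102.0])
hi = np.array([102.0, 103.0, 103.5])
lo = np.array([ 99.5, 100.8, 101.0])
cl = np.array([101.5, 102.0, 102.8])

daily_var = garman_klass_variance(op, hi, lo, cl)
annualised_vol = np.sqrt(daily_var.mean() * 252)   # assumes 252 trading days
print(f"annualised volatility estimate: {annualised_vol:.3%}")
```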
3) Chen, K.H.; Shimerda, T.A. (1981) “An Empirical Analysis of Useful Financial Ratios” Financial Management, 10(1) 51-60.
Perhaps people are getting tired of me throwing out so many papers on volatility measures and understanding data characteristics. This paper is a survey of the risk factors used in structured default risk models.
As the paper was published in 1981, the survey may not reflect the latest literature. However, I really like the authors’ idea of summarising the existing literature on default risk modelling. Since this thread is for papers from the pre-1990 era, I will just throw it in, hoping that someone might write or find a similar survey to share with the ozrisk community.
4) Engle, R. (1982) “Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation” Econometrica 50(4) 987-1007.
I find this paper has put a curse on me: I have read it (perhaps) a zillion times. Another simple but powerful paper. I highly recommend it to anyone just starting to get a handle on estimating volatility and data properties, especially for data involving social behaviour.
Every time I read this paper, I find something new in it. The more I read it, the more I feel I understood nothing about time series beforehand. Reading between the lines is really necessary if you want to fully understand all the ideas. Even though I have done a lot of study and work on ARCH, I still believe I have missed some ideas from this paper (a tiny ARCH(1) simulation follows below).
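For anyone meeting ARCH for the first time, here is a minimal ARCH(1) simulation in Python. The parameter values are arbitrary assumptions chosen only to show the mechanism Engle introduced: the conditional variance depends on the previous squared return, which produces volatility clustering and fat tails even with Gaussian shocks.

```python
import numpy as np

rng = np.random.default_rng(0)

# ARCH(1): r_t = sigma_t * eps_t,  sigma_t^2 = omega + alpha * r_{t-1}^2
# omega and alpha are arbitrary illustrative values, not estimates from any data
omega, alpha = 0.1, 0.6
n = 5_000

r = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = omega / (1.0 - alpha)        # start at the unconditional variance
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()

for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1] ** 2
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Gaussian shocks, yet the simulated returns show excess kurtosis (fat tails)
excess_kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3.0
print(f"sample excess kurtosis: {excess_kurtosis:.2f}")
```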
31 May, 2008 at 00:45
Clive
Thanks, ZK, for this pointer to important and relevant literature, in these cases with a “time series” flavour which could relate to the point-in-time vs through-the-cycle debates, and perhaps also to stress testing?
28 June, 2008 at 01:37
Bruce M
I have attended a number of presentations by Lyn and they are always worthwhile.
Good at cutting through to the important stuff.