Friday 12 July 2013

Conventional Ratings: Why The Abysmal Performance?




Why are ratings so unreliable? In a recent article, which appeared on 19 July 2012 on Thomson Reuters News & Insight, one reads: 

"This is how the role of the credit rating agencies was described by the Financial Crisis Enquiry Commission in January 2011: 

The rating agencies were essential to the smooth functioning of the mortgage-backed securities market. Issuers needed them to approve the structure of their deals; banks needed their ratings to determine the amount of capital to hold; repo markets needed their ratings to determine loan terms; some investors could buy only securities with a triple-A rating; and the rating agencies’ judgment was baked into collateral agreements and other financial contracts.

The performance of the credit rating agencies as essential participants in this market has been abysmal.  In September 2011 Moody’s reported that new “impairments,” that is, non-payments, of principal and interest obligations owed to investors through these structured-finance products soared from only 109 in 2006 to 2,153 in 2007 to 12,719 in 2008 and peaked at 14,242 in 2009, but with still more than 8,000 new impairments in 2010. Thus, more than 37,000 discrete investment products defaulted in that time period.
[...]

Substantial evidence has suggested that this epidemic of ratings “errors” was not the product of mere negligence, but rather was the direct and foreseeable consequence of the credit rating agencies’ business models and their largely undisclosed economic partnerships with the issuers that paid them for their investment-grade ratings."


Setting aside incompetence, conflicts of interest, "special relationships" and the like, we wish to concentrate on what is probably the most fundamental reason for the "epidemic of rating errors": the flawed mathematics underneath. Yes, the mathematics behind conventional risk rating is not only questionable from a mathematical and philosophical perspective; it also opens the door to numerous ways of manipulating the results. There are lies, damned lies and statistics. The tools offered by statistics, extremely dangerous in the wrong hands, are the main enabler. In particular, let us look at the concept of correlation, the most fundamental quantity in anything that has to do with risk, ratings, VaR, and their assessment and management. 


A correlation measures how strongly two parameters are linearly related as they vary together; the Pearson coefficient r, whose square is the R² quoted below, is the standard measure (see the sketch that follows). Let us then examine a few significant cases.
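For reference, here is how the number is obtained. A minimal sketch in Python with numpy, on synthetic data invented for the purpose (none of this code comes from the post itself):

```python
import numpy as np

# Synthetic, noisy linear data (purely illustrative).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + rng.normal(scale=2.5, size=x.size)

def pearson_r(x, y):
    """Pearson's r = cov(x, y) / (std(x) * std(y))."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

r = pearson_r(x, y)
print(f"r   = {r:+.3f}")    # strength and sign of the *linear* association
print(f"R^2 = {r**2:.3f}")  # for a simple linear fit, R² is just r squared
```

Note the word "linear": this single number summarizes a straight-line relationship and nothing else, which is precisely where the trouble below begins.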


A strong linear correlation: R² = 0.83

[scatter plot]

The above situation is quite frequent in textbooks and in computer-generated examples. In reality, this is what you encounter most often:

[scatter plot]

The problem becomes nasty when you run into situations such as this, in which R² = 0, but which, evidently, convey plenty of information:

[scatter plot]

The evident paradox is that you have a clear structure and yet statistics tells you the two parameters are independent: a clear lie in the face of evidence. The catch is that a correlation of zero only rules out a straight-line relationship; it says nothing about structure of any other kind.
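The paradox is easy to reproduce. A hypothetical sketch (Python with numpy, synthetic data): a parabola, about as structured as a relationship can get, yet with zero correlation:

```python
import numpy as np

# A deterministic, perfectly structured relationship: y = x² on a
# symmetric interval. Its Pearson correlation is (numerically) zero.
x = np.linspace(-1.0, 1.0, 201)
y = x**2

r = np.corrcoef(x, y)[0, 1]
print(f"r = {r:+.3e}")   # ~0: no linear relationship...
# ...even though y is completely determined by x.
```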

And what about cases like this one?

[scatter plot]

It is easy to draw a straight line passing through two clusters and call it a "trend". But in the case above it is not a trend we see but a bifurcation: a totally different behavior, totally different physics. Two clusters point to a bifurcation, and N clusters could point to N-1 bifurcations... certainly not to a trend. 
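This converse failure, a strong reported "trend" where there is none, is just as easy to manufacture; a hypothetical two-cluster sketch (synthetic data, not from the post):

```python
import numpy as np

# Two tight, well-separated clusters. A least-squares line through them
# reports a near-perfect correlation, yet no points lie along the line.
rng = np.random.default_rng(1)
cluster_a = rng.normal(loc=(0.0, 0.0), scale=0.2, size=(100, 2))
cluster_b = rng.normal(loc=(5.0, 5.0), scale=0.2, size=(100, 2))
x, y = np.vstack([cluster_a, cluster_b]).T

slope, intercept = np.polyfit(x, y, deg=1)
r = np.corrcoef(x, y)[0, 1]
print(f"r = {r:.3f}; fitted 'trend': y = {slope:.2f}x {intercept:+.2f}")
# r is close to 1, but the data describe two regimes, not a continuum.
```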

And finally, how would one treat a case like the following?

[scatter plot]

The data is evidently structured, but the correlation is 0. How do you deal with such situations?
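For concreteness, one more hypothetical sketch with synthetic data: points on a circle, perfectly structured and with zero correlation:

```python
import numpy as np

# Points on a circle: the structure could hardly be clearer, yet by
# symmetry the Pearson correlation of x and y is zero.
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
x, y = np.cos(theta), np.sin(theta)

print(f"r = {np.corrcoef(x, y)[0, 1]:+.3e}")   # ~0 despite perfect structure
```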

The key issue is this: when looking at portfolios composed of thousands of securities, or at other multi-dimensional data sets with hundreds or thousands of variables, correlations are computed blindly, without actually looking at the data (the scatter plots). Who would? Hundreds of thousands of correlations are involved when dealing with large data sets. So one turns a blind eye and simply throws straight lines on top of the data. Some false trends are accepted as genuine, while significant relationships are discarded just because they don't fit a linear model.

What survives this overly "democratic" filtering goes on to the next step, to create more damage. Consider, for example, Modern Portfolio Theory (MPT), developed by Markowitz, which hinges on the covariance matrix. Covariance is intimately related to correlation, and so the flaw propagates deeply and quickly, as sketched below. Think of all the other places where a covariance, a standard deviation or a linear model is plugged in. Think of how many decisions are made by blindly trusting these concepts. We must stop bending reality to suit our tools. We have become slaves of our own tools.
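To make the propagation concrete, here is an illustrative sketch, on toy data invented for the purpose, of the standard MPT risk computation: portfolio variance is w'Σw, with Σ the covariance matrix of returns, so every mis-stated pairwise correlation inside Σ flows straight into the final risk number:

```python
import numpy as np

# Toy returns: 250 "days" of returns for 4 hypothetical assets.
rng = np.random.default_rng(2)
returns = rng.normal(scale=0.01, size=(250, 4))

cov = np.cov(returns, rowvar=False)   # 4x4 covariance matrix (Sigma)
w = np.full(4, 0.25)                  # equal-weight portfolio

variance = w @ cov @ w                # w' Sigma w: MPT portfolio variance
print(f"portfolio volatility ~ {np.sqrt(variance):.4%} per day")
```

Every entry of the covariance matrix is a correlation scaled by two standard deviations; if the correlations are meaningless for the reasons shown above, so is the risk number that comes out.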



www.rate-a-business.com


