Model Risk and Financial Charlatanism

But when we modeled it, the brakes worked!

One of the hot buzzwords in financial risk management these days is “Model Risk”, as if this concept is in some way new. Unfortunately, it’s only recently – a full 6 years after the onset of the Global Financial Crisis (GFC) – that the idea of Model Risk is getting wide coverage. The concept is rather simple: in essence, it says that a model is, well, just a model. It’s not reality. But since we quants / financial engineers / “rocket scientist” types tend to put things a bit more quantitatively than that, the notion of Model Risk includes what’s called “goodness of fit”: measures that assess the appropriateness of a given model for a given situation. They help you understand when the model may need tweaking, when it may no longer be appropriate, or when its predictive value may have fallen too low to be useful. I take pride that Investor Analytics was the very first risk management specialist firm on Wall Street to actively share the results of our Model Risk measures. It wasn’t easy: for many years before the GFC, people thought we were a bit nuts for calling attention to the limitations of models in our industry.
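
To make “goodness of fit” a little more concrete, here is a minimal sketch of one generic check: compare realized returns against the distribution a model assumes and flag a mismatch. This is not Investor Analytics’ methodology (the post doesn’t spell one out); the normal-distribution assumption, the simulated data and the 5% cutoff are all illustrative.

```python
import numpy as np
from scipy import stats

# Illustrative only: simulate fat-tailed "realized" returns and test them
# against the normal distribution a hypothetical model assumes.
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=1000) * 0.01

mu, sigma = returns.mean(), returns.std(ddof=1)

# Kolmogorov-Smirnov test: how far do realized returns stray from the
# model's assumed distribution?
result = stats.kstest(returns, "norm", args=(mu, sigma))
print(f"KS statistic: {result.statistic:.4f}, p-value: {result.pvalue:.4f}")

if result.pvalue < 0.05:
    print("Realized returns look inconsistent with the assumed distribution;"
          " the model may need tweaking or may no longer be appropriate.")
```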


What Goes Up Must Come Down?


Relationship between one year’s returns and the next. Essentially, there’s not much of a relationship except in cases of extreme losses, which are often followed by a better year.

Investor Analytics just published the fifth in a series of articles in a new column I have in Risk Magazine’s Hedge Fund Review, which you can find here. The topic for this article is both simple and profound: the belief that since 2013 was a great year for stocks, chances are that 2014 will be bad so that the stock market maintains its long-term average. The phenomenon is called “reversion to the mean” and is the underlying logic both behind thinking that a sports player is “due” (a fallacy) and behind the notion that a tall parent is more likely to have a shorter child (a truth).

We looked at the returns of the S&P 500 over the past 86 years and constructed rolling 1-year windows to generate over 20,000 data points, hunting for signs that a “good year” increases the likelihood that the next year will be a “bad year”. It turns out that it’s just not so. You can read the article for the details, but it’s very clear that having a good year really doesn’t change the odds of the next year being good or bad. The average return of the next year is slightly lower than usual, but the range of returns is tremendously wide. Specifically, the overall average for the S&P 500 is 7.5%, with a volatility of 20%. That means that for any given year, at the 95% confidence interval, the stock market gives a return somewhere between -25% and +40%. But following a year like 2013 (up 30%), the market returns on average 4.7% with a volatility of 17.5%, which translates to a 95% confidence interval between -24% and +33.6%. See the big difference? Neither do I.
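
For readers who want to reproduce the flavor of this analysis, here is a rough sketch of the rolling-window construction and the conditional statistics, assuming you have a file of daily S&P 500 closes. The file name, column names and the +25% “good year” threshold are placeholders, not the exact inputs used in the article.

```python
import pandas as pd

# Placeholder file: daily S&P 500 closes with columns "date" and "close".
prices = pd.read_csv("sp500_daily.csv", parse_dates=["date"], index_col="date")["close"]

# Rolling 1-year (~252 trading day) returns, one per trading day, which is how
# ~86 years of history yields tens of thousands of overlapping data points.
window = 252
yearly = prices.pct_change(periods=window)

# Pair each 1-year return with the return over the *following* year.
pairs = pd.DataFrame({
    "year1": yearly,
    "year2": yearly.shift(-window),
}).dropna()

# Unconditional statistics vs. statistics conditional on a "good" prior year.
overall = pairs["year2"].agg(["mean", "std"])
after_good = pairs.loc[pairs["year1"] > 0.25, "year2"].agg(["mean", "std"])

print(f"All years:         mean {100 * overall['mean']:.1f}%, vol {100 * overall['std']:.1f}%")
print(f"After a +25% year: mean {100 * after_good['mean']:.1f}%, vol {100 * after_good['std']:.1f}%")
```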

The plot in this post shows the overall relationship between two subsequent years: the first year on the horizontal axis, the second on the vertical. The large blob in the middle represents about 98% of the data, which essentially shows that one year tells you next to nothing about the next year.

The graphs we published showed a striking lack of relationship between one year’s returns and the next, except in the most extreme cases. Our conclusion is simple: your risk has not really changed from last year, and this year has just as good a chance of being good as it does of being bad. It’s up to you to make the most of it.

Doctor, Heal Thyself!

A client recently emailed me this link to an article about JP Morgan having discovered an error in their firm-wide calculation of Value-at-Risk, the industry standard measurement used to quantify risk.  His email concluded with:

“They are using a spreadsheet!!!”

You read that correctly.  JP Morgan – the firm that famously invented Value-at-Risk in the 1990’s – is apparently using a spreadsheet for this calculation.  This revelation is simply astounding.  If true, it would mean they really are sitting on a house of cards.  The article quotes the JP Morgan Task Force on VaR: “the spreadsheet divided by their sum instead of their average, as the modeler had intended. This error likely had the effect of muting volatility by a  factor of two and of lowering the VaR…. It also remains unclear when this error was introduced in the calculation.”
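
To see why that particular bug mutes volatility by a factor of two, consider a toy version of the calculation. The rate series and the exact formula here are hypothetical; the point is only that dividing a change by the sum of two values, instead of their average, scales every change (and therefore the volatility fed into VaR) down by half.

```python
import numpy as np

rng = np.random.default_rng(42)
rates = 0.05 + 0.002 * rng.standard_normal(250)  # hypothetical rate series

old, new = rates[:-1], rates[1:]

intended = (new - old) / ((new + old) / 2)  # divide the change by the average, as intended
buggy = (new - old) / (new + old)           # divide by the sum: the spreadsheet error

print(f"volatility of changes (intended): {intended.std():.6f}")
print(f"volatility of changes (buggy):    {buggy.std():.6f}")
print(f"ratio: {intended.std() / buggy.std():.2f}")  # exactly 2
```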


What are Credit Ratings Good For?

On the front page of today’s Wall Street Journal (well, the on-line version anyway), there’s an article about how the major credit rating agencies ‘failed to see defaults coming.’  You can read some of the article on their public site, but you’ll need an on-line subscription for the entire text.  The point of the article is that a given country’s or company’s official credit rating usually severely underpredicts the real probability of default.  This is not news in the industry.

A few years ago as the credit crunch was getting underway, several of our business partners asked if we could model corporate bond credit risk using the official ‘big three’ credit ratings as inputs.  While this initially sounded pretty straightforward, my crack team of financial engineers showed me why it really doesn’t work.  Basically, there is supposed to be a relationship between the credit rating and the chance of default.  S&P claims that a rating of ‘single B’ means there is a 2% chance of default within 1 year.  But if you look at 100 different companies rated ‘B’, far more than 2 of them defaulted in the coming year.  Given how poorly the ratings predict what they are supposed to, my quants emphatically refused to build a system that used credit ratings as the basis of default.  But at the same time, many of our competitors were selling exactly that type of system – one that estimated how much money a fund could lose based on the credit ratings of their investments.  And these systems were selling very well.  The only problem was that they, just like the big credit ratings, severely underpredicted the real risk.
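
A quick back-of-the-envelope calculation shows why “far more than 2” defaults out of 100 is so damning. Assume, as the quoted S&P figure implies, that each of 100 single-B names defaults with 2% probability over one year; treating them as independent is a simplification, used here only to show the scale of the discrepancy.

```python
from scipy.stats import binom

n, p = 100, 0.02  # 100 single-B issuers, 2% implied one-year default probability

print(f"expected defaults if the rating is right: {n * p:.0f}")
for k in (3, 5, 8):
    # Probability of observing at least k defaults if the 2% figure were accurate
    print(f"P(at least {k} defaults) = {binom.sf(k - 1, n, p):.4f}")
```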


Learning from Mistakes

The Economist has a nice piece about how European authorities did it right this time with volcano ash.  My previous two posts covered this topic: the first was about how last year’s closures were imposed without proper testing, and the second was about an article that seemed to justify those closures based on a detailed study of volcano ash.  I was afraid that study would lead European authorities to take an even more conservative approach, overreacting without proper testing.  As I mentioned in those posts, I’m not arguing that the airspace shouldn’t be closed.  I’m arguing that it shouldn’t be closed without testing the actual ash – every time.  This new Economist article supports that view and states “The actual eruption of Grimsvotn a little over a month after the April 12 exercise shows that airlines, air traffic controllers, and governments appear to have learned the lessons of 2010’s Eyjafjallajokull-driven chaos.”

Now THIS is risk management.  The authorities’ reaction to this new eruption is much more data-driven than their reaction to the previous one.  An evidence-based approach to risk (heck, to everything) is a much better way to operate.  Arguably, when the last eruption started there was little evidence to go on.  In my view, that made it imperative to collect the information as soon as possible.  This time around, things are being done in a proactive way.  If it turns out that the ash really poses a risk, I’ll be the first one to support massive airport closures.  But massive airport closures without simultaneous verification should be criticized loudly.  I for one am glad they’re doing it right this time.

Bravo!

Chocolate or Vanilla?


Risk measures basically come in two flavors: chocolate or vanilla.  Really.  Sometimes they’re described as two styles, “Impact” and “Probability”, but I think of them as flavors.

One school of thought is that the best way to estimate a portfolio’s risk is to identify what drives each security’s value and then stress all those factors.  This tells you what impact you can expect on your portfolio for each of those different stresses.  Let’s call this Vanilla.

The chocolate school says that the best way to estimate a portfolio’s risk is not to treat each asset class separately like the plain vanilla people, but rather to estimate probabilities of various losses.
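
A minimal sketch of the two flavors side by side, using made-up positions, factor sensitivities and shocks (none of these numbers come from the post). The vanilla calculation answers “what is the impact if these factors move by a given stress?”; the chocolate calculation answers “what loss is exceeded with a given probability?”

```python
import numpy as np

rng = np.random.default_rng(1)

exposures = np.array([1.2e6, -0.4e6, 0.8e6])  # dollar sensitivity to each factor

# --- Vanilla: impact of a defined stress scenario ---
stress = np.array([-0.10, 0.02, -0.05])       # hypothetical factor shocks
impact = exposures @ stress
print(f"Vanilla   (impact of the stress): {impact:,.0f}")

# --- Chocolate: probability of loss (simple Monte Carlo VaR) ---
n_sims = 100_000
daily_vols = np.array([0.15, 0.05, 0.10]) / np.sqrt(252)  # illustrative factor vols
pnl = (rng.standard_normal((n_sims, 3)) * daily_vols) @ exposures
var_95 = -np.percentile(pnl, 5)               # loss exceeded only 5% of the time
print(f"Chocolate (95% one-day VaR):      {var_95:,.0f}")
```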

How Do You Know It Works (part 3)?

In this third part of “How Do You Know It Works?” I’m going to cover three different popular ways to measure risk and show how to tell if they work or not (I know, I know, that part 2 cartoon is a cheap way of making this a trilogy-post, but I’ll take what I can get).

What we’re talking about is much wider than financial risk, but it’s also at the very heart of financial risk management:  How do you know if your risk measure is worth the paper it’s printed on or the computer it runs on?  Here are some things to look for in evaluating a particular forward-looking risk analytic:

  1. It can be calculated from available information and isn’t tied to a proprietary piece of data.
  2. The analytic’s accuracy can be quantitatively determined in some way (see the sketch after this list).
  3. The assumptions going into the analytic can be clearly communicated.
  4. Users of the risk analytic can have an indication of when it is applicable and when it is not.
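
As an illustration of the second item, here is a sketch of one common way to check a risk analytic’s accuracy: backtest a VaR forecast by counting how often realized losses exceed it and comparing that to the exceedance rate the analytic promises. The returns and the static, normal-based forecast below are purely illustrative, not any particular firm’s methodology.

```python
import numpy as np

def backtest_var(returns, var_forecasts, confidence=0.95):
    """Compare realized VaR exceedances to the rate the forecast promises.

    var_forecasts are positive loss thresholds (0.02 means a 2% loss),
    one per return observation.
    """
    returns = np.asarray(returns)
    var_forecasts = np.asarray(var_forecasts)
    exceedances = (-returns) > var_forecasts   # losses worse than the forecast
    return exceedances.mean(), 1.0 - confidence

# Illustrative data: simulated returns against a static, normal-based 95% VaR.
rng = np.random.default_rng(7)
rets = rng.standard_t(df=3, size=2500) * 0.01
var95 = np.full_like(rets, 1.645 * rets.std())

observed, expected = backtest_var(rets, var95, confidence=0.95)
print(f"observed exceedance rate: {observed:.1%} (forecast promised {expected:.1%})")
```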

I’m just going to concentrate on the second feature: evaluating the quantitative accuracy of the analytic.  Let’s consider three different analytics:

Worst Case Loss

Investor Analytics is sometimes asked to calculate a firm’s proprietary risk measure that falls into the category of “worst case loss.”  I’ve heard it described many different ways, using many different formulas / techniques / methodologies, and it usually starts with a set of reasonable inputs but boils down to a series of alchemy-like adjustments that render the thing useless.


How Do You Know It Works (part 2)?

[xkcd cartoon]

This pretty well sums up what I meant in the previous post.  It’s really easy to be fooled by seemingly ‘real’ things.  This is another good test of whether or not something actually works:  if something works, chances are good (but not perfect) that someone will be taking advantage of it.

Here’s a link to xkcd, where this is from.  Enjoy.

How Do You Know It Works?

I participated in a panel yesterday at a hedge fund conference in NYC.  What made this panel a little different was that I wasn’t the first person to talk about how important it is to know the limitations of risk models.  In fact, I was the THIRD person.  The question was posed “what makes an outstanding risk manager?”  The first person to respond included in his answer something along the lines of “an outstanding risk manager asks the question – how do I know the model works?”  Bingo!

How do you know if any prediction works?  Over the ages, people have tried all sorts of methods.  Let’s take a look at a few of them.

Method #1: you believe it if an authority figure tells you it’s true. Like the village elder.  Or the shaman.  Or a celebrity.  Or Congress.  Assessment: inconsistent results at best.  Let’s try something else…
