Model Risk and Financial Charlatanism

But when we modeled it, the brakes worked!

One of the hot buzzwords in financial risk management these days is “Model Risk”, as if the concept were somehow new. Unfortunately, it’s only recently – a full 6 years after the onset of the Global Financial Crisis (GFC) – that the idea of Model Risk is getting wide coverage. The concept is rather simple: in essence, it says that a model is, well, just a model. It’s not reality. But since we quants / financial engineers / “rocket scientist” types tend to put things a bit more quantitatively than that, the notion of Model Risk includes “goodness of fit” and other measures that assess the appropriateness of a given model for a given situation. They help you understand when a model may need tweaking, may no longer be appropriate, or when its predictive value may have fallen too low to use. I take pride that Investor Analytics was the very first risk management specialist firm on Wall Street to actively share the results of our Model Risk measures. It wasn’t easy: for many years before the GFC, people thought we were a bit nuts to call attention to the limitations of models in our industry.

One very simple way to assess a model of any type (risk or otherwise) is to run it under a range of inputs. Think of what weather forecasters show in the days leading up to a hurricane or typhoon: a number of different lines representing possible trajectories for the storm. Each of those lines could be a different model, or they could be the same model run under different inputs (temperature sensitivity, wind speeds, etc.). The dispersion in the results – the differences between the lines – gives an indication of the accuracy of the models. After Hurricane Sandy hit New York City two years ago, I attended a workshop on risk put on by the Santa Fe Institute, where Marcia McNutt, the Director of the US Geological Survey, explained that hurricane models have gotten very good at predicting the path of a hurricane but are still quite limited in predicting its intensity upon landfall. In other words, forecasters knew pretty darn well 2 or 3 days in advance that Sandy would strike within 20 miles of NYC, but they didn’t know until just a few hours beforehand whether it would strike as a tropical storm, a category 1 hurricane, or possibly even worse. That’s a form of Model Risk.
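The same “run it under a range of inputs” idea can be sketched in a few lines of code. This is a minimal toy illustration, not a real forecasting model: `toy_model` and its growth/volatility parameters are invented for the example, and the point is only that the spread of outcomes across plausible inputs is itself a risk measure.

```python
import random
import statistics

random.seed(42)

def toy_model(growth_rate, volatility, steps=10):
    """A stand-in for any forecasting model: compound a value
    forward under a noisy growth process."""
    value = 100.0
    for _ in range(steps):
        value *= 1 + growth_rate + random.gauss(0, volatility)
    return value

# Run the same model many times under a range of plausible inputs,
# just as hurricane forecasters vary temperature and wind-speed
# assumptions to produce a fan of trajectories.
outcomes = [toy_model(random.uniform(0.00, 0.02),
                      random.uniform(0.01, 0.05))
            for _ in range(1000)]

center = statistics.mean(outcomes)
spread = statistics.stdev(outcomes)
print(f"mean outcome: {center:.1f}, dispersion: {spread:.1f}")
# A wide dispersion relative to the mean is the warning sign:
# the forecast is highly sensitive to inputs we don't know precisely.
```

If the dispersion is small, the forecast is robust to your uncertainty about the inputs; if it’s large, the single “official” model output is hiding a lot of Model Risk.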

A client recently pointed out a form of model risk I find a bit insidious. He pointed me to an article the authors cleverly titled “Pseudo-Mathematics and Financial Charlatanism” (the full academic article can be downloaded free here). I was intrigued because this type of model risk is hard for non-experts to detect, and the model actually appears to be mathematically well supported. The idea is simple: an astute portfolio manager scours data over the past 20 years for investments that, had he invested, would have produced the best returns with the smallest amount of risk. He identifies the strategy that would have done best and sets out to get investors in his new fund. But until that strategy is tried on new investments, without knowing a priori what the results will be, it’s not truly back-tested. And you probably shouldn’t invest. When he did his “backtest,” he used the full data set to determine the parameters that optimize the results. What looks like an ‘optimized portfolio’ can be little more than an In-Sample configuration that delivers little if any Out-Of-Sample benefit. This type of analysis isn’t limited to financial modeling – it’s doable in all sorts of fields, from business to baseball to big data (you can also do it in fields whose names don’t start with ‘b’). In the financial example, if the number of possible portfolios the manager considers is large compared to the number of years over which he optimizes the strategy, the resultant “best” portfolio is nothing more than the one at the extremes of the dataset – and EVERY dataset has such extremes. He’s cherry-picking. Technically, he has overfit his data and merely picked out the historical winner. It’s something like selling “previously winning lottery numbers.”
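You can see this overfitting effect in a short simulation. This is a sketch of the general phenomenon, not the paper’s actual methodology: the strategy count, year counts, and return distribution below are all made-up parameters. Every candidate strategy here is pure noise, yet the in-sample “winner” looks like a star.

```python
import random
import statistics

random.seed(7)

N_STRATEGIES = 1000   # candidate portfolios the manager screens
IN_YEARS = 20         # years of history used to pick the "winner"
OUT_YEARS = 20        # fresh years the manager never saw

def annual_returns(n_years):
    """Skill-free annual returns: mean 0%, std 10%."""
    return [random.gauss(0.0, 0.10) for _ in range(n_years)]

# Screen 1,000 random strategies over the historical window and
# keep the one with the best in-sample average return.
in_sample = [annual_returns(IN_YEARS) for _ in range(N_STRATEGIES)]
best = max(range(N_STRATEGIES),
           key=lambda i: statistics.mean(in_sample[i]))

in_mean = statistics.mean(in_sample[best])
out_mean = statistics.mean(annual_returns(OUT_YEARS))  # the real future

print(f"in-sample mean return of chosen strategy: {in_mean:+.1%}")
print(f"out-of-sample mean return:                {out_mean:+.1%}")
# The in-sample winner looks impressive only because it sits in the
# extreme tail of 1,000 random draws; out of sample it reverts toward 0%.
```

The more strategies screened relative to the number of years of data, the further into the tail the “best” backtest sits – which is exactly the ratio the article warns about.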

It’s quite similar to the behavioral economics lesson that we humans are quite poor at discerning random patterns from non-random ones. Take 4,000 investment managers who do nothing more than the classic Wall Street Journal exercise of throwing darts at the stock pages to pick their investments – meaning each pick is totally random – and monitor their results over several years. The number of sustained winners may shock you. After 5 years of investing randomly, about 125 of those funds would have a perfect record of being up every single year! After 8 years, about 15 of them would have a perfect track record, and after 10 years, about 4 would. That’s right – by choosing your investments totally at random, you have roughly a 1-in-1,000 chance of being up every year for 10 years straight. In other words, if 1 in 1,000 managers ends up with a perfect 10-year record by chance alone, you really can’t tell whether a given manager is skillful or lucky. The fact that a fund has been up several years in a row doesn’t tell you anything about its quality unless you also know how many other managers were trying to do the same thing – and that’s rarely known. This, too, is a model – a model of investing based mainly on historical performance. And it is full of major league model risk.
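The arithmetic behind those counts is just repeated coin flips: with a 50/50 chance of an up year, 4,000 managers ÷ 2⁵ ≈ 125 perfect 5-year records, ÷ 2⁸ ≈ 16, and ÷ 2¹⁰ ≈ 4. A quick simulation (a sketch, with the up-year probability assumed to be exactly one half) reproduces it:

```python
import random

random.seed(0)

N_MANAGERS = 4000
YEARS = 10

# Each manager flips a fair coin each year: up or down.
perfect_after = {5: 0, 8: 0, 10: 0}
for _ in range(N_MANAGERS):
    streak = 0
    for year in range(YEARS):
        if random.random() < 0.5:   # an "up" year, purely by chance
            streak += 1
        else:
            break                   # one down year ends the perfect record
    for horizon in perfect_after:
        if streak >= horizon:
            perfect_after[horizon] += 1

for horizon in sorted(perfect_after):
    expected = N_MANAGERS / 2 ** horizon
    print(f"perfect after {horizon:2d} years: "
          f"{perfect_after[horizon]:4d} (expected ~{expected:.0f})")
```

The simulated counts land close to 125, 16, and 4 – a handful of “perfect” managers emerge from pure noise, which is precisely why a winning streak alone tells you nothing without knowing how many managers were flipping coins.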
