The paper isn’t paywalled so check it out – it’s only 6 pages, so it’s definitely accessible. Don’t worry about the couple of typos in the paper (bizarre as it may be to find them in a paper that presumably was reviewed) – the ideas are still good.
The key idea is that prediction markets usually focus on binary events. Will Person Y win the election? Will China invade Taiwan? These outcomes are relatively easy to predict, and framing them as binary sidesteps the important challenges of extreme outcomes and Taleb’s Black Swans.
A quote from the paper, itself quoting Taleb’s book Fooled By Randomness, sums up the problem of trying to live in a binary world when the real world has a wide range of outcomes.
In Fooled by Randomness, the narrator is asked “do you predict that the market is going up or down?” “Up”, he said, with confidence. Then the questioner got angry when he discovered that the narrator was short the market, i.e., would benefit from the market going down. The trader had difficulty conveying the idea that someone could hold the belief that the market had a higher probability of going up, but that, should it go down, it would go down a lot. So the rational response was to be short.
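To put rough numbers on the narrator’s position (my own illustrative figures, not from the paper or the book), a quick sketch shows how a higher probability of “up” can coexist with a negative expected return:

```python
# Hypothetical probabilities and payoffs, purely for illustration:
# the market is more likely to rise, but the fall is much larger.
p_up, gain_if_up = 0.70, 0.01       # 70% chance of a 1% rise
p_down, loss_if_down = 0.30, -0.10  # 30% chance of a 10% fall

expected_return = p_up * gain_if_up + p_down * loss_if_down
print(f"Expected market return: {expected_return:.1%}")  # -2.3%
# With a negative expected return, being short is the rational trade,
# even while "up" remains the single most likely direction.
```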
Any model is a simplification of reality. If it isn’t, then it isn’t a model but rather is reality itself.
A MODEL ISN’T REALITY
Any simplified model I can imagine will also therefore not match reality exactly. The closer the model gets to the real world in more scenarios, the better it is.
Not all model parameters are created equal
Part of the approach to getting a model to match reality as closely as possible is calibration. Models will typically have a range of parameters. Some will be well-established and can be set confidently without much debate. Others will have a range of reasonable or possible values based on empirical research or theory. Yet others will be relatively arbitrary or unobservable.
We don’t have to guess these values, even for the unobservable parameters. Through the process of calibration, the outputs of our model can be matched as closely as possible to actual historical values by changing the input parameters. The more certain we are of a parameter a priori, the less we allow it to vary during calibration. The parameters with the most uncertainty are free to move as much as necessary to fit the desired outputs.
During this process, the more structure or relationships that can be specified, the better. The danger is that with relatively few data points (typically) and relatively many parameters (again typically) there will be multiple parameter sets that fit the data, possibly with only very limited differences in “goodness of fit”. The more information we add to the calibration process (additional raw data, more narrowly constrained parameters based on other research, tighter relationships between parameters), the more likely we are to derive a useful, sensible model that not only fits our calibration data well but will also be useful for predicting the future or evaluating different decisions.
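As a rough sketch of what this looks like in practice (the model, parameters and penalty weights below are entirely hypothetical, not any particular model of mine), a calibration can be framed as penalised least squares, where each parameter’s penalty weight reflects how confident we are in its a priori value:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical model with three parameters of varying prior certainty.
def model(params, x):
    a, b, c = params
    return a * x + b * np.sqrt(x) + c

x_obs = np.array([1.0, 2.0, 4.0, 8.0])
y_obs = np.array([2.1, 3.4, 5.9, 10.3])  # historical outputs to match

priors = np.array([1.0, 0.5, 1.0])  # a priori parameter estimates
# Large weight = well-established parameter, barely allowed to move;
# small weight = arbitrary/unobservable parameter, free to move.
weights = np.array([100.0, 10.0, 0.01])

def objective(params):
    fit_error = np.sum((model(params, x_obs) - y_obs) ** 2)
    prior_penalty = np.sum(weights * (params - priors) ** 2)
    return fit_error + prior_penalty

result = minimize(objective, x0=priors)
print("Calibrated parameters:", result.x)
```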
It’s chock-full of analysis, numbers, tables and charts showing that however much things change, the scope for financial crises changes very little. The comparison of Developed and Emerging Markets is particularly interesting in that the differences, while they do exist, are far smaller than the stereotypical views suggest. Emerging Markets do tend to have more ongoing sovereign defaults, but the frequency of banking crises is little different. Weirdly, some aspects of Emerging Market crises (such as employment impacts) are milder than the Developed World average.
It isn’t really the book’s fault, but this was one of the few books that I struggled with on my Kindle – the graphs, charts and figure captions were particularly difficult to read. Perhaps they would look better on the Kindle DX (the larger model) or even an iPad or something.
Although the book doesn’t focus on the current (still-happening, if you weren’t paying attention) financial crisis, there are several chapters dedicated to it with an analysis of the economic indicators leading up to the crash. Now it’s incredibly easy to predict an event after it’s happened, but I’m still hopeful that the results can be useful in predicting future problems and potentially impacting economic policies and regulations for the better.
Some key conclusions from the book for predictors of financial crises:
markedly rising asset prices (yes, and in particular house prices given the likely co-factor of increases in debt levels)
This is not the best way to start serious analysis of models versus markets in the prediction space, but given that I’m writing an exam tomorrow I thought I should put the links out there now. I’ll address this topic again in the future.
I have a clear strategy for how not to lose money playing the Make a Million competition. As I explain it, you may come up with some smart tactics to win the competition and enhance your returns, but you’re on your own there.
So, how does one not lose money with the Make a Million competition?
You are overwhelmingly likely to lose money if you enter this competition. I’ve said this before, and I’ve been right before. I’m right again.
Looks like my money is safe – the Reserve Bank cut rates as predicted. I’m thinking about making a prediction for each MPC meeting and then tracking my performance over time so I can be held accountable. I’ll mull this over first, though – I’m not sure I’ll be sufficiently confident to stick my neck out in future!
Lightstone have a trick up their sleeves. Their raison d’être is collecting, analysing, understanding and packaging data for themselves and others to use to understand past, current and future property valuations.
Their housing price index is more robust (and more independent) than those of the banks, which are based on their own data and target markets. Rather than considering only the average price of houses sold in a particular month (which is a function of house price growth or decline, but also of how the type, condition, size and location of the houses sold that month differ from the prior month and year), they consider repeat sales, where the same property has been bought and sold more than once.
This data is combined or “chain-linked” to provide a continuous measure of house price inflation over time.
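I don’t know Lightstone’s actual methodology, but a bare-bones repeat-sales regression in the spirit of Bailey–Muth–Nourse / Case–Shiller (with invented sale pairs) conveys the idea:

```python
import numpy as np

# Each repeat sale: (period bought, period sold, price bought, price sold).
# Invented data – a real index would use thousands of pairs.
pairs = [
    (0, 2, 500_000, 560_000),
    (0, 3, 300_000, 345_000),
    (1, 3, 800_000, 880_000),
    (2, 4, 450_000, 470_000),
]
n_periods = 5

# Regress log price ratios on period dummies: -1 in the buy period,
# +1 in the sell period, recovering log index levels.
X = np.zeros((len(pairs), n_periods))
y = np.zeros(len(pairs))
for i, (t_buy, t_sell, p_buy, p_sell) in enumerate(pairs):
    X[i, t_buy] = -1.0
    X[i, t_sell] = 1.0
    y[i] = np.log(p_sell / p_buy)

# Pin period 0 at an index of 100 (drop its column to avoid singularity).
beta, *_ = np.linalg.lstsq(X[:, 1:], y, rcond=None)
index = 100 * np.exp(np.concatenate([[0.0], beta]))
print(np.round(index, 1))  # index level per period
```

Because each pair differences out the property’s own characteristics, changes in the mix of houses sold from month to month don’t distort the index.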
The result of all of this data, best-in-class methodology and analysis? When Lightstone says “opportunities abound in local market” I actually listen. Since their business model is to sell information, I’m more likely to trust what they say.
I don’t disagree. It’s just that almost every meaningful decision ever made is made without all the information.
Unknowns can be categorised a hundred different ways. One way is to think about:
Unknown past information
Uncertainty around the current situation or position
Unknown future outcomes
Even in a game like chess, where the past history of the game is easily known by good players, the current position is clearly visible and all the possible moves are knowable, it is not possible to have all the information about how your opponent will react to your move.
How to deal with decision making under uncertainty – part 1
Tyler suggests that gut-based decision making can be effective much of the time – and it can. If there genuinely is no time for anything more than an instinctive reaction, you probably are best going with your gut.
Even if you have plenty of time, listening to your gut to formulate an initial idea is a great approach. Insight comes partly from experience and the reinforced neural pathways of our learning brain. If you stop with the gut, though, you are missing out. There is a tremendous amount of research showing how ridiculously badly our instincts perform in many areas, particularly those relating to uncertainty and complexity!