Category Archives: modelling

Open mortality data

The Continuous Statistical Investigation (CSI) Committee of the Actuarial Society does fabulous work gathering industry data and analysing it for broad use and consumption by actuaries and others.

I can only begin to imagine the data horrors of dealing with multiple insurers, multiple sources and multiple different data problems. The analysis they do is critically useful and, in technical terms, helluva interesting. I enjoyed the presentations at both the Cape Town and Johannesburg #LACseminar2013 events simply because it is such a rich data set and the analysis is fascinating.

I do hope they agree to my suggestion to make the entire, cleaned, anonymised data set available on the web. Different parties will want to analyse the data in different ways; there is simply no way the CSI Committee can perform every analysis and every piece of investigation that everyone might want. Making the data publicly available gives actuaries, students, academics and others the ability to perform their own analysis. And at basically no cost.

The other, slightly more defensive, reason is that mistakes do happen from time to time. I’m very aware of the topical Reinhart-Rogoff paper that was based on flawed analysis of the underlying data. Mistakes happen all the time, and allowing anyone who wants access to the data to repeat or disprove calculations and analysis only makes the results more robust.

So, here’s hoping for open access mortality investigation data for all! And here’s thanking the CSI committee (past and current) for everything they have already done.

The importance of verification

It’s amazing to test this yourself. Hold down the button of a garage door opener and try to use your vehicle’s lock / unlock button. It doesn’t work. Simple as that. The signal from a garden-variety garage door opener blocks the signal of most (all?) vehicle remotes.

Smart criminals are increasingly using this to stop owners from locking their doors, and then stealing goodies from the unlocked cars. This seems to be getting worse in the Cape Town CBD at the moment.

So why am I talking about verification?

Well, preventing the problem requires either a high-tech revamp of the vehicle remote industry, or for owners to actually observe or check that their doors are in fact locked: listening for the clack, watching the hazard lights flash, or even physically testing that the door is locked.

These methods work perfectly and require only the tiniest bit of extra effort.

Relying on models to simply give the right answer, without checking that it makes sense, is likely to get your credibility burgled. Form high-level initial expectations, hypothesise plausible outcomes, check sensitivity to assumptions, reproduce initial results and run step-wise through individual changes, and get a second opinion and fresh eyes.
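As a rough illustration of what checking sensitivity to assumptions can look like in practice, here is a minimal sketch. The toy model, parameter names and numbers are all made up for illustration; the point is simply to perturb one assumption at a time and confirm that the result moves in the direction, and by roughly the magnitude, you expected.

```python
# Minimal sensitivity-check sketch: perturb one assumption at a time and
# compare the output against a rough, independently formed expectation.
# The "model" and all numbers below are purely illustrative.

def present_value_of_claims(claims_per_year, discount_rate, years):
    """Toy model: level annual claims discounted at a flat rate."""
    return sum(claims_per_year / (1 + discount_rate) ** t for t in range(1, years + 1))

base = {"claims_per_year": 1_000_000, "discount_rate": 0.08, "years": 20}
expected_ballpark = 10_000_000  # high-level expectation formed before running the model

base_result = present_value_of_claims(**base)
print(f"Base result: {base_result:,.0f} (expected roughly {expected_ballpark:,.0f})")

# One-at-a-time +10% shocks: does the result move in the direction, and by
# roughly the magnitude, you would expect?
for name in ("claims_per_year", "discount_rate", "years"):
    shocked = dict(base)
    shocked[name] = type(base[name])(base[name] * 1.10)
    result = present_value_of_claims(**shocked)
    print(f"+10% {name:>15}: {result:,.0f} ({result / base_result - 1:+.1%})")
```

If a shock moves the answer the wrong way, or by wildly more than your back-of-the-envelope expectation, that is the moment to stop and investigate before anyone relies on the number.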

This all takes a little time in the short term and saves everything in the long term.

How not to calibrate a model

Any model is a simplification of reality. If it isn’t, then it isn’t a model but rather the reality itself.

A MODEL ISN’T REALITY

Any simplified model I can imagine will also therefore not match reality exactly. The closer the model gets to the real world in more scenarios, the better it is.

Not all model parameters are created equal

Part of the approach to getting a model to match reality as closely as possible is calibration. Models will typically have a range of parameters. Some will be well-established and can be set confidently without much debate. Others will have a range of reasonable or possible values based on empirical research or theory. Yet others will be relatively arbitrary or unobservable.

We don’t have to guess these values, even for the unobservable parameters. Through the process of calibration, the outputs of our model can be matched as closely as possible to actual historical values by changing the input parameters. The more certain we are of a parameter a priori, the less we vary it to calibrate the model. The parameters with the most uncertainty are free to move as much as necessary to fit the desired outputs.

During this process, the more structure or relationships that can be specified the better. The danger is that with relatively few data points (typically) and relatively many parameters (again typically) there will be multiple parameter sets that fit the data, with possibly only very limited differences in “goodness of fit” between them. The more information we add to the calibration process (additional raw data, more narrowly constrained parameters based on other research, tighter relationships between parameters) the more likely we are to derive a useful, sensible model that not only fits our calibration data well but will also be useful for predicting the future or informing different decisions.
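To make that concrete, here is a minimal sketch of calibration as penalised least squares, using a toy exponential-growth model and made-up “historical” data. The parameter names, prior values and penalty weights are all illustrative assumptions, not taken from any real calibration exercise.

```python
import numpy as np
from scipy.optimize import minimize

# Toy model: output(t) = level * exp(growth * t). Assume "level" is well
# established a priori, while "growth" is much more uncertain.
def model(params, t):
    level, growth = params
    return level * np.exp(growth * t)

t = np.arange(10)
observed = np.array([100, 104, 109, 115, 120, 127, 133, 140, 148, 155])  # invented history

prior_mean = np.array([100.0, 0.03])   # a priori best estimates
prior_weight = np.array([50.0, 0.1])   # heavy penalty on "level", light on "growth"

def objective(params):
    fit_error = np.sum((model(params, t) - observed) ** 2)            # match the history
    prior_penalty = np.sum(prior_weight * (params - prior_mean) ** 2) # respect the priors
    return fit_error + prior_penalty

result = minimize(objective, x0=prior_mean)
print("Calibrated level and growth:", result.x)
```

The heavier the penalty weight on a parameter, the less the optimiser is allowed to pull it away from its a priori value, so the lightly penalised parameter does most of the moving. That is exactly the “more certain a priori, less varied in calibration” idea above.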

How not to calibrate a model

Scientific American has a naive article outlining “why economic models are always wrong”. I have two major problems with the story.

Multi-tasking lowers productivity

I have to multi-task. I am the bottleneck for too many problems already, and my team needs input from me before they can continue in many areas.

But I know it doesn’t make me efficient. Switching between tasks takes time. You forget important details, and struggle to get the depth of understanding and focus required for complex issues. The cross-pollination of ideas and solutions doesn’t come close to making up for these drawbacks.

From the research:

Descriptive evidence suggests that judges who keep fewer trials active and wait to close the open ones before starting new ones, dispose more rapidly of a larger number of cases per unit of time. In this way, their backlog remains low even though they receive the same workload as other judges who juggle more trials at any given time.

Did you read those magic words? “…backlog remains low…” I don’t know anyone who doesn’t wish for the luxury of a shorter backlog of work.

The paper itself is fairly complex, analysing theoretical models of human task scheduling. You should probably add it to your pile of things to read in the middle of other work.

Property investment – the value of data over opinions

Lightstone have a trick up their sleeves. Their raison d’être is collecting, analysing, understanding and packaging data for themselves and others to use to understand past, current and future property valuations.

Their housing price index is more robust (and more independent) than those of the banks, which are based on their own data and target markets. Rather than considering only the average price of houses sold in a particular month (which is a function of house price growth or decline, but also of how the type, condition, size and location of the houses sold that month differ from the prior month and year), they consider repeat sales, where the same property has been bought and sold more than once.

This data is combined or “chain-linked” to provide a continuous measure of house price inflation over time.
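To illustrate the idea only (this is a toy sketch of the classic repeat-sales regression, not Lightstone’s actual methodology, and the sales data are invented): each repeat sale pins down the price growth between its purchase and sale periods, and a regression on period dummies chains those growth rates into a single index.

```python
import numpy as np

# Each repeat sale: (period bought, period sold, price bought, price sold).
# Invented data; think of periods 0..4 as consecutive years.
repeat_sales = [
    (0, 1, 400_000, 424_000),
    (0, 2, 500_000, 560_000),
    (1, 3, 800_000, 900_000),
    (2, 4, 650_000, 740_000),
    (0, 4, 300_000, 380_000),
]

n_periods = 5
# Repeat-sales regression design: -1 in the purchase period, +1 in the sale
# period; the response is the log of the price relative for that property.
X = np.zeros((len(repeat_sales), n_periods))
y = np.zeros(len(repeat_sales))
for i, (t_buy, t_sell, p_buy, p_sell) in enumerate(repeat_sales):
    X[i, t_buy] = -1.0
    X[i, t_sell] = 1.0
    y[i] = np.log(p_sell / p_buy)

# Fix period 0 as the base (index = 100) and solve for the other period effects.
beta, *_ = np.linalg.lstsq(X[:, 1:], y, rcond=None)
index = 100 * np.exp(np.concatenate(([0.0], beta)))
print("Chain-linked house price index:", np.round(index, 1))
```

Because only properties that actually transacted more than once enter the calculation, month-to-month changes in the mix of houses sold largely wash out of the index.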

[Chart: House Price Inflation 2010, source: lightstone.co.za]

The result of all of this data, best-in-class methodology and analysis? When Lightstone says “opportunities abound in local market” I actually listen. Since their business model is to sell information, I’m more likely to trust what they say.

Most decisions are made without all the information

Tyler Reed blogs about entrepreneurs having to make decisions with limited information.

It’s almost all unknown

I don’t disagree.  It’s just that almost every meaningful decision ever made is made without all the information.

Unknowns can be categorised a hundred different ways. One way is to think about:

  1. Unknown past information
  2. Uncertainty around the current situation or position
  3. Unknown future outcomes

Even in a game like chess, where the past history of the game is easily known to good players, the current position is clearly visible and all the possible moves are knowable, it is not possible to have all the information about how your opponent will react to your move.

How to deal with decision making under uncertainty – part 1

Tyler suggests that gut-based decision making can be effective much of the time – and it can. If there genuinely is no time for anything more than an instinctive reaction, you are probably best off going with your gut.

Even if you have plenty of time, listening to your gut to formulate an initial view is a great idea. Insight comes partly from experience and the reinforced neural pathways of our learning brain. If you stop with the gut though, you are missing out. There is a tremendous amount of research showing how ridiculously badly our instincts perform in many areas, particularly those relating to uncertainty and complexity!

5 Things to Learn from Monopoly

I haven’t played Monopoly in a while (preferring Settlers of Catan, Carcassonne, Tigris & Euphrates and even Cranium), but after a recent conversation I started thinking about the game dynamics. Surprisingly much of it is relevant to the current story of our economy.

1 The Competition Commission is necessary

Monopolies serve to increase prices for consumers. In Monopoly, the “rents” charged are instantly higher as soon as a player has a monopoly on property in a certain area.

Worse than the increase in prices and decrease in supply: the additional profit for suppliers is not equal to the cost to consumers from higher prices, resulting in an overall “deadweight loss of monopoly”, a net cost to society.
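As a back-of-the-envelope illustration (a toy linear demand curve with constant marginal cost, all numbers invented), the monopolist’s extra profit is smaller than the surplus consumers give up, and the shortfall is the deadweight loss:

```python
# Toy deadweight-loss calculation with linear demand P = a - b*Q and
# constant marginal cost c. All numbers are invented for illustration.
a, b, c = 100.0, 1.0, 20.0

# Competitive outcome: price driven down to marginal cost.
q_comp = (a - c) / b                      # 80 units sold
p_comp = c                                # price 20

# Monopoly outcome: marginal revenue (a - 2*b*Q) set equal to marginal cost.
q_mono = (a - c) / (2 * b)                # 40 units sold
p_mono = a - b * q_mono                   # price 60

# Surplus consumers lose: higher price on units still bought, plus surplus on units no longer bought.
consumer_loss = (p_mono - p_comp) * q_mono + 0.5 * (p_mono - p_comp) * (q_comp - q_mono)
# Extra profit the monopolist gains (competitive profit is zero when price equals cost).
producer_gain = (p_mono - c) * q_mono
deadweight_loss = consumer_loss - producer_gain

print(f"Monopoly price {p_mono:.0f} vs competitive price {p_comp:.0f}")
print(f"Consumers lose {consumer_loss:.0f}, the monopolist gains only {producer_gain:.0f}")
print(f"Deadweight loss to society: {deadweight_loss:.0f}")
```

With these invented numbers consumers lose 2,400 of surplus while the monopolist gains only 1,600, leaving a deadweight loss of 800 that nobody captures.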