Claims analysis, inflation and discounting (part 2)

This is part 2 of a 3 part series. Part 1 is here.

Non-life claims reserves are routinely not discounted, for reasons both bad and good. This part of the series looks at the related issue of inflation in claims reserving. (You’ll have to wait for part 3 for me to talk about the analysis that prompted this lengthy series.)

In many markets, inflation is low and stable. Until a decade ago, talk of inflation wouldn’t have raised many eyebrows, and nor would talk of deflation. Deflation is still sufficiently unusual to put to one side.

Low, stable inflation means that past claims development patterns are mostly about, in approximate descending order of importance (naturally depending on class and peril):

Current and future state of bancassurance in SA

Bancassurance, says the oracle of finance definitions online (aka Investopedia):

…is an arrangement in which a bank and an insurance company form a partnership so that the insurance company can sell its products to the bank’s client base. This partnership arrangement can be profitable for both companies. Banks can earn additional revenue by selling the insurance products, while insurance companies are able to expand their customer bases without having to expand their sales forces or pay commissions to insurance agents or brokers.

Bancassurance has been a major part of European and Asian insurance markets and, for a time, was presumed to be the future of insurance distribution in most countries around the world.

What happened was different. Bancassurance has not taken off in all markets as it did in the early success stories. Some of this has to do with the reversal of trust relationships between banking and insurance.

CT Water: News to come 4 October?

Not much of value in this shoddily worded reporting on CT’s water shortage, except that we will hear an official update on the water disaster plan on 4 October.

This is a topic of direct personal and business relevance, but also one of technical forecasting and measurement interest. Very little I’ve seen so far gives me confidence in the forecasting, which reflects either poor forecasting or very limited communication.

I don’t know which bothers me more.

31, 151 and what comes next

This is my first new post in over two years. There are many reasons for that, and I may get into them in a future post. As to why I’m restarting – a conversation with an old friend last night, combined with a lunch discussion with an actuarial student a couple of weeks ago, has inspired me to attempt to, temporarily at least, restart my blog.

I’m going further than just restarting: I’m committing to a new blog post each day for October. Now, the reasons for having stopped blogging haven’t suddenly changed, so it’s likely that some of these posts will be short. (And, similarly, some of them long.) Since the decision was made last night, I also haven’t thought through anything like a full plan for the month. I invite you along to see how it goes.

I’m probably not alone in being slightly more jaded, slightly less optimistic than I was two years ago. A summary of the two years might make its way into another post, more to help me collect my thoughts than anything else.

Cape Town is experiencing an intense, multi-year drought and there is a real possibility of the city running out of water before next winter. I will definitely be blogging more about the vacuum of credible communication and forecasting on this front in a later post. For now, a single-purpose website proclaims (updated weekly, I think, based on weekly dam-level reports) that we have 151 days of water left and will run out of useable water on 1 March 2018.

For now, the claims of cholera in Puerto Rico have not been proven, but it does feel like it’s only a matter of time. Anyone fretting over drinking water in Cape Town should probably bump diseases such as cholera up their list.

The official position of the City of Cape Town is still “we won’t run out of water”, but there are reasons to doubt this and be concerned. I’m keen to work out objectively what the level of risk is. To that end, it would have been useful to be able to dissect the methodology to understand how credible their forecast is. This is the entire disclosure of their methodology:

Using our recent consumption as a model for future usage, we’re predicting that dam levels will reach 10% on the 1st of March, 2018.
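The quoted method amounts to a straight-line extrapolation of dam levels at the recent rate of draw-down. A minimal sketch of that arithmetic, with the current dam level and weekly draw rate invented purely for illustration (chosen so the numbers reproduce the 151-day figure above):

```python
from datetime import date, timedelta

def days_until_level(current_pct, weekly_draw_pct, floor_pct=10.0):
    """Linear extrapolation: days until dam levels hit the unusable floor."""
    if weekly_draw_pct <= 0:
        raise ValueError("draw rate must be positive for levels to fall")
    weeks_left = (current_pct - floor_pct) / weekly_draw_pct
    return int(weeks_left * 7)

# Illustrative inputs only -- not the City's actual figures.
today = date(2017, 10, 1)
days = days_until_level(current_pct=37.4, weekly_draw_pct=1.27)
print(days, today + timedelta(days=days))  # 151 days -> 2018-03-01
```

The fragility is obvious: the forecast is only as good as the assumption that recent consumption (and zero inflows) persists unchanged for five months.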

I’m not losing sleep over their forecast. So for now, sleep.

Modest data

I’m as excited as the next guy about the possibilities of “Big Data!” but possibly more excited about the opportunities presented by plain old “Modest Data”. I believe there is plenty of scope for useful analysis on fairly moderate data sets with the right approach and tools.

I’d go as far as to say that much of the “Big Data!” analysis currently performed is really plain old statistical analysis with a few new touches from the ever-expanding list of R libraries.

For example, it seems that papers with shorter titles get more citations from other researchers. Although the research considered 140,000 papers, there is nothing especially “Big Data!” about the analysis. The authors suggest several possible confounders related to the quality of the journal, the time period and so on. Disappointingly, they don’t seem to have modelled these possible effects directly to understand whether any residual title-length effect remains.
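Modelling those effects directly is squarely “Modest Data” territory. A hedged sketch (entirely synthetic data; the journal effects, sample size and coefficients are invented for illustration): regress log-citations on title length plus journal dummies and inspect the residual title-length coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
title_len = rng.integers(30, 160, n)   # title length in characters
journal = rng.integers(0, 5, n)        # 5 hypothetical journals

# Synthetic world: citations depend on journal quality, not title length
journal_effect = np.array([2.0, 1.5, 1.0, 0.5, 0.0])[journal]
log_cites = 1.0 + journal_effect + rng.normal(0, 0.5, n)

# OLS of log-citations on title length, controlling for journal dummies
X = np.column_stack([np.ones(n), title_len,
                     *(journal == j for j in range(1, 5))])
beta, *_ = np.linalg.lstsq(X, log_cites, rcond=None)
print(beta[1])  # residual title-length effect; ~0 here by construction
```

Here the raw correlation would vanish once journal is controlled for; whether that happens in the real citation data is exactly the question the authors left open.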

There is scope for great analysis without “Big Data!”, and plenty of scope for poor analysis with all the data in the world.

Foreign land ownership

Foreign person? Foreign company? Foreign trust? Local company owned by foreigners? Local company owned partly by foreigners? Foreign company owned by locals? Local company owned by locals with debt finance from foreigners?  Local bank with foreign shareholders and repossessed properties? Local insurance company issuing policies to foreigners? BRICS bank? Foreigner married in community of property to local? Local living permanently overseas?

You don’t even need to get to the question of whether this proposal is counterproductive, populist silliness – the definitional problems above are obstacle enough.

Defining things and Star Wars

This reminded me a little of trying to come up with the “right” risk taxonomy or risk management framework. There are so many different ways to draw the boundaries: none of them perfect, and many of them acceptable and useful.

Also, Star Wars.

Actuarial sloppiness

An actuary I know once made me cringe by saying: “It doesn’t matter how an Economic Scenario Generator is constructed; if it meets all the calibration tests then it’s fine. A machine-learning black box is as good as any other model with the same scores.” The idea is that if the model outputs correctly reflect the calibration inputs, the martingale test passed and the number of simulations generated produced an acceptably low standard error, then the model is fit for purpose and as good as any other model with the same “test scores”.
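For concreteness, the martingale test mentioned here checks that discounted simulated asset prices recover today’s price in expectation. A minimal sketch under assumed risk-neutral geometric Brownian motion (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
s0, r, sigma, t, n = 100.0, 0.03, 0.2, 5.0, 200_000

# Simulate risk-neutral GBM terminal values
z = rng.standard_normal(n)
s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)

# Martingale test: discounted expectation should recover s0
disc_mean = np.exp(-r * t) * s_t.mean()
std_err = np.exp(-r * t) * s_t.std(ddof=1) / np.sqrt(n)
print(disc_mean, std_err)  # disc_mean within a few standard errors of 100
```

Passing this test says the simulated drift is internally consistent; as the rest of the post argues, it says nothing about whether the model is fit for the purpose it will actually be used for.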

This is an example of actuarial sloppiness and is of course quite wrong.

There are at least three clear reasons why the model could still be dangerously misspecified and inferior to a more coherently structured model with worse “test scores”.

The least concerning of the three is probably interpolation. We rarely have a complete set of calibration inputs. We might have equity volatility at 1 year, 3 years, 5 years and an assumed long-term point of 30 years as calibration inputs to our model. We will be using the model outputs at many other points, and the fact that the output results are consistent with the calibration inputs says nothing about whether, say, the 2-year or 10-year volatility is appropriate.
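To illustrate, here are two toy “models” (both hypothetical, with invented vol figures) that match the calibrated tenors exactly, and so score identically on a point-match calibration test, yet disagree at the uncalibrated 10-year point:

```python
# Illustrative calibration inputs: tenor (years) -> volatility
calib = {1: 0.18, 3: 0.20, 5: 0.21, 30: 0.24}
tenors = sorted(calib)

def model_step(t):
    """Carry the vol of the nearest calibrated tenor at or below t."""
    eligible = [x for x in tenors if x <= t]
    return calib[eligible[-1]] if eligible else calib[tenors[0]]

def model_linear(t):
    """Straight-line interpolation between calibrated tenors."""
    if t <= tenors[0]:
        return calib[tenors[0]]
    if t >= tenors[-1]:
        return calib[tenors[-1]]
    for lo, hi in zip(tenors, tenors[1:]):
        if lo <= t <= hi:
            w = (t - lo) / (hi - lo)
            return calib[lo] * (1 - w) + calib[hi] * w

# Both "pass" the calibration test exactly at every calibrated tenor...
for m in (model_step, model_linear):
    assert all(abs(m(t) - calib[t]) < 1e-12 for t in tenors)

# ...but disagree at the 10-year point the test never looks at.
print(model_step(10), model_linear(10))
```

Real ESGs are of course far richer than either toy, but the point survives: agreement at the calibration nodes constrains nothing in between.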

The second reason is related – extrapolation. We may well be using model outputs beyond the 30-year point for which we have a specific calibration. A second example is the volatility skew implied by the model even if none were specified – a more subtle form of extrapolation.

A typical counter to these first two concerns is to use a more comprehensive set of calibration tests. Consider the smoothness of the volatility surface and ensure that extrapolations beyond the last calibration point are sensible. Good ideas both, but already we are veering away from a simplified calibration test score world and introducing judgment (a good thing!) into the evaluation.

There are limits to the “expanded test” solution. A truly comprehensive set of tests might well be impossibly large, if not infinite, and the cost of this brute-force approach escalates accordingly.

The third is a function of how the ESG is used. Most likely, the model is being used to value a complex guarantee or exotic derivative with pay-offs based on the results of the ESG. Two ESGs could have the same calibration test results, even producing similar at-the-money option values, but value path-dependent or otherwise more exotic products very differently due to different serial correlations or untested higher moments.
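A deterministic toy example (the scenario values are invented): two scenario sets with identical terminal values price a European call identically, but a path-dependent up-and-out call sees them very differently.

```python
# Two scenario sets with the same terminal values {120, 80} -- so the same
# terminal distribution -- but different paths along the way.
set_a = [[100, 110, 120], [100, 90, 80]]
set_b = [[100, 130, 120], [100, 70, 80]]

def european_call(paths, strike):
    """Payoff depends only on the terminal value."""
    return sum(max(p[-1] - strike, 0) for p in paths) / len(paths)

def up_and_out_call(paths, strike, barrier):
    """Knocked out (pays zero) if the path ever touches the barrier."""
    payoffs = [0 if max(p) >= barrier else max(p[-1] - strike, 0)
               for p in paths]
    return sum(payoffs) / len(payoffs)

print(european_call(set_a, 100), european_call(set_b, 100))    # equal: 10.0
print(up_and_out_call(set_a, 100, 125),
      up_and_out_call(set_b, 100, 125))                        # 10.0 vs 0.0
```

No calibration test built on terminal distributions or vanilla option prices can distinguish these two sets, yet the barrier product’s value differs completely.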

It is unlikely that a far out-of-the-money binary option was part of the calibration inputs and tests; and if it were, some other instrument with information to add from a complete market would have been excluded. The set of calibration inputs and tests can never be exhaustive.

It turns out there is an easier way to decrease the risk that the interpolated and extrapolated financial outcomes of using the ESG are nonsense: start with a coherent model structure for the ESG. By using a logical underlying model, incorporating drivers and interactions that reflect what we know of the market, we bring much of what we would otherwise need to add in that enormous set of calibration tests into the model itself, and increase the likelihood of usable ESG results.