A few years ago, as South African life insurers were experiencing sharp spikes in lapse rates as a result of the GFC, some analysts and actuaries raised concerns about the impact of selective lapsation on mortality experience.
The principle is sound: Policyholders who know their health is very poor are less likely to lapse than policyholders who have no concerns about their health. Thus, those who lapse are likely to be healthier than those who remain. The mortality (and morbidity and disability) experience of those who remain will be worse than expected based on past experience.
The higher the lapse rate, the more significant this impact becomes. As an example, let's say we have the following mix of policyholders by underwriting class (best to worst):
| Underwriting class | Mix of population | Mortality experience (% of best) |
| --- | --- | --- |
I’ve approximated the experience here based on some past CSI presentations on mortality experience – could definitely be fine-tuned.
This would give rise to experience on average of 125% of "best underwriting class" experience. If 10% of policyholders lapse and we assume these are all best-class lives, the average experience increases by just 2.8 percentage points to 127.8%. A pretty modest impact, even assuming 100% of lapses are very healthy lives.
In practice, some lapses would be driven by affordability and other issues, so it wouldn't be as dramatic as this. If only half the lapses were selective, the impact drops to around 1.4 percentage points.
If lapses rise to 25%, then experience might be 8.3 percentage points worse, which is only just larger than the compulsory margin. And again, this is likely a worst-case scenario; the "50% selective" impact is still only around 4.2 percentage points.
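The arithmetic above can be reproduced with a small sketch. The function and its parameters are my own simplification, assuming selective lapses all come from the best class (experience 100% of best) and the remaining lapses fall at random across the portfolio:

```python
def post_lapse_experience(avg, lapse_rate, selective_frac, best=1.00):
    """Average mortality (as a multiple of best-class experience) after lapses.

    Selective lapses are assumed to come from the best class; the rest
    lapse at random, so they carry the portfolio-average experience.
    """
    removed = lapse_rate * (selective_frac * best + (1 - selective_frac) * avg)
    return (avg - removed) / (1 - lapse_rate)

print(post_lapse_experience(1.25, 0.10, 1.0))  # ~1.278: +2.8 points
print(post_lapse_experience(1.25, 0.25, 1.0))  # ~1.333: +8.3 points
print(post_lapse_experience(1.25, 0.10, 0.5))  # ~1.264: +1.4 points
print(post_lapse_experience(1.25, 0.25, 0.5))  # ~1.292: +4.2 points
```

The post-lapse average is just the starting average with the lapsed lives' experience stripped out and the remainder re-weighted, which is why the impact grows faster than linearly in the lapse rate.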
So where does all the fuss come from? At a recent conference in the US I discovered some products where lapse rates at certain durations are as high as 90%. This happens at points where the premiums jump up dramatically after an extended period of being level, and continue to rise from there on out.
A 90% lapse rate, assuming the best 90% of policyholders lapse, results in future mortality experience 175 percentage points higher than before, or 300% of best-class mortality experience. Even under the 50% selective scenario, the impact is still nearly a doubling of experience.
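The 90% case depends on the full mix, not just the average. Here is a sketch with a hypothetical mix of my own (the actual CSI-based table is not reproduced here), chosen so that the in-force average is 125% of best and the worst 10% of lives sit at 300%:

```python
# Hypothetical underwriting mix (illustrative only): experience as a % of
# best class, and each class's share of in-force lives, ordered best to worst.
experience = [100, 110, 150, 300]
weights = [0.68, 0.15, 0.07, 0.10]

avg_before = sum(e * w for e, w in zip(experience, weights))  # 125% of best

# Lapse the healthiest 90% of lives: strip weight from the best classes
# first until 90% of the portfolio has gone.
to_lapse = 0.90
remaining = []
for e, w in zip(experience, weights):
    lapsed = min(w, to_lapse)
    to_lapse -= lapsed
    if w - lapsed > 1e-12:  # guard against floating-point dust
        remaining.append((e, w - lapsed))

avg_after = sum(e * w for e, w in remaining) / sum(w for _, w in remaining)
print(round(avg_before, 1), round(avg_after, 1))  # 125.0 then 300.0
```

Only the worst class survives, so post-lapse experience jumps to 300% of best, i.e. 175 percentage points above the starting 125%.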
So yes, selective lapsation can be a genuine risk, but the lapse rates where this becomes a major issue are higher than most insurers will experience under normal scenarios.
Last postscript here is that none of this would have an impact if the insured population were completely homogeneous – I'll have another post on the need to dig into populations and understand how combining heterogeneous populations is dangerous when I discuss the misuse of SA85/90 "combined".
An actuary I know once made me cringe by saying: "It doesn't matter how an Economic Scenario Generator is constructed – if it meets all the calibration tests then it's fine. A machine learning black box is as good as any other model with the same scores." The idea is that if the model outputs correctly reflect the calibration inputs, the martingale test passed and the number of simulations generated produced an acceptably low standard error, then the model is fit for purpose and as good as any other model with the same "test scores".
This is an example of actuarial sloppiness and is of course quite wrong.
There are at least three clear reasons why the model could still be dangerously specified and inferior to a more coherently structured model with worse “test scores”.
The least concerning of the three is probably interpolation. We rarely have a complete set of calibration inputs. We might have equity volatility at 1 year, 3 years, 5 years and an assumed long-term point of 30 years as calibration inputs to our model. We will be using the model outputs at many other points, and confirming that the output results are consistent with the calibration inputs says nothing about whether the 2 year or 10 year volatility is appropriate.
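A quick sketch of the problem, using made-up calibration inputs: two interpolation schemes both reproduce every calibration point exactly – so both "pass" the test – yet disagree materially at the uncalibrated 10-year point.

```python
import numpy as np

# Hypothetical calibration inputs: equity vol at the 1y, 3y, 5y, 30y terms.
terms = np.array([1.0, 3.0, 5.0, 30.0])
vols = np.array([0.25, 0.22, 0.20, 0.18])

# Scheme 1: linear interpolation between calibration points.
linear_10y = np.interp(10.0, terms, vols)

# Scheme 2: the unique cubic through the same four points.
cubic = np.polyfit(terms, vols, deg=3)
cubic_10y = np.polyval(cubic, 10.0)

# Both reproduce the calibration inputs exactly...
assert np.allclose(np.interp(terms, terms, vols), vols)
assert np.allclose(np.polyval(cubic, terms), vols)

# ...but give noticeably different 10-year volatilities.
print(linear_10y, cubic_10y)  # ~0.196 vs ~0.184
```

A calibration test confined to the input terms cannot distinguish these two models, even though one prices a 10-year option off a volatility more than a full point higher than the other.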
The second reason is related – extrapolation. We may well be using model outputs beyond the 30 year point for which we have a specific calibration. A second example would be the volatility skew implied by the model even if none was specified – a more subtle form of extrapolation.
A typical counter to these first two concerns is to use a more comprehensive set of calibration tests. Consider the smoothness of the volatility surface and ensure that extrapolations beyond the last calibration point are sensible. Good ideas both, but already we are veering away from a simplified calibration test score world and introducing judgment (a good thing!) into the evaluation.
There are limits to the "expanded test" solution. A truly comprehensive set of tests might well be impossibly large, if not infinite, and the cost of this brute-force approach rises accordingly.
The third is a function of how the ESG is used. Most likely, the model is being used to value a complex guarantee or exotic derivative with a set of pay-offs based on the results of the ESG. Two ESGs could have the same calibration test results, even producing similar at-the-money option values, but value path-dependent or otherwise more exotic products very differently due to different serial correlations or untested higher moments.
It is unlikely that a far out-of-the-money binary option was part of the calibration inputs and tests. If it were, it is certain that another instrument with information to add from a complete market was excluded. The set of calibration inputs and tests can never be exhaustive.
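A toy illustration of the path-dependence point, using two hypothetical Gaussian drivers of my own construction (not any real ESG): both have identical terminal variance, so any calibration test on the one-year distribution passes equally for both, yet the variance of the path average – the driver of an Asian-style payoff – differs materially.

```python
import numpy as np

n = 200
t = np.arange(1, n + 1) / n  # n time steps over one year

# Two toy Gaussian log-price drivers with independent increments:
# A spreads its volatility evenly over the year; B realises all of it
# in the first half-year and is flat thereafter.
cum_var_a = t
cum_var_b = np.minimum(2 * t, 1.0)

def cov_matrix(cum_var):
    # With independent increments, Cov(X_s, X_t) is the cumulative
    # variance at min(s, t).
    return np.minimum.outer(cum_var, cum_var)

cov_a, cov_b = cov_matrix(cum_var_a), cov_matrix(cum_var_b)

# Identical terminal variance: a test on the one-year distribution
# cannot tell the two models apart.
print(cum_var_a[-1], cum_var_b[-1])  # both 1.0

# But the variance of the path average differs (about 1/3 vs 7/12 in
# the continuous limit), so Asian-style payoffs are valued differently.
w = np.full(n, 1.0 / n)
avg_var_a = w @ cov_a @ w
avg_var_b = w @ cov_b @ w
print(avg_var_a, avg_var_b)
```

The front-loaded model carries nearly twice the risk in the path average, despite matching the "calibrated" terminal distribution exactly.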
It turns out there is an easier way to decrease the risk that the interpolated and extrapolated financial outcomes of using the ESG are nonsense: start with a coherent model structure for the ESG. By using a logical underlying model, incorporating drivers and interactions that reflect what we know of the market, we bring much of what we would need from that enormous set of calibration tests into the model itself and increase the likelihood of usable ESG results.
I don’t know if this is a function of ubiquitous IP geolocation or some metaphysical reward for travel, but I’ve been discussing some unfamiliar music and artists in multiple ways during this trip.
I know some of you will be amused at me “discovering” these artists only now.
Moneybox, a Slate blog I still read from time to time, had an interesting off-the-wall story about Sweden’s new postage stamps. Apparently there is an artist called Robyn. Also, “brev” means letter in Swedish which sounds much like “brief” to me, so perhaps Afrikaans has some value outside South Africa after all.
I didn't think any more of it until I found another "first play" album on iTunes Radio (this is not a paid ad for Apple, promise) by a band called Royksopp. I'm enjoying the album – electronic music across a wide range of underlying genres. It turns out Royksopp and Robyn have performed and produced music together.
The US, despite its nascent recovery, is learning some hard lessons about the differences between short-term unemployment and long-term unemployment.
South Africa should have some lessons to teach here. We have enough experience to expect some insights. Sadly, I’m not sure we’ve done much except become accustomed to it.
Long-term unemployed – one definition is those who have been unemployed for longer than 27 weeks – increasingly find themselves unemployable with stale skills, stigma and demotivation driving them out of the labour force.
A Brookings Institute paper talks to the greater problem that even on returning to work, the long-term unemployed often find themselves unemployed again.
only 11 percent of those who were long-term unemployed in a given month returned to steady, full-time employment a year later.
The paper and the related videos and infographics are worth a look.
Another telling statistic: for a given month (between 2008 and 2012), of those who were long-term unemployed, a third were not considered part of the labour force 15 months later. They'd stopped actively looking for work. Barely more than a third were employed, and many of these didn't have regular, full-time work.
Meanwhile, short-term unemployment rates in the US have been falling as the steady jobs-created figures trickle in month after month. Short-term unemployment, at 4.0%, is actually lower than average. This relatively low short-term unemployment rate means that at some point wages will likely increase, adding to inflationary pressures (still a way off, but it will come eventually), which just increases the cost of living for those long-term unemployed who still don't have work.
If you're South African, this should sound horribly familiar. Even with our idling economy, those unemployed with skills and recent employment will probably find a job fairly soon. There are plenty of areas where filling positions is hard. We have upward pressure on wages across the board for skilled, semi-skilled and even unskilled workers. A combination of imported inflation, pressures and intentions to decrease wage gaps, powerful unions and restrictive labour laws means that the long-term unemployed are left well by the wayside even while wages tick higher.
In the US, long-term unemployment rates are coming down but are well above their long-term averages. Part of this decrease also reflects exits from the labour force as unemployed stop looking for work. The graph above shows the scale of the problem in Greece and Spain and the fairly remarkable recovery for Germany.
From the Brookings paper by Alan B. Krueger, Judd Cramer, and David Cho of Princeton University:
Past research has found that the longer a worker is unemployed, the less time they spend searching for a job, the fewer job applications they submit, and the less likely they are to be called in for an interview for the jobs to which they do apply.
Again, this perfectly describes a South African between the ages of 18 and 25 without matric, or even with matric and no further education, struggling to find work. Too many of these are being supported by parents and grandparents.
The situation in the US is similar – those with less than a “college degree” are far more likely to be unemployed and far more likely to be long-term unemployed. Interestingly though, those aged 16-34 are more likely to be short-term rather than long-term unemployed, whereas for older ages the reverse is true.
So we're back to familiar ground. Education drives job opportunities. Youth without skills struggle to find work, become long-term unemployed and steadily become less employable, less likely to find regular work. Let's just hope our "austerity budget" recognises the critical importance of education, skills training and youth employment projects before it sets us further back on our two-decade journey to economic decrepitude.
Brief personal note. I'm off to the Milliman Life Consultants Forum in the US this week. Looks like an amazing programme covering ORSAs, Solvency II vs EV reporting and many of their software tools. Great to have a programme so relevant to me and to us in South Africa.
I’m writing a blog post on unemployment (up soon!) while listening to the brand new Foo Fighters album, Sonic Highways, courtesy of iTunes Radio. Looks to be a fantastic album – might just become the soundtrack of my trip.
Michael Lewis, of Liar's Poker and The Big Short fame, has written a book on the world of High Frequency Trading. The book is called Flash Boys and it's more or less as entertaining as Lewis's recent books. That's a good thing, even if it doesn't quite reach the highs of Liar's Poker (which you absolutely must buy if you haven't read it before).
In some ways I'm surprised more hasn't come of Flash Boys – when it was published there was some noise about investigations and hearings into the allegations in the book. I'm not aware that any of that led to anything. Still, my reaction should give you an idea of how explosive the allegations in the book are. I went from misunderstanding HFT as an algorithm-based approach to trading based on patterns in the data, to (probably still misunderstanding) HFT as a parasitic arms race against "regular investors", who are potentially losing money to at least some of the HFT firms without any tangible benefit.
Read the book just to open your mind to the possibilities and for a new appreciation of the speed of light through glass.
Jeffrey Robinson, the author of the well-known book "Laundrymen" that I'm now reading, has written an engaging story about The Satoshi Faithful (as he calls them), supporters of Bitcoin, and where their Faith is leading them astray.
The book is called BitCon: The Naked Truth About Bitcoin and it doesn’t pull punches in deriding the would-be currency. If you don’t know anything about Bitcoins, it may skip over some of the introductions necessary to hold your own in conversation. This isn’t a primer on Bitcoins or crypto-currencies, but it also doesn’t spend chapters on involved technical details so you won’t be completely lost.
I described the book as "engaging". For me, already very sceptical of the long-term chances of success for Bitcoin and specifically critical of its suitability as a real "currency", it had me nodding in agreement with many sections. Frankly, I don't know how persuasive it would be to a fervent supporter (not that much of anything would be).
I did enjoy the insights into some of the personalities behind Bitcoin and the histories of different supporters and how this has changed over the short time Bitcoins have been around. I learnt more about the Dark Web than I knew before, gaining a new appreciation for how dark the underbelly of the web and Bitcoins are.
Robinson ignored what I think is a key limitation on Bitcoin. Supporters claim its value derives in large part from the limited supply, but without any intrinsic value, other crypto-currencies are near-perfect substitutes. I’ve blogged about this before and was looking forward to seeing another take on it.
I enjoyed the book, reading through it fairly quickly and without wanting to switch to something else, suggesting Robinson hit the target with length and balance of information vs entertainment.
Go grab a copy from Amazon – BitCon: The Naked Truth About Bitcoin.