Category Archives: measurement

SA85-90 “combined” and more actuarial sloppiness

I know of far too many actuaries who think that the “average” SA85/90 table is an appropriate base for their insured lives mortality assumption.

It’s not.

It’s also a good example of “actuarial sloppiness”.

To be specific, it is equally inappropriate even if your current experience is a reasonable fit for the combined SA85/90 table.

SA85/90 was graduated based on South African insured lives data from 1985 to 1990. This period is important because it’s generally felt to be the last period in South Africa where HIV/AIDS would not have had a significant impact on mortality. (Estimates differ, but 1985 is often taken as the starting point of the HIV epidemic in South Africa, and even though there might have been some deaths within the first five years, it is inconceivable that they affected a significant portion of the population.)

SA85/90 came in two versions, “light” and “heavy”. Somewhat disappointingly, no distinction was made between males and females. Light mortality reflected the typical, historical insured-life characteristics, which was pretty much white males. If I recall correctly, “Coloured” and “Indian” males were also combined into the light table. “Heavy” mortality reflected the growing black policyholder base in South Africa.

For all the awkwardness of this racial classification, the light and heavy tables reflect the dramatically different mortality in South Africa based on wealth, education, nutrition and access to healthcare. Combining the results into a single table wasn’t reliable since there were significant differences in mortality AND expected changes in the proportions of the heavy and light populations in the insured populations into the future.

A combined table was still created at the time. I suspect Rob Dorrington may have some regrets at having created this in the first place, or at least at not having included a clearer health warning directly in the table name. The combined table reflects the weighted experience of light and heavy based on the relative sizes of the light and heavy sub-populations during the 1985 to 1990 period. I think a safer name would have been “SA85/90 arbitrary point in time combined table not to be used in practice”.

There is no particular reason to believe that the sub-population that you are modelling reflects these same weights. Even for the South African population as a whole these weights are no longer representative. The groups, at least in the superficial sense we view any particular citizen as coming from distinctly one group, will fairly obviously have experienced different mortality but will also have experienced different fertility and immigration rates.

Our actuarial pursuit of separating groups of people into smaller, homogeneous groups should also indicate that in most cases the sub-population you are modelling will more closely reflect one or the other of these groups rather than both of them.

But even if, just for the sake of argument, your sub-population of interest does reflect the same mix at each and every age as baked into the combined SA85/90 table, then it would still be entirely inappropriate to use the table for all but the crudest of tasks. After all, there is a reason for our penchant for homogeneous groups. If you model your sub-population for any length of time, the mix will surely change as those exposed to higher mortality die at a faster rate than those with low mortality.

The first-order impact would be that you would be modelling higher mortality over time than truly expected. Because the relative mortality of the two populations differs by age, the actual outcome will be somewhat more complex than that and more difficult to estimate in advance. This is particularly important with insurance products where the timing of death is critically important to profitability.
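A toy projection makes the mix-drift effect concrete. The rates and the 60/40 outset mix below are purely illustrative assumptions (they are not SA85/90 values): two sub-populations die at their own rates while a "combined" rate, fixed at the outset mix, is applied to the total.

```python
# Illustrative assumptions only - these are NOT SA85/90 values.
light_q, heavy_q = 0.005, 0.020          # assumed annual mortality rates
light, heavy = 60_000.0, 40_000.0        # assumed outset mix baked into the table

# Combined rate as at the outset, weighted by the starting mix.
combined_q = (light * light_q + heavy * heavy_q) / (light + heavy)

model_pop = light + heavy                # population projected with the fixed combined rate
for year in range(20):
    light *= 1 - light_q                 # each group dies at its own rate...
    heavy *= 1 - heavy_q                 # ...so the surviving mix drifts lighter
    model_pop *= 1 - combined_q          # the fixed combined rate never sees the drift

true_pop = light + heavy
drifted_q = (light * light_q + heavy * heavy_q) / true_pop
# After 20 years the model has overstated deaths (model_pop < true_pop),
# and the true weighted rate has drifted below the fixed combined rate.
```

After twenty years the fixed combined rate overstates mortality, exactly the first-order effect described above; with age-varying relative mortality the distortion would be messier still.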

So, just because you can get a reasonable fit to your experience with an age- or percentage-adjusted SA85/90 combined table does not mean you have an appropriate basis for modelling future mortality. It may not be vastly different from a more robust approach, but it’s just sloppy.

The virtual irrelevancy of population size to required sample size

Statistics and sampling are fundamental to almost all of our understanding of the world. The world is too big to measure directly. Measuring representative samples is a way to understand the entire picture.

Popular and academic literature are both full of examples of poor sample selection resulting in flawed conclusions about the population. Some of the most famous examples relied on sampling from telephone books (in the days when phone books still mattered and only relatively wealthy people had telephones) resulting in skewed samples.

This post is not about bias in sample selection but rather the simpler matter of sample sizes.

Population size is usually irrelevant to sample size

I’ve read too often the quote: “Your sample was only 60 people from a population of 100,000. That’s not statistically relevant.” This is of course plain wrong, and frustratingly widespread.

Required Sample Size is dictated by:

  • How accurate one needs the estimate to be
  • The standard deviation of the population
  • The homogeneity of the population

Only in exceptional circumstances does population size matter at all. To demonstrate this, consider the graph of the standard error of the mean estimate as the sample size increases, for a population of 1,000 with a population standard deviation of 25.

Standard Error as Sample Size increases for population of 1,000

The standard error drops very quickly at first, then decreases only gradually thereafter, even as the sample grows to 100. Let’s see how this compares to a larger population of 10,000.
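The comparison can be sketched directly with the textbook finite population correction. This is a minimal illustration using the standard deviation of 25 from the example above; the function name is my own.

```python
import math

def standard_error(sd, n, population=None):
    """Standard error of the sample mean for a sample of size n.

    For a finite population of size N, the textbook finite population
    correction sqrt((N - n) / (N - 1)) is applied; population=None
    treats the population as effectively infinite.
    """
    se = sd / math.sqrt(n)
    if population is not None:
        se *= math.sqrt((population - n) / (population - 1))
    return se

# Population standard deviation of 25, as in the example above.
for n in (10, 60, 100):
    print(n,
          round(standard_error(25, n, population=1_000), 3),
          round(standard_error(25, n, population=10_000), 3),
          round(standard_error(25, n), 3))
```

For a sample of 60, the standard error against a population of 10,000 is within a fraction of a percent of the infinite-population figure, and even against a population of only 1,000 it differs by just a few percent: the sample size, not the population size, does almost all the work.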

Birdy statistics

I’m not sure about this fascinating article on birds evolving to avoid cars in the US.

The story is that fewer cliff swallows are being killed on the roads AND those birds killed have longer than average wings. The argument here is that longer wings make for less agility, making the birds more likely to be killed by cars.

So far so good. But then:

The authors of the study found that over a 30-year period, annual cliff swallow roadkill declined steadily from 20 birds per season in 1984 and 1985 to fewer than five birds per season during the last five years. Over the same period, traffic volumes remained constant and the overall bird population increased.

I am not an ornithologist or evolutionary expert, but I just can’t see how the deaths of between five and 20 birds per season could create enough selection pressure to change the wingspan.

The original research summary is far more persuasive than the article. It shows graphs and statistical test results for decreasing average population wing size and increasing average road-kill wing size over time.

Why the average wingspan of cliff swallows killed by vehicles should increase is left unexplained. It does rather suggest potential measurement or confirmation bias from the research team – once the hypothesis starts looking interesting, it would be very easy to unintentionally bias the measurements. Measuring wingspan accurately to within a few millimetres is fraught with risks of subjective error.

Further, it looks like around three data points contribute significantly to the low p-values of the tests, and I would be very curious to know how robust the results were to the removal of these influential points. It looks like the trends might remain, but without anything close to the significance suggested by the original research.
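The robustness check I have in mind is a simple leave-one-out exercise. The wing measurements below are made up for illustration (they are not the study’s data): a mostly flat series with one extreme point shows how a single influential observation can carry almost all of an apparently strong trend.

```python
# Leave-one-out influence sketch on MADE-UP data, not the study's measurements.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

years = list(range(10))
# Mostly flat measurements, with one extreme final point driving the trend.
wings = [108.0, 107.8, 108.1, 107.9, 108.2, 107.7, 108.0, 108.1, 107.9, 111.0]

r_all = pearson_r(years, wings)

# Drop each point in turn and keep the weakest remaining correlation.
r_weakest = min(
    abs(pearson_r(years[:i] + years[i + 1:], wings[:i] + wings[i + 1:]))
    for i in range(len(years))
)
# r_all looks like a respectable trend; r_weakest shows it nearly vanishes
# once the single influential point is removed.
```

With data like this, the full-sample correlation looks respectable while the weakest leave-one-out correlation is close to zero. That is exactly the sensitivity I would want to see reported.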

Finally, the clustering of wing measurement points in certain years suggests different levels of care and accuracy in measurement and potential “anchoring and adjustment bias”. It’s very hard to apply the same measurement protocols over 30 years.

So, fascinating research, interesting conclusion, but I’m left somehow unconvinced. It’s a pity the statistics applied weren’t a little more robust and the obvious criticisms weren’t addressed.


Fixing SA education – political will not (only) money

I blog from time to time about education in South Africa and its frightening link to unemployment and all the societal ills that go along with that. I also point out that as a nation we spend a fair amount of money on education with very poor results.

This story about absenteeism amongst South African teachers goes some way to explaining the problem.

Teachers in our public school system took an average of 19 days of sick leave per year. I also blog about the dangers of averages. For every teacher who doesn’t take sick leave (and I’m sure there are many) there are teachers taking more than 19 days of sick leave per year.

What’s interesting here is that not only is this an astonishingly high number, it’s also clearly more than the 10 days per year, averaged over a rolling three-year cycle, that is allowed under the Basic Conditions of Employment Act. Let’s also not forget that while teachers should probably be paid more in an ideal world, they already get vastly more annual leave than most.

I’d also like a four day week every second week thank you.

How exactly are these teachers allowed to take so much sick leave? Well unfortunately the answer is the same as why our education system is in such a sorry state. Poorly trained, poorly motivated teachers without a culture of pride in their work, overly strong unions and no political will to do anything about it.

Credit Suisse annual update on market performance

Credit Suisse has for several years now put out an annual Global Investment Returns Yearbook, and the 2013 edition is out now.

It’s worth reading in its entirety for the insights. I don’t agree with everything there, and I certainly don’t agree with the widely held view (not held by the authors themselves) that the universe of countries included in the survey is somehow representative of the world.

The countries chosen have an absolutely clear bias in their selection. They are successful economies with successful financial markets. They are included by virtue of their long-term success and capital growth and returns for investors.

The authors know this, but many readers don’t.  The returns per this survey are an overly rosy view of possible future returns.

The Perfect Storm Part 1 – IFRS reporting under SAM

A client recently mentioned that they were concerned about the implications that the adoption of Solvency Assessment and Management (SAM) would have for insurance accounting under current IFRS4.

The apparent concern was that measurement of policyholder liabilities for IFRS reporting would change to follow SAM automatically.

Let me start out by saying this is categorically not the case. The adoption of SAM should not change IFRS measurement of insurance liabilities. In this post I’ll cover some of the technical details and common misconceptions of IFRS4 to demonstrate why this conclusion is so clear.

Does being richer make you feel better than being cooler?

Economists (and actuaries) like to measure things.

The easier to measure and the more reliable the measure, the more we like to measure it. This is not unlike the drunk looking for his keys under the street lamp because that’s where the light is even if it isn’t where he dropped the keys.

Sometimes the most important things to measure are very difficult to measure reliably. Happiness is one of these things.  Economists have been trying to measure this for decades with interesting, counter-intuitive and sometimes contradictory results.

Recent research suggests that maybe money does make people happier after all.

The Perfect Storm – Part 0

The world of financial reporting for insurers has never been this close to the edge.

There is more change brewing now even than when Europe adopted “European Embedded Values” and later “Market Consistent Embedded Values”. The irony is that Embedded Values may well fall away as a result of the latest change.

So what is changing?

  1. Solvency Assessment and Management (SAM) is still planned for 2015 in South Africa. SAM will change the calculation of actuarial reserves, or Technical Provisions as they are now known, for regulatory reporting purposes. Solvency II in Europe is now likely to follow rather than precede SAM by a few years, but with nearly identical implications.
  2. IFRS4, the accounting standard covering insurance contracts, is due for a radical change effective in 2016/2017, although this is years later than originally planned. IFRS4 “Phase 2”, as it is known, throws out most of what we’re used to in terms of profit recognition, financial impact of assumption changes, and impacts of asset and liability mismatches, and may very well push insurers to value their assets on a different basis.
  3. IFRS9, a new standard replacing IAS39 and covering financial instruments, whether these are assets or liabilities, will poke and prod insurers into different decisions now and possibly before knowing exactly how IFRS4 will pan out.
  4. Finally, although this part is still speculative, Embedded Value reporting may fall away as SAM and Solvency II achieve much of the objectives of Embedded Value.

This post is the first in a series covering important aspects of the change in financial reporting standards, covering news of the developments as it emerges as well as the likely implications for financial reporting, product design, ALM, financial reinsurance and others. I’d encourage you to post comments or questions on this or later posts and I’ll try to answer those through the series.

  • Part 1 – IFRS reporting under SAM
  • Part 2 – EV in a SAM/Solvency II world
  • Part 3 – Apocalypse! – SAM as the tax basis
  • Part 4 – Acquisition accounting under IFRS4 Phase II – a little speculation