Downwards counterfactual analysis

Stress and scenario testing are important risk assessment tools. They also provide a useful way to prepare in advance for adverse scenarios, so that management doesn’t have to create a response from first principles when something similar occurs.

But trying to imagine scenarios, particularly very severe scenarios, isn’t straightforward. We don’t have many examples of very extreme events.

Some insurers will dream up scenarios from scratch. It’s also common to refer to prior events and run the current business through those dark days. The Global Financial Crisis is a favourite – how would our business manage under credit spread spikes, drying up of liquidity, falling equity markets, higher lapses, lower sales, higher retrenchment claims, higher individual and corporate defaults, switches of funds out of equities, early withdrawals and surrenders, and increased call centre volumes?

Downwards counterfactual analysis is the: Continue reading “Downwards counterfactual analysis”

Credit Life regulations and reactions (2)

In part 1 I discussed the implications of basing premiums on initial balance or declining balance for profitability and the threat of substitute policies.

In this post I want to discuss substitute policies again, talk about cover for self-employed persons and definitions of waiting periods.

What is a substitute policy?

Substitute policies are one of the few drivers of real potential competition and therefore competitive markets for credit life in South Africa. That’s probably not the definition you were expecting but nevertheless it is true.

With some exceptions, credit life is not sold in a competitive or symmetrical environment and customers have little or no bargaining power.


A substitute policy is a policy from another insurer (not connected to the lender) that covers the same or similar benefits and legally must be accepted as a substitute for the cover required by the lender under the terms of the loan.

Historically, the rate of substitute policies was tiny – often less than 1%. Lenders and their associated insurers weren’t exactly incentivised to make switching an easy process. For smaller loans, and therefore smaller policies, the incremental acquisition costs can be prohibitive.

Substitute policies are gaining momentum

I am aware of several players specifically targeting existing credit life customers and aiming to switch these customers to their own products.

This has been enabled through:

  • standardising of credit life policies
  • bulking of many different small credit life policies into a larger one that is more cost effective to acquire and administer
  • technology (digital / online especially but also call centres) that can moderate costs
  • the growing awareness of how profitable these policies often are for a standalone insurer, even at the various caps imposed.

Lenders may need to supplement revenue on high-risk customers because interest rate caps apply, but the standalone insurer is focused on a reasonable underwriting result, not the level necessary to offset costs elsewhere.

What counts as a substitute policy / minimum prescribed benefits

A substitute policy simply needs to cover the minimum benefits from section 3 of the credit life regulations. This covers death, permanent disability, temporary disability and unemployment or loss of income.

These regulations can be difficult to interpret, but ultimately are clear: Continue reading “Credit Life regulations and reactions (2)”

Systemic risk primary poll

Is Systemic Risk in Insurance:


Behavioural economics and XKCD

This is premature for a book review as I’m only on page 3 or so. The book, Insurance and Behavioral Economics: Improving Decisions in the Most Misunderstood Industry, has been recommended to me a few times. (Okay, twice by the same person, but that also says something.) I’ll work my way through it and give you my views.

But this XKCD reminded me of the ubiquity of behavioural economics everywhere in life.

SA85-90 “combined” and more actuarial sloppiness

I know of far too many actuaries who think that the “average” SA85/90 table is an appropriate base for their insured lives mortality assumption.

It’s not.

It’s also a good example of “actuarial sloppiness”.

To be specific, it is inappropriate even if your current experience is a reasonable fit for the combined SA85/90 table.

SA85/90 was graduated based on South African insured lives data from 1985 to 1990. This period is important because it’s generally felt to be the last period in South Africa when HIV/AIDS would not have had a significant impact on mortality. (Estimates differ, but 1985 is often taken as the starting point for the HIV epidemic in South Africa, and even though there might have been some deaths within the first five years, these could not have affected a significant portion of the population.)

SA85/90 came in two versions, “light” and “heavy”. Somewhat disappointingly, no distinction was made between males and females. Light mortality reflected the typical historical insured life characteristics, which was pretty much white males. If I recall correctly, “Coloured” and “Indian” males were also combined into the light table. “Heavy” mortality reflected the growing black policyholder base in South Africa.

For all the awkwardness of this racial classification, the light and heavy tables reflect the dramatically different mortality in South Africa based on wealth, education, nutrition and access to healthcare. Combining the results into a single table wasn’t reliable since there were significant differences in mortality AND expected changes in the proportions of the heavy and light populations in the insured populations into the future.

A combined table was still created at the time. I suspect Rob Dorrington may have some regrets at having created this in the first place or at least in not having included a clearer health warning directly in the table name. The combined table reflects the weighted experience of light and heavy based on the relative sizes of the light and heavy sub-populations during the 1985 to 1990 period. I think a safer name would have been “SA85/90 arbitrary point in time combined table not to be used in practice”.

There is no particular reason to believe that the sub-population that you are modelling reflects these same weights. Even for the South African population as a whole these weights are no longer representative. The groups, at least in the superficial sense that any particular citizen can be viewed as coming from distinctly one group, will fairly obviously have experienced different mortality, but will also have experienced different fertility and immigration rates.

Our actuarial pursuit of separating people into smaller, homogeneous groups should also indicate that in most cases the sub-population you are modelling will more closely reflect one or the other of these groups rather than both of them.

But even if, just for the sake of argument, your sub-population of interest does reflect the same mix at each and every age as baked into the combined SA85/90 table, it would still be entirely inappropriate to use the table for all but the crudest of tasks. After all, there’s a reason for our penchant for homogeneous groups. If you model your sub-population for any length of time, the mix will surely change as those exposed to higher mortality die at a faster rate than those with lower mortality.

The first order impact would be that you would be modelling higher mortality over time than truly expected. Due to the relative mortality between the two populations differing by age, the actual outcome will be somewhat more complex than that and more difficult to estimate in advance. This is particularly important with insurance products where the timing of death is critically important to profitability.
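This drift is easy to see in a minimal sketch with two sub-populations. The flat, age-independent mortality rates and the 60/40 starting mix below are illustrative assumptions only, not SA85/90 values:

```python
# Two sub-populations with different (hypothetical) flat annual mortality rates.
q_light, q_heavy = 0.005, 0.020
l_light, l_heavy = 60_000.0, 40_000.0  # starting lives: a 60/40 mix

for year in range(21):
    total = l_light + l_heavy
    # Aggregate mortality rate of the combined population this year
    agg_q = (l_light * q_light + l_heavy * q_heavy) / total
    if year % 5 == 0:
        print(f"year {year:2d}: heavy share {l_heavy / total:.1%}, "
              f"aggregate q {agg_q:.4%}")
    # Each group dies at its own rate, so the mix shifts towards light lives
    l_light *= 1 - q_light
    l_heavy *= 1 - q_heavy
```

The heavy group’s share of survivors falls every year, so the aggregate mortality rate drifts down over the projection – exactly the behaviour a static combined table cannot capture.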

So, just because you can get a reasonable fit to your experience with an age- or percentage-adjusted SA85/90 combined table does not mean you have an appropriate basis for modelling future mortality. It may not be vastly different from a more robust approach, but it’s just sloppy.

The virtual irrelevancy of population size to required sample size

Statistics and sampling are fundamental to almost all of our understanding of the world. The world is too big to measure directly. Measuring representative samples is a way to understand the entire picture.

Popular and academic literature are both full of examples of poor sample selection resulting in flawed conclusions about the population. Some of the most famous examples relied on sampling from telephone books (in the days when phone books still mattered and only relatively wealthy people had telephones) resulting in skewed samples.

This post is not about bias in sample selection but rather the simpler matter of sample sizes.

Population size is usually irrelevant to sample size

I’ve read too often the quote: “Your sample was only 60 people from a population of 100,000. That’s not statistically relevant.” Which is of course plain wrong and frustratingly widespread.

Required sample size is dictated by:

  • How accurate one needs the estimate to be
  • The standard deviation of the population
  • The homogeneity of the population
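For the common case of estimating a population mean, the first two ingredients combine into the standard textbook formula n = (zσ/E)², which contains no population size at all. A quick sketch (the margin and confidence level are illustrative choices, not from any particular survey):

```python
import math

def required_sample_size(sigma, margin, z=1.96):
    """Sample size so the estimate of the mean is within +/- `margin`
    at roughly 95% confidence (z = 1.96). Population size N does not appear."""
    return math.ceil((z * sigma / margin) ** 2)

# Population standard deviation 25, mean wanted to within +/- 5:
print(required_sample_size(25, 5))  # 97, whether N is 1,000 or 1,000,000
```

Halving the margin of error quadruples the required sample – but changing the population size changes nothing.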

Only in exceptional circumstances does population size matter at all. To demonstrate this, consider the graph of the standard error of the mean estimate as the sample size increases for a population of 1,000 with a standard deviation of the members of the population of 25.
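The calculation behind the graph can be reproduced directly. The standard error of the sample mean is σ/√n, scaled by the finite population correction √((N−n)/(N−1)) when sampling without replacement from a finite population – a standard result, not specific to this post:

```python
import math

def se_mean(sigma, n, N=None):
    """Standard error of the sample mean; applies the finite population
    correction sqrt((N - n) / (N - 1)) when a population size N is given."""
    se = sigma / math.sqrt(n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))
    return se

sigma = 25
for n in (10, 60, 100):
    print(n,
          round(se_mean(sigma, n, N=1_000), 3),
          round(se_mean(sigma, n, N=10_000), 3))
```

Even at a sample of 100, the two population sizes give very similar standard errors; the correction only bites when n becomes a sizeable fraction of N.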

Standard Error as Sample Size increases for population of 1,000

The standard error drops very quickly at first, then decreases very gradually thereafter even for a large sample of 100. Let’s see how this compares to a larger population of 10,000. Continue reading “The virtual irrelevancy of population size to required sample size”

The importance of verification

It’s amazing to test this yourself. Hold down the button of a garage door opener and try to use your vehicle lock / unlock button. It doesn’t work. Simple as that. The signal from the garden variety garage door opener blocks the signal of most (all?) vehicle remotes.

Smart criminals are increasingly using this to stop owners from locking their doors, then stealing goodies from their cars. This seems to be getting worse in the Cape Town CBD at the moment.

So why am I talking about verification?

Well, preventing the problem requires either a high tech revamp of the vehicle remote industry, or for owners to actually observe or check that their doors are in fact locked – listening for the clack, watching the hazard lights flash, or even physically testing that the door is locked.

These methods work perfectly and are only the tiniest bit of extra effort.

Relying on models to simply give the right answer without checking that it makes sense is likely to get your credibility burgled. Form high level initial expectations, hypothesize plausible outcomes, check sensitivity to assumptions, reproduce initial results and run step-wise through individual changes, get a second opinion and fresh eyes.

This all takes a little time in the short term and saves everything in the long term.