Book Review: Loss Coverage – Why Insurance Works Better with Some Adverse Selection

In his book, Loss Coverage: Why Insurance Works Better with Some Adverse Selection, Guy Thomas proposes the interesting point that adverse selection may not be as harmful as many actuaries believe. He actually goes further and suggests that, at least from a policy perspective, adverse selection may be a good thing.

This is particularly relevant given the ambition of some InsurTech players to hyper-select risks or to price on many more factors than are traditionally used in order to gain a competitive advantage. Thomas doesn’t argue that it will be in individual insurers’ interests to allow adverse selection, but if these companies are successful it may then have implications for policy makers.

Incidentally, there are some interesting reasons for insurers themselves (with commercial interests) to be wary of selecting too well, counterintuitive as that may seem, but more on that for another time.

The book itself is a mixture of qualitative arguments and gentle reasoning with enough maths to keep you interested if you are that way inclined. I struggled to finish the book and found the points belaboured after a while.

The main reason I struggled with the book, though, is that I believe it assumes its conclusion from the start: that “increased loss coverage is universally a good thing”. I’m going to explain a little of that here. If you are going to read the book, it might be useful to have these thoughts in mind going into it and see whether you agree.

The premise of the book

Loss coverage is defined as the expected amount of claims covered by insurance. Thomas demonstrates that, under certain pricing and behavioural assumptions, more adverse selection leads to higher loss coverage.

The rationale is that without accurate pricing, some premiums will be set at a level too high and some too low for the specific risk. Some good risks who feel the premium overstates their risk will decline cover. Since more of the higher-risk customers are getting a good deal, and recognise the good deal, they will retain cover. Since some of those high-risk customers would have felt an accurate price was too high, there could be an increase in high-risk customers with cover. They contribute disproportionately to loss coverage given their higher probability of claim, and therefore overall loss coverage can go up.
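To make the mechanism concrete, here is a minimal numerical sketch in Python. The population sizes, claim probabilities and take-up rates are invented for illustration; they are not figures from the book.

```python
# Illustrative sketch of the loss coverage mechanism.
# All figures below are invented for illustration, not taken from the book.

def loss_coverage(groups):
    """Expected number of claims covered by insurance:
    the sum over groups of (number insured) x (claim probability)."""
    return sum(n_insured * p for n_insured, p in groups)

# Population: 800 low risks (p = 0.01) and 200 high risks (p = 0.04).

# Risk-differentiated premiums: assume 50% of each group buys cover
# at its actuarially fair price.
differentiated = [(400, 0.01), (100, 0.04)]

# One pooled premium for all: low risks find the price too high and
# take-up falls; high risks see a bargain and take-up rises.
pooled = [(200, 0.01), (180, 0.04)]

lc_diff = loss_coverage(differentiated)   # 400*0.01 + 100*0.04 = 8.0
lc_pool = loss_coverage(pooled)           # 200*0.01 + 180*0.04 = 9.2

print(f"Differentiated: {400 + 100} lives insured, loss coverage {lc_diff:.1f}")
print(f"Pooled:         {200 + 180} lives insured, loss coverage {lc_pool:.1f}")
```

Under these assumed take-up rates, pooled pricing insures fewer lives (380 versus 500) but covers more expected claims (9.2 versus 8.0), which is exactly the trade-off the book’s argument turns on.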

These assumptions are not particularly robust and the book even deals with examples “taken to the logical extreme” that show no increase in loss coverage and a substantial decrease in the number of lives covered.

A limited view on the value of insurance

An increase in the loss coverage measure means that more claims are expected to be covered by insurance as a result of allowing some adverse selection. However, this is at the cost of fewer individual risks being covered by insurance in total.

The argument thus neglects part of the value of insurance. Having insurance, even if one is fortunate enough not to claim, allows a less anxious existence, and the ownership and use of precious assets that would be irrational without having transferred the risk to an insurer.

This is not merely a “peace of mind” value. The ability to optimise one’s risk budget by reducing certain risks and taking on others allows for risk-taking and economic growth. Insurance exists in the first place because it is not efficient for individuals to bear their own risks without pooling or transfer.

Questionable measures on ultra high probability claims

Why is the “risk greatest for those with the highest probability of claiming”? Under certain definitions of risk that is the case. But it’s not universal. If the probability of claim is 100%, I’d argue there is no “risk” at all. That might be a trivial case, but what about where the probability is 75%?
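One way to formalise this intuition (my framing, not the book’s) is via the standard deviation of a Bernoulli loss: for a fixed loss amount incurred with probability p, the uncertainty is proportional to sqrt(p(1 − p)), which vanishes entirely at p = 1.

```python
# Standard deviation of a fixed loss incurred with probability p.
# This is my own framing of the high-claim-probability point, not the book's.

def loss_std_dev(p, loss=1.0):
    """Std deviation of a Bernoulli loss: loss * sqrt(p * (1 - p))."""
    return loss * (p * (1 - p)) ** 0.5

for p in [0.01, 0.25, 0.50, 0.75, 0.99, 1.00]:
    print(f"p = {p:.2f}: std dev of loss = {loss_std_dev(p):.3f}")
```

The uncertainty peaks at p = 0.5 and falls to zero at p = 1: a certain loss is an expense to budget for, not a risk to insure. At p = 0.75 there is still some variance, but much of the premium is simply pre-paying an expected cost.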

This reminds me of a product idea I never liked – insurance for taxi tyres. Taxi tyres are a consumable item that are replaced sometimes two or three times per year. The risk of having to replace them slightly earlier than planned hardly feels like a risk worth insuring. The probability of claim is too high and the cost of the claim too low. Would optimal loss coverage have all of these tyres insured?

And the book?

The book should have been a paper. There isn’t enough there to warrant the length or the price, nor is it sufficiently interesting or amusing to make me want to finish it.

Published by David Kirk

The opinions expressed on this site are those of the author and other commenters and are not necessarily those of his employer or any other organisation. David Kirk runs Milliman’s actuarial consulting practice in Africa. He is an actuary and is the creator of New Business Margin on Revenue. He specialises in risk and capital management, regulatory change and insurance strategy. He also has extensive experience in embedded value reporting, insurance-related IFRS and share option valuation.

Join the conversation


  1. Hi,

    Thanks for your interest in my book. Although you are critical, this is a welcome sign that somebody has actually read the book (well, some of it, anyway).

    Re “reasons for insurers themselves to be wary of selecting too well”:
    I do touch on this on page 55, where I note that maximising loss coverage = maximising premium income. So it could be optimal for insurers, if profit loadings are roughly proportional to premiums. Another possibility is that as you add more classifications, you may think you are classifying better, when in fact you are just creating more opportunities for customers to game the classifications.

    Re “limited view of the value of insurance”:
    I do discuss insurance as a “probabilistic good” versus “reassurance good” on pages 52-53. However it is true that apart from that page, the book focuses solely on the probabilistic framing, which I think is the better framing for public policy, for the reasons given on p53. I note that you are arguing for a third concept – insurance as a “portfolio optimisation good” – which is different again. It would be useful to have a commonly agreed terminology around these three concepts (I had to make up the terminology in the book).

    Re “should have been a paper”:
    Well it was a 2008 paper in the Journal of Risk and Insurance – and several more recent follow-ups – but the trouble is nobody reads them! All the papers are here….

    Thanks again for the interest.

    Guy Thomas

    PS There should be a review in The Actuary next month.

    1. Thanks Guy for the response and food for further thought. I assume Google Alerts is part of your toolkit too.

      I revisited the sections you highlighted to see whether I missed anything. I have the kindle version so I hope the page numbers are consistent with your references.

      • On page 52 where you differentiate between probabilistic and reassurance goods, you don’t, as far as I can tell, take the value of reassurance beyond an intangible feel-good factor into the real-world territory, hyper-relevant for public policy, of a portfolio optimisation good. (I use your terms; they seem to do the job well and I’m also not aware of generally accepted terms here.)
      • On page 54/55:

        Maximising loss coverage is equivalent to maximising premium income. If profit loadings are proportional to premiums, maximising loss coverage could be a desirable objective for insurers. Even from the insurer’s perspective, adverse selection is not always a bad thing! This might help explain why insurers often appear rather slower to make use of every scrap of marginal information for risk classification than economic theory predicts, even when information is observable at zero or at negligible cost and apparently relevant to the risk.

        • It seems optimistic to assume profit is a constant proportion of premiums, since profit will often be the result of the balance between selection and anti-selection and the varying self-assessment of risk and risk appetite of prospective policyholders. With limited risk selection, the actual profit margin will depend on the mix of business within the broad risk categories allowed. I suppose this could stabilise at an equilibrium level, although the experience of non-underwritten medical insurance coverage in at least two markets I’m aware of shows how hard it is to prevent spiralling costs and premiums.
        • Further, though, given that Solvency II (and SAM in South Africa) capital requirements for underwriting risk are broadly proportional to premium volumes, and with modern pricing factoring in the cost of capital and a hurdle rate, the waters are further muddied, possibly in favour of lower-premium products from an economic-value-added perspective.
        • However, I have a simpler explanation for the practical under-utilisation of information by insurers – practical constraints such as quoting engines, desired simplicity for distribution channels, consistency with aggregator requirements, and in some cases, regulatory restrictions on risk selection (!). I think we may have similar thoughts on the complexity of fine-tuned risk ratings providing more opportunities for policyholders to game or exploit imperfections, but I want to do a more thorough post on that another time.
        • On a note related to the annuity postcode impact, I read an article in a recent issue of The Actuary magazine evaluating further geographical differences in mortality across England and Scotland. In South Africa, we have very marked differences in mortality across provinces, likely mostly a proxy for other underlying issues. This information isn’t used for non-underwritten products, even though it would be trivial to adjust prices based on distribution branch location. An interesting question has arisen as to whether this “known but not used” information will require insurers to separate these policies out into different cohorts under IFRS 17 and therefore show many more policies as onerous from inception. If so, this might be another push towards more refined pricing, less cross-subsidisation and maybe lower loss coverage. If your views are correct, this would be a push in the wrong direction.

      I enjoyed engaging with your ideas. I hope my coverage and our discussion here may prompt some more interest from others. If you remember, for your next book, throw another comment in here to let me know and you’ll have at least one repeat customer.

  2. Hi again,

    I agree the “portfolio optimisation good” is just not there in the book. To account for this you presumably need to get into individual utility maximisation, which would not lead to either loss coverage or coverage as a criterion. I’m not very enthusiastic about utility maximisation as a basis for public policy because the necessary assumptions (about utility functions) seem unknowable, but yes that is more the way an economist might think about it.

    On further reflection I’m not discouraged by your p = 0.75 example, for which my interpretation is: to “break” the construct of loss coverage, you have to extrematise so far that you “break” the construct of insurance itself (i.e. loss coverage makes little sense at p = 0.75, but nor does insurance itself).

    I don’t think that the scenario on Table 3.3 on p46, with severe adverse selection, itself “breaks” the construct of loss coverage. The main point of the book, as per the sub-title, is that insurance works better with SOME adverse selection, not with ANY AMOUNT of adverse selection. On this point, scenarios like Table 3.3 aren’t counter-examples; they just substantiate the word “some”.
