A History of British Actuarial Thought

Risk Theory (1954-1971)

British actuarial thought on general insurance over the one hundred years from 1850 to 1950 was primarily focused on attempting to apply the actuarial techniques then in successful use in life assurance to fields of general insurance. These attempts were continually stymied by the complexities found in general insurance business: in particular, by the heterogeneity in its claims experience and by the lack of stability engendered by its greater sensitivity to unpredictable social and economic change.

Risk theory offered a different analytical approach. Traditional actuarial techniques in life assurance and pensions were predicated on the idea that insurance risks were diversifiable and that insurers wrote business in such a way that these risks were indeed diversified away. Thus actuarial work in life assurance and pensions was traditionally set within a deterministic, riskless framework. Risk theory started from a different premise: it applied stochastic modelling techniques to model the random occurrence and impact of individual general insurance risks. This explicitly captured the variability that could arise from one claim to the next due to fluctuations in both claim size and frequency. It employed sophisticated mathematical models that went far beyond the quantitative techniques used in British actuarial science in mid-twentieth-century life and pensions.

The germs of risk theory—which we could loosely define as the stochastic modelling of individual insurance claim outcomes—were developed in Scandinavia and particularly in Sweden during the early decades of the twentieth century.[1] This initial work was focused on the modelling of mortality risk in life assurance, but it was found to have no practical use there, as its application to the large portfolios of homogeneous policies in life business left little random fluctuation—risk theory essentially produced the same results as the standard deterministic actuarial models. Following the end of the Second World War, actuarial researchers began increasingly to consider risk theory in the setting of general insurance. This largely occurred outside the UK. Researchers were again very active in Scandinavia during this period, most notably in Finland.[2] Somewhat later, technical actuaries such as Hilary Seal further developed and applied these ideas in the USA.[3]

The essence of the post-war general insurance risk theory framework can be summarised as follows. Individual claim events and individual claim sizes were each modelled as random variables. Claim events were typically assumed to be generated by a Poisson process. This could be generalised into a mixed Poisson process, where the Poisson rate was itself assumed to be a random variable. The claim size was typically assumed to have a skewed probability distribution such as the lognormal or gamma distribution. In individual risk theory, each policy in the portfolio had its own specified probability distribution for claim frequency and claim size. In collective risk theory, the models and calibrations were specified at the portfolio level without distinguishing which policies gave rise to the claims (but individual claim events were still modelled).
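The collective risk model described above can be illustrated with a minimal Monte Carlo sketch. The parameters below (a Poisson claim rate and lognormal claim-size parameters) are purely illustrative, not a calibration from any period dataset; the Poisson sampler uses Knuth's multiplication method since the Python standard library lacks one.

```python
import math
import random

def poisson_sample(rng, lam):
    """Draw a Poisson(lam) variate via Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_aggregate_claims(lam, mu, sigma, n_sims=10_000, seed=42):
    """One year of aggregate portfolio claims per simulation:
    claim count is Poisson(lam), each claim size is lognormal(mu, sigma)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        n_claims = poisson_sample(rng, lam)
        totals.append(sum(rng.lognormvariate(mu, sigma) for _ in range(n_claims)))
    return totals

totals = simulate_aggregate_claims(lam=2.0, mu=0.0, sigma=0.5)
mean_total = sum(totals) / len(totals)
# Theoretical expectation: lam * exp(mu + sigma^2 / 2) ≈ 2.266
```

The simulated mean should sit close to the theoretical pure premium, while the spread of `totals` captures the random fluctuation in both claim frequency and claim size that the deterministic life-assurance framework ignored.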

This analytical framework could provide many theoretical insights. At the most basic level, it was one way of calculating the expected level of claims, and hence the pure premium for insurance. It could also be used to model claims with and without reinsurance and to set the pure premium for a particular form of reinsurance. For a given level of premium and starting capital, the evolution of the ‘risk reserve’ could be modelled—that is, the stochastic projection over time of assets and premiums less claims and expenses. The modelling of this reserve could be used to estimate probabilities of ruin over various time horizons for a given starting reserve. Equivalently, it could be used to establish the starting level of reserve required to support a given probability of ruin.
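The risk-reserve projection can be sketched by extending the same Poisson/lognormal set-up: each simulated year adds the premium and subtracts aggregate claims, and ruin is recorded if the reserve ever falls below zero. All parameter values here are illustrative assumptions, not drawn from the source.

```python
import math
import random

def poisson_sample(rng, lam):
    """Draw a Poisson(lam) variate via Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def ruin_probability(initial_reserve, premium, lam, mu, sigma,
                     horizon=10, n_sims=5_000, seed=1):
    """Estimate the probability that the risk reserve (starting capital
    plus premiums less claims) falls below zero within `horizon` years."""
    rng = random.Random(seed)
    ruins = 0
    for _ in range(n_sims):
        reserve = initial_reserve
        for _ in range(horizon):
            n_claims = poisson_sample(rng, lam)
            claims = sum(rng.lognormvariate(mu, sigma) for _ in range(n_claims))
            reserve += premium - claims
            if reserve < 0:
                ruins += 1
                break
    return ruins / n_sims

# Same premium loading, two different starting reserves (illustrative values).
p_thin = ruin_probability(initial_reserve=1.0, premium=2.5, lam=2.0, mu=0.0, sigma=0.5)
p_thick = ruin_probability(initial_reserve=10.0, premium=2.5, lam=2.0, mu=0.0, sigma=0.5)
```

Running the estimate at several starting reserves, as in the last two lines, is the ‘equivalent’ calculation the text describes: searching for the starting reserve that brings the ruin probability down to a target level.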

This analysis of ruin probabilities was fundamentally very similar to the approach pioneered by Sidney Benjamin and adopted by the Maturity Guarantees Working Party for unit-linked investment guarantee reserving during the 1970s. In maturity guarantee reserving, the ruin probability modelling focused on the impact of stochastic variation in non-diversifiable (financial market) risk. Risk theory focused on the stochastic impact of what might be called under-diversification of risks that were theoretically diversifiable—that is, the impact of random fluctuations in claims processes that were usually assumed to be independent (though claims could also be highly correlated through exposure to a common event or risk factor). As we shall see below, Benjamin also played a significant role in developing British actuarial thought in general insurance.

Whilst the theoretical framework of risk theory was intuitive and powerful in the context of general insurance business, it did not necessarily provide a solution that was any less challenged than traditional actuarial techniques by the age-old practical general insurance modelling issues of data, heterogeneity and stability. Could the models of risk theory be reliably calibrated such that they could produce dependable quantitative output?

Risk theory never reached the mainstream of the British actuarial profession. Its primary British actuarial exponent during the post-war decades was Robert Beard. He liaised extensively with international actuarial colleagues from the late 1940s onwards, especially the leading Finnish thinkers in the discipline, to apply the subject to British general insurance business. Beard’s interest in risk theory was mainly stimulated by his experience in quantitative operational research during the Second World War rather than by his actuarial training.[4] He appealed to the Institute of Actuaries as early as 1948 to engage more actively in general insurance, but he received little encouragement. Faced with a less-than-enthusiastic Institute, Beard helped to establish ASTIN—Actuarial Studies in Non-Life Insurance—as a section of the International Actuarial Association.[5] Its journal—the ASTIN Bulletin—carried much of Beard’s substantial research output of the 1950s and 1960s.

Beard did manage to get a couple of papers on risk theory and its application to general insurance published in the Journal of the Institute of Actuaries in 1967. He also had a greater number published in the perhaps more liberally minded Journal of the Institute of Actuaries Students’ Society. Beard also co-wrote a book on risk theory,[6] first published in 1969, with two of the leading Finnish thinkers in the field, Professor Pentikainen and Dr Pesonen. The book was widely used in European actuarial university courses and subsequent editions were published in the 1970s and 1980s.

A notable Beard paper was published in the Journal of the Institute of Actuaries Students’ Society in 1954.[7] Beard gave British actuarial students an overview of risk theory and demonstrated its application in general insurance with an example based on a large US fire insurance claims dataset. The paper showed that the distribution of claim size could be well-fitted with a lognormal distribution, except for a handful of exceptionally large claims out of a dataset of more than a quarter of a million. It also showed how an excess-of-loss reinsurance treaty impacted on the insurance portfolio’s net claims probability distribution. He went on to analyse how the probability of ruin behaved over various time horizons for differing levels of starting reserves, and how the excess-of-loss treaty impacted on these results. It was an accessible and comprehensive practical case study on the application of risk theory to general insurance reserving, solvency assessment and risk management.

Beard was not a mere theorist. He was a working actuary who recognised the practical limitations of the mathematical framework and the data that was used to fuel it. For example, he noted that other US statistics showed that higher claim frequencies tended to be experienced in years of economic depression and lower ones in boom years, and that this could be an important source of heterogeneity and non-stationarity. He also noted that claims would tend not to be independent due to geographical exposures. Nonetheless, he held a deep conviction that there were significant practical insights for general insurance premium-setting and reserving that could be obtained from a sensible utilisation of available data within the sophisticated mathematical modelling of risk theory.

The editors of the Journal of the Institute of Actuaries permitted Beard a full seven pages in its September 1967 edition.[8] He argued that the risk theory approach of modelling claim frequency and claim size as separate statistical processes could make it easier to detect changes in patterns of claims experience and thus allow for faster premium rate adjustments to be made by the insurance office. This was a topical issue for the general insurance industry at the time, especially in motor insurance, where a deterioration in experience had occurred over a number of years without premium bases adequately reacting to avoid a sequence of multiple years of loss.

Although it was not accepted into the mainstream of the profession’s thinking at the time, Beard’s work of the 1950s and 1960s laid the foundations and provided the inspiration for broader quantitative research in general insurance by the British actuarial profession. Meanwhile, progress started to be made in the collation of relevant claims data in some general insurance lines, most notably motor insurance. In 1967, the British Insurance Association established the Motor Risk Statistics Bureau to pool claims experience data for several member insurers. This data, whilst challenged by differences in rating structures and policy types across member firms, provided a new source for the application of statistical techniques to pricing and reserving.

The 1970s saw more actuaries become engaged in general insurance practice. As noted above, the Institute eventually added general insurance to its examination syllabus in 1978. Some new British actuarial thought-leaders emerged in the 1970s who were steeped in practical experience of general insurance business. G.B. Hey was one such example, and he helped to break new ground in 1971 when he and P.D. Johnson wrote the first ever paper on motor insurance to be published in the Journal of the Institute of Actuaries.[9] Their paper was not as mathematical as typical risk theory works, but it was strongly influenced by Beard and his work in ASTIN.

Johnson and Hey analysed the efficacy of ‘experience rating’ in motor insurance—that is ‘a system by which the premium of the individual risk depends upon the claims experience of this same individual risk’.[10] In British motor insurance, this was known as the No Claims Discount (NCD) system, and it was a well-established feature of that market by the mid-1960s. The individual risk data used in an experience rating system still faced the same challenge as a risk rating approach based on broader historical claims experience: a longer period of history was required to effectively differentiate the risk heterogeneity amongst different policyholders, but older data may be of less relevance for the projection of future claims behaviour. For example, in NCD systems, the advances in the policyholder’s driving skills that would typically accompany increasing age and driving experience could render earlier claims experience irrelevant to expected future claims behaviour.

A typical UK NCD system of the time allowed the maximum claims discount to be obtained after four or five consecutive claim-free years. Johnson and Hey analysed how effective this chosen length of period was in differentiating different risks (under the assumption that the risks were indeed stationary). They considered a theoretical pool of policyholders where 75% of the group had a claim frequency of 0.1 and the remaining 25% had a higher claim frequency of 0.25. Their analysis showed that a policyholder in the low-claim-frequency group had a two in three chance of obtaining the maximum discount rating after four years, whilst the high-claim-frequency group policyholder had a corresponding probability of two in five. NCD systems made some contribution to distinguishing policyholder heterogeneity, but they were a blunt and limited tool that could only be highly effective with a long history of stationary data. This analysis highlighted that the source of the poor motor insurance industry experience in the years preceding the paper may not only have been due to high claims inflation as was generally suspected (and which was partly driven by changing judicial treatment of third-party injury claims). Another factor may have been that the average policyholder who earned the maximum NCD discount was not actually as good a risk as they had been assumed to be.
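Johnson and Hey’s two headline figures can be reproduced under the assumption that claims arrive as a Poisson process, so a claim-free year has probability exp(−frequency) and four independent claim-free years multiply together:

```python
import math

def p_max_discount(claim_freq, years=4):
    """Probability of `years` consecutive claim-free years, assuming
    annual claim counts are Poisson(claim_freq), so each claim-free
    year has probability exp(-claim_freq)."""
    return math.exp(-claim_freq * years)

print(round(p_max_discount(0.10), 3))  # 0.67, roughly two in three
print(round(p_max_discount(0.25), 3))  # 0.368, roughly two in five
```

The calculation makes the bluntness of the tool concrete: even a policyholder with two and a half times the claim frequency still reaches the maximum discount more than a third of the time.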

Aside from this theoretical analysis, Johnson and Hey’s paper focused on analysing the data made available through the Motor Risk Statistics Bureau (though they only considered the experience of a single member firm to avoid complications with comparability and consistency). They fitted an eight-factor regression model of claims frequencies using a least squares optimisation (factors were intuitive motor insurance risk factors such as age of policyholder, car rating group and NCD category). They found that the NCD category was a significant variable in the regression, even in the presence of the other factors, and hence concluded it was still an important element in the premium rating process, despite its noted limitations.
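A least-squares fit of this kind can be sketched via the normal equations. The two dummy factors and the data below are purely illustrative stand-ins, not Johnson and Hey’s eight factors or the Bureau’s data; the point is only the mechanics of regressing claim frequencies on rating factors.

```python
def fit_least_squares(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination
    with partial pivoting. A minimal stand-in for a least-squares fit."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
         for r in range(k)]
    rhs = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    for col in range(k):                       # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    beta = [0.0] * k                           # back substitution
    for r in reversed(range(k)):
        beta[r] = (rhs[r] - sum(A[r][c] * beta[c]
                                for c in range(r + 1, k))) / A[r][r]
    return beta

# Illustrative design matrix: intercept, "young driver" dummy, "max NCD" dummy.
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1], [1, 0, 0], [1, 1, 1]]
true_beta = [0.10, 0.15, -0.05]                # hypothetical factor effects
y = [sum(xi * b for xi, b in zip(row, true_beta)) for row in X]
beta = fit_least_squares(X, y)                 # recovers true_beta
```

In this noiseless toy example the fitted coefficients recover the assumed effects exactly; with real claims data, the size and significance of the NCD coefficient in the presence of the other factors is what carried Johnson and Hey’s conclusion.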

Despite such attempts to illustrate the practical application of risk theory ideas and statistical methods, risk theory was generally viewed through the 1950s, 1960s and even 1970s as too theoretical and quantitative to be of much practical value to the British actuary. This perspective is well-represented, though perhaps a little overstated, by the following passage in a Journal paper by Ryder published in 1976:

Risk theory is a rather esoteric branch of actuarial theory which has been extensively developed by the more theoretical continental actuarial tradition. The practical actuary, however, finds that he hardly ever uses this theory.[11]

The application of risk theory, however, did become an established element of British actuarial practice in general insurance in the 1980s and 1990s—a period when the profession developed a more prominent role in the sector and the actuarial approaches employed there became increasingly technical.

  • [1] See Lundberg (1909) for example.
  • [2] See, for example, Pentikainen (1952).
  • [3] See, for example, Seal (1969).
  • [4] Beard in Discussion, Plackett (1971), p. 355.
  • [5] Beard in Discussion, Abbott et al. (1974), p. 277.
  • [6] Beard et al. (1969).
  • [7] Beard (1954).
  • [8] Beard (1967).
  • [9] Johnson and Hey (1971).
  • [10] Johnson and Hey (1971), p. 202.
  • [11] Ryder (1976), p. 71.
 