Graduation of Mortality Tables (1825-1867)

The use of pooled mortality data increased the sample sizes available for the development of mortality tables. But statistical noise inevitably remained in mortality rate estimates, and this was especially true when rates were estimated as a function of both the age of the policyholder and the duration of the policy. Visual inspection of the Carlisle table highlighted that it could also be true for the contemporary tables based on population data. Some back-of-the-envelope calculations show why. A total of 1,840 deaths were observed over the nine-year span used in the Carlisle data set, of which 173 were at ages between 60 and 70.[1] Spread across roughly ten individual years of age, that averages fewer than 20 deaths observed at age 60: a single observation of one more death of a 60-year-old within the nine-year period would result in a roughly 5% proportional increase in the mortality rate estimate. In the Carlisle table, the mortality rate of 60-70-year-olds increases by an average proportion of 4% per year of age. So the 'noise-to-signal' ratio was high. Data samples of this size would unavoidably result in material irregularities in the observed progression of mortality rates as a function of age.

Actuarial thinking on the smoothing, or graduation, of age-dependent mortality rate estimates started to develop in the 1820s. Broadly speaking, two types of approach emerged: a non-parametric smoothing approach that set the mortality rate of a given age as a weighted average of the ‘raw’ mortality rates estimated across a range of ages centred around the given age; and a parametric approach that specified a functional form for how mortality rates varied as a function of age (and potentially policy duration), and then fitted the parameters of that function to the observed mortality rates. These two approaches were focused on the same objective, but were philosophically different. The latter approach aimed to provide an explanatory ‘law of mortality’ that was consistent with the data, whilst the former merely tried to remove the ‘noise’ in the statistical samples of mortality rates. In a wider context than actuarial thought, this was an era of exploration in the application of statistical approaches to social science, and the notion of scientific ‘laws’ that could explain social phenomena was in keeping with the zeitgeist of the first half of the nineteenth century. Perhaps partly for this reason, the explanatory approach caught the actuarial imagination and became the dominant method. As we shall see below, it also had more grounded actuarial advantages.

To gain an understanding of these approaches and how they were implemented, we need to recall the overall state of development of statistics during this era. As we know from Chap. 2, the method of least squares had been developed and placed in a statistically rigorous context by Legendre, Laplace and Gauss a decade or so earlier. But this method had not yet been used to develop a best fit for a function that describes how a dependent variable (in this case, the mortality rate) varies with an independent variable (in this case, age). Perhaps surprisingly in retrospect, it took another half-century before a statistical theory of regression was developed and applied, when Francis Galton considered the hereditary dependencies of plants.[2] And so, lacking a statistical framework to consider how information from neighbouring estimates could statistically improve the estimate of a given point, the methods proposed for fitting the 'smoothed' rates were inevitably somewhat ad hoc. There was also some debate about what should be smoothed. Efforts were focused on the smoothing of the mortality rates, but some thinkers of the time argued that it would be preferable to apply the smoothing process directly to the ultimate variable of interest: the value of a life contingency. Writing in 1838, the influential Augustus De Morgan argued:

The events of single years are subject to considerable error, and generally present such varieties of fluctuation, that it has become usual to take some arbitrary and purely hypothetical mode of introducing regularity. This practice cannot be too strongly condemned, since the tables thereby lose some of their physical facts, without any advantage ultimately gained. For if by using the raw result of experiments, tables of annuities were rendered unequal and irregular, it would be as easy, and much more safe, to apply the arbitrary method of correction to the money results themselves, than to introduce it at a previous stage of the process.[3]

This perspective did not prevail: virtually all actuarial thinking of the time focused on smoothing the observed mortality rates. John Finlaison, in his parliamentary report of 1829 on government annuity pricing, employed a couple of non-parametric smoothing formulae in his experience mortality tables, such as:

\[
P'_x = \frac{P_{x-4} + 2P_{x-3} + 3P_{x-2} + 4P_{x-1} + 5P_x + 4P_{x+1} + 3P_{x+2} + 2P_{x+3} + P_{x+4}}{25}
\]

where \(P_x\) is the observed probability of a life aged \(x\) surviving one year and \(P'_x\) is its graduated value.

In Finlaison's approach, each smoothed mortality rate is set as a weighted average of the 'raw' mortality rate observations for ages up to four years older and younger than the age of the rate being estimated. The weights decrease with the distance of the age of the observation from the age being estimated. The above formula and its logic are quite similar to a modern-day local or non-parametric regression method.
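As a concrete illustration, the short sketch below applies the nine-point triangular weighting shown above to a vector of raw survival probabilities. It is a minimal sketch, not Finlaison's own procedure: the function name is ours, and the ends of the table, where a full nine-point window is unavailable, are simply left unsmoothed.

```python
import numpy as np

def finlaison_smooth(p_raw):
    """Graduate raw one-year survival probabilities with a
    nine-point triangular (1,2,3,4,5,4,3,2,1)/25 moving average."""
    weights = np.array([1, 2, 3, 4, 5, 4, 3, 2, 1]) / 25.0
    smoothed = np.asarray(p_raw, dtype=float).copy()
    # Only ages with four neighbours on each side can be smoothed;
    # the ends of the table retain their raw values here.
    for i in range(4, len(smoothed) - 4):
        smoothed[i] = np.dot(weights, smoothed[i - 4:i + 5])
    return smoothed
```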

In 1839, W.S.B. Woolhouse, in the early years of a long and distinguished career as an actuary and mathematician, suggested another non-parametric smoothing approach in his paper on the observed mortality rates of the Indian army.[4] Woolhouse’s approach made adjustments to the observed mortality rates such that the time progression of the lives remaining in a closed population would have a regular pattern. Statistically, he aimed to ensure that the pattern of the fourth order of differences in the progression of lives remained as smooth as possible. He devised an iterative arithmetic process that could be implemented to obtain this objective.[5] This approach entailed more arithmetic manipulation than Finlaison’s smoothing approach, but had a more explicit statistical objective.
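Woolhouse's iterative arithmetic procedure is too involved to reproduce here, but his statistical objective is easy to state in code. The following is a hedged sketch of the diagnostic only: it computes the fourth-order differences of a column of lives, the sequence whose regularity Woolhouse sought to maximise.

```python
import numpy as np

def fourth_differences(lives):
    """Fourth-order differences of a column of lives l_x.
    Woolhouse's criterion: a well-graduated table should make
    this sequence progress as smoothly as possible."""
    return np.diff(np.asarray(lives, dtype=float), n=4)
```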

Finlaison and Woolhouse appeared satisfied with the performance of these methods. The Woolhouse method had a greater impact on actuarial practice and it was applied in the development of the Seventeen Offices tables of 1843. But others had less success with these approaches. In his Amicable experience analysis of 1841, Galloway attempted to use a smoothing formula of the kind proposed by Finlaison but found that ‘considerable anomalies remained’.[6] He therefore reverted to the use of a parametric function, and this increasingly became the standard actuarial practice.

The parametric function approach had another advantage over the non-parametric smoothing methods. Since the time of de Moivre, it had been recognised that specifying a particular mathematical form for the behaviour of mortality could provide an annuity pricing formula that involved significantly fewer arithmetic operations than the explicit calculation of expected cashflows directly from a given set of mortality rates. De Moivre's assumption of arithmetic decrements in deaths could be viewed as the first parametric form of mortality graduation. For de Moivre, this assumption was motivated entirely by the improvement in the efficiency of the annuity pricing calculation. One hundred years later, enough progress had been made in arithmetic computation to make full calculation of single-life annuity prices accessible. But the calculation of the prices of annuities written on two or three lives was still very challenging. So in the early nineteenth century actuaries such as Gompertz hoped that parametric functions for mortality rates could kill two birds with one stone: providing an appropriate way of smoothing out the noise in sampled mortality rates, and providing efficient formulae for the pricing of complex annuities.

Gompertz started a revolution in graduation thinking in 1825 when he developed a parametric function that was intended to be consistent with the fundamental characteristics of how mortality should behave as a function of age (a ‘law of human mortality’)[7]:

It is possible that death may be the consequence of two generally co-existing causes; the one, chance, without previous disposition to death or deterioration; the other, a deterioration, or an increased inability to withstand destruction.[8]

This suggested there were two forms of exposure to mortality: one that was constant across all ages; and the other that increased with age. To model the behaviour of the age-varying component of the mortality rate, he assumed ‘the average exhaustions of a man’s power to avoid death were such that at the end of equal infinitely small intervals of time, he lost equal portions of his remaining power to oppose destruction which he had at the commencement of those intervals’.[9]

The above statement implied that the age-dependent component of the mortality rate increased exponentially with age. Gompertz then considered how well this assumed mortality behaviour could fit standard mortality tables such as Price's Northampton table and Milne's Carlisle table. Curiously, when Gompertz came to the application of his formula, he chose to omit the age-independent mortality component that he had earlier described. Hence a 'Gompertz function' only includes an age-dependent exposure, even though he expressly identified a constant age-independent source of mortality ('chance without ... deterioration').
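One way to formalise why Gompertz's assumption of equal proportional losses of 'power' implies exponentially increasing mortality is the following sketch; the symbol \(h(x)\) for the remaining 'power to oppose destruction' is our own notation, not Gompertz's:

\[
\frac{dh}{dx} = -\theta\,h(x) \;\Rightarrow\; h(x) = h(0)\,e^{-\theta x}
\]

If the intensity of mortality is taken to be inversely proportional to the remaining power, then \(\mu_x \propto 1/h(x) \propto e^{\theta x}\): the age-dependent component of mortality grows exponentially with age.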

Gompertz originally expressed his function in terms of the number living at age x, \(l_x\), rather than the force of mortality. T.R. Edmonds, writing a few years later in 1832,[10] defined the force of mortality, \(\mu_x\), as the continuously compounded rate of mortality and expressed the formula in those terms. The modern form of the Gompertz function would typically be written:

\[
\mu_x = B\,e^{\theta x}
\]

It was implemented by Edmonds without recourse to the exponential function as:

\[
\mu_x = B\,c^x
\]

where \(c = e^{\theta}\).

Today, such a function would typically be fitted by finding the parameters that minimise the squared errors between the observed and fitted mortality rates. As discussed above, this regression-style approach to function fitting had not yet been applied to statistical problems, and Gompertz used a more heuristic approach. He considered the change in mortality rates that applied over the ten-year gap between ages 15 and 25, and over the ten-year gap between ages 45 and 55. These observations could be used to solve two simultaneous equations that uniquely determined the two parameters of the function.
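To see how two such observations can pin down the two parameters, here is one hedged reconstruction of the scheme, taking the observed 'changes' to be differences in the force of mortality (the exact equations Gompertz solved are not recorded here):

\[
\mu_{25} - \mu_{15} = B\,c^{15}\left(c^{10} - 1\right), \qquad
\mu_{55} - \mu_{45} = B\,c^{45}\left(c^{10} - 1\right)
\]

Dividing the second equation by the first eliminates \(B\) and gives \(c^{30} = (\mu_{55} - \mu_{45})/(\mu_{25} - \mu_{15})\); \(c\) follows by taking the thirtieth root, and \(B\) is then recovered from either equation.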

Whilst the parameters would fit exactly to those equations, they would not produce an exact fit to all the mortality rates of the table. However, he was pleased with the quality of fit he obtained when he compared his function to the observed rates of the Northampton table between ages 15 and 60, noting: 'This equation between number living and the age is deserving of attention, because it appears corroborated during a long portion of life by experience'.[11] Figure 3.4 shows a Gompertz function fitted to the Northampton data for ages 15-60.

The above chart illustrates how the Gompertz function can provide a very useful graduation of the Northampton table over this range of ages. However, the limitations of the function become apparent when a broader range of ages is considered. Figure 3.5 extends the fit to the 15-80 age range, again showing the fitted function values and the observed mortality rates.

Clearly, the two-parameter function is unable to provide an adequate fit over this wider age range. The same conclusion was reached when Gompertz applied the function to other standard mortality tables. Nonetheless, the Gompertz law of human mortality was widely celebrated by actuaries and medical thinkers of the time. De Morgan wrote of Gompertz: ‘As this ingenious paper contains a deduction from a principle of high probability, and terminates in a conclusion which accords in a great degree with observed facts, it must always be considered a very remarkable page in the history of the enquiry before us.’[12]

It was clear, however, that as a practical actuarial tool, improvements were required. William Makeham, another senior actuary of his generation (and also a noted mathematician), proposed a natural generalisation of the Gompertz law in papers published in 1859[13] and 1867[14]: he suggested including the age-independent parameter that Gompertz had described but chosen to ignore in his formula. The force of mortality could then be written as:

\[
\mu_x = A + B\,c^x
\]


Fig. 3.5 Gompertz mortality function fitted to Northampton table (ages 15-80)

This was, of course, exactly in keeping with Gompertz's original description of the effects of mortality on age. We can only speculate as to why Gompertz did not proceed directly to this functional form in his paper of 40 years earlier. It would clearly have resulted in a more complicated fitting process, but his heuristic fitting approach could be extended to the three-parameter case in a straightforward way. Makeham found that the additional degree of freedom permitted significantly more accurate fits to mortality tables, as illustrated below in Fig. 3.6.
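By way of contrast with the heuristic fitting of Makeham's era, a modern least-squares fit of the three-parameter function takes only a few lines of code. The sketch below is illustrative only: the crude rates are synthetic placeholders rather than the Northampton data, and scipy's generic curve_fit stands in for any purpose-built graduation routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def makeham(x, A, B, c):
    """Makeham force of mortality: A + B * c**x."""
    return A + B * np.power(c, x)

# Hypothetical crude rates by age (placeholders, not real table data).
ages = np.arange(15, 81)
crude = 0.005 + 0.00003 * 1.1 ** ages  # stand-in for observed rates

# Fit by minimising squared errors between observed and fitted rates.
params, _ = curve_fit(makeham, ages, crude, p0=[0.005, 1e-5, 1.1])
A, B, c = params
```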

Whilst the additional degree of freedom naturally resulted in better fits to mortality data, Makeham was at pains to emphasise that his extension was not an arbitrary additional degree of freedom but rather something that captured the characteristics of a fundamental law of mortality. His extension, he wrote, ‘in no way interferes with the philosophical principle upon which Mr Gompertz has shown his theory to be based: a feature which distinguishes his formula from all others which have hitherto been proposed, and which doubtless accounts for the favourable reception it has met with from the highest scientific authorities’.[14]

The three-parameter Makeham function could provide a good fit to mortality data within the range of ages of primary importance to life assurers,


Fig. 3.6 Makeham and Gompertz mortality functions fitted to Northampton table (ages 15-80)

but Makeham and Gompertz both recognised that such functions could not provide a satisfactory fit across the entire spectrum of ages in mortality tables, particularly at very young and very old ages. They each proposed various extensions. Gompertz proposed piecewise fits, with differently parameterised Gompertz functions applied to specified age bands.[15] Makeham proposed extending the function further by adding polynomial terms.[16] Elsewhere in Europe, other mathematically inclined actuaries developed similar functions for graduation purposes. For example, the leading nineteenth-century Danish actuary T.N. Thiele proposed a seven-parameter exponential function in 1871.[17]

The significant developments in statistics that occurred in the early twentieth century provided a further fillip to actuarial research in mortality graduation. Elderton[18] and Ogborn[19] each considered the application of Pearson frequency curves to mortality table graduation, but neither was able to demonstrate any significant advance in performance relative to the Gompertz-Makeham framework. That framework remains today an important and well-used part of the toolkit of mortality actuaries and demographers.

  • [1] Milne (1815), p. 405.
  • [2] See Stigler (1986) Chapter 8.
  • [3] De Morgan (1838), p. 162.
  • [4] Woolhouse (1839).
  • [5] Woolhouse (1839), p. 7.
  • [6] Galloway (1841), p. ix.
  • [7] Gompertz (1825).
  • [8] Gompertz (1825), p. 517.
  • [9] Gompertz (1825), p. 518.
  • [10] Edmonds (1832).
  • [11] Gompertz (1825), p. 519.
  • [12] De Morgan (1839).
  • [13] Makeham (1859).
  • [14] Makeham (1867), p. 333.
  • [15] Gompertz (1871).
  • [16] Makeham (1889).
  • [17] Thiele (1871).
  • [18] Elderton (1934).
  • [19] Ogborn (1953).
 