Claim Reserving (1974-1996)

Between 1850 and 1970, the British actuarial profession made little headway in establishing itself as an influential and important component of Britain’s highly successful, global general insurance industry. It had tried to import its life assurance methods into general insurance and had learned that they were not adequate for the task of pricing and reserving for general insurance. It had seen and broadly rejected the stochastic approach of risk theory that had been developed and applied by overseas actuaries. Was there a role for British actuaries to play in general insurance? If so, what was it and what kinds of actuarial skills would it employ?

As noted above, over the course of the 1970s, there was an increase in the number of British actuaries working in general insurance business. A General Insurance Study Group was formed by the Institute in 1974. It produced many research papers, though most of these were merely deposited at the Staple Inn library and were not deemed worthy of publication in the Journal or discussion at sessional meetings. The profession recognised that there might be an opportunity for actuaries to play a more formal role in statutory reserving for general insurance business. The ‘freedom with publicity’ regime of actuarial life assurance reserving, which had been in place in some form or another since the 1870 Life Assurance Companies Act, provided an explicit, professional and statutory role for actuaries and gave them considerable discretion to exercise their expert professional judgement. No equivalent role existed in general insurance reserving in the 1970s. General insurance firms had to calculate statutory reserves, but no actuarial role was mandatory in reserving and no detailed disclosure of the methods used in assessing the reserves was required.

The collapse in 1971 of Vehicle and General, a British motor insurance firm, provided a further opportunity for the actuarial profession to press for a more active and statutory role in general insurance reserving. An Institute working party was established in the early 1970s with a remit to consider the statutory reserving regime for general insurers ‘with reference to uniformity and the verification of non-life reserves’.[1] Perhaps unsurprisingly, its report,[2] published in the Journal in 1974, concluded that only professional certification could address the challenges inherent in general insurance reserving. Nonetheless, even amongst actuaries there was scepticism that this could be a cure-all. In the Staple Inn discussion of the paper, J.L. Manches noted:

It was perhaps an oversimplification to believe that actuarial techniques were the best answer to accurate claims reserving. Life valuations were based on the application of established claim probabilities (i.e. mortality tables) to homogeneous groups of policies which generated claims of known amounts. None of that applied to non-life business, and there was bound to be the utmost difficulty in reaching agreement on any standard approach to reserving.[3]

There is certainly a hint of arrogance in the profession’s position that it could improve standards in general insurance reserving when it had done so little over the previous decades to encourage actuarial research and education in the field. However, it is important to note a shift in emphasis. No longer was the profession positioning itself as the leading provider of statistical or analytical methods; by the early 1970s it was clear that advanced techniques in mathematical statistics were emerging faster than the profession’s appetite to apply them. Instead, the profession’s argument was increasingly based on its ability to meet the need for sound professional judgement and broader financial acumen in the setting of prudential insurance reserves.

From the early 1970s until his death in the early 1990s, Sidney Benjamin, whom we met earlier as the frustrated pioneer of stochastic risk-based approaches to maturity guarantee reserving, relentlessly argued for a statutory role for actuaries in general insurance reserving that was equivalent to their position in life assurance. This argument was based on the inadequacies in general insurance statutory reserving that he perceived arose from the absence of a professional actuarial role:

Historically, the first job of the actuary is to safeguard the interests of policyholders. The life actuary determines the amount of risk capital which should be set aside to give an acceptable level of safety to the policyholders. In non-life insurance, that does not happen. The amount of backing solvency capital for any volume of business is set vaguely, according to an informed perceived wisdom, with: no scientific justification; no explicit public justification; no published standards of consistency within any one company from year to year; and no apparent standards of consistency between companies in any one year.[4]

Benjamin’s paper, ‘Profit and Other Financial Concepts in Insurance’,[5] appeared in the Journal in 1976 and presented this case with typical Benjamin gusto. He discussed the universal applicability of fundamental actuarial concepts: the use and disclosure of actuarial bases that provided transparency and consistency to the reserving process; the difference between reserving and premium bases and its implications for new business strain and the emergence of surplus; implicit and explicit reserving margins for prudence; asset mismatching reserves; differences between provisions and reserves; and prospective and retrospective reserving approaches. Benjamin argued that these actuarial concepts were as applicable to general insurance as to life assurance, and that they were the unique domain of the actuarial profession.

Benjamin’s paper also offered some specific criticisms of the practices that then prevailed in general insurance reserving. Like other actuaries before him, he viewed reserving by case-by-case estimation as ‘fundamentally subjective ... not inherently stable in the way a fixed basis will be’ and hence ‘an inadequate substitute for a reserving basis’.[6] He was also critical of the standard industry practice of not discounting projected claims cashflows when setting general insurance reserves. Whilst this provided a form of implicit reserving margin, its size was arbitrary and it distorted the stated form and timing of the release of surplus. He also argued that required solvency margins should be risk-based and should be a function of the asset mix of the business, which was again not standard practice at the time.

His most forceful argument, however, was reserved for the use in general insurance of the professional framework of ‘freedom with publicity’ in statutory reserving: the idea that the actuary, as the professional expert, should have the freedom to use his judgement to set appropriate assumptions and methods for the business he was reserving for, provided he disclosed his assumptions sufficiently for another actuary to reproduce his results and opine on the reasonableness of the approach taken. To Benjamin, this was the fundamental reserving discipline that actuaries could bring and that was lacking in British general insurance. Whilst the Staple Inn discussion of the paper was broadly supportive of Benjamin’s thinking, some general insurance practitioners felt the need to highlight that the intrinsic differences between general insurance and life assurance could limit the direct applicability of some of his arguments, and that ‘actuarial advice [in general insurance] ... would continue to be sought ... if, and only if, actuaries showed a proper understanding and humility of approach to an industry which had operated successfully without actuaries for many years’.[7]

In the years following Benjamin’s 1976 paper, a flurry of papers appeared in the Journal which did indeed attempt to develop ‘a proper understanding ... of an approach’ to claim reserving. These papers were some of the most technical and quantitative ever to appear in the Journal, and were written mostly by actuaries, and in some cases by non-actuaries, with PhDs in advanced quantitative fields. They were mainly concerned with developing improvements in methods for the projection of claims from a run-off triangle, which tabulated claims paid with rows for the year of insurance (usually referred to as the underwriting year and sometimes as the origin year) and columns for the ‘development year’, i.e. the number of years after the origin year in which the claims were paid. The concept of a run-off table of general insurance claims had been around for a long time. For example, at the Staple Inn discussion of Penman’s 1911 paper, W.R. Strong noted:

If the claim payments in respect of the business accepted in any given year were traced separately until the claims of the year were finally disposed of, the total volume of payments year by year fell into a somewhat regular sequence of diminishing amounts ... It might perhaps be practicable when a sufficient period had elapsed to construct a table by means of which, given the claim payments up to the end of the year, an estimate might be formed of the ultimate cost of disposing of the liability in respect of the policies of the first year.[8]

The standard method of estimating the ultimate cost from the run-off data was the ubiquitous chain ladder method. The method assumes that the future claims of each underwriting year will accumulate over its outstanding development years in the same proportions as the average development pattern observed across those origin years in the run-off table that have already passed through the relevant development years. Its greatest limitation is embedded in this basic assumption: an unexpectedly large claims amount in an early year of development is projected proportionally into all future development years for that underwriting year. This sensitivity could result in noisy, unstable estimates for business in its early years of development.
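To make the mechanics concrete, here is a minimal chain ladder sketch on an invented cumulative triangle (the data and the Python rendering are purely illustrative, not taken from any of the papers discussed):

```python
# Minimal chain ladder sketch on an invented cumulative claims triangle.
# Rows are origin (underwriting) years, columns are development years;
# None marks cells not yet observed.
triangle = [
    [100, 160, 190, 200],
    [110, 175, 210, None],
    [120, 185, None, None],
    [130, None, None, None],
]
n = len(triangle)

# Development factors: for each adjacent pair of development years, the
# ratio of column totals over the origin years observed in both columns.
factors = []
for j in range(n - 1):
    rows = [r for r in triangle if r[j + 1] is not None]
    factors.append(sum(r[j + 1] for r in rows) / sum(r[j] for r in rows))

# Project each origin year to ultimate by applying the outstanding factors.
for i, row in enumerate(triangle):
    latest = max(j for j, v in enumerate(row) if v is not None)
    ultimate = row[latest]
    for j in range(latest, n - 1):
        ultimate *= factors[j]
    print(f"origin year {i}: ultimate {ultimate:.0f}, "
          f"reserve {ultimate - row[latest]:.0f}")
```

The proportionality is visible in the projection loop: any shock to an origin year’s latest observed figure is carried through every remaining development factor.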

The first of this series of technical papers was written by D.H. Reid and published in the Journal in 1978.[9] Reid began by noting that case-by-case estimation was still the prevalent industry practice for setting outstanding claims reserves in general insurance. He voiced the usual actuarial concerns with this approach, but particularly highlighted that whilst the approach might sometimes have some merit in reserving for claims that had been reported but not yet settled, it was entirely inapplicable to incurred-but-not-reported (IBNR) claims and their required reserves. The reserving approach for these claims had to involve some form of statistical method, as there was no case-by-case information that could be used in the reserving assessment—by definition, the claim was completely unknown to the insurer at this point in its development.

Reid developed a mathematical framework for the emergence of claims payments over time by specifying a cumulative joint probability function for claims paid and development year. He then fitted this function to the claims experience of the earliest available underwriting year of the run-off data only: his model essentially smoothed the observed experience of a single complete sample path for the claims development. Once this function had been fitted, it was transformed into a function for use in projecting future claims via parameters for inflationary changes in claim size and changes in the rate of settlement that were assumed to arise between the time of the first underwriting year’s claims and the projected times of future claims. These parameters could be fitted to the claims run-off data for the sequence of subsequent underwriting years for which data was available. Reid applied his framework to example claims data for a variety of lines of business such as employers’ liability, fire and motor insurance. The approach was somewhat aligned to risk theory in that it provided a full probabilistic description of (aggregate) claims. But its formulation was complex, and the model contained a very large number of parameters that needed to be fitted to typically very limited data. His presentation was rather impenetrable for the typical British actuary of the time. Crucially, it was extremely difficult to ascertain from his analysis whether his complex modelling would result in a more accurate or reliable estimate of outstanding claims than a much simpler modelling approach.

D.H. Craighead produced a more accessible paper for the Journal in 1979.[10] Craighead was an experienced actuarial practitioner in the Lloyd’s of London market. His paper gave a broad overview of its business practices and institutional arrangements. It also contained an important section with his views on how to model the run-off of claims and hence establish claim reserves at any point of time. He proposed fitting a formula for the incurred loss ratio of a given underwriting year as a function of the development year. He applied this approach to the proprietary claims data of an anonymous reinsurance company using a three-parameter exponential form of function. This produced fits of varying degrees of quality for different lines of underlying business and forms of reinsurance.
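Craighead’s exact three-parameter exponential formula is not reproduced here; the sketch below fits an assumed saturating curve of that general family to an invented loss ratio run-off, which conveys the flavour of the approach without claiming to be his precise form:

```python
# Curve-fitting a claims development pattern in the spirit of Craighead's
# approach. The functional form is an assumption for illustration, not
# the formula from his paper; the data points are invented.
import numpy as np
from scipy.optimize import curve_fit

def dev_curve(t, a, b, c):
    """Cumulative loss ratio at development year t: a * (1 - exp(-(t/b)^c))."""
    return a * (1.0 - np.exp(-((t / b) ** c)))

dev_years = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
loss_ratio = np.array([0.20, 0.45, 0.60, 0.68, 0.72, 0.74])  # observed run-off

params, _ = curve_fit(dev_curve, dev_years, loss_ratio, p0=[0.8, 2.0, 1.0])
a, b, c = params
print(f"fitted ultimate loss ratio: {a:.3f}")
# Extrapolate the remaining development of an immature underwriting year.
print(f"fitted loss ratio at year 8: {dev_curve(8.0, a, b, c):.3f}")
```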

The fitted curves provided a smoothed description of how claims had historically run off over their development period; they did not provide an explicit statistical predictive model. The fitted parameters could then be used to extrapolate the claims run-off of the underwriting years that were not yet fully developed. This was essentially a parametric form of the chain ladder method and, like that method, Craighead noted the sensitivity of the reserve estimate to unexpectedly large claims that arose early in the claim development period. In the Staple Inn discussion, this theme was expanded upon by J.P. Ryan, a general insurance actuary who made notable contributions to actuarial research in the 1980s and 1990s. Ryan highlighted that the work done in the USA by Bornhuetter and Ferguson might provide the solution to this problem.[11] The Bornhuetter-Ferguson approach reduced the sensitivity of projected ultimate claims to the claims experience data by mixing the estimate implied by the ‘raw’ claims data with some specified prior expectations for the claims development pattern. It was essentially Bayesian in spirit, and its inherent limitation was in the potentially arbitrary specification of the ‘prior’ estimates.
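The essence of the Bornhuetter-Ferguson blend can be rendered in a few lines (all numbers invented for illustration):

```python
# A minimal Bornhuetter-Ferguson sketch. Ultimate = paid to date +
# prior ultimate * (share of ultimate assumed still unpaid): the unpaid
# share comes from a prior expectation rather than from scaling up the
# observed claims, damping sensitivity to early experience.
paid_to_date = 120.0     # cumulative claims paid so far
prior_ultimate = 260.0   # prior expectation (e.g. premium * expected loss ratio)
cum_dev_factor = 2.0     # chain ladder factor from here to ultimate

pct_developed = 1.0 / cum_dev_factor
bf_ultimate = paid_to_date + prior_ultimate * (1.0 - pct_developed)
cl_ultimate = paid_to_date * cum_dev_factor   # pure chain ladder, for contrast

print(f"Bornhuetter-Ferguson ultimate: {bf_ultimate:.0f}")   # 250
print(f"Chain ladder ultimate:         {cl_ultimate:.0f}")   # 240
```

The damping is visible in the formula: the observed claims enter only through the paid-to-date term, so an early shock no longer scales the entire projection.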

Three further technical papers on statistical methods for general insurance claim reserving appeared in the Journal in 1982 and 1983. The first of these was written by J.H. Pollard.[12] Pollard focused on the run-off behaviour of the aggregate claims of a large book of business. He invoked the Central Limit Theorem to justify assuming the aggregate claims would be normally distributed.

Pollard moved away from the direct manipulation of the run-off triangle and instead set up a matrix algebra to describe how claims paid behaved through their development years: he specified vectors for the mean and variance of the claims paid in each development year, together with a covariance matrix to capture the correlations between claims paid in different development years (for example, the correlation between claims paid in development year 1 and development year 3). This, almost tautologically, allowed the expected claims, and indeed the whole probability distribution of future claims, to be determined for a given claims development period to date.

The second useful feature of Pollard’s set-up was that it provided a statistical basis for assessing whether statistically significant changes in claim settlement patterns had arisen—the multivariate normal framework allowed Chi-Square significance tests to be used to assess whether the claims development was statistically different from the previously fitted distributions. Pollard’s framework was mathematically elegant and intuitive, but, as ever, it relied entirely on how the model was parameterised and the reliability of the available data for that purpose.
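A sketch of the multivariate normal machinery underlying Pollard’s set-up, with invented parameters rather than his calibration, shows how an observed early development year updates the distribution of the remaining claims:

```python
# Claims paid in development years 1..3 are jointly normal; having
# observed year 1, the distribution of the remaining years follows by
# standard multivariate normal conditioning. Numbers are illustrative.
import numpy as np

mu = np.array([100.0, 60.0, 30.0])        # mean claims paid per development year
cov = np.array([[400.0, 120.0,  60.0],    # covariances between development years
                [120.0, 225.0,  45.0],
                [ 60.0,  45.0, 100.0]])

observed = np.array([115.0])              # claims paid in development year 1

# Partition into observed (o) and future (f) blocks and condition.
mu_o, mu_f = mu[:1], mu[1:]
S_oo, S_of = cov[:1, :1], cov[:1, 1:]
S_fo, S_ff = cov[1:, :1], cov[1:, 1:]

cond_mean = mu_f + S_fo @ np.linalg.solve(S_oo, observed - mu_o)
cond_cov = S_ff - S_fo @ np.linalg.solve(S_oo, S_of)

print("expected future claims:", cond_mean)    # pulled up by the correlation
print("expected total reserve:", cond_mean.sum())
print("variance of reserve:", cond_cov.sum())  # sum over all covariance entries
```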

The two claim reserving papers published in the Journal in 1983 were the most statistically complex. The first of these was written by de Jong and Zehnwirth.[13] Their paper applied recent technical developments in the statistical modelling literature to the claim reserving problem, particularly the time series modelling of Box and Jenkins.[14] This involved ‘state-space models’ and the Kalman filter, which was essentially a recursive application of Bayes’ Theorem. This mathematical statistical technology allowed claim reserve estimates to be dynamically updated in a Bayesian way as new data arose. The second paper,[15] by G.C. Taylor, drew parallels between the claim projection problem and ‘invariance problems’ in physics, and used this observation to apply variational calculus to the stochastic modelling of run-off triangles. The invariance assumption was that ‘the expected amount of outstanding claims, deflated to current values, is invariant under all variations of future speed of finalizations’.[16] Taylor recognised that such an assumption would not hold if a change in rate of settlement arose, for example, due to a change in negotiating stance of the insurer. But of course, no quantitative method could readily incorporate such factors into claims projections. Neither of these papers was presented at Staple Inn, and it is hard to believe they resonated strongly with either general insurance practitioners or the British actuarial profession at large.
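The recursive Bayesian character of the Kalman filter can be illustrated with a scalar toy example; the state dynamics and numbers below are my own invention and are far simpler than the de Jong-Zehnwirth state-space models:

```python
# Scalar Kalman filter sketch: the state is the underlying level of a
# development factor, assumed to drift as a random walk; observations
# are noisy link ratios. Each step is a Bayes update for Gaussians.
state_mean, state_var = 1.5, 0.25   # prior on the development factor level
process_var = 0.01                  # random-walk variance of the true level
obs_var = 0.04                      # measurement noise in observed factors

observations = [1.55, 1.48, 1.60, 1.52]   # successive observed link ratios

for y in observations:
    # Predict: the state drifts, so uncertainty grows.
    state_var += process_var
    # Update: posterior via the Kalman gain (recursive Bayes' theorem).
    gain = state_var / (state_var + obs_var)
    state_mean += gain * (y - state_mean)
    state_var *= (1.0 - gain)
    print(f"posterior mean {state_mean:.3f}, variance {state_var:.4f}")
```

Each new observation shifts the estimate only partially, with the Kalman gain weighing prior confidence against observation noise; this is the ‘dynamic updating’ the text describes.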

A working party was established in 1982 by the Institute’s General Insurance Study Group to formally consider general insurance solvency and ‘the methods and bases used for the valuation of assets and liabilities’.[17] This was partly inspired by a recent study of general insurance solvency by Finnish actuaries, which was particularly notable for its application of simulation modelling as a solution to risk theory problems.[18] The profession had embraced simulation modelling methods in its recently published and influential maturity guarantee research. It is easy to see why simulation modelling seemed a more appealing route for the profession to pursue than further developing the work of Reid, de Jong and Zehnwirth, and Taylor.

The report of the working party was published as a Journal paper in 1984.[19] No significant regulatory or professional guidance applied to general insurance reserves at this time other than that valuations should be made in accordance with generally accepted accounting principles or other accepted methods. Statutory reserving required the assessment of two key quantities: the technical provisions (i.e. liability valuation) and the solvency margin. The standard industry practice was for the technical provisions assessed in solvency reporting to also be used as the provisions shown in financial statements. Conceptually, the solvency margin provided an asset buffer over the cost of meeting the liabilities. The regulator had powers to intervene in the running of the insurer in the event that assets were insufficient to cover the solvency margin as well as the provisions.

The working party argued that, for the purposes of solvency reporting, technical provisions should contain a prudent margin over the best estimate of the cost of meeting the liabilities. This margin should be set such that there was a ‘relatively low risk of [the technical provisions] proving inadequate’. Meanwhile, the solvency margin would protect against ‘the more remote adverse contingencies of the run-off’.[20] This approach was difficult to reconcile with accounting principles for the valuation of provisions (which required the provisions to be a true and fair estimate of the liability value). It also ran counter to the applicable European Commission regulatory framework, which assumed technical provisions were best estimates, with a solvency margin calibrated to produce a risk-of-ruin of one-in-1000 over a three-year horizon. The working party was satisfied with the solvency margin target but suggested that technical provisions should be set at a one-in-200 risk-of-ruin over the three-year horizon (which was clearly substantially different to the one-in-two risk-of-ruin loosely implied by a best estimate approach).
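For a rough sense of the scale involved, the three standards can be compared as simple quantiles of run-off cost under an assumed normal distribution (an illustrative simplification; neither the working party nor the EC framework was specified in quite this way):

```python
# Comparing the reserving standards as quantiles of run-off cost under
# an assumed normal distribution. Numbers are invented for illustration.
from statistics import NormalDist

best_estimate = 1000.0   # mean cost of the run-off
sd = 150.0               # assumed standard deviation of that cost

for label, p in [("one-in-two (best estimate)", 0.500),
                 ("one-in-200 (proposed provisions)", 0.995),
                 ("one-in-1000 (provisions plus solvency margin)", 0.999)]:
    print(f"{label}: {NormalDist(best_estimate, sd).inv_cdf(p):.0f}")
```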

The working party also advocated an approach similar in spirit to life assurance reserving’s ‘freedom with publicity’ concept: insurers and their actuaries would be free to choose their own methods to determine the technical provisions and solvency margin, but disclosure of those methods and professional certification by a ‘loss-reserving specialist’ would be required. However, in the Staple Inn discussion of the paper, some concerns were raised that this freedom was ‘fraught with danger unless the Department [of Trade and Industry, the insurance solvency regulator of the time] is able to bring to bear a stringent monitoring of the results’.[21]

The working party recommended that the above ruin probabilities should take account of asset-side risks as well as liability risks—in essence, a mismatch reserve should be included in both the technical provisions and solvency margin. It may be recalled that Sidney Benjamin first proposed that general insurers’ solvency margin should include a mismatch reserve in 1976, but it remained a significant departure from prevailing general insurance practice in 1984. Such a change in reserving method could imply a significant increase in reserving levels. However, the working party attempted to water down the ramifications of this proposed change with a caveat:

If asset values fall only temporarily, the problem may be largely presentational, and the supervisor would not need to withdraw the authorization of companies unable to meet the solvency requirements at a particular date if the position had subsequently been rectified. Only with a prolonged shift in market values would the effects be serious.[22]

Quite how the supervisor, or the certifying loss-reserving specialist, was supposed to ascertain at a particular date whether experienced asset value falls were temporary or not was not addressed by the working party.

The working party, again led by Chris Daykin, produced a further paper that was published in the Journal in 1987.[23] Whereas the first paper had dealt mainly with principles and a conceptual framework, the second had a greater focus on implementation and the modelling challenges involved therein. The second paper did, however, make one conceptual U-turn—after professional and industry feedback, the working party now expressed its ambivalence on the question of the statistical standard for technical provisions, arguing that it was the total solvency level that was really relevant. This effectively rescinded its earlier proposal that technical provisions should include a margin for prudence and hence accepted a best estimate definition. This allowed closer alignment of provisions with accounting principles and ‘true and fair’ valuation.

The 1987 working party paper proposed that the required solvency margin be calculated using a simulation model that projected asset and liability cashflows over the run-off of the existing business. It adopted a pragmatic approach that did not attempt to reach the heights of statistical ambition that had been explored by Reid, Taylor, and de Jong and Zehnwirth over the previous decade. It suggested that variations in the real amounts of claims should be modelled at an aggregate level. As in Pollard’s work, the Central Limit Theorem could be used to support the use of a normal distribution to describe variation in aggregate claims. The working party suggested that the standard deviation of the claims paid in a given year should be a function of the size of that year’s expected aggregate claims. Variations from year to year in claims paid were assumed to be independent. Asset variation and inflation were modelled using a version of the Wilkie model that was recalibrated with the objective of being more appropriate to the shorter-term horizons of general insurance business than the long-term projections for which the model was originally intended.

This modelling framework could be used to determine the starting amount of assets in excess of the technical provisions that was required to support a given probability of ruin. The working party again suggested that an actuary should write a public report on the financial strength of the company that presented these findings. The paper showed that illustrative calibrations of this modelling framework could generate intuitive assessments of solvency margin—for example, the central case of their example suggested a solvency margin of around 10% of the (best-estimate) technical provisions would be required. Of course, such results were entirely predicated on their assumptions about the scale of variability in the claims run-off. Their example calibration was developed in a very heuristic way. The paper did not offer any substantial guidance on how these parameters could be robustly calibrated to reflect the specific features of a given general insurance business.
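A heavily simplified simulation in the spirit of this framework might look as follows; the numbers are invented, and the working party’s actual model also included a recalibrated Wilkie model for asset and inflation variation, which is omitted here:

```python
# Simulation sketch: normal aggregate claims per run-off year, standard
# deviation proportional to the expected amount, independent between
# years. Estimates the ruin probability for various solvency margins.
import random

random.seed(1)

expected_claims = [400.0, 300.0, 200.0, 100.0]  # expected payments per run-off year
cv = 0.10        # assumed coefficient of variation of each year's claims
n_sims = 20_000

def ruin_probability(initial_assets):
    """Fraction of simulations in which assets run out during the run-off."""
    ruins = 0
    for _ in range(n_sims):
        assets = initial_assets
        for mu in expected_claims:
            assets -= random.gauss(mu, cv * mu)
            if assets < 0:
                ruins += 1
                break
    return ruins / n_sims

provisions = sum(expected_claims)       # best-estimate technical provisions
for margin in (0.0, 0.05, 0.10, 0.15):  # solvency margin as a % of provisions
    p = ruin_probability(provisions * (1.0 + margin))
    print(f"margin {margin:.0%}: estimated ruin probability {p:.4f}")
```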

Whilst the Solvency Working Group was developing its vision of a solvency framework and the role of the actuary within it, other actuarial researchers continued with the investigation of quantitative techniques for estimating claim reserves. The most intuitive and accessible paper of the 1980s on general insurance claim reserving methods was a Journal paper written by Sidney Benjamin and Ian Eagles, published in 1986.[24] Benjamin and Eagles analysed historical claims run-off patterns in Lloyd’s syndicates and found that the ultimate loss ratio tended to have a strong linear dependence on the year-one paid claims ratio. This linear dependency had already been alluded to by Pollard’s development year correlation matrix, but Benjamin and Eagles’ presentation implied a very simple mathematical rendering that appeared to produce good empirical fits: ultimate cumulative claims, and hence required current reserves, could be estimated by a simple linear regression of ultimate cumulative claims paid on the cumulative claims of a given development year. The empirically observed variation around the line of best fit could also provide some heuristic indication of the extent to which ultimate cumulative claims could deviate from the extrapolated estimate. These regression relationships could be fitted to different business lines and years of development. Naturally, the later the year of development, the more confidence could be had in the ultimate loss estimate. It was classic Sidney Benjamin—avoiding unnecessary statistical niceties, it cut to the chase and delivered powerful practical actuarial insight. In a world increasingly characterised by rocket science and advanced computing technology, Benjamin had a knack of making original and informative use of the back of an envelope.
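The method amounts to fitting and extrapolating a single regression line; a sketch with invented loss ratios:

```python
# Benjamin-Eagles style sketch: regress ultimate loss ratios on year-one
# paid claims ratios for fully developed years, then extrapolate for an
# immature underwriting year. All data invented for illustration.
paid_year1 = [0.18, 0.22, 0.25, 0.20, 0.28]   # year-one paid claims ratios
ultimate   = [0.62, 0.74, 0.83, 0.68, 0.92]   # observed ultimate loss ratios

n = len(paid_year1)
mean_x = sum(paid_year1) / n
mean_y = sum(ultimate) / n
slope = (sum(x * y for x, y in zip(paid_year1, ultimate)) - n * mean_x * mean_y) \
        / (sum(x * x for x in paid_year1) - n * mean_x * mean_x)
intercept = mean_y - slope * mean_x

new_year1 = 0.24    # latest underwriting year, only one year developed
print(f"estimated ultimate loss ratio: {intercept + slope * new_year1:.3f}")
```

The scatter of the historical points around the fitted line gives the ‘back of the envelope’ indication of estimation error that the text describes.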

The working party’s normal distributions and Benjamin’s linear regressions may have led the British actuarial profession to exhale a collective sigh of relief at the potential accessibility of new general insurance reserving methods. The investigation of the use of advanced statistical techniques in claim reserving was, however, far from over. Three further papers of high statistical ambition were published in the Journal in 1989 and 1990. The first two of these papers were the fruits of a seminar, ‘Applications of Mathematics in Insurance, Finance and Actuarial Work’, jointly sponsored by the Institute of Actuaries and the Institute of Mathematics and its Applications. These two papers, by R.J. Verrall[25] and A.E. Renshaw[26] respectively, covered broadly similar statistical ground. Both observed that the ubiquitous chain ladder method could be considered as a form of two-way analysis of variance (ANOVA). From this observation, a recursive Bayesian estimation approach similar to that developed by de Jong and Zehnwirth could be derived. It could also be shown that, under the statistical assumptions of the ANOVA model and an assumed lognormal distribution for claims, the chain ladder method did not produce the maximum likelihood estimates of the expected claims. This more statistically sophisticated analytical approach could also shed some light on the parameter stability (or lack thereof) of the chain ladder method, especially for the most recent underwriting years.
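The two-way ANOVA reading of the triangle can be sketched on invented data: log incremental claims are modelled as an overall level plus origin year and development year effects, fitted here by ordinary least squares:

```python
# Two-way ANOVA view of the chain ladder: log C_ij = mu + alpha_i +
# beta_j + noise, with row (origin year) and column (development year)
# effects. Triangle data invented; fitted by least squares.
import math
import numpy as np

# Incremental claims: rows = origin years, cols = development years.
tri = [[120.0, 60.0, 30.0],
       [130.0, 66.0, None],
       [145.0, None, None]]

rows, cols, ys, xs = 3, 3, [], []
for i in range(rows):
    for j in range(cols):
        if tri[i][j] is None:
            continue
        ys.append(math.log(tri[i][j]))
        x = [0.0] * (1 + (rows - 1) + (cols - 1))
        x[0] = 1.0                   # overall level mu
        if i > 0:
            x[i] = 1.0               # alpha_i (origin year effect)
        if j > 0:
            x[rows - 1 + j] = 1.0    # beta_j (development year effect)
        xs.append(x)

coef, *_ = np.linalg.lstsq(np.array(xs), np.array(ys), rcond=None)

# Predict the unobserved cells from the fitted effects (exp of the
# linear predictor, i.e. the lognormal median).
for i, j in [(1, 2), (2, 1), (2, 2)]:
    eta = coef[0] + (coef[i] if i > 0 else 0.0) \
          + (coef[rows - 1 + j] if j > 0 else 0.0)
    print(f"cell ({i},{j}): fitted incremental claim {math.exp(eta):.1f}")
```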

T.S. Wright’s paper, the third of this series of highly technical investigations of advanced statistical techniques for claim reserving, appeared in the Journal in 1990.[27] Like the work of de Jong and Zehnwirth, this again used the Bayesian Kalman filter statistical technology to produce stochastic projections of claims run-off. However, unlike the papers by Verrall and Renshaw, Wright’s approach did not rely on the assumption that claims were log-normally distributed. This was an assumption which Wright regarded as an ‘untenable’ description of general insurance claims distributions. Several technical assumptions about the distributional characteristics of the claims process were still required by Wright’s approach, but it allowed the statistical insights to be placed in a more general family of distributions than the previous research.

Whilst these advanced statistical methods may have had some application in insurers’ internal assessments of claims experience and profitability, they were too exploratory and complex for use in 1990s statutory solvency assessment. As has so often been the case in the history of British actuarial engagement in general insurance, further inspiration was sought from overseas. A group of British actuaries authored a paper reviewing recent US regulatory developments, which appeared in the British Actuarial Journal in 1996.[28] The US regulatory authorities had introduced a Risk-Based Capital (RBC) system in the early 1990s. The essential idea was that the capital requirement (similar to the solvency margin in the UK) would be calculated using a series of prescribed factors applied to defined metrics of business volume (such as premiums earned or reserves net of reinsurance). The capital requirement would thus be some percentage of net reserves, with the percentage determined formulaically as a function of the mix of insurance business and asset risks on the balance sheet. The underwriting and claim reserve risk factors were set on a rolling basis to reflect the worst industry experience of the previous ten-year period. This mechanical calibration approach was open to the criticism of being too retrospective for the fast-changing world of general insurance.
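In skeleton form, a factor-based RBC calculation is simply a weighted sum of volume metrics. The categories and factors below are invented placeholders, and the actual NAIC formula was considerably more elaborate (including, for example, covariance adjustments between risk categories):

```python
# Skeleton of a factor-based risk-based capital calculation: prescribed
# factors applied to volume metrics. Factors and volumes are invented.
volumes = {                       # exposure metrics, net of reinsurance
    "net_written_premium": 500.0,
    "net_claim_reserves": 800.0,
    "equity_assets": 200.0,
    "bond_assets": 600.0,
}
factors = {                       # prescribed risk factors per category
    "net_written_premium": 0.15,  # underwriting risk
    "net_claim_reserves": 0.10,   # reserve deterioration risk
    "equity_assets": 0.15,        # asset risk: equities
    "bond_assets": 0.01,          # asset risk: bonds
}

capital_requirement = sum(volumes[k] * factors[k] for k in volumes)
print(f"risk-based capital requirement: {capital_requirement:.0f}")
```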

The authors highlighted some of the limitations of this simplified one-size-fits-all formula application, most notably its inability to capture the aggregation of exposure to a single underlying risk event. They suggested that a Dynamic Solvency Testing (DST) approach, implemented within a statutory professional framework, would be preferable to a formula-based approach. DST meant the insurer developing its own financial model of its business and using it to project and analyse the health of the business under a range of selected adverse stress scenarios, as in the sketch below. They also considered the use of dynamic financial analysis within this system, where the stress modelling is replaced with a full set of stochastic scenarios—in essence, what had been proposed by the Solvency Working Party in 1987. The application of the DST approach had already been pioneered in statutory reserving for Canadian general insurance in the early 1990s, and the Canadian Institute of Actuaries had been active in developing guidance on the professional role actuaries could perform in a DST statutory framework. As might be expected, this call for a broad-ranging role that involved substantial actuarial judgement and freedom was generally welcomed in the Staple Inn discussion of the paper.
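A toy DST projection might look as follows; the balance sheet items and stress scenarios are invented, and a real DST model would project several years of business rather than one:

```python
# Toy Dynamic Solvency Testing sketch: one-year projection of surplus
# under a base case and invented adverse scenarios.
base = {"premiums": 500.0, "claims": 400.0, "expenses": 75.0,
        "investment_return": 30.0}
surplus_start = 80.0

# Multipliers applied to the base case items under each scenario.
scenarios = {
    "base case": {},
    "claims +20%": {"claims": 1.20},
    "premium rates -10%": {"premiums": 0.90},
    "asset shock": {"investment_return": -2.0},
    "claims +20% and asset shock": {"claims": 1.20, "investment_return": -2.0},
}

for name, shocks in scenarios.items():
    s = {k: v * shocks.get(k, 1.0) for k, v in base.items()}
    surplus_end = (surplus_start + s["premiums"] - s["claims"]
                   - s["expenses"] + s["investment_return"])
    flag = "  <-- solvency concern" if surplus_end < 0 else ""
    print(f"{name}: year-end surplus {surplus_end:.0f}{flag}")
```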

Yet the elephant in the room still remained—did the British actuarial profession have the skills and experience required to perform this type of role in general insurance? In the discussion, the influential general insurance actuarial practitioner D.H. Craighead tried his best to take a positive stance: ‘I expressed reservations then [several years ago], but I think that we are now beginning to be ready for such a role’.[29]

  • [1] Abbott et al. (1974), p. 217.
  • [2] Abbott et al. (1974).
  • [3] Manches, in Discussion, Abbott et al. (1974), p. 268.
  • [4] Benjamin, in Discussion, Ryan and Larner (1990), p. 658.
  • [5] Benjamin (1976b).
  • [6] Benjamin (1976b), pp. 252-53.
  • [7] Scurfield, in Discussion, Benjamin (1976b), p. 300.
  • [8] Strong, in Discussion, Penman (1911), pp. 137-38.
  • [9] Reid (1978).
  • [10] Craighead (1979).
  • [11] Bornhuetter and Ferguson (1972).
  • [12] Pollard (1982).
  • [13] de Jong and Zehnwirth (1983).
  • [14] Box and Jenkins (1970).
  • [15] Taylor (1983).
  • [16] Taylor (1983), p. 211.
  • [17] Daykin et al. (1984), p. 279.
  • [18] Pentikainen and Rantala (1982).
  • [19] Daykin et al. (1984).
  • [20] Daykin et al. (1984), p. 288.
  • [21] Hart, in Discussion, Daykin et al. (1984), p. 320.
  • [22] Daykin et al. (1984), p. 302.
  • [23] Daykin et al. (1987).
  • [24] Benjamin and Eagles (1986).
  • [25] Verrall (1989).
  • [26] Renshaw (1989).
  • [27] Wright (1990).
  • [28] Hooker et al. (1996).
  • [29] Craighead, in Discussion, Hooker et al. (1996), p. 313.
 