Our discussion of the emergence of key ideas in financial economics has focused mainly on the development of economic theories. That is, a range of theoretical results have been discussed (for example, the Capital Asset Pricing Model) that have been developed deductively from a set of starting axioms (investor risk aversion and non-satiation, and so forth). In all cases, these results were subject to various forms of empirical testing, and such testing has consistently formed a substantial part of financial economics’ research output. But this final section is somewhat different in that it is related to a stream of work that is intrinsically empirical: it is focused first and foremost on how well ‘real-life’ financial markets work—not in theory, but in practice. In particular, this stream of financial economics considers the informational efficiency of financial markets’ prices. Pricing efficiency in this context refers to how well market prices reflect relevant information and how quickly prices react to new information. Its empirical nature, together with its implication that large swathes of financial services practitioners may add little value, has made it one of the most contentious areas of financial economics. This was true many decades ago and it remains true today, particularly as later research has painted a more complex and nuanced picture of real-life market behaviour than that implied by market efficiency’s major research results of the 1960s and early 1970s.
We noted some detailed empirical studies of stock price behaviour in the discussion of option pricing theory—in particular, Osborne’s 1959 research that provided an empirical basis for the use of geometric Brownian motion as a reasonable model of stock price behaviour. There are also some earlier examples of empirical research that date back to the first half of the twentieth century. But improvements in the collation of security price data and growing computing power stimulated a new wave of empirical analysis of security prices in the 1950s.
Besides Osborne, another important example of this empirical work was provided by Maurice Kendall, who was director of research techniques at the London School of Economics. Kendall presented a detailed empirical study of the time series behaviour of financial market prices to the Royal Statistical Society in 1952. The study provided the most detailed statistical analysis to date of the time series behaviour of stock prices. Kendall considered UK equity market behaviour over the ten-year period between 1928 and 1938, and wheat prices on the Chicago Board of Trade between 1883 and 1934. In both cases he could find little evidence of statistically significant serial correlations at any time-lag. As a trained economist with a faith in rational market responses to the business cycle, this lack of trend or apparent signal in the price process confused and alarmed Kendall:
At first sight, the implications are disturbing ... it seems that the change in price from one week to the next is practically independent of the change from that week to the week after. This alone is enough to show that it is impossible to predict the price from week to week from the series itself ... The series looks like a “wandering” one, almost as if once a week the Demon of Chance drew a random number from a symmetrical population of fixed dispersion and added it to the current price to determine the next week’s price.
Professor R.G.D. Allen, in his vote of thanks, shared Kendall’s despondency, noting that the paper’s results were ‘a very depressing kind of conclusion to the economist’. However, some speakers from the floor had a different economic interpretation for the lack of trend or predictability in the price data. A Professor Champernowne commented that ‘the low serial correlation coefficients found in this particular series may reflect the success with which the professionals are doing their job’, whilst Professor Paish elaborated: ‘It seems inevitable that where prices are based on expectations markets are as likely to go down as up. If the markets thought they were more likely to go up they would have gone up already.’
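Kendall’s basic procedure—estimating the serial correlation of week-on-week price changes at various lags—can be sketched in a few lines. The following is a toy illustration on simulated data (not Kendall’s own code or dataset): it builds a ‘Demon of Chance’ price series of exactly the kind he described, where each week’s change is an independent draw from a symmetric distribution of fixed dispersion, and then confirms that the sample autocorrelations of the changes are close to zero.

```python
import random
import statistics

def lag_autocorr(series, lag):
    """Sample autocorrelation of a series at a given lag."""
    n = len(series)
    mean = statistics.fmean(series)
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag))
    return cov / var

# Simulate a 'Demon of Chance' price series: each week's price is last
# week's price plus an independent symmetric random shock.
random.seed(1)
prices = [100.0]
for _ in range(520):  # roughly ten years of weekly prices
    prices.append(prices[-1] + random.gauss(0, 1))

changes = [b - a for a, b in zip(prices, prices[1:])]
for lag in (1, 2, 4):
    print(f"lag {lag}: autocorrelation {lag_autocorr(changes, lag):+.3f}")
```

For a series of this length, autocorrelations within roughly ±2/√n ≈ ±0.09 of zero are statistically indistinguishable from independence—which is just what Kendall found in his real price data.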
A decade after Kendall’s empirical study, Paul Samuelson published a theoretical paper, ‘Proof that Properly Anticipated Prices Fluctuate Randomly’.
This paper provided some mathematical formality to the intuitions of Professors Champernowne and Paish: in well-functioning markets, the no-serial-correlation results of Kendall were exactly what should happen. But Samuelson was not the first to argue that well-behaved prices should exhibit random fluctuations. Louis Bachelier’s arithmetic Brownian motion process was almost certainly the first no-serial-correlation model of the stochastic paths of financial market prices. In his (translated) words: ‘the mathematical expectations of the buyer and the seller are zero’.
In 1965, Eugene Fama, then a young assistant professor at the University of Chicago Business School, published a research paper, ‘The Behaviour of Stock Market Prices’. As an empirical study of the statistical properties of stock market prices, this covered some similar ground to Kendall, but was more comprehensive and wide-ranging. Like Kendall, Fama found that serial correlations in daily stock returns were generally very low (Fama tested the returns of US stocks in the Dow Jones Industrial Average over the period 1957-1962). He also performed some other forms of statistical tests such as runs tests to provide further evidence of statistical independence of returns through time.
Fama’s paper covered a couple of other important topics that would become increasingly relevant in future years. First, Fama analysed the shape of the distribution of returns. He concluded that there was strong statistical evidence that daily stock returns had fatter tails than those implied by a normal distribution. Second, as well as considering the behaviour of individual stocks, he also analysed empirical data on the returns of mutual funds (he considered 39 mutual funds over the period from 1950 to 1960). His analysis led him to two conclusions: mutual funds, as a whole asset class, did not beat the equity market over the period; and no mutual fund consistently outperformed the others year-on-year through the ten-year period.
In his 1965 paper, Fama argued that the observations of statistical independence of returns through time and the inability of any mutual funds to consistently outperform the market or each other were both forms of evidence in support of what had become known as the Random Walk Hypothesis—the idea that market prices varied randomly and unpredictably from one period to the next. In a further paper, published in 1970, he developed these ideas further. This paper, ‘Efficient Capital Markets: A Review of Theory and Empirical Work’, is one of the most famous and influential financial economics papers ever published. As its name suggests, it was a review of the by-then abundant empirical analysis of stock market behaviour that had accumulated over the previous 15 years. But it was more than a review. Fama took those various threads of analysis and wove them into a clear body of evidence in support of the notion of efficient markets, which he defined as markets where ‘security prices at any time fully reflect all available information’.
The theory of market efficiency was concerned with how prices responded to information. In Fama’s crystallisation of efficient markets, he proposed three levels of market efficiency that corresponded to three different information sets: weak-form efficiency, where efficiency meant prices fully reflected all information in historical price movements; semi-strong-form efficiency, where efficiency meant prices fully reflected all publicly available information (a semi-strong-form efficient market therefore must also be weak-form efficient as historical prices were public information); and strong-form efficiency, where efficiency meant prices fully reflected all information, both public and private (so a strong-form efficient market was also semi-strong-form and weak-form efficient). His paper reviewed the empirical evidence that had been published in relation to each form of informational efficiency.
The evidence for weak-form efficiency was naturally found in the statistical testing of historical price data. These tests took two broad forms: testing for statistical independence of returns through time (mainly by serial correlation testing such as that done by Kendall and Fama); and testing the profitability of mechanical trading rules (the idea being that any ‘excess’ profitability of such rules would not be consistent with efficient markets). Fama’s 1965 paper included some analysis of these mechanical trading tests, and he published a paper in 1966 with Marshall Blume with further testing of such rules. Fama’s review of the evidence relating to weak-form efficiency allowed him to conclude that ‘the results are strongly in support’.
The empirical evidence for semi-strong market efficiency was largely based on analyses of how market stock prices reacted to major public announcements of relevant information such as earnings statements and stock splits. The basic idea was that if the market was efficient the price impact of these announcements would be immediate, and subsequent expected returns would therefore be unaffected by the announcement. Fama again concluded that these studies invariably provided support for the semi-strong form of the efficient market hypothesis. The implications of the semi-strong hypothesis are the most provocative to investment professionals as it implies that active fund management cannot be expected to outperform the market except by luck. This is consistent with Fama’s survey of mutual fund performance in his 1965 paper. Such studies also have a longer history—in 1933, the US economist Alfred Cowles published a paper which showed that a buy-and-hold strategy would tend to outperform the recommendations of stock market forecasters. Evidence in support of the semi-strong hypothesis can provide an intellectual basis for market indexing or passive investment management—a form of investing that has rapidly grown in popularity since the 1980s.
In considering the strong form of market efficiency, Fama conceded that there was evidence that corporate insiders had monopolistic access to information that was not in the share price. But even in this case he argued that the investment community was unable to access and use such information to outperform the market. Overall, Fama’s conclusion was emphatic: ‘In short, the evidence in support of the efficient markets model is extensive and (somewhat uniquely in economics) contradictory evidence is sparse.’
Inevitably perhaps, however, empirical evidence contradicting the efficient markets hypothesis quietly started to accumulate in the decade following Fama’s emphatic declaration of efficient markets victory. The watershed moment arrived in 1978 when Professor Michael Jensen, a leading financial economist of the period, edited a special edition of the Journal of Financial Economics that was dedicated to reviewing this stream of research. Amongst other studies, this research included several analyses of the returns on a diverse range of mechanical trading strategies (including securities such as investment trusts and exchange-traded stock options). If strategies were identified that earned statistically significant excess risk-adjusted returns (after trading costs), this would be regarded as evidence inconsistent with weak-form market efficiency. Unlike in Fama’s mechanical trading tests of the 1960s, the edition tentatively concluded that several such strategies could deliver excess risk-adjusted returns. But this type of study raised an interesting question: did excess risk-adjusted returns look good for these strategies because markets were mispricing assets or because the theoretical models for assessing the risk-adjusted required returns were wrong (or were missing some features that are important to these complex strategies)? This left some ambiguity in the conclusions which helped to shape the future direction of financial economics research.
Jensen’s special edition helped to create an environment within the financial economics profession where challenge to the accepted wisdom of perfectly functioning financial markets was an accepted part of academic orthodoxy. It ushered in a new era of empirical research in financial economics where the identification of potentially irrational market behaviour was suddenly highly in vogue.
In 1981, Robert J. Shiller published a provocative paper where he argued that the volatility of stock market returns was much, much higher than could be explained by changes in rational expectations for levels of future dividend pay-outs. Using a dividend discount model for equity market valuation, he showed how, under some assumptions about the stochastic properties of the dividend pay-out process, a relationship between the year-on-year volatility of dividend pay-outs and year-on-year volatility of equity price changes could be established. Shiller’s long-term empirical analysis of dividend pay-outs and stock market volatility in the USA implied that market volatility was ‘five to thirteen times too high to be attributed to new information about future real dividends’. The dividend discount model he used in this analysis assumed a constant real required return. He inverted the analysis and considered how volatile the real discount rate would need to be to generate the observed level of market volatility. He found it would need to have an annual standard deviation of 4-7%, which he dismissed as economically unfeasible.
Shiller’s work generated considerable academic controversy and prompted a notable response from Robert Merton, one of the financial economics profession’s established leaders of the period. In a paper with Terry Marsh published in 1986, the authors argued that Shiller’s conclusions were highly dependent on his assumed form of stochastic process for dividends. Marsh and Merton’s key point was that firms’ managers liked to smooth dividend pay-outs as much as possible. But, as was shown by Modigliani and Miller decades earlier, the rational or intrinsic value of the firm should be determined by the performance of the firm’s underlying assets, and not by its dividend policy. If investors understood that managers preferred to smooth dividends over time, then they would be more sensitive to changes in dividend pay-outs (if the firm had to reduce dividends even though management preferred to pay stable dividends, this signalled that things must be very bad). Their general point was that inferring rational levels of return volatility from observed dividend policy was very difficult because dividend policy did not necessarily have a direct relationship with the true value of the firm. To demonstrate this, they showed that the opposite statistical conclusion could be reached from Shiller’s data when they specified an alternative form of stochastic process for dividend pay-outs (which they argued fitted better to empirical dividend pay-out behaviour).
Despite Merton’s protestations, the genie was out of the bottle. Other leading financial economists followed Shiller’s lead and produced further analysis to support the argument that volatility in stock market returns was inexplicably high. Richard Roll, in his presidential address to the American Finance Association in 1987, presented an empirical analysis that argued that, even with the benefit of hindsight, 60% of US equity stock market daily price volatility was inexplicable (in the sense that the price variation in a firm’s stock could not be explained by observable new information relating to the firm, its industry or general economic and market impacts). This was not the order of magnitude of excess volatility that Shiller had reported, but it was perhaps all the more plausible for that. Roll’s address opened the door to possible behavioural explanations: ‘Several authors have suggested that volatility of asset prices can be better explained by psychological factors, fads, etc., than by information. The results above are actually consistent with such a view’.
If Shiller and/or Roll were right that short-term equity volatility was inexplicably higher than could be justified by changes in fundamentals, what did that imply about long-term equity behaviour? If ‘extra’ volatility was continuously feeding into stock returns without any form of self-correction, equity prices would become infinitely dislocated from underlying economic reality. Eugene Fama, the economist more associated with efficient markets than any other, worked with another Chicago economist, Kenneth French, to provide some further insights into the empirical behaviour of longer-term equity returns. Fama and French published two significant papers on this subject in 1988. The first paper, ‘Permanent and Temporary Components of Stock Prices’, identified statistically significant mean-reversion (negative serial correlation) in historical (1926-1985) US stock market returns over three- to five-year horizons. Previous tests of serial correlation in stock market returns such as Kendall’s had used equity data series of a more limited size (Kendall used a total equity data horizon of ten years). Fama and French’s more comprehensive data analysis suggested there was a noteworthy cumulative effect which was highly significant over longer holding periods.
Seen alongside the work of Shiller, Roll and others, Fama and French’s research suggested that short-term equity market volatility was excessively high, and that some of this ‘excess’ or ‘temporary’ volatility was removed over time by a form of correction mechanism in equity market prices (which manifested itself statistically as a material mean-reverting component in the price process). Their second paper of 1988, ‘Dividend Yields and Expected Stock Returns’, took this analysis further: if mean-reversion was an important element of long-term equity market behaviour, was it possible to observe at any given point in time whether this mean-reverting component of returns was above or below its mean level? Fama and French suggested this was possible and indeed straightforward: dividend yields appeared to be meaningful predictors of long-term equity performance. High dividend yields predicted strong returns over the following two to five years, low dividend yields predicted the opposite.
From a market efficiency perspective, this was a profound challenge to even the weak form of the efficient market hypothesis. But there were some caveats. First, whilst long-term expected returns did vary with the starting level of the dividend yield, it was unclear whether this was mispricing resulting from fads, bubbles or some other form of irrational behaviour, or whether this reflected rational changes in required returns due to time-variation in the riskiness of equities or in investor risk appetite. Furthermore, an inevitable consequence of analysing longer-term empirical behaviour is that there is a smaller sample size to observe. As the leading twenty-first-century financial economist John Cochrane has pointed out, when dealing with such long-term trends, we may really only have a few observable data points:
What we really know is that low [stock] prices relative to dividends and earnings in the 1950s preceded the boom market of the early 1960s; that the high price/dividend ratios of the mid-1960s preceded the poor returns of the 1970s; that the low price ratios of the mid-1970s preceded the current boom.
This limited volume of empirical data constrained the degree of consensus reached within the financial economics profession on the topic of long-term security price behaviour, and it continues to do so today. Robert Shiller and Eugene Fama received the 2013 Nobel Prize in Economics for their work in this field (along with Lars Peter Hansen). Their acceptance speeches featured an exchange of views that highlighted how much work remained to be done to find a consensus explanation for their empirical findings.
Whilst it took until the 1980s for the notion of mean-reversion and ‘time diversification’ to gain academic credence, it has arguably been part of investor intuition for as long as equity markets have existed. A belief in mean-reversion in long-term returns doubtless played a role in first attracting life offices and their actuaries, as custodians of long-term liabilities, to equities as an asset class in the 1930s. For example, in a letter of June 1938 to F.C. Scott, the managing director of the Provincial Insurance Company, John Maynard Keynes wrote:
A valuation at the bottom of the slump tends to bring out an unduly unfavourable result as against an investment policy which on the whole avoids equities; since it allows nothing for the nest egg in hand arising out of the fact that such a valuation is assuming in effect that one has purchased a large volume of equities at bottom prices ... Investment policy which is successful in averaging through time will produce the same good results as insurance policy which is successful in averaging through place [emphasis added].
Time diversification can only arise if a component of the price change process is temporary. As we shall see in Chap. 5, this idea was embedded in how actuaries modelled and measured equity risk in the context of long-term liability business in the late twentieth century. This was clearly inconsistent with the financial economics of the 1960s and 1970s. It was not as inconsistent with the financial economics of the 1980s and beyond as actuaries have sometimes been led to believe.
-  Kendall (1953).
-  Kendall (1953), p. 13.
-  Kendall (1953), p. 26.
-  Kendall (1953), p. 27.
-  Kendall (1953), p. 30.
-  Samuelson (1965).
-  Davis and Etheridge (2006), p. 28.
-  Fama (1965).
-  Fama (1970).
-  Fama (1970), p. 383.
-  Fama and Blume (1966).
-  Fama (1970), p. 414.
-  Cowles (1933).
-  Fama (1970), p. 416.
-  Jensen (1978).
-  Shiller (1981).
-  Shiller (1981), p. 434.
-  Marsh and Merton (1986).
-  Roll (1988).
-  Roll (1988), p. 565.
-  Fama and French (1988a); Fama and French (1988b).
-  Cochrane (2005).
-  Keynes (1983), p. 67.