The New Foundations: General Equilibrium and Expected Utilities

Two elements - one more general, the other more specific - need to be stressed here: the choice of the problem of individual decisions as the starting point of economics, and von Neumann's role. The first element is probably attributable to the military's interest in the scientific (objective) formulation of decision problems; the second to the brilliance of the Hungarian mathematician, but also to his varied activities as a consultant during and after the war (including his participation in the Manhattan Project for the development of the atomic bomb and in the development of the first computers).

Born in Budapest, John von Neumann (1903-1957) emigrated to the United States in the early 1930s and in 1933 became the youngest member of the Institute for Advanced Study at Princeton, where Albert Einstein was one of his colleagues. Author of a celebrated model of balanced growth (von Neumann 1937), from 1940 onwards, alongside his consultancy activities, he worked with Oskar Morgenstern (1902-1977)[1] on a book, Theory of Games and Economic Behaviour (1944), which had a strong influence on developments in economic research in the United States.

This work relied on axiomatic analysis. It provided a systematic treatment of n-person zero-sum games and a broad introductory analysis of non-zero-sum games. It also relied on the notion of expected utility, which constitutes an extension of the problem of the consumer's choice between alternative uses of scarce resources. Each act of choice may have not a single, certain outcome but a multiplicity of possible outcomes; the agent then needs to evaluate both the utility and the probability of each outcome, thus obtaining the expected utility stemming from his/her choice as an average of the utilities of the different outcomes weighted by their probabilities.
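In modern notation (a standard formulation, not the authors' own), if a choice yields outcomes $x_1, \dots, x_n$ with probabilities $p_1, \dots, p_n$, its expected utility is

$$E[u] = \sum_{i=1}^{n} p_i \, u(x_i), \qquad \sum_{i=1}^{n} p_i = 1.$$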

In order to analyse expected utilities, von Neumann and Morgenstern (1944, pp. 26 ff.) introduced a system of postulates which in substance correspond to completeness, continuity and transitivity of the agent's preferences (if I prefer A to B and B to C, then I must also prefer A to C) and of the probabilities of the different outcomes; moreover, each preference relation is considered independent of other events (absence of external effects). The set of postulates ensures that probabilities and utilities - hence expected utilities - retain the properties of mathematical expectations. Moreover, both utilities and probabilities are considered measurable, i.e. expressible in numbers.[2] Hence, assuming the agent to have complete information, we can determine the decisions (the solutions of the system) corresponding to a 'rational behaviour' that maximizes expected utility.
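As a minimal sketch of this decision rule (the lotteries and the square-root utility function are illustrative assumptions, not taken from the text), an agent with complete information simply picks the alternative with the highest expected utility:

```python
# Minimal sketch of expected-utility maximization (illustrative numbers only).
# Each lottery is a list of (probability, outcome) pairs; utility is the
# square root of the monetary outcome, a common textbook assumption.
from math import sqrt

def expected_utility(lottery, u=sqrt):
    """Probability-weighted average of the utilities of the outcomes."""
    assert abs(sum(p for p, _ in lottery) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u(x) for p, x in lottery)

lotteries = {
    "safe":   [(1.0, 100)],            # 100 for certain
    "risky":  [(0.5, 0), (0.5, 220)],  # higher expected value, more risk
    "skewed": [(0.9, 81), (0.1, 400)],
}

# 'Rational behaviour' in the von Neumann-Morgenstern sense: choose the
# alternative with the highest expected utility.
for name, lot in lotteries.items():
    print(f"{name}: {expected_utility(lot):.2f}")
print("chosen:", max(lotteries, key=lambda k: expected_utility(lotteries[k])))
```

Note that the concave utility function makes the agent risk-averse: the 'risky' lottery has a higher expected monetary value than the sure outcome, yet a lower expected utility.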

The game-theory approach, in which each agent tries to take into account other agents' decisions, enables us to move on from analysis of the isolated agent (Jevons's Robinson Crusoe) to analysis of the agent's choices in the presence of other agents, and so to analysis of the general equilibrium of an economy in which agents interact. Under perfect competition, this does not imply substantial differences from the analysis conducted by Walras and his successors; however, von Neumann and Morgenstern attribute great importance to the role of coalitions, i.e. games in which the possibility of cooperation is admitted, while subsequent research focused on non-cooperative games, in particular under conditions of incomplete information. Simplifying, we may say that general economic equilibrium analysis focused on market interdependence, while von Neumann and Morgenstern - and by and large the broad streams of research to which they opened the way - focused attention on the rational economic agent's choices.
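To make the zero-sum setting concrete, here is a small sketch (with payoffs invented for illustration) of a two-person zero-sum game solved in pure strategies, each player reasoning about the other's best reply:

```python
# Sketch of a 2-person zero-sum game in pure strategies (made-up payoffs).
# Rows are player 1's strategies, columns player 2's; each entry is the
# amount player 2 pays player 1, so player 1 maximizes, player 2 minimizes.
payoff = [
    [3, 1, 4],
    [2, 2, 5],
    [0, 1, 6],
]

# Player 1 can guarantee the best of the row minima (maximin); player 2
# concedes at most the worst of the column maxima (minimax).
maximin = max(min(row) for row in payoff)
minimax = min(max(row[j] for row in payoff) for j in range(len(payoff[0])))

print("maximin:", maximin, "minimax:", minimax)
if maximin == minimax:
    print("saddle point: the value of the game is", maximin)
else:
    print("no pure-strategy saddle point; mixed strategies are required")
```

Here the two security levels coincide, so the game has a value in pure strategies; von Neumann's minimax theorem guarantees that, once mixed strategies are allowed, they always coincide.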

Avowedly in the wake of von Neumann and Morgenstern (1944), an important contribution was provided by Savage with his Foundations of Statistics (1954), which retained the notion of expected utility and provided a fully axiomatic treatment of it, integrating it with the subjective view of probability proposed by de Finetti and Ramsey; the Foundations have since been seen as the basis of modern inferential statistics.

The analytical results of this research are important but cannot be considered to constitute the crowning moment of the general equilibrium research programme started by Walras. This much can be learnt from the developments of the traditional Walrasian approach (hence under the assumptions of perfect competition and absence of combinations), based on the axiomatic method and on the use of topology: the results concerning the demonstration of the existence of solutions for the general equilibrium model (Wald 1936, Nash 1950, Arrow and Debreu 1954, Debreu 1959)[3] are accompanied by negative results concerning the uniqueness and stability of equilibrium. Reformulation of the problem in terms of decision theory opened a way to circumvent these issues, as well as the limits of the assumption of convex preference sets, the unrealism of which is evident especially when applied to production technologies.5 These limits are in any case ignored in the systematic presentations of economic theory, beginning with the one that constituted the reference point for generations, Samuelson's Foundations of Economic Analysis (1947).6

Another negative result concerned the impossibility of extending the consistency of the system of choices from the individual to society. Kenneth Arrow (1921-2017, Nobel Laureate 1972) in Social Choice and Individual Values (1951) proposed the impossibility theorem, according to which no decisional procedure exists that simultaneously satisfies two requirements: first, to guarantee the transitivity of social choices among three or more alternatives (if A is preferred to B and B is preferred to C, A too is preferred to C); second, to satisfy some requirements of democracy expressed in formal terms: for instance, if one of the alternatives goes up in an individual's ranking, while all other individuals' rankings remain unchanged, that alternative cannot go down in the ranking for society as a whole. In other words, even while relying on complete and transitive individual orderings of preferences, we cannot obtain a complete and transitive social ordering of preferences,[4] as the majority-voting sketch below illustrates.

  • 5 Let us recall that increasing returns are incompatible with the assumption of competition. The attempts in recent years to introduce local concavities in production sets correspond to the quest for relatively untried fields of enquiry rather than to a real understanding of the crucial importance of this limit of general equilibrium analysis. For a survey of the results reached in various fields by research on general equilibrium models, cf. Mas-Colell et al. (1995).
  • 6 In a new 1983 edition, the 1947 text was reissued without modifications, with the addition of new material at the end. We find there, in the context of an illustration of input-output systems, some references to Sraffa's analysis and to the capital theory criticisms discussed in Chapter 16. The references in Samuelson's book, however, imply a misinterpretation of Sraffa's analysis and a reductive evaluation of the weight of the criticism stemming from it for the marginalist theory of value and distribution. In fact Samuelson, assimilating Sraffa's analysis to Leontief's, interpreted it erroneously as a general equilibrium model within which the assumption of constant returns to scale (explicitly ruled out by Sraffa in his 1960 book) allows for the determination of relative prices without consideration of the demand side. Moreover, as far as the criticism is concerned, Samuelson reduced it (following in this a presentation of the criticism by Joan Robinson, 1953, and thus preceding the publication of Sraffa's book) to a critique of the aggregate notion of capital utilised in aggregate production functions (such as the famous Cobb-Douglas, which constitutes the foundational pillar of Solow's theory of growth, to be discussed in Section 17.3: 'the simpliste J.B. Clark parable of a platonic capital stuff', as Samuelson called it, 1947, p. 568 of the 1983 edition): hence a critique considered valid but not applicable to the 'general' marginalist model. Thus the fact that Sraffa's critique concerned not only and not so much the aggregate notion of capital as also and mainly the impossibility of demonstrating the existence in general of an inverse relation between the real wage rate and employment remained out of the picture, though such an inverse relationship is essential for the existence of the marginalist equilibrating mechanism leading to full employment (the invisible hand of the market), which, as we shall see in Section 17.3, remained the foundational pillar of mainstream macroeconomics. From this followed a separation between a 'lowbrow theory', which utilises the aggregate supply function, and a 'highbrow theory', that of general equilibrium, endowed with internal consistency but devoid of definite results, and within which the simplistic parables obtainable through the aggregate production function are out of place.

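A minimal illustration of the difficulty (the classic Condorcet cycle, with voter preferences invented for the purpose, rather than Arrow's full argument): three voters with perfectly transitive individual rankings generate an intransitive majority ordering.

```python
# Sketch of the Condorcet cycle (illustrative preferences). Each voter's
# ranking, listed best first, is complete and transitive; the social
# ordering produced by pairwise majority voting is not.
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    return sum(r.index(x) < r.index(y) for r in voters) > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} to {y}:", majority_prefers(x, y))
# All three lines print True: A beats B, B beats C, yet C beats A,
# so no transitive social ordering can be recovered.
```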

Axiomatic general economic equilibrium theory has been considered by many as the frontier of basic research in the field of economics and as a compulsory reference for any economic enquiry - in other words, as a programme for the reduction of the whole of economic theory to a central core: a precise set of axioms from which, with the addition of further assumptions that may change from case to case, we can deduce a series of theorems constituting a complete representation of economic reality, or at least of everything in economic reality that is capable of scientific expression. As a matter of fact, the results of this research (multiplicity of equilibria, non-demonstrability of stability, impossibility of dropping the axioms of convexity of production sets) make it impossible to utilize general equilibrium models directly in the analysis of real-world issues. References to general equilibrium analysis in practice commonly amount to recourse to simplified - one-commodity, one-representative-agent - models, by now prevailing in mainstream macroeconomics, or, in other fields of enquiry, to the Marshallian ceteris paribus clause that opens the way to partial equilibrium analysis. The problem of the contradiction between the requirement of realism and the requirement of logical consistency thus raises its head once again.

Beginning in the 1970s, research within the general economic equilibrium approach focused on the limits set to the optimal functioning of the market by various circumstances. Thus, the impossibility of fully specifying all aspects of an agreement gave rise to the so-called principal-agent problem, that is, the possibility that the person who accepts responsibility for a certain task (the agent) utilises the available margins of freedom of action in his/her own interest rather than in the interest of the person who entrusts him/her with the task (the principal). A vast literature discusses the problem of designing incentive structures that induce the agent to adopt the principal's interests as her/his own. Analogous is the case of asymmetric information, i.e. the fact that different agents are endowed with different information sets; this is utilised for instance in explaining the mechanism of adverse selection, by which the bad commodity squeezes the good commodity out of the market owing to the different availability of information to seller and buyer.[5] Quite often, though, these models fall into the category of partial equilibrium analysis because of the simplifications obtained through ad hoc assumptions; yet without such simplifications it is practically impossible to derive meaningful results from the analysis. Thus, interest in general equilibrium theory has drastically declined in recent years.
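A minimal sketch of the adverse-selection mechanism (the numbers are invented; the logic follows the used-car example in note [5]): sellers know the quality of their own car, while buyers observe only the average quality of the cars still on offer, so the best cars withdraw first and the market unravels.

```python
# Sketch of Akerlof-style adverse selection (illustrative numbers only).
qualities = [100 * k for k in range(1, 11)]  # sellers' valuations: 100..1000
PREMIUM = 1.2                                # buyers value any given car 20% more

offered = list(qualities)
while True:
    # Buyers bid the expected buyer-value of the cars currently on the market.
    price = PREMIUM * sum(offered) / len(offered)
    # Sellers withdraw any car worth more to them than the going price.
    remaining = [q for q in qualities if q <= price]
    if remaining == offered:
        break
    offered = remaining

print(f"equilibrium price: {price:.0f}, cars still traded: {offered}")
# Each round the average quality of what remains falls, dragging the price
# down with it, until only the lowest-quality cars are traded.
```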

  • [1] Morgenstern had migrated from Vienna to Princeton in 1938 for political reasons.
  • [2] The assumption of a regular (complete, transitive and continuous) ordering of preferences in itself implies ordinal utility functions; von Neumann and Morgenstern obtain cardinal utility functions thanks to the assumption of an arithmetical average of utilities weighted with the probabilities of the outcomes (I owe this point to Aldo Montesano). The preference ordering can be obtained (ibid., p. 18 note) by questioning individual agents; such data are held to be reproducible (ibid., p. 24: this means, although the authors do not make it explicit, that individual preference systems are stable over time). Von Neumann and Morgenstern (1944, p. 17) consider utility a natural phenomenon, objectively measurable, following in this the marginalist pre-Pareto tradition: 'Even if utilities look very unnumerical today, the history of the experience in the theory of heat may repeat itself, as happened (in different ways) for the theory of light, of colours and of radio waves.'
  • [3] In the early 1950s, Gerard Debreu (1921-2004, Nobel Laureate in 1983) was a colleague of Arrow at the Cowles Commission at Chicago and then remained in America as professor first at Yale and then at Berkeley. In Debreu 1959 and in other works the general economic equilibrium model is extended to take account of 'dated' commodities (a barrel of corn available at a given date is different from a barrel of corn available at a different date) and 'contingent commodities' (the same commodity, an umbrella for instance, is considered as a different commodity depending on the 'state of nature', for instance, whether it is a sunny day or it rains); it is also possible to translate contingent markets into markets for insurance certificates concerning the different possible 'states of nature' (assuming that the set of all possible states of nature may be univocally defined, with each state of nature fully specified - an untenable assumption, as recalled in Section 16.4 with reference to Wittgenstein).
  • [4] Notwithstanding Arrow's negative result, the analytical tools of the theory of rational agents' decisions have been utilised to study the behaviour of electors, politicians and bureaucrats, thus originating the field of enquiry of public choice. The main exponent of this stream is James Buchanan.
  • [5] George Akerlof's (b. 1940, Nobel Laureate 2001; cf. Akerlof 1970) example is that of the used-car market: the buyer is unable to evaluate exactly the condition of the used car offered for sale, and it is likely that if the price demanded is the average one for a car of that age, the specific car offered for sale is of inferior quality compared to the average one. The cases to which this theory is applicable are numerous: from selection among loan applications to selection among potential insurance clients and selection among workers for hire.
 