The scope of the volume and its contributions

This volume explores fundamental uncertainty in the light of a set of connected research lines that have taken shape in recent years as a result of investigation in a variety of experimental and theoretical fields. Prominent features of the above research scenario are (i) a shift from general to context-specific (or context-specifiable) canons of rationality; (ii) increasing attention to the heterogeneity of cognitive attitudes; (iii) the central role assumed by framing and focusing; and (iv) interest in non-monotonic inferences and discontinuous changes of cognitive state (cognitive jumps). In particular, lack of certainty is examined from both the ontological and the epistemic viewpoints, reliability of evidence is assigned a central role, and similarity judgements are considered a necessary condition of probability judgements. The above setting lends itself to a theory of uncertainty associated with the analysis of concept formation, likeness and distance more than with the inferential structure of probabilistic reasoning.

The essays in this volume address fundamental uncertainty from the points of view of philosophy, information science, decision theory and statistics, and economic analysis. Many themes recur across these disciplines, and the chapters of the volume suggest manifold opportunities for cross-fertilization. In particular, the volume highlights fields of interest such as the justification of the intersubjective grounding of judgement under uncertainty after Ramsey's criticism of Keynes (see below), the distinction between individual decisions and the properties of the overall system within which those decisions are taken, and the specific features of plausible decisions (that is, defensible but not uncontroversial decisions) in the economic and moral domains under fundamental uncertainty.

In Chapter 2, 'Keynes and Ramsey on Probability', Henry E. Kyburg calls attention to the discussion between Keynes and Frank P. Ramsey on the proper epistemological role of probability. The author argues that neither side understood the other, and even that they failed to understand the issue that separated them. In particular, Kyburg argues that, although Keynes was at times unclear, he was basically right about the methodological issues. His contribution starts from the acknowledgement that 'in the philosophical world of the nineteenth century, "intuition" did not carry overtones of arbitrariness or personal whimsy' (section 2.1.1), so that Keynes's appeal to an intuitive conception of probability would be entirely consistent with his view that 'in the sense important to logic, probability is not subjective' (Keynes, 1973 [1921], p. 4; as quoted in Kyburg, 2.1.2). Building on Keynes's analysis of different ordered series of probabilities (Keynes, 1973 [1921], pp. 36-43), Kyburg develops a theory of probabilities as forming 'a lattice structure' such that '[u]pper and lower bounds for any probabilities exist [...]—namely 0 and 1' (Kyburg, section 2.1.3). In this connection, the conjecture that probability values could be conceived as intervals is seen as providing an answer to Keynes's problem of whether 'the meet and join of any two probabilities exist' (section 2.1.3). After a detailed presentation of the exchange between Keynes and Ramsey, Kyburg returns to Keynes's interest in partial rankings of probabilities and argues that, from that point of view, 'those probabilities to which Ramsey's arguments apply may constitute a small fraction of probabilities' (section 2.3).
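
To fix ideas on the lattice conjecture, the following sketch represents probabilities as intervals ordered componentwise, so that a meet and a join always exist and [0, 0] and [1, 1] play the role of the bounds mentioned above. The representation and the ordering are illustrative assumptions, not Kyburg's own construction.

```python
# A minimal sketch, assuming interval-valued probabilities ordered
# componentwise; an illustration of the lattice idea, not Kyburg's formalism.
from dataclasses import dataclass

@dataclass(frozen=True)
class IntervalProb:
    lower: float  # lower probability bound
    upper: float  # upper probability bound

    def meet(self, other):
        """Greatest lower bound under the componentwise order."""
        return IntervalProb(min(self.lower, other.lower), min(self.upper, other.upper))

    def join(self, other):
        """Least upper bound under the componentwise order."""
        return IntervalProb(max(self.lower, other.lower), max(self.upper, other.upper))

BOTTOM = IntervalProb(0.0, 0.0)   # the impossible proposition
TOP = IntervalProb(1.0, 1.0)      # the certain proposition

p = IntervalProb(0.2, 0.6)        # two probabilities that are not fully ranked
q = IntervalProb(0.3, 0.5)
print(p.meet(q), p.join(q))       # meet and join exist even though p and q are incomparable
```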

In Chapter 3, 'The Weight of Argument', Isaac Levi examines the role of 'balancing reasons' in inductive arguments and discusses what Keynes called the 'somewhat novel question' of the balance between favourable and unfavourable evidence. In particular, Levi takes up Charles Peirce's criticism of the 'conceptualist' approach to decision making under uncertainty (the approach interpreting terms such as 'certain' and 'probable' as describing degrees of rational belief) and stresses that, according to Peirce, the amount of knowledge relevant to decision making 'cannot be accounted for on the conceptualist view but can on the view that insists that belief probabilities be derivable via direct inference from statistical probability' (section 3.2). Peirce's view presupposes that belief probability can be grounded on statistical probability. Keynes was critical of this assumption while acknowledging that belief probability can itself be indeterminate. Unlike Peirce, Levi thinks that 'one needs to be in a position to make moderately determinate judgements of belief probability without grounding in objective or statistical chance' (section 3.4). He also emphasizes the cognitive-value dimension attached to Keynes's discussion of evidential weight, which 'is in this sense independent of the specific goals of the practical decision problem' (section 3.6). The essay concludes by proposing to step beyond Keynes's own analysis: it acknowledges the symmetrical roles of belief and disbelief functions and recognizes that the formal properties of belief and disbelief according to G. L. S. Shackle are closely parallel to the properties that any given argument should have in order to be sufficiently close to proof or disproof.

Chapter 4 by Vincenzo Fano, 'A Critical Evaluation of Comparative Probability', takes up the discussion of probability judgements as judgements of relative (comparative) probability, and outlines an assessment of the Keynes-Ramsey debate starting from the idea that 'it is often possible to establish a comparison between probabilities, but not to determine their quantitative value' (section 4.2). For example, as acknowledged by Keynes, even brokers dealing with disaster insurance 'have to establish only that the probability of the disaster happening is lower than a certain value' (section 4.2; see also Keynes, 1973 [1921], p. 23). However, recognition of the widespread use of comparative (not quantitative) probability judgements exposes an epistemological dilemma: either the treatment of uncertainty is confined to the special cases in which Ramsey's argument applies (and quantitative probabilities are identifiable), or it is extended beyond Ramsey's circumscription to cases in which probability can only be of the comparative type and in which rules governing the updating of probability in view of augmented evidence cannot be established. At this point of his argument, Fano turns his attention to comparability itself, and introduces the distinction between homogeneous probabilities, which have either the same hypothesis or the same evidence, and inhomogeneous probabilities: the former are always comparable, whereas this is not generally true of the latter. Comparative probability is shown to have epistemological advantages, such as the possibility of assessing comparative probabilities from relative frequencies. However, comparative probability, too, is marred by the lack of a probability measure that would make it possible to update rational belief in view of augmented evidence.
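
Keynes's broker example can be made concrete with a small sketch: the decision requires only the comparative judgement that the disaster probability lies below the break-even level implied by premium and payout, not a point value. The premium, payout, counts and the crude conservative bound below are all invented for illustration and do not come from the chapter.

```python
# A minimal sketch of a purely comparative judgement: the broker only needs
# P(disaster) < break-even level, not a numerical probability.
# All figures and the conservative bound are hypothetical.

premium = 120.0                      # annual premium per policy
payout = 100_000.0                   # indemnity paid if the disaster occurs
p_break_even = premium / payout      # largest disaster probability at which the policy still pays

observed_disasters = 2               # sparse frequency data
policy_years = 5_000
p_upper_bound = (observed_disasters + 3) / policy_years   # crude conservative upper bound (assumption)

print(f"break-even probability:      {p_break_even:.4f}")
print(f"upper bound on P(disaster):  {p_upper_bound:.4f}")
print("write the policy" if p_upper_bound < p_break_even else "decline or reprice")
```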

The relationship between the ontological and epistemic features of uncertainty is taken up by Roberto Scazzieri in his contribution 'A Theory of Similarity and Uncertainty' (Chapter 5). This chapter starts from the premise that, under the most general assumptions, uncertainty entails both a lack of determinacy and imprecise knowledge. The former is an ontological property of the universe under consideration; the latter is an epistemic property of the agents in that universe. Scazzieri conjectures that there may be a trade-off between ontological and epistemic precision, and that the domain of reasoning under uncertainty coincides with the collection of intermediate situations between ontological precision and epistemic precision. The two polar cases point to the existence of intermediate situations in which ontological precision ('circumscription') is sufficiently low to allow the identification of partial similarity, but similarity itself is not too high, so that occurrences beyond uniformities (that is, novelties) are possible. Following a suggestion in Keynes's TP, Scazzieri examines the analogy between similarity and probability and emphasizes that, like standard similarity judgements, likelihood judgements presuppose a plurality of ordered series in terms of which a reasonable judgement may be expressed. In particular, this contribution highlights the role of crossovers between different serial orders, which may be interpreted as corresponding to situations in which different ontologies coincide at a given point of time. This means that the very plurality of uncertainty dimensions that makes it difficult in general to assess any given situation may turn out to be an advantage in the special circumstances in which the same assessment of the situation in view is grounded in a plurality of different orders of likelihood.

In Chapter 6, 'Generalized Theory of Uncertainty: Principal Concepts and Ideas', Lotfi A. Zadeh outlines a theory of uncertainty in which uncertainty is considered an attribute of information, and information itself is seen as subject to a 'generalized constraint' that determines which propositions, commands and questions can be expressed by means of any given language. Reasoning under uncertainty is thus treated as 'generalized constraint propagation', that is, as a process by which constraints upon the uses of language determine which inferences are possible on the basis of available information (section 6.2). This point of view leads Zadeh to delve into the relationship between the ontological and the epistemic sides of uncertainty and, in particular, to examine the role of prototypical forms (or protoforms), which are considered as abstracted summaries needed to identify 'the deep semantic structure' of the corresponding objects to which they apply (section 6.14). Prototypical forms lead to the concept of granular structure, in which attention is focused on 'a clump of values [for any given variable X] drawn together by indistinguishability, similarity, proximity or functionality' (section 6.1). This approach leads Zadeh to introduce the distinction between probability and possibility and to conjecture that there are manifold kinds of uncertainty: probabilistic uncertainty, uncertainty associated with ontological possibility (possibilistic uncertainty), and various combinations of the two. In short, information should be considered a generalized constraint, with statistical uncertainty being a special case; fuzzy logic should be substituted for bivalent logic; and information expressed in natural language should be assigned a central role. This strategy is considered the most effective way of dealing with real-world constraints, which are mostly elastic rather than rigid and have a complex structure even when apparently simple.
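
The contrast between rigid and elastic constraints can be illustrated with a minimal sketch: a bivalent constraint is either satisfied or violated, whereas a fuzzy constraint is satisfied to a degree. The variable, the reference value 30 and the tolerance 10 are illustrative choices, not taken from Zadeh's chapter.

```python
# A minimal sketch (not Zadeh's own notation) of a rigid (bivalent) constraint
# versus an elastic (fuzzy) constraint on the same variable.

def rigid_constraint(x):
    """Bivalent constraint 'X is 30': either satisfied or violated."""
    return x == 30

def elastic_constraint(x, centre=30, tol=10):
    """Fuzzy constraint 'X is approximately 30': satisfied to a degree in [0, 1]."""
    return max(0.0, 1.0 - abs(x - centre) / tol)

for x in (30, 33, 38, 45):
    print(x, rigid_constraint(x), round(elastic_constraint(x), 2))
```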

The relationship between information and the nature of uncertainty is also central to Keynes's proposal that the probability of arguments cannot be fully assessed unless we also introduce a measure of our confidence in those arguments (Keynes's 'weight of arguments'). Chapter 7 by Alessandro Vercelli, 'Weight of Argument and Economic Decisions', sets out to clarify the relationship between the 'weight of argument' in Keynes's TP and some crucial passages of The General Theory of Employment, Interest and Money (Keynes, 1973 [1936]) (GT). In particular, Vercelli points out that Keynes's most innovative contribution is to be found in the use of this concept in interpreting economic decisions. After discussing alternative definitions of the weight of argument in TP and GT, Vercelli emphasizes the need to establish a hierarchical relation between probability and weight of argument: probability is considered a first-order uncertainty measure, while 'uncertainty' in the strict sense is associated with second-order uncertainty as measured by the weight of argument. It is nowadays increasingly acknowledged that there are no binding objections that preclude the analysis of different modalities of uncertainty. In particular, Keynes's reaction to Ramsey's criticism should now be reassessed, as Keynes was induced to broaden the scope of non-demonstrative inference, which could be seen as relative not only to the premises and background knowledge of arguments but also to their pragmatic and semantic context. Keynes's revised view is central to the treatment of uncertainty and the weight of argument in GT, and explains his growing attention to social psychology. According to Vercelli, Keynes's view that it is impossible to insure against the (negative) effects of a change in the weight of argument provides strong decision-theoretic foundations for his fundamental message that the market may be unable to regulate itself, so that full employment can be restored and maintained only through a well thought-out economic policy.
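
The distinction between a first-order probability and the weight behind it can be given a stylized numerical form: two bodies of evidence may support the same probability while differing greatly in how much evidence backs it. The sketch below uses a Beta posterior as a stand-in for that second-order difference; this is an illustrative device, not Keynes's or Vercelli's own formalism, and the counts are invented.

```python
# A stylized illustration of first-order probability versus weight of evidence.
# Two evidential bases give the same probability of success, but the thinner
# one leaves far more second-order spread. Counts are hypothetical.
from math import sqrt

def beta_posterior_summary(successes, failures, a0=1.0, b0=1.0):
    """Posterior mean and standard deviation of a Beta(a0, b0) prior updated
    with the given counts; the spread proxies for (lack of) weight."""
    a, b = a0 + successes, b0 + failures
    mean = a / (a + b)
    sd = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

# Thin evidence and ample evidence pointing to the same first-order probability.
for s, f in ((2, 2), (200, 200)):
    mean, sd = beta_posterior_summary(s, f)
    print(f"n = {s + f:3d}: probability = {mean:.2f}, second-order spread = {sd:.3f}")
```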

Fundamental uncertainty raises the issue of whether we may be justified in accepting the principle of indifference. In Keynes's words, this principle asserts that 'if there is no known reason for predicating of our subject one rather than another of several alternatives, then relatively to such knowledge the assertions of each of these alternatives have an equal probability' (Keynes, 1973 [1921], p. 45). The principle of indifference entails comparing 'the likelihood of two conclusions on given evidence' (Keynes, 1973 [1921], p. 58) and must be distinguished from a criterion of relevance, according to which we should consider 'what difference a change of evidence makes to the likelihood of a given conclusion' (ibid.). In the former case (likelihood of conclusions versus indifference), we are asking 'whether or not x is to be preferred to y on evidence h' (Keynes, 1973 [1921], p. 58); in the latter case (relevance versus irrelevance), we should evaluate 'whether the addition of h1 to evidence h is relevant to x' (Keynes, 1973 [1921], p. 59). Likelihood of conclusions and relevance of evidence are symmetrical features of inductive knowledge under the assumption of a fundamental regularity in nature and society. Chapters 8 and 9 examine the structure of induction by discussing, respectively, Keynes's concept of 'coefficient of influence' and the implications of lack of regularity under uncertainty due to very large coefficients of variation. Chapter 8 by Domenico Costantini and Francesco Garibaldi, 'The Relevance Quotient: Keynes and Carnap', discusses the issue of relevance (that is, the influence of conclusion b upon conclusion a on hypothesis h) by comparing Keynes's 'coefficient of influence' with the concept of 'relevance quotient' introduced by Rudolf Carnap. The authors introduce a condition of invariance for the relevance quotient close to Keynes's coefficient of influence; they argue that this condition governs the stochastic dependence of a new observation upon the data, and is an important tool in solving inductive problems. In particular, they maintain that 'the notion of relevance quotient [...] cannot be introduced without having at one's disposal a relative notion of probability' (section 8.7), and suggest that their relevance quotient is especially useful in contexts, like physics, biology and economics, where probability can be regarded as ontological. Examples of probabilistic dynamics are discussed, and it is highlighted that changes of long-term expectations are 'indissolubly tied to a probability which is changing with evidence' (ibid.). The authors also call attention to the central role of the invariance condition ensuring that mean values are unaffected by changes in individual distribution. Finally, Chapter 8 asks whether the probability studied in Keynes's Treatise on Probability is epistemic or ontic, and concludes by calling attention to the fact that Keynes emphasized his 'fundamental sympathy' with the stochastic approach to biology and statistical physics.
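
The relevance idea can be illustrated with ordinary conditional probabilities on an invented finite example: additional evidence h1 is relevant to x whenever conditioning on it changes the probability of x. Whether the simple ratio used below coincides exactly with Keynes's coefficient of influence or Carnap's relevance quotient is a matter for Chapter 8; the numbers are hypothetical.

```python
# A minimal sketch of relevance using standard conditional probabilities.
# Joint probabilities of (x, h1) given background evidence h (invented numbers).
joint = {
    ("x", "h1"): 0.30,
    ("x", "not_h1"): 0.10,
    ("not_x", "h1"): 0.20,
    ("not_x", "not_h1"): 0.40,
}

p_x = joint[("x", "h1")] + joint[("x", "not_h1")]        # P(x | h)
p_h1 = joint[("x", "h1")] + joint[("not_x", "h1")]       # P(h1 | h)
p_x_given_h1 = joint[("x", "h1")] / p_h1                 # P(x | h1 and h)

# Ratio > 1: h1 is favourably relevant to x; < 1: unfavourably relevant; = 1: irrelevant.
relevance_ratio = p_x_given_h1 / p_x
print(f"P(x|h) = {p_x:.2f}, P(x|h1,h) = {p_x_given_h1:.2f}, ratio = {relevance_ratio:.2f}")
```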

Chapter 9 by Masanao Aoki, 'Non-Self-Averaging Phenomena in Macroeconomics: Neglected Sources of Uncertainty and Policy Ineffectiveness', addresses lack of regularity by examining the behaviour of non-self-averaging macroeconomic models, that is, models in which the coefficient of variation of some random variable (the ratio of its standard deviation to its mean) does not tend to zero as n tends to infinity. The chapter examines questions of policy effectiveness in such models, and shows that, in general, the larger the coefficient of variation, the smaller the policy multiplier. There are examples in which policy actions become totally ineffective as the value of the coefficient of variation tends to infinity. It is argued that a particularly important feature of non-self-averaging in macroeconomic simulation is that it can give rise to uninformative or misleading policy results. Specifically, the convergence of non-self-averaging models when simulated using Monte Carlo methods is much slower than that of self-averaging models. Policy-effect simulations tend to become uninformative or misleading because a very large number of simulation runs may be required before extreme values appear and overturn the sorts of conclusion drawn from a small number of simulations in which only the most probable results appear. It is argued that conventional simulations and analyses with quadratic cost criteria are all associated with self-averaging results and say nothing about the behaviour of non-self-averaging models, which points to a serious fault in the use of representative agents. Aoki's contribution is an important warning that microeconomic exercises leading to 'a better understanding of the dynamics of the mean or aggregate variables' cannot lead to a better understanding of the overall dynamics of the economic system if non-self-averaging fluctuations are considered (section 9.1; see also Aoki, 2002).
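
The self-averaging distinction can be made tangible with a small Monte Carlo sketch that is not one of Aoki's models: the share of heads in repeated coin flips is self-averaging (its coefficient of variation shrinks as n grows), whereas the share of red balls in a Polya urn converges to a random limit, so its coefficient of variation does not vanish. Sample sizes and run counts below are illustrative.

```python
# A minimal Monte Carlo sketch contrasting a self-averaging statistic with a
# non-self-averaging one (the Polya urn is a standard textbook example).
import random
from statistics import mean, pstdev

def iid_share(n):
    """Share of heads in n fair coin flips: self-averaging, CV ~ 1/sqrt(n)."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

def polya_share(n):
    """Share of red balls after n draws from a Polya urn started with one red
    and one blue ball: the share converges to a random limit, so the CV does
    not vanish as n grows."""
    red, total = 1, 2
    for _ in range(n):
        if random.random() < red / total:
            red += 1
        total += 1
    return red / total

def coefficient_of_variation(values):
    return pstdev(values) / mean(values)

random.seed(0)
for n in (100, 1_000, 10_000):
    iid = [iid_share(n) for _ in range(200)]
    urn = [polya_share(n) for _ in range(200)]
    print(f"n = {n:6d}: CV of iid share = {coefficient_of_variation(iid):.3f}, "
          f"CV of Polya share = {coefficient_of_variation(urn):.3f}")
```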

Non-regularity in individual behaviour has far-reaching implications for the analysis of the economic (or social) system as a whole and for the method most suitable to that objective. In particular, non-regularity points to the existence of a wedge between the universe of individual decision makers (micro-world) and the system as a whole (macro-world) as traditionally conceived, and calls for innovative theoretical effort to overcome the problem. Thus, in Chapter 10, 'A Critical Reorientation of Keynes's Economic and Philosophical Thoughts', Izumi Hishiyama addresses the above issue by first considering 'the difficult core of Keynesian thought—"the logical justification of inductive methods"' (Keynes, as quoted in section 10.1). The specific route followed by Keynes in order to justify induction leads to the analysis of the epistemic conditions for inductive knowledge, that is, of the assumptions of 'atomic uniformity' and 'limitation of independent variety'. According to Hishiyama, this point of view may be connected with ethical individualism (as suggested by Keynes himself in TP), but it is explicitly rejected in GT when Keynes deals with the economic system as an organic unit. In particular, the author deals with the methodological assumptions behind GT and takes up Luigi Pasinetti's view that the effective demand principle is 'quite independent of any behavioural relations and thus of any particular adaptation mechanism' (Pasinetti, as quoted in section 10.7). On the other hand, fundamental (non-measurable) uncertainty characterizes Keynes's representation of the micro-world. The dual character of Keynes's thinking calls attention to the whole-versus-parts issue and leads Hishiyama to look for a way of making Keynes's treatment of uncertainty consistent with the organic approach in the analysis of the economic system as a whole. In this connection, Hishiyama calls attention to Pasinetti's proposals that sectorally differentiated demands should be considered and that a general macroeconomic condition (expressed in a multi-sectoral framework) should be substituted for Keynes's original formulation of effective demand theory. In this way, a criterion for moving back and forth between aggregate and disaggregate levels of investigation is introduced, and fundamental uncertainty is made compatible with a certain degree of determinacy at the macro-level.

John Allen Kregel and Eric Nasica, in Chapter 11, 'Uncertainty and Rationality: Keynes and Modern Economics', also highlight the fact that 'the crucial point for Keynes, as for Knight, is the inadequacy of statistical quantification in the form of a probability for the analysis of uncertainty', an aspect of rational decision in conditions of fundamental uncertainty that has been neglected by orthodox economists (section 11.2.1). According to these authors, the consideration of crucial decisions, such as those leading to irreversible and non-repeated actions, marks the boundary line between the Keynesian and the traditional neoclassical approach to uncertainty. Nor does the 'new' classical theory, in the version of the rational expectations hypothesis, admit situations of fundamental uncertainty, since this theory assumes that the economic system moves according to a stationary stochastic process which also has the characteristics of an ergodic process. Therefore, according to the theory of rational expectations, decisions in conditions of fundamental uncertainty are 'excluded, or classified as non rational' (see section 11.3.2), while 'post-Keynesian analysis develops a theory of the formation of expectations applicable to situations in which the degree of rational belief is less than certain' (ibid.). In fact, Keynesian economists admit a non-ergodic environment, and believe that the traditional conception of rationality has to be reformulated in order to describe situations of 'expectational instability'.
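
Why ergodicity matters for the rational-expectations view can be seen in a toy sketch that is not taken from the chapter: in an ergodic process the time average of a single history recovers the common ensemble mean, whereas in a non-ergodic process each history settles on its own permanent level, so averaging over one observed past is no guide to the ensemble. All parameters are invented.

```python
# A toy illustration of ergodic versus non-ergodic behaviour of time averages.
import random
from statistics import mean

def ergodic_path(t):
    """i.i.d. noise around zero: the time average of any single history
    converges to the common ensemble mean (zero)."""
    return [random.gauss(0, 1) for _ in range(t)]

def non_ergodic_path(t):
    """A permanent component drawn once per history: time averages differ
    across histories and never converge to a common ensemble mean."""
    level = random.gauss(0, 1)
    return [level + random.gauss(0, 0.1) for _ in range(t)]

random.seed(1)
t = 10_000
print("time averages, ergodic histories:    ",
      [round(mean(ergodic_path(t)), 2) for _ in range(3)])
print("time averages, non-ergodic histories:",
      [round(mean(non_ergodic_path(t)), 2) for _ in range(3)])
```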

The concluding chapter by Silva Marzetti Dall'Aste Brandolini, 'Moral Good and Right Conduct: A General Theory of Welfare under Fundamental Uncertainty', deals with competing moral systems (often associated with competing social philosophies), and aims to identify the general characteristics that a general theory of welfare (GTW) must have from the point of view of rationality when conditions of fundamental uncertainty are also admitted. Welfare economics consists of a number of theoretical models that may be distinguished according to the conception of moral value on which they are grounded and the right conduct they suggest. As regards moral values, two fundamental conceptions of moral good exist, which justify the existence of different approaches to welfare economics: the ethics of motive, which makes reference to subjective values, and the ethics of end, which also admits objective values (see sections 12.2 and 12.4). As regards right conduct, Bayesian reductionism and rational dualism are two different ways of considering uncertainty about economic phenomena. Bayesian reductionism assumes that agents are able to identify numerical subjective probabilities, and admits the maximization procedure only; rational dualism also admits non-measurable probabilities and procedures other than maximization (see sections 12.3 and 12.5). The awareness that a rational choice between competing moral systems cannot be made leads the author to think in terms of a GTW; and in section 12.6 it is shown that, from the point of view of moral values, a GTW must admit all the possible values in which a society can believe, while, from the point of view of instrumental rationality, it must admit not only situations where decision makers have all the information and computing capabilities needed by the maximization procedure, but also situations where they do not have adequate information and behave under conditions of fundamental uncertainty.
