Perfect and Ideal Rationality

Notice that, so far as a’s decisions are concerned, decision sequence 1', 2', 3', with its payoffs, is identical to 1, 2, 3 in the intertemporal inconsistency example. The major difference is that it is decision maker b’s belief that a has a weak will, rather than a’s belief that a’s will is weak, that puts Alt2, up, up out of a’s reach. Supposing b to be rational, what basis might he have for that belief? One possibility is that we define rationality as maximization constrained by weakness of will. Then we need only apply common knowledge of rationality to induce b’s belief in a’s weakness of will. I submit that this is indeed the concept of rationality in noncooperative game theory and in neoclassical economics. In what follows, choices that maximize payoffs subject to the constraint of weakness of will will be called perfectly rational choices, not because their outcomes are perfect (as the example shows) but because it is rationality in this sense that defines subgame perfect equilibrium.
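
To make the mechanism concrete, here is a minimal backward-induction sketch of a three-stage decision of this shape. The tree and all payoffs are hypothetical stand-ins (the book’s own figures are not reproduced here); the essential assumption is that the stage-2 self ranks the two continuations differently, so that anticipating this deviation forecloses the best path.

```python
# A backward-induction sketch of perfect rationality: maximization
# constrained by weakness of will.  All payoffs are hypothetical.

# Payoffs of the three paths as ranked at the FIRST decision point:
ex_ante = {"down": 3, ("up", "down"): 2, ("up", "up"): 5}

# Rankings at the SECOND decision point, where weakness of will
# (e.g., present bias) reverses the order of the two continuations:
at_stage2 = {"down": 4, "up": 3}

# Stage 2: the weak-willed self maximizes its own, reversed ranking.
stage2_choice = max(at_stage2, key=at_stage2.get)        # -> "down"

# Stage 1: perfect rationality anticipates the stage-2 deviation, so
# the ex ante value of "up" is what the stage-2 self will deliver:
value_of_up = ex_ante[("up", stage2_choice)]             # -> 2, not 5

stage1_choice = "down" if ex_ante["down"] >= value_of_up else "up"
print(stage1_choice)  # "down": the best path (up, up), worth 5, is unreachable
```

The computed choice is “down”: precisely the sense in which perfect rationality puts the best outcome out of the decision maker’s reach.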

But common knowledge of perfect rationality is not the only possibility, and we need to consider others. First consider the possibility that b believes a is dishonest. Then b will not believe any assertions by a that he will choose “up” at decision point A2, and accordingly b chooses strategy 5'. But (1) a’s honesty is of concern to b only if b believes a has a strong will. If b believes a has a weak will, then b’s decision will not be affected by the further knowledge that a is honest or dishonest. (2) a can benefit by acting dishonestly only if b believes both that a has a strong will and that a is honest. (3) Accordingly, we must consider a four-stage game in which a’s decision whether to act honestly or dishonestly is the first stage. If a has a strong will he can commit himself to one or the other and carry out the commitment. (4) However, if b believes a has chosen to act honestly, then a’s best response is dishonesty. (5) Therefore, this first stage requires a mixed-strategy solution. (6) Since b is rational, he will be aware of this and will accordingly estimate the payoffs of strategies 4' and 5' as expected values reflecting the optimal mixed strategy for a, which is to act honestly with probability 2/5. Thus, b’s belief that a will be dishonest with probability 1 is either irrelevant or irrational.
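
Where does a mixing probability like 2/5 come from? In a mixed-strategy equilibrium, a randomizes between honesty and dishonesty so as to leave b indifferent between 4' and 5'. The following is a minimal sketch; the payoffs to b are hypothetical, chosen only so that the indifference condition reproduces the 2/5 cited above.

```python
from fractions import Fraction

# Hypothetical payoffs to b (chosen only to reproduce p = 2/5):
u_honest = 5      # b plays 4' and a proves honest
u_dishonest = 0   # b plays 4' and a proves dishonest
u_safe = 2        # b plays 5', whatever a does

# a mixes so that b is indifferent:
#   p*u_honest + (1 - p)*u_dishonest = u_safe
p = Fraction(u_safe - u_dishonest, u_honest - u_dishonest)
print(p)          # 2/5: a acts honestly with probability 2/5
```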

Common sense suggests that rationality, strength of will, and honesty are distinct traits: rational individuals exist in positive numbers with strong wills as well as with weak wills, and within each category some are honest and some are crooked. These traits are also relative, and a typical person is more likely to act in an honest and strong-willed way in some circumstances than in others. Suppose b believes that a very large proportion of all human beings have weak wills, but has no way to know which type a is. In that case, once again, he would estimate the payoffs of his choices as expected values, using probabilities based on the frequency of weakness of will in the population and such other evidence as he may have. To fail to do so would be irrational, or at best boundedly rational! Perfect rationality is naive on this score, and in what follows decisions based on maximization with estimates of the probability that other agents have strong wills and are honest will be called “sophisticated rationality” to distinguish them from the rationality expressed by subgame perfect equilibrium. (For the concept of sophisticated rationality and some evidence that there are multiple types of decision makers in a real human population, see Stahl and Wilson, 1995.)
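
As an illustration of sophisticated rationality, the sketch below computes b’s expected payoffs for 4' and 5' given an estimated frequency of strong wills in the population. The probability and all payoffs are hypothetical.

```python
# Sophisticated rationality: b weights each strategy's payoffs by his
# estimate of the frequency of strong wills.  All numbers hypothetical.

p_strong = 0.2                       # b's estimate that a has a strong will

# b's payoffs from 4' and 5' against each type of a:
payoff = {
    "4'": {"strong": 7, "weak": 1},  # 4' pays off only if a carries through
    "5'": {"strong": 4, "weak": 4},  # 5' is the safe, type-independent choice
}

def expected_value(strategy):
    return (p_strong * payoff[strategy]["strong"]
            + (1 - p_strong) * payoff[strategy]["weak"])

best = max(payoff, key=expected_value)
print({s: expected_value(s) for s in payoff}, "->", best)
# {"4'": 2.2, "5'": 4.0} -> 5'   (with a higher p_strong, 4' would win)
```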

It seems that b’s behavior, as assumed in subgame perfect equilibrium theory, can be rational only if b believes that weakness of will is a common trait of all human beings. This in turn can be considered a rational belief only if (1) it is true, or (2) b’s experience has been so idiosyncratic that it seems to b that the belief is true, although b is mistaken. We can eliminate (2) as inappropriate to be the basis of a general theory, and conclude that, for subgame perfect equilibrium theory, universal weakness of will is a necessary assumption.

If both weakness of will and perfect rationality are common human characteristics, then there is little point in distinguishing between them. But the results of such an identification can be rather peculiar. The results of the example of intertemporal inconsistency and of the two-person game from Figure 9.2 can both be stated in the following way: (1) Define rationality as perfect rationality. (2) Suppose decision maker a in fact adopts strategy 2 (or 2') and carries it out. (3) As a result of this choice, decision maker a is better off. (4) Decision maker a has acted irrationally. Stated in just that way, perfect rationality is not a very intuitively appealing concept of rationality.

How would von Neumann and Morgenstern have treated the game in Figure 9.2? In the first instance they would have expected the two players to form a coalition around strategies 2', 4', since the total value generated by that pair, $7371, dominates all other strategy pairs. This presents no difficulty if both have strong wills and are honest.
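
A sketch of that first step of the reasoning: the coalition forms around the strategy pair that maximizes combined value. Apart from the total of $7371 attributed to the pair 2', 4' above, every number in the table below is hypothetical.

```python
# Hypothetical payoff table standing in for Figure 9.2; only the
# combined value 7371 for the pair (2', 4') is taken from the text.
payoffs = {   # (a's strategy, b's strategy) -> (payoff to a, payoff to b)
    ("1'", "4'"): (2000, 2000),
    ("1'", "5'"): (1500, 2500),
    ("2'", "4'"): (4371, 3000),   # combined value 7371
    ("2'", "5'"): (1000, 2000),
}

best_pair = max(payoffs, key=lambda pair: sum(payoffs[pair]))
print(best_pair, sum(payoffs[best_pair]))   # ("2'", "4'") 7371
```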

In Game 9.3, the noncooperative equilibrium is also the assurance value for both players. Unlike Game 6.8, for example, Game 9.3 has no threat strategies. For a game like Game 6.8, von Neumann and Morgenstern seem to envision a negotiating process along the following lines: agent a says, “if you adopt strategy D2 I will adopt strategy P, leaving you with 5 rather than 7.” This is a threat designed to increase a’s bargaining power, and for von Neumann and Morgenstern (Nash to the contrary) all feasible threats are credible. But none of this makes sense unless each agent believes the other has strength of will enough to carry out his threats, even when they are irrational in the sense of perfect rationality. For von Neumann and Morgenstern, a rational agent maximizes his expected utility on the assumption that all agents maximize and all have strong wills. Strength of will is here considered an aspect of rationality. In what follows, rationality in this sense will be called “ideal rationality.”
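
The sketch below contrasts the two standards of rationality on the quoted threat. Only b’s payoffs of 5 and 7 come from the text; a’s payoffs are hypothetical, chosen so that carrying out P is costly to a, as carrying out a threat typically is.

```python
# Feasible versus credible threats.  b's payoffs 5 and 7 come from the
# text; a's payoffs are hypothetical.

# If b plays D2, a can acquiesce or carry out the threat strategy P:
a_payoff = {"acquiesce": 3, "carry_out_P": 1}   # hypothetical: P also hurts a
b_payoff = {"acquiesce": 7, "carry_out_P": 5}   # from the text: 5 rather than 7

# Perfect rationality: at the node after D2, a maximizes his own payoff,
# so he acquiesces (3 > 1) and the threat is not credible.
perfectly_rational_move = max(a_payoff, key=a_payoff.get)

# Ideal rationality: a strong-willed a carries out any feasible threat,
# so b must reckon with the payoff 5 when weighing D2.
ideally_rational_move = "carry_out_P"

print(perfectly_rational_move, b_payoff[perfectly_rational_move])  # acquiesce 7
print(ideally_rational_move, b_payoff[ideally_rational_move])      # carry_out_P 5
```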

Notice that the assumption that all feasible threats are credible is central to the definition of the coalition (characteristic) function in von Neumann and Morgenstern, and since most cooperative game theory is based on the characteristic function, we may conclude that the assumption of ideal rationality is characteristic of cooperative game theory. Thus, it is appropriate to distinguish between cooperative and noncooperative game theory by noting that while noncooperative game theory assumes perfect rationality, cooperative game theory assumes ideal rationality.

 