# Case 2: Indefinite repetitions

Aumann (CTG4, 1959, pp. 287-324) defined a *supergame* for a game Γ as an infinite sequence of repetitions of Γ. Clearly, the reasoning in the previous case will not apply to a supergame, since the supergame has no basic subgames. Every subgame is a sequence of repetitions of Γ indexed as j, j+1, . . ., without limit, so every subgame contains other proper subgames. We will have to use different methods to deal with supergames.

But is this realistic? After all, nothing lasts forever! However, Case 1 seems a little artificial in assuming a *definite* number of repetitions. How likely is it that oligopolists, or others engaged in a repeated game, would anticipate the exact number of repetitions that will occur? Suppose instead that at each repetition of Γ, the players can expect that there will be yet another repetition with probability δ, but the probability that there will be no further repetitions whatever is 1 − δ. Let *y_{j}* be the payoff to a player in the jth repetition of the game. Then at repetition t, the player wants to maximize the mathematical expectation Σ_{j=t}^{∞} δ^{j−t}*y_{j}*. This formula is the same as a formula for the discounted present value of the series of payments at a discount factor δ, and accordingly δ is referred to as the discount factor.
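As a quick numerical illustration of this expectation, the discounted value of a payoff stream can be computed by truncating the infinite sum. The function name, the value of the discount factor, and the truncation horizon below are illustrative assumptions, not from the text.

```python
# Sketch: expected discounted value of a payoff stream, truncating the
# infinite sum at a long finite horizon. The discount factor 0.9 and the
# 1000-round horizon are illustrative choices.

def discounted_value(payoffs, delta):
    """Sum delta**(j - t) * y_j over the stream, t being the first round."""
    return sum((delta ** k) * y for k, y in enumerate(payoffs))

# A constant payoff of 7 per round approximates the closed form 7 / (1 - delta).
approx = discounted_value([7] * 1000, 0.9)
print(round(approx, 6))  # close to 7 / (1 - 0.9) = 70
```

Truncation is harmless here: the omitted tail is smaller than δ raised to the horizon, which is negligible.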

^{6} Although the probability of more than M rounds of play approaches zero as M increases without bound, a game such as this has to be analyzed as an infinite game and has no basic subgames.

We may suppose that the players in the game choose behavior strategies for each repetition according to some rule. The “tit-for-tat” rule is an important possibility: begin by playing the cooperative behavior strategy, “high” price in this case, and continue playing it unless the other player plays noncooperatively (“low” price). If the other player plays noncooperatively, then retaliate by playing once noncooperatively on the following round. Notice that the threat of retaliating by playing noncooperatively is credible, since noncooperative play is an equilibrium behavior strategy on any particular round.

Tit-for-tat is called a “trigger strategy,” since noncooperation triggers a retaliatory act of noncooperation. However, properly speaking, the tit-for-tat rule is not itself a strategy.^{7} It is neither a behavior strategy nor a contingent strategy as understood by von Neumann and Morgenstern. Rather, it characterizes an infinite family of contingent strategies or of sequences of behavior strategies for this game. However, because the retaliation is itself Nash equilibrial, each of the contingent strategies in the family is subgame-perfect, provided that the threat is sufficient to deter the other player from choosing the noncooperative strategy. That is the question to which we now turn.

The question is this: supposing Firm A plays according to the tit-for-tat rule, will Firm B be deterred from a *single* opportunistic noncooperative play, that is, from playing “low” at round t, taking advantage of A’s cooperative play, and then returning to playing “high, high, high” so long as the game continues? This implies a sequence of payoffs *y_{t}* = 10, *y_{t+1}* = 3, *y_{t+2}* = *y_{t+3}* = . . . = 7. The alternative is to play cooperatively on every round, which implies payoffs *y_{t}* = *y_{t+1}* = *y_{t+2}* = *y_{t+3}* = . . . = 7. The expected value of the first sequence is 10 + 3δ + 7δ^{2} + 7δ^{3} + 7δ^{4} + . . .. The expected value of the second sequence is 7 + 7δ + 7δ^{2} + 7δ^{3} + 7δ^{4} + . . .. Since only the first two terms differ, the second sequence of payoffs is greater if 7(1 + δ) > 10 + 3δ. A little algebra tells us that this will be true whenever δ > 0.75.
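This threshold for the discount factor can be verified numerically by comparing the two payoff streams directly. The function names and the test values of the discount factor below are illustrative assumptions; the payoffs (10, 3, then 7 forever, versus 7 forever) are those of the text.

```python
# Sketch: does a single defection against tit-for-tat pay? Compare the
# streams 10, 3, 7, 7, ... and 7, 7, 7, ... at a given discount factor.

def discounted(head, tail, delta, horizon=2000):
    """Discounted value of a stream that starts with `head` and then
    repeats `tail`; the infinite sum is truncated at `horizon` rounds."""
    stream = head + [tail] * (horizon - len(head))
    return sum(delta ** k * y for k, y in enumerate(stream))

def defect_once_pays(delta):
    return discounted([10, 3], 7, delta) > discounted([], 7, delta)

print(defect_once_pays(0.7))   # below the threshold 0.75: defection pays
print(defect_once_pays(0.8))   # above it: cooperation pays more
```

The crossover sits exactly where 7(1 + δ) = 10 + 3δ, that is, at δ = 0.75.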

What if Firm B plays noncooperatively again and again? If so, then A will respond by also playing noncooperatively on every turn, in accordance with the tit-for-tat rule. Thus, Firm B’s sequence of payoffs is *y_{t}* = 10, *y_{t+1}* = *y_{t+2}* = *y_{t+3}* = . . . = 4, which can be written as 10 + (δ/(1 − δ))4. For any δ > 0.25, the expected value of this sequence will be less than the expected value of the sequence of payoffs for a single noncooperative play. Conversely, if the threat implicit in tit-for-tat play is sufficient to deter a single round of noncooperative play, it is undoubtedly sufficient to deter systematically noncooperative play. With δ > 0.75, playing against tit-for-tat, Firm B will simply find that more noncooperative play means lower expected value payoffs.
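The comparison between perpetual defection and a single defection can be checked the same way. The function names and sample discount factors are illustrative; the payoff streams (10 then 4 forever, versus 10, 3, then 7 forever) follow the text.

```python
# Sketch: against tit-for-tat, is a single defection (10, 3, 7, 7, ...)
# better than perpetual defection (10, 4, 4, 4, ...)?

def discounted(head, tail, delta, horizon=2000):
    """Discounted value of a stream beginning with `head`, then repeating
    `tail`; truncated at `horizon` rounds."""
    stream = head + [tail] * (horizon - len(head))
    return sum(delta ** k * y for k, y in enumerate(stream))

def single_beats_perpetual(delta):
    return discounted([10, 3], 7, delta) > discounted([10], 4, delta)

print(single_beats_perpetual(0.2))   # below the threshold 0.25
print(single_beats_perpetual(0.5))   # above it
```

So for any discount factor above 0.25 a single defection beats perpetual defection, and above 0.75 cooperation beats both.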

What we have found is that, if the probability of another round of play is great enough,^{8} in this example, a tit-for-tat rule by one player will make it unprofitable for the other player to deviate from cooperative behavior. If each player plays tit-for-tat, then the play is always cooperative, and neither player can gain anything by deviating from the tit-for-tat rule.

Unfortunately, that is not the whole story. Mutual play of a tit-for-tat rule is only one of many equilibria of an indefinitely repeated social dilemma. In particular, pure noncooperation by both players is always also an equilibrium. There are many others at intermediate levels of efficiency. Nor is the tit-for-tat rule dominant over all other rules by which the game might be played. Suppose, for example, that Firm A plays tit-for-tat while Firm B plays a more “forgiving” trigger strategy rule, tit-for-two-tats. That is, Firm B plays cooperatively unless Firm A plays noncooperatively for two rounds in succession, and then responds with one round of retaliatory noncooperative play. These two rules would lead to cooperation, and Firm B can do no better so long as Firm A sticks to tit-for-tat. But Firm A can do better by deviating from tit-for-tat. In particular, suppose Firm A adopts the rule of alternating cooperative and noncooperative play. Then Firm B never retaliates and Firm A alternates payoffs of 10 and 7, a sequence that dominates the sequence from steady cooperative play. The point is that there are some strategy rules (for example, tit-for-two-tats) against which the tit-for-tat rule does not produce best responses.
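This exploitation of tit-for-two-tats can be simulated directly. The strategy encoding and function names below are illustrative assumptions; the payoff table follows the example (both cooperate: 7; defector against a cooperator: 10; cooperator against a defector: 3; both defect: 4).

```python
# Sketch: Firm A alternates defect/cooperate against Firm B's
# tit-for-two-tats rule. B retaliates only after two consecutive
# defections by A, so B never retaliates against alternation.

PAYOFF_A = {('C', 'C'): 7, ('D', 'C'): 10, ('C', 'D'): 3, ('D', 'D'): 4}

def alternate(round_number):
    # A defects on even-numbered rounds, cooperates on odd ones.
    return 'D' if round_number % 2 == 0 else 'C'

def tit_for_two_tats(opponent_history):
    # Defect only after two consecutive defections by the opponent.
    if len(opponent_history) >= 2 and opponent_history[-2:] == ['D', 'D']:
        return 'D'
    return 'C'

def play(rounds=6):
    a_history, a_payoffs = [], []
    for r in range(rounds):
        a = alternate(r)
        b = tit_for_two_tats(a_history)
        a_payoffs.append(PAYOFF_A[(a, b)])
        a_history.append(a)
    return a_payoffs

print(play())  # [10, 7, 10, 7, 10, 7]: A alternates payoffs of 10 and 7
```

Since a steady cooperator earns 7 every round, the alternating stream of 10s and 7s dominates it round by round, as the text observes.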

The tit-for-tat strategy rule and variants of it, such as a tit-for-two-tats and two-tits for-a-tat (retaliate with two rounds of noncooperative play for one round by the other player) are all *forgiving trigger strategy rules,* which means that the retaliating player will eventually return to cooperative play if the other player does so. A rule that plays cooperatively until the other player initiates noncooperative play and then retaliates by playing noncooperatively on all successive plays is called the *grim trigger.* The grim trigger may deter noncooperative play where tit-for-tat would not. The grim trigger played a key role in warfare in the twentieth century. Poison gas was used as a weapon of war in World War I, and in the Iran-Iraq war of the 1980s, but not in World War II. The use of a weapon such as poison gas may be a social dilemma for the belligerents (McCain, 2014b, pp. 60-61, 360-363). In a long war, with repeated battles, perhaps restraint might be based on fear of retaliation from an opponent playing according to a grim trigger rule. In fact, historical evidence makes it clear that Germany, the United States, and Britain (with pressure from the United States) were following a grim trigger rule with respect to gas (Harris and Paxman, 2002). This example may illustrate the real possibility of cooperation in games of completely opposed interest, but also underscores that there is nothing inevitable about this, and that non-cooperation is always among the equilibria of repeated games.
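The claim that the grim trigger may deter where tit-for-tat would not can be checked numerically with the payoffs of the running example. The specific threshold computed here (δ = 0.5, versus 0.75 for tit-for-tat) is derived in this sketch, not stated in the text, and the function names are illustrative.

```python
# Sketch: against a grim-trigger opponent, a single defection yields
# 10, 4, 4, 4, ... forever, while steady cooperation yields 7, 7, 7, ....
# Payoffs are those of the running example.

def discounted(head, tail, delta, horizon=2000):
    """Discounted value of a stream beginning with `head`, then repeating
    `tail`; truncated at `horizon` rounds."""
    stream = head + [tail] * (horizon - len(head))
    return sum(delta ** k * y for k, y in enumerate(stream))

def grim_deters(delta):
    cooperate_forever = discounted([], 7, delta)   # 7 / (1 - delta)
    defect_once = discounted([10], 4, delta)       # 10 + 4*delta/(1 - delta)
    return cooperate_forever > defect_once

print(grim_deters(0.4))   # grim trigger does not yet deter
print(grim_deters(0.6))   # grim trigger deters, though tit-for-tat would not
```

Solving 7/(1 − δ) > 10 + 4δ/(1 − δ) gives δ > 0.5, a weaker requirement than tit-for-tat's δ > 0.75, consistent with the harsher punishment.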

This discussion assumes a two-person game. The extent to which the results may be extended to games of more than two persons remains a somewhat open question. What is clear is that the relatively simple argument along the lines of the previous example is not applicable to more than two players. Difficulties arise with as few as three players (Fudenberg and Maskin, 1986, p. 543). Allowing for correlated strategies (with public signals) and assuming sufficient diversity in the payoffs to the different players, Fudenberg and Maskin do extend the model to *n* players. Abreu et al. (1994) follow Fudenberg and Maskin with a more precise characterization of the conditions for cooperation in n-person games. In a working paper, Haag and Lagunoff (2005) find that diversity in subjective rates of time discounting makes cooperation less likely, though it grows more likely in larger groups. Nevertheless, it seems widely felt that larger groups are less likely to cooperate, on the basis of experience in the applications to price competition.