CUTTING THE CAKE

As an instance of mechanism design or implementation, we can turn to an idea much older than game theory: cutting a cake. The objective is that the cake should be divided equally - or if not equally, then fairly. Suppose the cake is to be divided between two identical persons, each of whom prefers more cake to less. One of the two is assigned to cut the cake, and the other gets to choose which piece he will take. The cake-cutter knows that the chooser will take the larger piece, leaving him the smaller one, and thus he has an incentive to make the division as equal as possible. Equal division is incentive compatible, that is, consistent with noncooperative decisions by the two recipients of cake. The rules of the game - one cuts and the other chooses - implement the objective of an equal division of cake.
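The incentive argument can be made concrete with a small sketch (here in Python, with an illustrative one-dimensional cake; the numbers and function names are ours, not part of the original story). The chooser always takes the larger piece, so the cutter is left with the smaller one, and his best cut is the one that equalizes the two pieces:

    # Hypothetical sketch: a unit-length cake cut at position x.
    # The chooser takes the larger piece, so the cutter keeps min(x, 1 - x).

    def cutter_payoff(x: float) -> float:
        """Size of the piece the cutter keeps after the chooser takes the larger one."""
        return min(x, 1.0 - x)

    # Search candidate cut points for the cutter's best cut.
    candidates = [i / 100 for i in range(101)]
    best_cut = max(candidates, key=cutter_payoff)
    print(best_cut, cutter_payoff(best_cut))   # 0.5 0.5 - equal division is the cutter's best response

Any unequal cut only shrinks the piece the cutter is left with, which is the sense in which equal division is incentive compatible under the one-cuts-other-chooses rule.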

Of course, the devil is in the simplifying assumptions, as usual. Let us make the problem a little more complex. Suppose the cake contains nuts, and the nuts are not randomly distributed - there are more of them on the right (let us say). Now suppose that the two agents are not alike but are of different types: agent a, who is to choose, likes nuts and might prefer a smaller piece if it had more nuts, while agent b is indifferent with respect to nuts and just wants a bigger piece, regardless of the quantity of nuts. What is a “fair division” in this case? One answer is that a division is fair if it is non-envious (Foley, 1967): each person prefers the piece he gets to the piece the other person gets. Now suppose b is to cut, a is to choose, and b knows a’s preferences. Then b can cut the cake into two unequal pieces, with more nuts in the smaller piece, such that a will choose the piece he prefers while b is left with the piece that he prefers. Thus a fair division (in this sense) is incentive compatible.
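A sketch along the same lines, under assumed numbers (all the nuts lie on the right end of a unit-length cake, and agent a weights nuts much more heavily than size), shows that such unequal but non-envious cuts exist:

    # Hypothetical sketch: the cake is the interval [0, 1]; all nuts lie in [0.6, 1].
    NUT_START = 0.6   # assumed location of the nuts

    def nuts_in(lo, hi):
        """Fraction of all the nuts lying in the piece [lo, hi]."""
        overlap = max(0.0, min(hi, 1.0) - max(lo, NUT_START))
        return overlap / (1.0 - NUT_START)

    def u_a(lo, hi):   # a: weights nuts heavily, size only a little (assumed weights)
        return 0.8 * nuts_in(lo, hi) + 0.2 * (hi - lo)

    def u_b(lo, hi):   # b: cares only about size
        return hi - lo

    def envy_free(x):
        """After a cut at x, a chooses the nut-rich piece [x, 1]; neither agent envies the other."""
        return u_a(x, 1.0) > u_a(0.0, x) and u_b(0.0, x) > u_b(x, 1.0)

    cuts = [i / 100 for i in range(1, 100)]
    print([x for x in cuts if envy_free(x)])   # a range of unequal cuts (all x > 0.5) is envy-free

Every cut the sketch prints is strictly unequal, yet a prefers the smaller nut-rich piece and b prefers the larger plain piece, so neither envies the other.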

But what if agent b does not know which type agent a is - whether a likes nuts or is indifferent to them? (Perhaps he even hates them.) b needs this information to know how to cut, for the benefit of both. b can ask a what type he is - but can he trust the answer? Suppose for a moment that a is, like b, indifferent to nuts. By claiming to like nuts, a could fool b into cutting unequally; a would then choose the bigger piece, and so be better off than if he had told the truth. Thus the rules of the game will have to be designed so that each agent will truthfully reveal what type he is (that is, reveal whatever is relevant to the correct decision). This is the revelation principle. It may be that (for a particular class of games, or in general) there are no “rules of the game” that will do this, and this issue has been central to mechanism design.
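A payoff comparison under the same assumed numbers shows the temptation to misreport: if a is really indifferent to nuts, his payoff is simply the size of the piece he chooses, and a false claim to like nuts induces b to cut unequally in a way a can exploit:

    # Assumed numbers carried over from the sketch above: b cuts at 0.55 for a
    # reported nut-lover, and at 0.50 for a reported nut-indifferent agent.

    def a_payoff_when_indifferent(cut: float) -> float:
        """a is really indifferent to nuts, so he simply takes the larger piece."""
        return max(cut, 1.0 - cut)

    truthful  = a_payoff_when_indifferent(0.50)   # report "indifferent": b cuts equally
    misreport = a_payoff_when_indifferent(0.55)   # report "likes nuts": b cuts unequally
    print(truthful, misreport, misreport > truthful)   # 0.5 0.55 True - lying pays

Truth-telling gets a half the cake; the lie gets him more, so the naive ask-and-trust arrangement is not incentive compatible - which is exactly the problem the rules of the game must be designed to solve.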

 