An Experiment in Game Theory
Game Theory provides a formal treatment of well-specified situations in which the outcome depends on the choices of several agents who may have conflicting interests.
Abstractly, a player chooses a strategy, where a strategy specifies the player's move in every situation that may arise in the game. For example, a strategy for white in chess specifies, roughly, white's move for every board configuration in which it is his turn. This example is rough because white's play will prevent certain configurations from arising, and his strategy need not provide a move for those unreachable configurations. A game tree is a useful representation for a game in extensive form. I think this definition of a strategy elides important issues of algorithms and computational complexity.
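To make the notion concrete (a toy sketch of my own, not an example from the text), here is a trivial two-move game in extensive form, together with a strategy for the second player written out as a rule naming a move at every decision node where that player might have to act:

```python
# A toy game tree: player 1 moves first ("L" or "R"), then player 2 responds.
# Terminal nodes are payoff pairs (to player 1, to player 2); the numbers and
# names are made up for illustration.
game_tree = {
    "L": {"l": (2, 1), "r": (0, 0)},   # player 2's choices after "L"
    "R": {"l": (1, 0), "r": (3, 2)},   # player 2's choices after "R"
}

# A strategy for player 2 specifies a move at every node where player 2 might
# act -- even nodes that player 1's own choice will keep from being reached.
player2_strategy = {"L": "l", "R": "r"}

def play(player1_move, strategy2):
    """Follow the tree: player 1 moves, then player 2 responds per the strategy."""
    return game_tree[player1_move][strategy2[player1_move]]

print(play("R", player2_strategy))   # -> (3, 2)
```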
A game in normal form lists the players, the strategies for each player, and the expected payoffs to each player for each combination of strategies (the payoff matrix). Table 1 gives an example of what may be the most famous game designed by game theorists. The first entry in each ordered pair is the payoff to player A when A plays the strategy indicated by the row label and B plays the strategy indicated by the column label. The second entry shows the payoff to player B.
Table 1: The Prisoner's Dilemma

|                     | Player B: Cooperate | Player B: Defect |
|---------------------|---------------------|------------------|
| Player A: Cooperate | (1/2, 1)            | (-1, 2)          |
| Player A: Defect    | (1, -1)             | (0, 1/2)         |
Suppose the payoffs, in each entry in the payoff matrix, sum over all players to zero. Then the game is a zero-sum game. In Table 1 the entries sum to 1 1/2, 1, 0, and 1/2, respectively, so the prisoner's dilemma is not a zero-sum game.
Consider simple two-person zero-sum games like "Odds and Evens" or "Rock, Scissors, Paper". The best strategy is not to play the same pure strategy over and over, but to randomly mix strategies. This is an interesting insight from game theory - that randomness in economics can come from optimal choices, even in games with completely deterministic rules. The probabilities with which the players should mix depend on the payoff matrix. One can formulate a Linear Program for each player to solve for these probabilities. Each player assumes that, given his own probabilities, the other player will choose probabilities so as to minimize the first player's expected payoff; a minimax problem arises. The neat thing about the two Linear Programs is that they are dual problems. Although von Neumann helped develop Linear Programming, vN and Morgenstern don't point out this connection. However, both vN's paper on activity analysis and vN and M's book used a fixed point theorem in the proofs of the most important relevant theorems.
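As a concrete sketch (my own illustration, not a construction from vN and M), here is the row player's Linear Program for "Rock, Scissors, Paper", set up and solved with an off-the-shelf solver. The row player maximizes the value v he can guarantee himself, whatever column his opponent picks:

```python
# A minimal sketch: solving a two-person zero-sum game by linear programming.
# The matrix, variable names, and solver choice are illustrative.
import numpy as np
from scipy.optimize import linprog

# Row player's payoffs; rows and columns are Rock, Scissors, Paper.
A = np.array([[ 0,  1, -1],
              [-1,  0,  1],
              [ 1, -1,  0]])
m, n = A.shape

# Variables: the row player's mixing probabilities x (length m) and the game
# value v.  Maximize v subject to x'A >= v in every column, x >= 0, sum(x) = 1.
# linprog minimizes, so the objective is -v.
c = np.concatenate([np.zeros(m), [-1.0]])

# For each column j:  v - sum_i x_i A[i, j] <= 0
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)

# The probabilities sum to one; v is unrestricted in sign.
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0, 1)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[m]
print("optimal mix:", np.round(x, 3), "guaranteed value:", round(v, 3))
# Expected: an optimal mix of about [0.333 0.333 0.333] with value 0.
```

The column player's Linear Program is the dual of this one, and the minimax theorem guarantees that the two optimal values coincide.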
How to extend the concept of a solution to more than two players, or to non-constant-sum games, is an interesting question. vN and M introduced a fictitious player, so to speak, to make the general game like a two-person zero-sum game. A dummy player with a single strategy can absorb the losses and winnings in a non-zero-sum game; with this dummy appended, the game becomes a zero-sum game. The multiplayer game can then be thought of as a two-person game between a winning coalition and the remaining players. vN and M emphasize that how the players in a coalition will split up their winnings is, in general, indeterminate. Threats of players to leave a coalition and join the other side, though, impose constraints on the range of variability in the set of solution imputations.
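To make the device concrete (a sketch of my own, using the numbers in Table 1), the dummy player simply receives, in every cell, the negative of the real players' combined payoffs:

```python
# Appending a one-strategy dummy player to the game of Table 1 so that the
# augmented three-player game is zero-sum.  The construction follows vN and M's
# device; the code itself is only illustrative.
table_1 = {
    ("Cooperate", "Cooperate"): (0.5, 1.0),
    ("Cooperate", "Defect"):    (-1.0, 2.0),
    ("Defect",    "Cooperate"): (1.0, -1.0),
    ("Defect",    "Defect"):    (0.0, 0.5),
}

# The dummy's "payoff" in each cell is minus the sum of A's and B's payoffs.
augmented = {cell: (a, b, -(a + b)) for cell, (a, b) in table_1.items()}

for cell, payoffs in augmented.items():
    assert abs(sum(payoffs)) < 1e-12   # every cell of the new game sums to zero
    print(cell, "->", payoffs)
```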
Economists nowadays say that the vN and M solution applies to what are known as cooperative games. Players can discuss how to share winnings beforehand, and agreements are enforceable by some external institution. vN and M had a different perspective:
"21.2.3. If our theory were applied as a statistical analysis of a long series of plays of the same game - and not as the analysis of one isolated play - an alternative interpretation would suggest itself. We should then view agreements and all forms of cooperation as establishing themselves by repetition in such a long series of plays.
It would not be impossible to derive a mechanism of enforcement from the player's desire to maintain his record and to be able to rely on the record of his partner. However, we prefer to view our theory as applying to an individual play. But these considerations, nevertheless, possess a certain significance in a virtual sense. The situation is similar to the one we encountered in the analysis of the (mixed) strategies of a zero-sum two-person game. The reader should apply the discussions of 17.3 mutatis mutandis to the present situation." -- John von Neumann and Oskar Morgenstern (1953) p. 254.
John Williams, one of the participants in Flood and Dresher's original experiment, was puzzled why Armen Alchian did not behave according to this way of thinking. On the 50th iteration, he wrote, "He's a shady character and doesn't realize we are playing a 3rd party, not each other."
John Forbes Nash extended the two-person zero-sum solution in another manner. He defined the Nash equilibrium: a combination of (possibly mixed) strategies, one for each player, such that each player's strategy yields that player the maximum payoff, given the strategies chosen by all the other players. In other words, no player can gain by deviating unilaterally. A Nash equilibrium is not necessarily unique for a given game. Nash also redefined vN and M's approach as applying to cooperative games. The Nash equilibrium is said to apply to non-cooperative games.
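As an illustration (a brute-force sketch of my own, not Nash's construction), one can check which pure-strategy combinations of Table 1 are Nash equilibria by testing whether either player gains from a unilateral deviation:

```python
# Brute-force search for pure-strategy Nash equilibria in the game of Table 1.
import itertools

strategies = ["Cooperate", "Defect"]

# Payoffs (to A, to B) from Table 1, indexed by (A's strategy, B's strategy).
payoffs = {
    ("Cooperate", "Cooperate"): (0.5, 1.0),
    ("Cooperate", "Defect"):    (-1.0, 2.0),
    ("Defect",    "Cooperate"): (1.0, -1.0),
    ("Defect",    "Defect"):    (0.0, 0.5),
}

def is_nash(a, b):
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    a_payoff, b_payoff = payoffs[(a, b)]
    a_cannot_improve = all(payoffs[(a2, b)][0] <= a_payoff for a2 in strategies)
    b_cannot_improve = all(payoffs[(a, b2)][1] <= b_payoff for b2 in strategies)
    return a_cannot_improve and b_cannot_improve

for a, b in itertools.product(strategies, strategies):
    if is_nash(a, b):
        print("Nash equilibrium:", (a, b), "with payoffs", payoffs[(a, b)])
# Only ("Defect", "Defect") is printed, even though both players would prefer
# the payoffs from mutual cooperation.
```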
Lots of questions arose from this work. How can the players decide which of several Nash equilibria to choose? Can this indeterminacy be narrowed? Researchers have proposed a whole slew of refinements and variations - subgame perfect equilibria, trembling hand perfect equilibria, etc. - the details of which I forget. This looks like a different approach to economics than Walrasian General Equilibrium theory. Are they related? Well, the proofs of the existence of Arrow-Debreu equilibria grew out of the mathematics of game theory. Furthermore, the equivalence principle, which Morgenstern never accepted, states that game-theoretic solutions will approach Arrow-Debreu equilibria as the number of players increases.
It seems many mathematicians and economists have decided that, in practice, one usually cannot set up the game and solve it. Nevertheless, game theory provides a language for talking about such situations. Discussions in this language have dissected "rationality" until, perhaps, the concept has fallen apart. You can view Survivor or The Weakest Link as laboratories in which to test game theory. In fact, experimental economics grew up with game theory, including experiments in which the players are computer code.
References
- Philip Mirowski. 2002. Machine Dreams: Economics Becomes a Cyborg Science. Cambridge University Press.
- John von Neumann and Oskar Morgenstern. 1953. Theory of Games and Economic Behavior, 3rd ed. Princeton University Press.