Showing posts with label Game Theory. Show all posts

Saturday, October 29, 2022

An Overview Of Game Theory

An Experiment in Game Theory

Game Theory provides a formal treatment of well-specified situations in which the outcome depends on the choices of several agents who may have conflicting interests.

Abstractly, a player chooses a strategy, where a strategy specifies the player's move in every situation that may arise in the game. For example, a strategy for white in chess specifies, roughly, white's move for every board configuration in which it is his turn. This example is rough because white's play will prevent certain configurations from arising, and his strategy need not provide a move for those unreachable configurations. A game tree is a useful representation for a game in extensive form. I think this definition of a strategy elides important issues of algorithms and computational complexity.

A game in normal form lists the players, the strategies for each player, and the expected payoffs to each player for each combination of strategies (the payoff matrix). Table 1 gives an example for what may be the most famous game designed by game theorists. The first entry in each ordered pair is the payoff to player A when A plays the strategy indicated by the row label and B plays the strategy indicated by the column label. The second entry shows the payoff to player B.

Table 1: A Prisoner's Dilemma
                       Player B's Strategy
Player A's Strategy    Cooperate    Defect
Cooperate              (1/2, 1)     (-1, 2)
Defect                 (1, -1)      (0, 1/2)

Suppose the payoffs in each entry of the payoff matrix sum over all players to zero. Then the game is a zero-sum game. The prisoner's dilemma is not a zero-sum game.

Consider simple two-person zero-sum games like "Odds and Evens" or "Rock, Scissors, Paper". The best strategy is not to play the same pure strategy over and over, but to randomly mix strategies. This is an interesting insight from game theory - that randomness in economics can come from optimal choices, even in games with completely deterministic rules. The probabilities that the players should choose depend on the payoff matrix. One can formulate a Linear Program for each player to solve for these probabilities. Each player chooses his probabilities to maximize his expected payoff, on the assumption that the other player will respond so as to minimize it. A minimax problem arises. The neat thing about the two Linear Programs is that they are dual problems. Although von Neumann helped develop Linear Programming, vN and Morgenstern do not point out this connection. However, both vN's paper on activity analysis and vN and M's book use a fixed point theorem in the proofs of the most important relevant theorems.
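For a 2x2 zero-sum game with no saddle point, the optimal mixtures can be read off from the indifference condition, without setting up the full Linear Program. Here is a minimal sketch in plain Python (the function name and layout are my own, not from any of the texts discussed), applied to a matching-pennies form of "Odds and Evens":

```python
def solve_2x2_zero_sum(a):
    """Optimal mixed strategies for a 2x2 zero-sum game with no saddle point.

    a[i][j] is the payoff to the row player when row plays i and column
    plays j. Each player's mix makes the opponent indifferent between his
    two pure strategies, which is where the minimax solution lands.
    """
    (a00, a01), (a10, a11) = a
    denom = (a00 - a01) - (a10 - a11)
    p = (a11 - a10) / denom  # row player's probability of playing row 0
    q = (a11 - a01) / denom  # column player's probability of playing column 0
    value = (a00 * p * q + a01 * p * (1 - q)
             + a10 * (1 - p) * q + a11 * (1 - p) * (1 - q))
    return p, q, value

# "Odds and Evens": the row player wins 1 when the choices match.
p, q, v = solve_2x2_zero_sum([[1, -1], [-1, 1]])
print(p, q, v)  # each player mixes 50-50, and the value of the game is 0
```

For larger payoff matrices the indifference trick no longer suffices, and one is back to the pair of dual Linear Programs described above.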

How to extend the concept of a solution to more than two players, or to non-constant-sum games, is an interesting question. vN and M introduced "fictional players", so to speak, to make the general game like a two-person zero-sum game. A dummy player with one strategy can absorb the losses and winnings in a non-zero-sum game. Thus the game, with this dummy appended, becomes a zero-sum game. A multiplayer game can then be thought of as a two-person game between a winning coalition and the remaining players. vN and M emphasize that how the players in a coalition will split up their winnings is, in general, indeterminate. Threats of players to leave a coalition and join the other side, though, impose constraints on the range of variability in the set of solution imputations.

Economists nowadays say that the vN and M solution applies to what are known as cooperative games. Players can discuss how to share winnings beforehand, and agreements are enforceable by some external institution. vN and M had a different perspective:

"21.2.3. If our theory were applied as a statistical analysis of a long series of plays of the same game - and not as the analysis of one isolated play - an alternative interpretation would suggest itself. We should then view agreements and all forms of cooperation as establishing themselves by repetition in such a long series of plays.

It would not be impossible to derive a mechanism of enforcement from the player's desire to maintain his record and to be able to rely on the record of his partner. However, we prefer to view our theory as applying to an individual play. But these considerations, nevertheless, possess a certain significance in a virtual sense. The situation is similar to the one we encountered in the analysis of the (mixed) strategies of a zero-sum two-person game. The reader should apply the discussions of 17.3 mutatis mutandis to the present situation." -- John von Neumann and Oskar Morgenstern (1953), p. 254.

John Williams, one of the participants in Flood and Dresher's original experiment, was puzzled why Armen Alchian did not behave according to this way of thinking. On the 50th iteration, he wrote, "He's a shady character and doesn't realize we are playing a 3rd party, not each other."

John Forbes Nash extended the two-person zero-sum solution in another manner. He defined the Nash equilibrium. In a Nash equilibrium, each player's (possibly mixed) strategy is a best response to the strategies of the other players: no player can increase his expected payoff by unilaterally changing his own strategy. A Nash equilibrium is not necessarily unique for a given game. Nash also redefined vN and M's approach as applying to cooperative games. The Nash equilibrium is said to apply to non-cooperative games.
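The definition can be made concrete with a small check. The following sketch (my own illustrative code, using the payoffs of Table 1) tests each pure-strategy profile of the prisoner's dilemma for the Nash property; only mutual defection survives:

```python
# Payoffs from Table 1, as (payoff to A, payoff to B); 'C' = cooperate, 'D' = defect.
PAYOFFS = {
    ('C', 'C'): (0.5, 1.0), ('C', 'D'): (-1.0, 2.0),
    ('D', 'C'): (1.0, -1.0), ('D', 'D'): (0.0, 0.5),
}

def is_pure_nash(profile, payoffs=PAYOFFS, strategies=('C', 'D')):
    """True if neither player gains by unilaterally deviating from the profile."""
    a, b = profile
    # A's deviations hold B's strategy fixed; B's deviations hold A's fixed.
    if any(payoffs[(s, b)][0] > payoffs[(a, b)][0] for s in strategies):
        return False
    if any(payoffs[(a, s)][1] > payoffs[(a, b)][1] for s in strategies):
        return False
    return True

for profile in [('C', 'C'), ('C', 'D'), ('D', 'C'), ('D', 'D')]:
    print(profile, is_pure_nash(profile))  # only ('D', 'D') is a Nash equilibrium
```

That both players would do better at mutual cooperation, which is not an equilibrium, is exactly the dilemma.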

Lots of questions arose from this work. How can the players decide which Nash equilibrium to choose? Can this indeterminacy be narrowed? Researchers have proposed a whole slew of refinements and variations - subgame perfect equilibria, trembling-hand perfect equilibria, etc. - the details of which I forget. This looks like a different approach to economics than Walrasian General Equilibrium theory. Are they related? Well, the proofs of the existence of Arrow-Debreu equilibria grew out of the mathematics of game theory. Furthermore, the equivalence principle, which Morgenstern never accepted, states that game-theoretic solutions will approach Arrow-Debreu equilibria as the number of players increases.

It seems many mathematicians and economists have decided that, in practice, one can usually not set up the game and solve it. Nevertheless, game theory provides a language to talk about such situations. Discussions in this language have dissected "rationality" until, perhaps, the concept has fallen apart. You can view Survivor or the Weakest Link as laboratories to test game theory. In fact, experimental economics grew up with game theory, including experiments in which the players are computer code.

References
  • Philip Mirowski. 2002. Machine Dreams: Economics Becomes a Cyborg Science. Cambridge University Press.
  • John von Neumann and Oskar Morgenstern. 1953. Theory of Games and Economic Behavior, 3rd ed. Princeton University Press.

Tuesday, June 28, 2016

Getting Greater Weight For Your Vote May Not Give You Relatively More Power

1.0 Introduction

This post presents a perhaps surprising example of results from measuring political power in a system with weighted voting. I provide examples in which the weight of a person's vote is increased. Yet that voter, in some cases, gains no additional power, in some sense. In one case, by the measures of voting power considered here, the additional weight has no effect on the power of any voter. In another case, a second player, whose weight is unchanged, is elevated in power along with the voter whose weight is increased.

I find these results to be an interesting consequence of power measures. I have not yet found a simple example where the effect on the ranking of voting power is different for the three indices considered here. Nor have I found an example where a voter declines in power with an increase in the weight of his vote.

2.0 An Example of a Voting Game

A voting game is specified as a set of players, the number of votes needed to enact a bill into law (also referred to as passing a proposition), and the weights for the votes of each player. In considering voting games with a small number of players and weighted, unequal votes, one might think of such a game as describing a council or board of directors, where members represent blocs or geographic districts of varying sizes.

As an example, consider a set, P, of four players, indexed from 0 through 3:

P = The set of players = {0, 1, 2, 3}

A common way to indicate the remaining parameters for a voting game is a tuple in which the first element is followed by a colon and the remaining elements are separated by commas:

(6: 4, 3, 2, 1)

The positive integer before the colon indicates the number of votes - six, in this case - needed to pass a proposition. The remaining integers are the weights of players' votes. In this case, the weight of Player 0's vote is 4, the weight of Player 1's vote is 3, and so on.

3.0 Two Power Indices

Consider all 16 possible subsets of the four players. These subsets are listed in the first column of Table 1. A subset of players is labeled a coalition. The second column indicates whether or not the coalition for that row has enough weighted votes to pass a proposition. If so, the characteristic function for that coalition is assigned the value unity. Otherwise, it gets the value zero. A player is decisive for a coalition if the player leaving the coalition will convert it from a winning to a losing coalition. The last four columns in Table 1 have entries of unity for each player that is decisive for each coalition. The last row in Table 1 provides a count, for each player, of the number of coalitions in which that player is decisive. The Penrose-Banzhaf power index, for each player, is the ratio of this total to the number of coalitions.

Table 1: Calculations for Penrose-Banzhaf Power Index

Coalition        Characteristic Function    Player 0   Player 1   Player 2   Player 3
{}               v( {} ) = 0                0          0          0          0
{0}              v( {0} ) = 0               0          0          0          0
{1}              v( {1} ) = 0               0          0          0          0
{2}              v( {2} ) = 0               0          0          0          0
{3}              v( {3} ) = 0               0          0          0          0
{0, 1}           v( {0, 1} ) = 1            1          1          0          0
{0, 2}           v( {0, 2} ) = 1            1          0          1          0
{0, 3}           v( {0, 3} ) = 0            0          0          0          0
{1, 2}           v( {1, 2} ) = 0            0          0          0          0
{1, 3}           v( {1, 3} ) = 0            0          0          0          0
{2, 3}           v( {2, 3} ) = 0            0          0          0          0
{0, 1, 2}        v( {0, 1, 2} ) = 1         1          0          0          0
{0, 1, 3}        v( {0, 1, 3} ) = 1         1          1          0          0
{0, 2, 3}        v( {0, 2, 3} ) = 1         1          0          1          0
{1, 2, 3}        v( {1, 2, 3} ) = 1         0          1          1          1
{0, 1, 2, 3}     v( {0, 1, 2, 3} ) = 1      0          0          0          0
Total:                                      5          3          3          1
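The decisiveness counts in Table 1 are mechanical enough to verify by brute force. Here is a short sketch (the function name and layout are mine) that enumerates all coalitions of a weighted voting game and tallies, for each player, the coalitions in which that player is decisive:

```python
from itertools import combinations

def banzhaf_counts(quota, weights):
    """Count, for each player, the coalitions in which that player is decisive."""
    players = range(len(weights))
    counts = [0] * len(weights)
    for r in range(len(weights) + 1):
        for coalition in combinations(players, r):
            total = sum(weights[i] for i in coalition)
            if total < quota:
                continue  # a losing coalition: nobody is decisive in it
            for i in coalition:
                if total - weights[i] < quota:
                    counts[i] += 1  # removing player i turns a win into a loss
    return counts

# The voting game (6: 4, 3, 2, 1) of Table 1.
print(banzhaf_counts(6, [4, 3, 2, 1]))  # matches the Total row: [5, 3, 3, 1]
```

Dividing each count by the 16 coalitions gives the Penrose-Banzhaf index reported below.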

The Shapley-Shubik power index considers the order in which players enter a coalition. For the example, one considers all 24 permutations of the players. The first column in Table 2 lists these permutations. For each row, a player gets an entry of unity in the appropriate one of the last four columns if including that player in the coalition, reading the entries in the permutation from left to right, creates a winning coalition. The Shapley-Shubik power index, for each player, is the ratio of that player's column total to the number of permutations.

Table 2: Calculations for the Shapley-Shubik Power Index

Permutation     Player 0   Player 1   Player 2   Player 3
(0, 1, 2, 3)    0          1          0          0
(0, 1, 3, 2)    0          1          0          0
(0, 2, 1, 3)    0          0          1          0
(0, 2, 3, 1)    0          0          1          0
(0, 3, 1, 2)    0          1          0          0
(0, 3, 2, 1)    0          0          1          0
(1, 0, 2, 3)    1          0          0          0
(1, 0, 3, 2)    1          0          0          0
(1, 2, 0, 3)    1          0          0          0
(1, 2, 3, 0)    0          0          0          1
(1, 3, 0, 2)    1          0          0          0
(1, 3, 2, 0)    0          0          1          0
(2, 0, 1, 3)    1          0          0          0
(2, 0, 3, 1)    1          0          0          0
(2, 1, 0, 3)    1          0          0          0
(2, 1, 3, 0)    0          0          0          1
(2, 3, 0, 1)    1          0          0          0
(2, 3, 1, 0)    0          1          0          0
(3, 0, 1, 2)    0          1          0          0
(3, 0, 2, 1)    0          0          1          0
(3, 1, 0, 2)    1          0          0          0
(3, 1, 2, 0)    0          0          1          0
(3, 2, 0, 1)    1          0          0          0
(3, 2, 1, 0)    0          1          0          0
Total:          10         6          6          2
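The pivotal counts in Table 2 can be checked the same way, by enumerating permutations. This sketch (again, my own illustrative code, not from any referenced source) returns each player's Shapley-Shubik index as an exact fraction:

```python
from itertools import permutations
from fractions import Fraction

def shapley_shubik(quota, weights):
    """Shapley-Shubik index: the fraction of orderings in which each player is pivotal."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        running = 0
        for i in order:
            running += weights[i]
            if running >= quota:
                pivots[i] += 1  # player i tips the growing coalition over the quota
                break
    total = sum(pivots)  # exactly one pivot per permutation, so this equals n!
    return [Fraction(p, total) for p in pivots]

# The voting game (6: 4, 3, 2, 1) of Table 2.
print(shapley_shubik(6, [4, 3, 2, 1]))  # players 0-3 get 5/12, 1/4, 1/4, 1/12
```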

4.0 Three Power Indices for Three Voting Games

Table 3 summarizes and expands on the above calculations. The Penrose-Banzhaf power index need not sum over the players to unity. Accordingly, I report this index in two forms, with the second normalized to sum to unity. The Shapley-Shubik power index is guaranteed to sum to unity. I introduce two other voting games, with corresponding power indices, presented in Tables 4 and 5.

Table 3: Power Indices for (6: 4, 3, 2, 1)

Player   Penrose-Banzhaf Index   Normalized     Shapley-Shubik Power Index
0        5/16                    5/12           10/24 = 5/12
1        3/16                    3/12 = 1/4     6/24 = 1/4
2        3/16                    3/12 = 1/4     6/24 = 1/4
3        1/16                    1/12           2/24 = 1/12

Table 4: Power Indices for (6: 4, 2, 2, 1)

Player   Penrose-Banzhaf Index   Normalized     Shapley-Shubik Power Index
0        6/16 = 3/8              6/10 = 3/5     16/24 = 2/3
1        2/16 = 1/8              2/10 = 1/5     4/24 = 1/6
2        2/16 = 1/8              2/10 = 1/5     4/24 = 1/6
3        0                       0              0

Table 5: Power Indices for (5: 4, 2, 2, 1)

Player   Penrose-Banzhaf Index   Normalized     Shapley-Shubik Power Index
0        6/16 = 3/8              6/12 = 1/2     12/24 = 1/2
1        2/16 = 1/8              2/12 = 1/6     4/24 = 1/6
2        2/16 = 1/8              2/12 = 1/6     4/24 = 1/6
3        2/16 = 1/8              2/12 = 1/6     4/24 = 1/6

5.0 Constitutional Changes

Consider a change in the constitution, from one of the three voting games with tables in the previous section to another such game. The calculations allow one to measure the impact of any such change on voting power. To simplify matters, I consider only rankings of voting power. And, for these three voting games, the three power indices considered here happen to yield the same ranks.

Accordingly, Table 6 shows changes in the rules (the "constitution") for these cases. The change to the rules on the right superficially strengthens Player 1, either by increasing the weight of Player 1's vote or by requiring fewer votes to pass a resolution. As noted below, I am unsure what naive intuition might be for the second row. For the third row, the number of votes needed to pass a proposition is altered such that a simple majority is needed both before and after the change in weight.

Table 6: Changing the Rules to Strengthen the Players?

Starting Game      Player Ranks      Ending Game        Player Ranks
(6: 4, 2, 2, 1)    0 > 1 = 2 > 3     (6: 4, 3, 2, 1)    0 > 1 = 2 > 3
(6: 4, 2, 2, 1)    0 > 1 = 2 > 3     (5: 4, 2, 2, 1)    0 > 1 = 2 = 3
(5: 4, 2, 2, 1)    0 > 1 = 2 = 3     (6: 4, 3, 2, 1)    0 > 1 = 2 > 3

The first row shows a case where the weight of Player 1's vote increases, which might intuitively give him more power with respect to the apparently weaker Players 2 and 3. Yet this increase in weight also increases the power of Players 2 and 3, even though the weight of their votes does not change. And Player 1 remains equal in power to Player 2, both before and after the change. In fact, the change has no effect on the ranking of the players' voting power.

The second row shows a case where the votes needed to pass a measure declines, after the change in rules, from a super-majority to a simple majority, given the total of weighted votes. Would one expect such a constitutional amendment to strengthen the most powerful, or moderately powerful voters before the change? I find that this change raises the power of the weakest voter to the power of the middling voters. I am not sure this is counter-intuitive, unlike the other two rows.

The third row shows a case in which, like the first row, the weight of Player 1's vote increases. Both before and after the change, a simple majority, given the total of weighted votes, is needed to pass a proposition. This change makes Player 1 more powerful than the weakest player, as one might intuitively expect. But Player 2 is also made more powerful than the weakest player, despite the weight of his vote not varying. And Player 1 ends up no more powerful than Player 2. These effects on Player 2 seem counter-intuitive to me.

6.0 Conclusions

So my examples above have presented somewhat counter-intuitive results in voting games.

I gather that the Deegan-Packel and Holler-Packel indices are other power indices I might find of interest. And Straffin (1994) is one paper that explains axioms characterizing some of these power indices.

References
  • Donald P. Green and Ian Shapiro (1996). Pathologies of Rational Choice Theory: A Critique of Applications in Political Science. Yale University Press.
  • Philip D. Straffin (1994). Power and stability in politics. In Handbook of Game Theory with Economic Applications, Vol. 2. Elsevier.

Wednesday, June 15, 2016

The History and Sociology of Game Theory: A Reading List

For me, this list is aspirational. I've read Mirowski and the Weintraub-edited book. I've just checked the Erickson book out of a library.

Wednesday, April 13, 2016

Math Is Power

1.0 Introduction

A common type of post in this blog is the presentation of concrete numerical examples in economics. Sometimes I present examples to illustrate some principle. But usually I try to find examples that are counter-intuitive or perverse, at least from the perspective of economics as mainstream economists often misteach it.

Voting games provide an arena where one can find surprising results in political science. I am thinking specifically of power indices. In this post, I try to explain two of the most widely used power indices by means of an example.

2.0 Me and My Aunt: A Voting Game

For purposes of exposition, I consider a specific game, called Me and My Aunt. There are four players in this version of the game, represented by elements of the set:

P = The set of players = {0, 1, 2, 3}

Out of respect, the first player gets two votes, while all other players get a vote each (Table 1). A coalition, S, is a set of players. That is, a coalition is a subset of P. A coalition passes a resolution if it has a majority of votes. Since there are four players, one of whom has two votes, the total number of votes is five. So a majority, in this game of weighted voting, is three votes.

Table 1: Players and Their Votes

Player      Votes
0 (Aunt)    2
1 (Me)      1
2           1
3           1

One needs to specify the payoff to each coalition to complete the definition, in characteristic function form, of this game. The characteristic function, v(S), maps the set of all subsets of P to the set {0, 1}. If the players in S have three or more votes, v(S) is 1. Otherwise, it is 0. That is, a winning coalition gains a payoff of one to share among its members.
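In code, the characteristic function of this game is a one-liner. A minimal sketch, with the weights and quota of this version of Me and My Aunt as defaults (the function signature is my own choice):

```python
def v(S, weights=(2, 1, 1, 1), quota=3):
    """Characteristic function: 1 if coalition S has a majority of votes, else 0."""
    return 1 if sum(weights[i] for i in S) >= quota else 0

# The empty coalition and the aunt alone lose; the aunt plus any other player
# wins, as do the three ordinary members acting together.
print(v(set()), v({0}), v({0, 1}), v({1, 2, 3}), v({1, 2}))  # 0 0 1 1 0
```

Every calculation in the next two sections reduces to evaluating v on various coalitions.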

3.0 The Penrose-Banzhaf Power Index

Power for a player, in this mathematical approach, is the ability to be the decisive member of a coalition. If, for a large number of coalitions, you being in or out of a coalition determines whether or not that coalition can pass a resolution, you have a lot of power. Correspondingly, if the members of most coalitions do not care whether you join, because your presence has no influence on whether or not they can put their agenda into effect, you have little power.

The Penrose-Banzhaf power index is one (of many) attempts to quantify this idea. Table 2 lists all 16 coalitions for the voting game under consideration. (The number of coalitions is the sum of a row in Pascal's triangle.) The second column in Table 2 specifies the value for the characteristic function for that coalition. Equivalently, the third column notes which eight coalitions are winning coalitions, and which eight are losing. The last two columns are useful for tallying up counts needed for the Penrose-Banzhaf index.

Table 2: Calculations for Penrose-Banzhaf Power Index

Coalition        Characteristic Function    Winning or Losing   Aunt (0)   Me (1)
{}               v( {} ) = 0                Losing              0          0
{0}              v( {0} ) = 0               Losing              0          0
{1}              v( {1} ) = 0               Losing              0          0
{2}              v( {2} ) = 0               Losing              0          0
{3}              v( {3} ) = 0               Losing              0          0
{0, 1}           v( {0, 1} ) = 1            Winning             1          1
{0, 2}           v( {0, 2} ) = 1            Winning             1          0
{0, 3}           v( {0, 3} ) = 1            Winning             1          0
{1, 2}           v( {1, 2} ) = 0            Losing              0          0
{1, 3}           v( {1, 3} ) = 0            Losing              0          0
{2, 3}           v( {2, 3} ) = 0            Losing              0          0
{0, 1, 2}        v( {0, 1, 2} ) = 1         Winning             1          0
{0, 1, 3}        v( {0, 1, 3} ) = 1         Winning             1          0
{0, 2, 3}        v( {0, 2, 3} ) = 1         Winning             1          0
{1, 2, 3}        v( {1, 2, 3} ) = 1         Winning             0          1
{0, 1, 2, 3}     v( {0, 1, 2, 3} ) = 1      Winning             0          0

The Penrose-Banzhaf index, ψ(i), is calculated for each player i. It is defined, for a given player, to be the ratio of the number of winning coalitions in which that player is decisive to the total number of coalitions, winning or losing. A player is decisive for a coalition if:

  • The coalition is a winning coalition.
  • The removal of the player from the coalition converts it to a losing coalition.

From the table above, one can see that player 0 is decisive for six coalitions, while player 1 is decisive for only two coalitions. Hence, the Penrose-Banzhaf index for "my aunt" is:

ψ(0) = 6/16 = 3/8

By symmetry, the index values for players 2 and 3 are the same as the value for player 1:

ψ(1) = ψ(2) = ψ(3) = 2/16 = 1/8

More than one player can be decisive for a winning coalition, so no need exists for the Penrose-Banzhaf index to sum up to one. How much one's vote is weighted does not bear a simple relationship to how much power one has. Also note that the definition of this power index is not confined to simple-majority games. Power indices can be calculated for voting games in which a super-majority is required to pass a measure. For example, in the United States Senate, 60 senators are needed to end a filibuster.

4.0 The Shapley-Shubik Power Index

The Shapley-Shubik power index is an application of the calculation of the Shapley value to voting games. The Shapley value applies to cooperative games, in general. For its use as a measure of power in voting games, it matters in which order players enter a coalition. Accordingly, Table 3 lists all 24 permutations of all four players in the voting game being analyzed.

Table 3: Calculations for the Shapley-Shubik Power Index

Permutation     Aunt (0)                                 Me (1)
(0, 1, 2, 3)    v( {0} ) - v( {} ) = 0                   v( {0, 1} ) - v( {0} ) = 1
(0, 1, 3, 2)    v( {0} ) - v( {} ) = 0                   v( {0, 1} ) - v( {0} ) = 1
(0, 2, 1, 3)    v( {0} ) - v( {} ) = 0                   v( {0, 1, 2} ) - v( {0, 2} ) = 0
(0, 2, 3, 1)    v( {0} ) - v( {} ) = 0                   v( {0, 1, 2, 3} ) - v( {0, 2, 3} ) = 0
(0, 3, 1, 2)    v( {0} ) - v( {} ) = 0                   v( {0, 1, 3} ) - v( {0, 3} ) = 0
(0, 3, 2, 1)    v( {0} ) - v( {} ) = 0                   v( {0, 1, 2, 3} ) - v( {0, 2, 3} ) = 0
(1, 0, 2, 3)    v( {0, 1} ) - v( {1} ) = 1               v( {1} ) - v( {} ) = 0
(1, 0, 3, 2)    v( {0, 1} ) - v( {1} ) = 1               v( {1} ) - v( {} ) = 0
(1, 2, 0, 3)    v( {0, 1, 2} ) - v( {1, 2} ) = 1         v( {1} ) - v( {} ) = 0
(1, 2, 3, 0)    v( {0, 1, 2, 3} ) - v( {1, 2, 3} ) = 0   v( {1} ) - v( {} ) = 0
(1, 3, 0, 2)    v( {0, 1, 3} ) - v( {1, 3} ) = 1         v( {1} ) - v( {} ) = 0
(1, 3, 2, 0)    v( {0, 1, 2, 3} ) - v( {1, 2, 3} ) = 0   v( {1} ) - v( {} ) = 0
(2, 0, 1, 3)    v( {0, 2} ) - v( {2} ) = 1               v( {0, 1, 2} ) - v( {0, 2} ) = 0
(2, 0, 3, 1)    v( {0, 2} ) - v( {2} ) = 1               v( {0, 1, 2, 3} ) - v( {0, 2, 3} ) = 0
(2, 1, 0, 3)    v( {0, 1, 2} ) - v( {1, 2} ) = 1         v( {1, 2} ) - v( {2} ) = 0
(2, 1, 3, 0)    v( {0, 1, 2, 3} ) - v( {1, 2, 3} ) = 0   v( {1, 2} ) - v( {2} ) = 0
(2, 3, 0, 1)    v( {0, 2, 3} ) - v( {2, 3} ) = 1         v( {0, 1, 2, 3} ) - v( {0, 2, 3} ) = 0
(2, 3, 1, 0)    v( {0, 1, 2, 3} ) - v( {1, 2, 3} ) = 0   v( {1, 2, 3} ) - v( {2, 3} ) = 1
(3, 0, 1, 2)    v( {0, 3} ) - v( {3} ) = 1               v( {0, 1, 3} ) - v( {0, 3} ) = 0
(3, 0, 2, 1)    v( {0, 3} ) - v( {3} ) = 1               v( {0, 1, 2, 3} ) - v( {0, 2, 3} ) = 0
(3, 1, 0, 2)    v( {0, 1, 3} ) - v( {1, 3} ) = 1         v( {1, 3} ) - v( {3} ) = 0
(3, 1, 2, 0)    v( {0, 1, 2, 3} ) - v( {1, 2, 3} ) = 0   v( {1, 3} ) - v( {3} ) = 0
(3, 2, 0, 1)    v( {0, 2, 3} ) - v( {2, 3} ) = 1         v( {0, 1, 2, 3} ) - v( {0, 2, 3} ) = 0
(3, 2, 1, 0)    v( {0, 1, 2, 3} ) - v( {1, 2, 3} ) = 0   v( {1, 2, 3} ) - v( {2, 3} ) = 1

Table 3 shows some initially confusing calculations in the last two columns, one column for each of the two players shown. Fix a player i and a permutation π. Let the set Sπ, i contain those players appearing to the left of player i in the permutation π. The difference shown in the last two columns is then the following, for i equal to 0 and to 1, respectively:

v(Sπ, i ∪ {i}) - v(Sπ, i)

The Shapley-Shubik power index, for a player, is the ratio of a sum to the number of permutations of players. And that sum, for each player, is taken over all permutations of the above difference in the value of the characteristic function.

If I understand correctly, given a permutation, the above difference can only take on the values 0 or 1. And it equals 1 for exactly one player in each permutation: the player whose entry, in the given order, turns the growing coalition into a winning one. As a consequence, the Shapley-Shubik power index is guaranteed to sum over players to unity. In this case, power is a fixed amount, with each player being measured as having a defined proportion of that power.
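The definition in terms of marginal contributions translates directly into code. The following sketch (my own, for this Me and My Aunt game) averages v(S ∪ {i}) - v(S) over all orderings, which is exactly the calculation laid out in Table 3:

```python
from itertools import permutations
from fractions import Fraction

def v(S, weights=(2, 1, 1, 1), quota=3):
    """Characteristic function of the Me and My Aunt game."""
    return 1 if sum(weights[i] for i in S) >= quota else 0

def shapley_value(n=4):
    """Average, over all orderings, of each player's marginal contribution
    v(S ∪ {i}) - v(S), where S holds the players preceding i in the ordering."""
    totals = [0] * n
    orders = list(permutations(range(n)))
    for pi in orders:
        before = set()
        for i in pi:
            totals[i] += v(before | {i}) - v(before)
            before.add(i)
    return [Fraction(t, len(orders)) for t in totals]

print(shapley_value())  # the aunt's index first, then the three ordinary members
```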

5.0 Both Power Indices

The above has stepped through the calculation of two power indices, for all players, in a given game. Table 4 lists their values, as well as a normalization of the Penrose-Banzhaf power index such that the sum of the power, over all players, is unity. (I gather that the Penrose-Banzhaf index and the normalized index do not have the same properties.) As one might expect from the definition of the game, "my aunt" has more power than "me" in this game.

Table 4: The Penrose-Banzhaf and Shapley-Shubik Power Indices

Player   Penrose-Banzhaf Index   Normalized     Shapley-Shubik Power Index
0        6/16 = 3/8              6/12 = 1/2     12/24 = 1/2
1        2/16 = 1/8              2/12 = 1/6     4/24 = 1/6
2        2/16 = 1/8              2/12 = 1/6     4/24 = 1/6
3        2/16 = 1/8              2/12 = 1/6     4/24 = 1/6

In many voting games, the normalized Penrose-Banzhaf and Shapley-Shubik power indices are not identical for all players. In fact, suppose the rules for the above variation of the Me and My Aunt voting game are varied, so that four votes - a supermajority - are needed to carry a motion. Player 0 is then decisive in four coalitions (including the grand coalition), and each other player is decisive in two. So the normalized Penrose-Banzhaf index for player 0 becomes 4/10 = 2/5, while each of the other players has a normalized Penrose-Banzhaf index of 1/5. Interestingly enough, the Shapley-Shubik indices for the players do not change, although the values assigned to individual rows in Table 3 do. Anyways, that one tweak of the rules results in different power indices, depending on which method one adopts. A more interesting example would be one in which the rankings vary among power indices.
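Claims like these are easy to get wrong by hand, so a brute-force recomputation is a useful check. The sketch below (my own code, not from any referenced source) computes both the normalized Penrose-Banzhaf index and the Shapley-Shubik index for the supermajority variant, with a quota of four votes:

```python
from itertools import combinations, permutations
from fractions import Fraction

WEIGHTS = (2, 1, 1, 1)
QUOTA = 4  # the supermajority variant of Me and My Aunt

def winning(S):
    return sum(WEIGHTS[i] for i in S) >= QUOTA

def normalized_banzhaf():
    """Decisiveness counts over all coalitions, normalized to sum to one."""
    counts = [0] * len(WEIGHTS)
    for r in range(len(WEIGHTS) + 1):
        for S in combinations(range(len(WEIGHTS)), r):
            if winning(S):
                for i in S:
                    if not winning(tuple(j for j in S if j != i)):
                        counts[i] += 1
    total = sum(counts)
    return [Fraction(c, total) for c in counts]

def shapley_shubik():
    """Fraction of orderings in which each player is pivotal."""
    pivots = [0] * len(WEIGHTS)
    perms = list(permutations(range(len(WEIGHTS))))
    for order in perms:
        running = 0
        for i in order:
            running += WEIGHTS[i]
            if running >= QUOTA:
                pivots[i] += 1
                break
    return [Fraction(p, len(perms)) for p in pivots]

print(normalized_banzhaf())
print(shapley_shubik())
```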

Other power indices, albeit less common, do exist. Which one is most widely applicable? I would think that mainstream economists, given game theory and marginalism, would tend to prefer the Shapley-Shubik power index. Felsenthal and Machover (2004) seem to be widely recognized experts on measures of voting power, and they have come to prefer the Penrose-Banzhaf index over the Shapley-Shubik index.

6.0 Where To Go From Here

I have described above a couple of power indices in voting games. As I understand it, many have tried to write down reasonable axioms that characterize power indices. One challenge is to specify a set of axioms such that your preferred power index is the only one that satisfies them. But, as I understand it, some sets of reasonable axioms are open insofar as more than one power index would satisfy them. I seem to recall a theorem that one could create a power index for a reasonable set of axioms such that whichever player you want in a voting game is the most powerful. Apparently, a connection can be drawn between a power index and a voting procedure. And Donald Saari boasts that he could create an apparently fair voting procedure that would result in whatever candidate you like being elected.

I gather that many examples of voting games have been presented in which apparently paradoxical or perverse results arise. And these do not seem to be merely theoretical results. Can I find some such examples? Perhaps, I should look here at some of Daron Acemoglu's work.

I am aware of three types of examples to look for. One is that of a dummy. A dummy is a player that, under the weights and the rule for how many votes are needed for passage, can never be decisive in a coalition. Whether this player drops out or joins a coalition can never change whether or not a resolution is passed, even though the player has a positive weight. A second odd possibility arises as the consequence of adding a new member to the electorate:

"...power of a weighted voting body may increase, rather than decrease, when new members are added to the original body." -- Steven J. Brams and Paul J. Affuso (1976).

A third odd possibility apparently can arise on a council when one district annexes another. Suppose the annexing district consequently increases the weight of its vote accordingly. One might think a greater weight leads to more power. But, in certain cases, the normalized Penrose-Banzhaf index can decrease.

The above calculations for the Penrose-Banzhaf and Shapley-Shubik power indices treat all coalitions or permutations, respectively, as equally likely to arise. Empirically, this does not seem to be true. And this has an impact on how one might measure power. For example, since voting is unweighted on the Supreme Court of the United States, all justices might be thought to be equally powerful. But, because of the formation of well-defined blocs, Anthony Kennedy was often described as being particularly powerful in deciding cases, at least when Antonin Scalia was still alive. So, empirically, one might include some assessment of the affinities of the players for one another and, thus, some influence on the probabilities of each coalition forming. This would have consequences for the calculation of power indices. But why stop there? In the United States these days, politicians only seem to represent the most wealthy.

Update: This page, from the University of Warwick, has links to utilities for calculating various power indices.

References
  • Steven J. Brams and Paul J. Affuso (1976). Power and Size: A New Paradox, Theory and Decision. V. 7, Iss. 1 (Feb.): pp. 29-56.
  • Dan S. Felsenthal and Moshé Machover (2004). Voting Power Measurement: A Story of Misreinvention. London School of Economics and Political Science.
  • Andrew Gelman, Jonathan N. Katz, and Joseph Bafumi (2004). Standard Voting Power Indexes Do Not Work: An Empirical Analysis, B. J. Pol. S.. V. 34: pp. 657-674.
  • Guillermo Owen (1971) Political Games, Naval Research Logistics Quarterly. V. 18, Iss. 3 (Sep.): pp. 345-355.
  • Donald G. Saari and Katri K. Sieberg (1999). Some Surprising Properties of Power Indices.

Monday, February 02, 2015

A Cynical Take By Greece's Finance Minister On Mainstream Economists

I have found Yanis Varoufakis' 2014 book, Economic Indeterminacy: A personal encounter with economists' peculiar nemesis, a bit too abstract for my tastes. I am not sure that game theory counts as a subset of neoclassical economics, although I can see how some game theory meets Varoufakis' definition. One might see how a lot of game theory illustrates the idea that economists, collectively, exhibit weakness of will. That is, a lot of game theory can be used to develop models with multiple equilibria and of nondeterministic outcomes. One might expect economists to shy away from these conclusions.

I find it hard to accept Varoufakis' argument that, in games, one might want to deliberately be irrational. I wondered: if that were so, wouldn't an opponent see this? And, thus, would not this irrational behavior be rational at a meta-level? Varoufakis' argument is structured to address this objection.

But my point in this post is to quote from the preface:

"...my project's failure was predetermined, at least in the sense that it was never going to cause a shift in the attitudes and demeanour of a profession which operates like a priesthood, dedicated solely to preservation of its dogmas... as well as to the recapitulation of its authority within the universities, the financial sector and the government. Indeed, at no point did I harbour any significant hope that this priesthood would take kindly to the demons of doubt and indeterminacy which my work was bound to give rise to. But it did not matter, at least not at a personal level. My intimate familiarity with the neoclassical models was sufficient to keep me on the roster of neoclassical economics departments, where a capacity to teach these models, and produce academic papers based on them is all that matters.

Looking back at these long years of tampering with, and delving into, the complex models of the neoclassical tradition, I cannot but question my decision to keep pushing, Sisyphus-like, the theoretical rock up the neoclassical hill. Why did I stick to this task, when I knew it would end up in failure? In retrospect, there were two reasons, neither of which was predicated upon any hope of influencing a profession utterly uninterested in the truth status of its models. First, I deeply enjoyed toying with these models as an end-in-itself, just as a clockmaker enjoys taking apart and then re-assembling some old clock for the hell of it. Secondly, and more importantly, I felt it necessary to make it hard for my colleagues to pretend to themselves that the models they were being forced to work with, by a particularly authoritarian profession, were logically coherent. Bringing them, even fleetingly, to the point when they had to confess to their models' internal contradictions was, I felt, a victory of sorts; the equivalent of a lone sniper behind enemy lines making life difficult for an army of occupation." -- Yanis Varoufakis (2014), p. xxiv.

Varoufakis has some other, more popular books that sound interesting. I think his book The Global Minotaur: America, Europe and the Future of the Global Economy might be especially topical at the moment.

Update: Steve Keen provides a link to one exposition of Varoufakis' argument that, in game theory, agents can and will deliberately choose irrational behavior.

Friday, December 12, 2014

First Formulation of Folk Theorem and Indeterminacy in Game Theory

Initial and Chaotic Learning in Rock-Paper-Scissors

Consider a game, as games are defined in game theory. And consider some strategy for some player in some game. The folk theorem states, roughly, that almost any such strategy can be justified as a solution by considering the infinitely repeated game. (An amusing corollary might be stated as saying that competition is the same as monopoly, if you do the math right.) The following seems to me to state the folk theorem (abstracting from the distinction between Nash equilibria and Von Neumann and Morgenstern's solution concept):

"21.2.3. If our theory were applied as a statistical analysis of a long series of plays of the same game - and not as the analysis of one isolated play - an alternative interpretation would suggest itself. We should then view agreements and all forms of cooperation as establishing themselves by repetition in such a long series of plays.

It would not be impossible to derive a mechanism of enforcement from the player's desire to maintain his record and to be able to rely on the record of his partner. However, we prefer to view our theory as applying to an individual play. But these considerations, nevertheless, possess a certain significance in a virtual sense. The situation is similar to the one we encountered in the analysis of the (mixed) strategies of a zero-sum two-person game. The reader should apply the discussions of 17.3 mutatis mutandis to the present situation." -- John Von Neumann and Oskar Morgenstern (1953) p. 254.

I have heard it claimed that economic theory has developed such that any moderately informed graduate student can now provide you with a model that yields any conclusion that you like. The folk theorem, as I understand it, is not even the most threatening finding for the ability of game theory to yield determinate conclusions.

Consider an iterated game before an equilibrium, under some definition or another, has been achieved. The players are trying to learn each others' strategies. Even a simple game, such as Rock-Scissors-Paper, can yield chaotic dynamics (Sato, Akiyama, and Farmer 2002; Galla and Farmer 2013). An equilibrium might never be established, for it can be worthwhile for some players to deliberately choose "irrational" moves, so as to prevent the other players from settling into an equilibrium that would not benefit the supposedly irrational player (Foster and Young 2012). (I hope I found this reference from reading Yanis Varoufakis, who, in one paper in one of his books, makes this point with the centipede game.) Apparently, this irrationality does not disappear by moving to a more meta-theoretic level. And one player, who understands the evolutionary behavior of the other player in a Prisoner's Dilemma, can manipulate the other player to bring about an asymmetric result - that is, a case where the non-evolutionary player extorts the player following a mindless evolutionary strategy (Press and Dyson 2012; Stewart and Plotkin 2012).
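This non-convergence is easy to illustrate in miniature. The sketch below is not the learning model of Sato, Akiyama, and Farmer, just ordinary discrete-time replicator dynamics for Rock-Scissors-Paper; even in this simple setting, play orbits the mixed-strategy equilibrium rather than settling down to it:

```python
# Discrete-time replicator dynamics for Rock-Scissors-Paper (a sketch,
# not the Sato-Akiyama-Farmer model itself). The payoff matrix is
# zero-sum, so the interior equilibrium (1/3, 1/3, 1/3) is a center:
# trajectories cycle around it instead of converging.

A = [[0, 1, -1],   # Rock     vs (Rock, Scissors, Paper)
     [-1, 0, 1],   # Scissors
     [1, -1, 0]]   # Paper

def step(p, dt=0.01):
    # Payoff to each pure strategy against the current mix p.
    f = [sum(A[i][j] * p[j] for j in range(3)) for i in range(3)]
    avg = sum(p[i] * f[i] for i in range(3))  # zero for a symmetric zero-sum game
    q = [p[i] * (1.0 + dt * (f[i] - avg)) for i in range(3)]
    s = sum(q)
    return [x / s for x in q]  # renormalize against numerical drift

p = [0.5, 0.25, 0.25]
for _ in range(20000):
    p = step(p)

# After many rounds of "learning", play is still far from the
# equilibrium mix -- it has cycled, not converged.
deviation = max(abs(x - 1 / 3) for x in p)
```

The multiplicative update keeps every probability strictly positive, so the trajectory stays inside the simplex while orbiting the equilibrium.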

References

Tuesday, October 14, 2014

Jean Tirole, A Practitioner Of New Industrial Organization

I have occasionally summarized certain aspects of microeconomics, concentrating on markets that are not perfectly competitive. Further developments along these lines can be found in the theory of Industrial Organization.

One can distinguish in the literature two approaches to IO, known as old IO and new IO. Old IO extends back to the late 1950s. Joe Bain and Paolo Sylos Labini laid the foundations of this approach, and their work was heralded by Franco Modigliani. I have not read any of Bain and only a bit of Sylos Labini. Sylos was a Sraffian and quite critical of neoclassical economics. He also had interesting things to say about economic development.

As I understand it, new IO consists of applying game theory to imperfectly competitive and oligopolistic markets. I gather new IO took off in the 1980s. Jean Tirole, the winner of this year's "Nobel" prize in economics, is a prominent exponent of new IO.

One can tell interesting stories about corporations with both old IO and new IO. For example, Tirole has had something to say about vertical integration which, based on what I've read in the popular press, might be of interest to me. (Typically, when I explore the theory of vertical integration, following Luigi Pasinetti, the integration is only notional, not at the more concrete level of concern in IO.)

I wonder, though, whether economists can point to empirical demonstrations of the superiority of new IO over old IO. Or have economists studying IO come to embrace new IO more because of the supposed theoretical rigor of game theory? Are specialists in IO willing to embrace the indeterminism that arises in game theory, what with the variety of solution concepts and the existence of multiple equilibria in many games? Or do they insist on closed models with unique equilibria?

References
  • Franco Modigliani (1958). New Developments on the Oligopoly Front, Journal of Political Economy, V. 66, No. 3: pp. 215-232.

Update (same day): Corrected a glitch in the title. Does this Paul Krugman post read as a direct response to my post?

Thursday, October 21, 2010

Papers To Read

I seem to be very slow to either read these or write up a detailed explanation:
  • Francis M. Bator (1958) "The Anatomy of Market Failure", Quarterly Journal of Economics, V. 72, N. 3 (Aug): 351-379. John Cassidy takes this paper as the authoritative definition in his book How Markets Fail: The Logic of Economic Calamities.
  • Arindrajit Dube, T. William Lester, and Michael Reich (2008) "Minimum Wage Effects Across State Borders: Estimates Using Contiguous Counties", forthcoming in the Review of Economics and Statistics. Generalizes the natural experiment approach of Card and Krueger to look at all cross-state local differences in minimum wages in the United States between 1990 and 2006. They find no adverse employment effects from higher minimum wages in the ranges examined. You can watch a video interview with Dube here. (The Wikipedia page on minimum wages also lists meta-analyses, by Stanley and by Doucouliagos & Stanley, more recent than Card and Krueger's meta-analysis.)
  • Constantinos Daskalakis, Paul W. Goldberg, and Christos H. Papadimitriou (2009) "The Complexity of Computing a Nash Equilibrium", Communications of the ACM, V. 52, No. 2: pp. 89-97. Defines a complexity class between P and NP and proves that computing a Nash equilibrium is complete for that class (PPAD). Thus, unless that class collapses to P, Nash equilibria cannot be computed in polynomial time for arbitrary games. In other words, computing a Nash equilibrium is, in general, presumably infeasible in practice. (Tim Roughgarden's "Algorithmic Game Theory" (Communications of the ACM, V. 53, No. 7 (Jul. 2010)) and Yoav Shoham's "Computer Science and Game Theory" (Communications of the ACM, V. 51, No. 5 (Aug. 2008)) are survey articles.)
  • Colin F. Camerer (2006) "Wanting, Liking, and Learning: Neuroscience and Paternalism", University of Chicago Law Review, V. 73 (Winter). Argues that three neural subsystems in our brains process "wanting", "liking", and "learning" separately. I don't think this is quite what Ian Steedman and Ulrich Krause mean by a Faustian agent, but it seems to be related.

P.S. Commentator Emil Bakhum lists some objections to Sraffa's analysis from Alfred Muller. I do not agree that these objections correctly characterize Sraffa's analysis, a point to which I may return. I think I would like a more complete reference, although I might not be able to read it if it is in German.

Wednesday, November 18, 2009

An Indeterminate Two-Person Zero-Sum Game With Perfect Information

1.0 Introduction
I have stumbled upon some odd mathematics, some mathematics that I have not validated. Consider the claim that all two-person zero-sum games with perfect information have a value. Apparently, this claim is inconsistent with the Axiom of Choice, an axiom in set theory. This inconsistency is shown by the Banach-Mazur game and its variants. I guess it is essential to this demonstration that these games have a countably infinite number of moves.

I don't know that this demonstration is as important for economics as, for instance, W. F. Lucas' example of a cooperative game without a solution.

A game has perfect information if the results of all moves prior to any given move are known to all players. Simple examples of games with imperfect information are card games in which the deal gives a player a hand which only he knows. A two-person zero-sum game is determinate if one can prove either (1) the first player wins some definite amount, (2) the second player wins some definite amount, or (3) the game is a draw. Chess is a determinate game, although it is in practice impossible to expand the tree enough to determine its value.

2.0 A Game
I steal this example from a Usenet post by Herman Rubin.

The game is fully specified by the rules and by defining a set C, where C is a given subset of the real numbers between 0 and 1, inclusive. The two players alternately select the successive binary digits of the base-two expansion of a number within the interval [0, 1].

In other words, consider the number:

(1/2) x1 + (1/4) x2 + (1/8) x3 + (1/16) x4 + ...
where, for all i, xi is in {0, 1}. The first player chooses the binary digits with the odd indices, and the second player chooses the binary digits with the even indices, the digits being chosen in order, with the players taking turns.

The game ends with the second player paying the first player a unit when it is guaranteed that any further expansion will result in a number within C. The game ends with the second player winning a unit payment from the first player when it is guaranteed that any further expansion will result in a number in the complement of C.

A simple example is C = [0.5, 1]. The first player wins in this case. A more complicated game arises when C is the set of all irrational numbers in the unit interval. I gather this game is determinate, but I don't see offhand who wins. Finally, consider a set C that is not Lebesgue measurable. (The Axiom of Choice is necessary for the construction of such a set.) I gather that in this case, the game is not determinate. Nobody can tell a priori who will win.
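For the simple case C = [0.5, 1], the first player's winning strategy can be checked by simulation: opening with x1 = 1 forces the number into [1/2, 1] no matter what digits follow. A small sketch (the helper function and its name are invented for illustration):

```python
import random

# The number built from digits x1, x2, ... is
#     x1/2 + x2/4 + x3/8 + ...
# If the first player opens with x1 = 1, the partial sums lie in
# [1/2, 1] regardless of every later digit (the tail contributes at
# most 1/4 + 1/8 + ... < 1/2), so the number lands in C = [0.5, 1].

def play(first_move, n_digits=40, rng=random.Random(0)):
    # All digits after the first are chosen arbitrarily (here, randomly),
    # standing in for any play by either player.
    digits = [first_move] + [rng.randint(0, 1) for _ in range(n_digits - 1)]
    return sum(d / 2 ** (i + 1) for i, d in enumerate(digits))

values = [play(1) for _ in range(1000)]
all_in_C = all(0.5 <= x <= 1.0 for x in values)
```

A thousand arbitrary continuations all land in C, which is what makes x1 = 1 a winning first move for this particular set.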

Friday, December 19, 2008

Don't Say "There Must Be Something Common, Or They Would Not Be Called 'Games'"

1.0 Introduction
Von Neumann and Morgenstern posed a mathematical problem in 1944: Does every game have a solution, where a solution is defined in their sense? W. F. Lucas solved this problem in 1967. Not all games have such a solution. (It is known that such a solution need not be unique. In fact, the solution to the three person game I use below to illustrate the Von Neumann and Morgenstern solution is not unique.)

I may sometime in the future try to explain the game with ten players that Lucas presents as a counterexample, assuming I can grasp it better than I do now. With this post, I try to explain some concepts of cooperative game theory, so as to have this post for reference when and if I do. The Nash equilibrium and its refinements are notions from the distinct theory of non-cooperative games.

2.0 Definition of a Game
Roughly, a game is specified by:
  • The number of players
  • The strategies available for each player
  • The payoffs to each player for each combination of player strategies
How a strategy is described depends on the specification of the game - whether it is in extensive form, normal form, or characteristic function form. Von Neumann and Morgenstern hoped that all three forms would be equivalent, with less data needing to be specified in the later forms in this series. This hope has arguably not worked out.

2.1 Extensive Form
A game in extensive form is specified as a tree. This is most easily seen for board games, like backgammon or chess. Each node in the tree is a board position, with the root of the tree corresponding to the initial position.

The specification of a node includes which player is to move next, as well as the board position. Each move available to the player whose turn it is appears as a link leading from the node to a node for the board position after that move. Random moves are specified as moves made by a fictitious player, who might be named "Mother Nature". The roll of a pair of dice or the deal of a randomly selected card are examples of random moves. With a random move, the probability of each move is specified along the link connecting one node to another. Since a move by an actual player is freely chosen, the probabilities of moves by actual players are not specified in the specification of a game.

The above description of the specification of a game cannot yet handle games like poker. In poker, not every player knows every card that is dealt. Von Neumann and Morgenstern introduce the concept of "information sets" to allow one to specify that, for instance, a player knows only the cards in his own hand and, perhaps, some of the cards in the other players' hands. An information set at a node specifies, for the player whose turn it is, which of the previous choices of moves in the game he has knowledge of. That is, an information set is a subset of the set of links in the tree leading from the initial position to the current node. Since some of these moves were random, this specification allows for the dealing of hands of cards, for example.

The final element in this specification of a game occurs at the leaves of the tree. These are the final positions in the games. Leaves have assigned the values of the payouts to each player in the game.

It is easy to see how to define a player's strategy with this specification of a game. A strategy states the player's choice of a move at each node in the game denoting a position in which it is the player's move. A play of the game consists of each player specifying their strategy and the random selection of a choice from the specified probability distributions at each node at which a random move is chosen. These strategies and the random moves determine the leaf at which the game terminates. And one can then see the payouts to all players for the play.

One can get rid of the randomness, in some sense, by considering an infinite number of plays of the game for each combination of players' strategies. This will result in a probability distribution for payouts. The assumption is that each player is interested in the expected value, that is, the mean payout, to be calculated from this probability distribution. (All these descriptions of calculations have abstracted from time and space computational constraints.)
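The idea that a strategy is a complete contingency plan, and a play just a walk down the tree, can be made concrete with a toy example. The game, node labels, and payoffs below are all invented for illustration:

```python
# A tiny two-player game in extensive form. Each internal node names the
# player to move and the child reached by each move; leaves carry the
# payoff tuple (player 1's payout, player 2's payout). The node labels
# and payoffs are hypothetical.
tree = {
    "root": ("P1", {"L": "a", "R": "b"}),
    "a":    ("P2", {"l": ("leaf", (2, 0)), "r": ("leaf", (0, 1))}),
    "b":    ("P2", {"l": ("leaf", (1, 1)), "r": ("leaf", (3, 0))}),
}

# A strategy specifies a move at EVERY node where it is that player's
# turn -- including nodes that the actual play never reaches.
strategy = {
    "P1": {"root": "R"},
    "P2": {"a": "l", "b": "r"},   # node "a" is unreachable given P1's choice,
                                  # but the strategy still specifies a move there
}

def play(node="root"):
    # Walk down the tree, letting each strategy pick the move in turn.
    player, moves = tree[node]
    child = moves[strategy[player][node]]
    if isinstance(child, tuple) and child[0] == "leaf":
        return child[1]
    return play(child)

payoffs = play()   # P1 plays R at the root; P2 plays r at node b
```

With a chance player added, each combination of strategies would determine a probability distribution over leaves rather than a single leaf, and the players would compare expected payouts, as described above.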

2.2 Normal Form
One abstracts from the sequence of moves and from random moves in specifying a game in normal form. The extensive form allows for the definition of strategies for each player, and each strategy can be assigned an arbitrary label. A game in normal form consists of a grid or table. Each dimension of the table corresponds to a player, with that player's strategies listed along it. Each entry in the table is an ordered tuple, where the elements of the tuple are the expected payouts to the players for the specified combination of strategies.

Table 1 shows a simple example - the children's game, "Rock, Paper, Scissors." The rules specify the winner. Rock crushes scissors, scissors cut paper, and paper covers rock. This is a two-person zero-sum game. The payouts are shown in the table for the player whose strategies are listed for each row to the left. The payouts to the column player are, in this case, the additive inverse of the table entries.

Table 1: Rock, Paper, Scissors
              Rock    Scissors    Paper
  Rock          0        +1        -1
  Scissors     -1         0        +1
  Paper        +1        -1         0

By symmetry, no pure strategy in Rock, Paper, Scissors is better than any other. A mixed strategy is formed for a player by assigning probabilities to each of that player's pure strategies. Probabilities due to states of nature are removed in the analysis of games by taking mathematical expectations, but probabilities reappear from rational strategization. I also found interesting Von Neumann and Morgenstern's analysis of an idealized form of poker. One wants one's bluffs to be called on occasion, so that players will be willing to add more to the pot when one raises on a good hand.

Each player's best mixed strategy in a two-person zero-sum game can be found by solving a Linear Program (LP). Let p1, p2, and p3 be the probabilities that the row player in Table 1 chooses strategies Rock, Scissors, and Paper, respectively. The value of the game to the row player is v. The row player's LP is:
Choose p1, p2, p3, v
To Maximize v
Such that
-p2 + p3 ≥ v
p1 - p3 ≥ v
-p1 + p2 ≥ v
p1 + p2 + p3 = 1
p1 ≥ 0, p2 ≥ 0, p3 ≥ 0
The interest of the column player is to minimize the payout to the row player. The left-hand sides of the first three constraints show the expected value to the row player when the column player plays Rock, Scissors, and Paper, respectively. That is, the coefficients by which the probabilities are multiplied in these constraints come from the columns in Table 1. Given knowledge of the solution probabilities, the column player can guarantee the value of the game does not exceed these expected values by choosing the corresponding column strategy. That is, the column player chooses a pure strategy to minimize the expected payout to the row player.

The column player's LP is the dual of the above LP. As a corollary of duality theory in Linear Programming, a minimax solution exists for all two-person zero-sum games. This existence is needed for the definition of the characteristic function form of a game.
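The row player's LP above can be handed to an off-the-shelf solver. A sketch, assuming scipy is available; since linprog minimizes, the objective is written as minimizing -v, with v free in sign:

```python
import numpy as np
from scipy.optimize import linprog

# Variables x = (p1, p2, p3, v). linprog minimizes, so minimize -v.
c = np.array([0.0, 0.0, 0.0, -1.0])

# The constraints v <= -p2 + p3, v <= p1 - p3, v <= -p1 + p2,
# rewritten in the form A_ub @ x <= 0.
A_ub = np.array([
    [0.0,  1.0, -1.0, 1.0],   # v - (-p2 + p3) <= 0
    [-1.0, 0.0,  1.0, 1.0],   # v - ( p1 - p3) <= 0
    [1.0, -1.0,  0.0, 1.0],   # v - (-p1 + p2) <= 0
])
b_ub = np.zeros(3)

A_eq = np.array([[1.0, 1.0, 1.0, 0.0]])  # probabilities sum to one
b_eq = np.array([1.0])

bounds = [(0, None), (0, None), (0, None), (None, None)]  # v unrestricted

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p1, p2, p3, v = res.x
```

As symmetry suggests, the solution is the uniform mix p1 = p2 = p3 = 1/3 with game value v = 0. Passing the transposed data through the dual LP gives the column player's identical mix.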

2.3 Characteristic Function Form
The characteristic function form of a game is defined in terms of coalitions of players. An n-person game is reduced to a two-person game, where the "players" consist of a coalition of true players and the remaining players outside the coalition. The characteristic function for a game is the value of the corresponding two-person zero-sum game for each coalition of players. The characteristic function form of the game specifies the characteristic function.

As an illustration, Von Neumann and Morgenstern specify the three-person game in Table 2. In this game, coalitions of exactly two people win a unit.

Table 2: Canonical Three Person Game
  Coalition      Value
  { }            v( { } ) = 0
  {1}            v( {1} ) = -1
  {2}            v( {2} ) = -1
  {3}            v( {3} ) = -1
  {1, 2}         v( {1, 2} ) = 1
  {1, 3}         v( {1, 3} ) = 1
  {2, 3}         v( {2, 3} ) = 1
  {1, 2, 3}      v( {1, 2, 3} ) = 0

3.0 A Solution Concept

Definition: An imputation for an n-person game is an n-tuple (a1, a2, ..., an) such that:
  • For all players i, the payout to that player in the imputation does not fall below the amount that that player can obtain without the cooperation of any other player. That is, ai ≥ v( {i} ).
  • The total in the imputation of the payouts over all players is the payout v( {1, 2, ..., n} ) to the coalition consisting of all players.

Definition: An imputation a = (a1, a2, ..., an) dominates another imputation b = (b1, b2, ..., bn) if and only if there exists a set of players S such that:
  • S is a subset of {1, 2, ..., n}
  • S is not empty
  • The total in the imputation a of the payouts over all players in S does not exceed the payout v( S ) to the coalition consisting of those players
  • For all players i in S, the payouts ai in a strictly exceed the payouts bi in b

Definition: A set of imputations is a solution (also known as a Von Neumann and Morgenstern solution or a stable set solution) to a game with characteristic function v( ) if and only if:
  • No imputation in the solution is dominated by another imputation in the solution
  • All imputations outside the solution are dominated by some imputation in the solution

Notice that an imputation in a stable set solution can be dominated by some imputation outside the solution. The following set of three imputations is a solution to the three-person zero-sum game in Table 2:
{(-1, 1/2, 1/2), (1/2, -1, 1/2), (1/2, 1/2, -1)}
This solution is constructed by considering all two-person coalitions. In each imputation in the solution, the payouts to the winning coalition are evenly divided.
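Internal stability of this set (no member dominates another) can be checked mechanically from the definitions above. A sketch in Python, using the characteristic function from Table 2; it also confirms that one sample imputation outside the set is dominated by a member:

```python
from itertools import combinations

# Characteristic function of the canonical three-person game (Table 2).
v = {frozenset(): 0,
     frozenset([1]): -1, frozenset([2]): -1, frozenset([3]): -1,
     frozenset([1, 2]): 1, frozenset([1, 3]): 1, frozenset([2, 3]): 1,
     frozenset([1, 2, 3]): 0}

def dominates(a, b):
    # a dominates b if some nonempty coalition S exists with
    # sum of a_i over S <= v(S) (effectiveness) and
    # a_i > b_i for every i in S (strict preference).
    for size in (1, 2, 3):
        for S in combinations((1, 2, 3), size):
            effective = sum(a[i - 1] for i in S) <= v[frozenset(S)]
            preferred = all(a[i - 1] > b[i - 1] for i in S)
            if effective and preferred:
                return True
    return False

solution = [(-1, 0.5, 0.5), (0.5, -1, 0.5), (0.5, 0.5, -1)]

# Internal stability: no imputation in the set dominates another.
internally_stable = all(not dominates(a, b)
                        for a in solution for b in solution if a != b)

# (0, 0, 0) is an imputation outside the set; coalition {1, 2} makes
# (0.5, 0.5, -1) effective and strictly preferred, so it is dominated.
outside_dominated = dominates((0.5, 0.5, -1), (0, 0, 0))
```

External stability would require showing that every imputation outside the set is dominated by some member; the check above only samples one such imputation.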

The above is not the only solution to this game. An uncountably infinite number of solutions exist. Another solution is the following uncountable set of imputations:
{(a, 1 - a, -1) | -1 ≤ a ≤ 2}
This solution can be understood in at least two ways:
  • Player 3 is being discriminated against.
  • The above is a solution to the two-person, non-constant-sum game with the characteristic function in Table 3. A fictitious third player has been appended to allow the game to be analyzed as a three-person zero-sum game.
Von Neumann and Morgenstern present both interpretations.

Table 3: A Two-Person Game With Variable Sum
  Coalition      Value
  { }            v( { } ) = 0
  {1}            v( {1} ) = -1
  {2}            v( {2} ) = -1
  {1, 2}         v( {1, 2} ) = 1

The above has defined the Von Neumann and Morgenstern solution to a game. Mathematicians have defined at least one other solution concept for a cooperative game, the core, in which no imputation in the solution set is dominated by any other imputation. I'm not sure I consider the Shapley value a solution concept, although it does have the structure, I guess, of an imputation.

References
  • W. F. Lucas, "A Game With No Solution", Bulletin of the American Mathematical Society, V. 74, N. 2 (March 1968): 237-239
  • John von Neumann and Oskar Morgenstern, Theory of Games and Economic Behavior, Princeton University Press (1944, 1947, 1953)

Wednesday, September 20, 2006

Does Studying Mainstream Economics Make You A Bad Person?

(I had the title and the next three paragraphs written before Radek's comments today.)

Experimental evidence on the topic suggests a disquieting affirmative answer. Specifically, I refer to "Does Studying Economics Inhibit Cooperation", by Robert H. Frank, Thomas Gilovich, and Dennis T. Regan (Journal of Economic Perspectives, V. 7, N. 2 (Spring 1993): 159-171)

I believe that more up-to-date work exists in this vein. I was able to quickly locate a reference to "Does Studying Economics Discourage Cooperation? Watch What We Do, Not What We Say or How We Play", by Yezer, Goldfarb, and Poppen (Journal of Economic Perspectives, V. 10, N. 1 (1996): 177-186). (I haven't read this.)

But look at the URL for that copy of the Frank, Gilovich, and Regan paper. Why should Richard Stallman want more people to know of their findings? What does this have to do with open-source and free (as in "freedom") software?

Perhaps if I browsed around the proceedings of one of the Wizard of OS conferences, I would see some connection between advocating open source and being anti-mainstream economics. Given the interests of at least one of my readers, I want to note that Lawrence Lessig is the keynote speaker for this year's conference, which just ended.