"... The following example ... I have made, for the sake of clarity, so extreme as to be absurd if taken literally. Imagine the community, during a given short period, to be all asleep, so that in this period neither exchange nor new production takes place, and prices must be supposed to remain where they were when business closed down the previous evening. Suppose that, on waking up the next morning and resuming business, all wealth-owners find that a fit of optimism about the (prospective) price of residential property has come over them. (I have taken this particular asset as typical of an asset having a high degree of durability, a long period of production and a low degree of substitutability, and am ignoring the complications due to the existence of various types of residential houses, selling at different prices and more or less inter-substitutable; that is to say, we assume only one kind of house available to live in or to deal in or to build.) Immediately the normal exchange of residential house-property resumes in the morning, there will be a sellers' market and the price will rise sharply. If we further assume the increase in liquidity-premium attaching to houses owing to the mental revaluations of owners and potential owners to be equal in all cases - that is, the change in opinion to be unanimous - no more and no less buying and selling will take place than on the day before. (More money will be required, other things being equal, to finance this volume of trade in houses at the higher price-level; we assume this to be forthcoming to all who want to deal, e.g. out of bank-loans.) If opinion is not unanimous, additional exchange of houses between the 'bulls' and the 'bears' will take place and will settle the price, but not in general at its former level; we assume for, the sake of the example, that the bulls preponderate, so that the price rises, the necessary money for the dealing, as before, being forthcoming. House-building will, of course, have become an abnormally profitable occupation; and in time the diversion of resources to this industry will come into play and will tend to readjust the relative prices of houses and of assets and people's expectations about them towards their former levels. But before it can do so completely, in general further (similar or opposite) spontaneous changes in the liquidity-premiums attaching to the existing houses will have taken place; obviously the physical production of new houses can never take place fast enough for its effect on prices to catch up with people's purely mental revaluations of existing ones. For the latter operate without any time-lag at all. Of course, in practice, the possibility or prospect of new production bringing down again the money-price of houses is present to people's minds, and operates to diminish optimism or to cause a wave of optimism to be followed by a wave of pessimism. (It is essential to the argument that people think in term of money-prices ...) But there is in fact no reason why new building should ever bring down the money-price of houses at all; if the price of building materials and/or labour is rising rapidly, the new production of houses may operate to reduce their relative price only, the prices of other valuables rising to the necessary degree - or, of course, intermediately to any extent. Or again, all prices may fall, that of houses more than others; or all prices may rise, that of houses less than others. 
The course of the actual money-price of houses is thus quite indeterminate, even in the shortest period, unless we know the course of the money-price of some one single or composite valuable (e.g. labour) - i.e. unless we have a 'convention of stability.' And, even so, the relative price, and therefore in spite of the convention of stability, the actual price of houses is still not precisely determined; it remains indeterminate to the extent to which it may be influenced by unknown changes in liquidity-preferences. This holds even in the shortest period." -- Hugh Townshend (1937) "Liquidity-Premium and the Theory of Value", Economic Journal, V. 47, N. 185 (March): pp. 157-169.Townshend continues by considering this argument as valid for any durable asset, including monetary assets and equitities. He cites, as another example, cotton-mills in Lancashire during the 1920s. According to Townshend, cotton mills were being bought and sold, not with regard to "expectations about the price of cotton goods", but with the intent to "flip" them - to use the jargon of the recent U.S. housing market.
Saturday, December 27, 2008
A Prescient Passage From 1937
One can read Keynes as proposing an alternative theory of value. Hyman Minsky advances this reading to some extent. Classic texts for this reading include chapter 17 of the General Theory and Hugh Townshend's 1937 article on Keynes' book. The following passage is from the latter:
Tuesday, December 23, 2008
Minsky Versus Sraffa
Kevin "Angus" Grier reminisces about Hyman Minsky's dislike for Piero Sraffa. But he doesn't recall points at issue. Minsky expressed his views in print:
"Given my interpretation of Keynes (Minsky, 1975, 1986) and my views of the problems that economists need to address as the twentieth century draws to a close, the substance of the papers in Eatwell and Milgate (1983) and the neoclassical synthesis are (1) equally irrelevant to the understanding of modern capitalist economies and (2) equally foreign to essential facets of Keynes's thought. It is more important for an economic theory to be relevant for an understanding of economies than for it to be true to the thought of Keynes, Sraffa, Ricardo, or Marx. The only significance Keynes's thought has in this context is that it contains the beginning of an economic theory that is especially relevant to understanding capitalist economies. This relevance is due to the monetary nature of Keynes's theory.I gather, from second or third-hand accounts, that debates along these lines became quite acrimonious at the annual summer school in Trieste during the 1980s. I've always imagined Paul Davidson and Pierangelo Garegnani would be the most vocal advocates of the extremes in these debates. And I think of Jan Kregel, Edward Nell, and Luigi Pasinetti as being somewhere in the middle, going off in different directions. I don't know much about monetary circuit theory, but such theory may provide an approach to integrating money into Sraffianism.
Modern capitalist economies are intensely financial. Money in these economies is endogenously determined as activity and asset holdings are financed and commitments of prior contracts are fulfilled. In truth, every economic unit can create money - this property is not restricted to banks. The main problem a 'money creator' faces is getting his money accepted...
...The title of this session, 'Sraffa and Keynes: Effective Demand in the Long Run', puzzles me. Sraffa says little or nothing about effective demand and Keynes's General Theory can be viewed as holding that the long run is not a fit subject for study. At the arid level of Sraffa, the Keynesian view that effective demand reflects financial and monetary variables has no meaning, for there is no monetary or financial system in Sraffa. At the concrete level of Keynes, the technical conditions of production, which are the essential constructs of Sraffa, are dominated by profit expectations and financing conditions." -- Hyman Minsky "Sraffa and Keynes: Effective Demand in the Long Run", in Essays of Piero Sraffa: Critical Perspectives on the Revival of Classical Theory (edited by Krishna Bharadwaj and Bertram Schefold), Unwin-Hyman (1990)
Of course, Minsky's theories and Davidson's proposals for national and international reforms are of great contemporary relevance.
Friday, December 19, 2008
Don't Say "There Must Be Something Common, Or They Would Not Be Called 'Games'"
1.0 Introduction
Von Neumann and Morgenstern posed a mathematical problem in 1944: does every game have a solution, where a solution is defined in their sense? W. F. Lucas settled this question in 1967: not all games have such a solution. (It is also known that such a solution need not be unique. In fact, the solution to the three-person game I use below to illustrate the Von Neumann and Morgenstern solution is not unique.)
I may sometime in the future try to explain the ten-player game Lucas presents as a counterexample, assuming I can come to grasp it better than I do now. With this post, I explain some concepts of cooperative game theory, so as to have them available for reference if and when I do. (The Nash equilibrium and its refinements are notions from the different theory of non-cooperative games.)
2.0 Definition of a Game
Roughly, a game is specified by:
- The number of players
- The strategies available for each player
- The payoffs to each player for each combination of player strategies
2.1 Extensive Form
A game in extensive form is specified as a tree. This is most easily seen for board games, like backgammon or chess. Each node in the tree is a board position, with the root of the tree corresponding to the initial position.
The specification of a node includes which player is to move next, as well as the board position. Each move available to the player whose turn it is appears as a link leading from the node to the node for the resulting board position. Random moves are specified as moves made by a fictitious player, who might be named "Mother Nature". The roll of a pair of dice or the deal of a randomly selected card are examples of random moves. For a random move, the probability of each move is specified along the link connecting one node to another. Since a move by an actual player is freely chosen, the probabilities of moves by actual players are not part of the specification of a game.
The above description of the specification of a game cannot yet handle games like poker. In poker, not every player knows every card that is dealt. Von Neumann and Morgenstern introduce the concept of "information sets" to allow one to specify that, for instance, a player knows all the cards in his own hand and, perhaps, only some of the cards in the other players' hands. An information set at a node specifies, for the player whose turn it is, which of the previous moves in the game he has knowledge of. That is, an information set is a subset of the set of links in the tree leading from the initial position to the current node. Since some of these moves were random, this specification allows for the dealing of hands of cards, for example.
The final element in this specification of a game occurs at the leaves of the tree. These are the final positions in the game. Each leaf is assigned the payouts to each player in the game.
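Pulling these elements together, here is one possible concrete rendering of this specification - a minimal sketch of my own, not anything appearing in Von Neumann and Morgenstern:

```python
# A minimal, illustrative sketch of a node in an extensive-form game tree.
# Names and layout are my own invention, not from the text.
from dataclasses import dataclass, field

NATURE = 0  # the fictitious player, "Mother Nature", who makes random moves

@dataclass
class Node:
    player: int = NATURE                # whose turn it is at this node
    children: dict = field(default_factory=dict)  # move label -> successor Node
    move_probabilities: dict = field(default_factory=dict)  # set only for NATURE
    payouts: tuple = ()                 # non-empty only at leaves: one value per player
    information_set: frozenset = frozenset()  # prior moves known to the player to move
```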
It is easy to see how to define a player's strategy with this specification of a game. A strategy states the player's choice of a move at each node at which it is that player's turn. A play of the game consists of each player specifying a strategy, together with a random selection from the specified probability distribution at each node at which a random move occurs. These strategies and the random moves determine the leaf at which the game terminates, and hence the payouts to all players for the play.
One can get rid of the randomness, in some sense, by considering an infinite number of plays of the game for each combination of players' strategies. This will result in a probability distribution for payouts. The assumption is that each player is interested in the expected value, that is, the mean payout, to be calculated from this probability distribution. (All these descriptions of calculations have abstracted from time and space computational constraints.)
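As a toy illustration of this averaging (my own example, not one from the book), consider a one-player game in which Mother Nature rolls a die and the player receives +1 on a roll of 5 or 6 and -1 otherwise:

```python
# A minimal sketch (illustrative only): the expected payout of a toy game in
# which "Mother Nature" rolls a fair die and the player wins 1 on a 5 or 6,
# losing 1 otherwise.
from fractions import Fraction

payouts = {roll: (1 if roll >= 5 else -1) for roll in range(1, 7)}
expected = sum(Fraction(1, 6) * payout for payout in payouts.values())
print(expected)  # -1/3: the mean payout over an infinite number of plays
```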
2.2 Normal Form
One abstracts from the sequence of moves and from random moves in specifying a game in normal form. The extensive form allows for the definition of strategies for each player, and each strategy can be assigned an arbitrary label. A game in normal form consists of a grid or table. Each dimension of the table corresponds to a player, and that player's strategies are listed along that dimension. Each entry in the table consists of an ordered tuple, where the elements of the tuple are the expected payouts to the players for the specified combination of strategies.
Table 1 shows a simple example - the children's game, "Rock, Paper, Scissors." The rules specify the winner: rock crushes scissors, scissors cut paper, and paper covers rock. This is a two-person zero-sum game. The payouts shown in the table are those of the player whose strategies label the rows. The payouts to the column player are, in this case, the additive inverses of the table entries.
Table 1: Payouts to the Row Player in Rock, Paper, Scissors
 | Rock | Scissors | Paper |
Rock | 0 | +1 | -1 |
Scissors | -1 | 0 | +1 |
Paper | +1 | -1 | 0 |
By symmetry, no pure strategy in Rock, Paper, Scissors is better than any other. A mixed strategy is formed for a player by assigning probabilities to each of that player's pure strategies. Probabilities due to states of nature are removed in the analysis of games by taking mathematical expectations, but probabilities reappear from rational strategization. I also found interesting Von Neumann and Morgenstern's analysis of an idealized form of poker: one wants to bluff on occasion, and to have one's bluffs called, so that players will be willing to add more to the pot when one raises on a good hand.
Each player's best mixed strategy in a two-person zero-sum game can be found by solving a Linear Program (LP). Let p1, p2, and p3 be the probabilities that the row player in Table 1 chooses strategies Rock, Scissors, and Paper, respectively. The value of the game to the row player is v. The row player's LP is:
Choose p1, p2, p3, v
To Maximize v
Such that
-p2 + p3 ≥ v
p1 - p3 ≥ v
-p1 + p2 ≥ v
p1 + p2 + p3 = 1
p1 ≥ 0, p2 ≥ 0, p3 ≥ 0
The interest of the column player is to minimize the payout to the row player. The left-hand sides of the first three constraints show the expected value to the row player when the column player plays Rock, Scissors, and Paper, respectively. That is, the coefficients by which the probabilities are multiplied in these constraints come from the columns in Table 1. Given knowledge of the solution probabilities, the column player can guarantee the value of the game does not exceed these expected values by choosing the corresponding column strategy. That is, the column player chooses a pure strategy to minimize the expected payout to the row player.
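As a check on this formulation, the LP can be solved numerically. Here is a minimal sketch using scipy's linprog (scipy is my choice of solver here; the original post contains no code). It recovers the symmetric solution, with each strategy played with probability 1/3 and a game value of zero:

```python
# A minimal sketch solving the row player's LP for Rock, Paper, Scissors.
from scipy.optimize import linprog

# Variables: x = [p1, p2, p3, v]. linprog minimizes, so maximize v as -v.
c = [0, 0, 0, -1]
# Rewrite each constraint "expected payout >= v" as "... + v <= 0":
#   column plays Rock:     -p2 + p3 >= v  ->   p2 - p3 + v <= 0
#   column plays Scissors:  p1 - p3 >= v  ->  -p1 + p3 + v <= 0
#   column plays Paper:    -p1 + p2 >= v  ->   p1 - p2 + v <= 0
A_ub = [[0, 1, -1, 1],
        [-1, 0, 1, 1],
        [1, -1, 0, 1]]
b_ub = [0, 0, 0]
A_eq = [[1, 1, 1, 0]]                      # p1 + p2 + p3 = 1
b_eq = [1]
bounds = [(0, None)] * 3 + [(None, None)]  # p_i >= 0; v unrestricted in sign

result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(result.x)  # approximately [1/3, 1/3, 1/3, 0]
```

By duality, the column player's optimal mixed strategy solves the corresponding dual LP and yields the same game value of zero.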
The column player's LP is the dual of the above LP. As a corollary of duality theory in Linear Programming, a minimax solution exists for all two-person zero-sum games. This existence is needed for the definition of the characteristic function form of a game.
2.3 Characteristic Function Form
The characteristic function form of a game is defined in terms of coalitions of players. An n-person game is reduced to a two-person game, where the "players" consist of a coalition of true players and the remaining players outside the coalition. The characteristic function for a game is the value of the corresponding two-person zero-sum game for each coalition of players. The characteristic function form of the game specifies the characteristic function.
As an illustration, Von Neumann and Morgenstern specify the three-person game in Table 2. In this game, coalitions of exactly two people win a unit.
Table 2: Characteristic Function for the Three-Person Game
Coalition | Value |
{ } | v( { } ) = 0 |
{1} | v( {1} ) = -1 |
{2} | v( {2} ) = -1 |
{3} | v( {3} ) = -1 |
{1, 2} | v( {1, 2} ) = 1 |
{1, 3} | v( {1, 3} ) = 1 |
{2, 3} | v( {2, 3} ) = 1 |
{1, 2, 3} | v( {1, 2, 3} ) = 0 |
3.0 A Solution Concept
Definition: An imputation for an n-person game is an n-tuple (a1, a2, ..., an) such that:
- For all players i, the payout to that player in the imputation does not fall below the amount that that player can obtain without the cooperation of any other player. That is, ai ≥ v( {i} ).
- The total in the imputation of the payouts over all players is the payout v( {1, 2, ..., n} ) to the coalition consisting of all players.
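These two conditions translate directly into a check. The following is a minimal sketch of my own (illustrative only), using the characteristic function of the game in Table 2:

```python
# A minimal sketch checking the two imputation conditions against the
# characteristic function of the three-person game in Table 2.
def v(coalition):
    """Characteristic function from Table 2; the value depends only on size."""
    return {0: 0, 1: -1, 2: 1, 3: 0}[len(coalition)]

def is_imputation(a):
    # Each player gets at least what he can obtain alone...
    individually_rational = all(a[i] >= v((i,)) for i in range(len(a)))
    # ...and the payouts sum to the value of the all-player coalition.
    efficient = sum(a) == v(tuple(range(len(a))))
    return individually_rational and efficient

print(is_imputation((-1, 0.5, 0.5)))  # True
print(is_imputation((-2, 1.0, 1.0)))  # False: player 1 gets less than v({1})
```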
Definition: An imputation a = (a1, a2, ..., an) dominates another imputation b = (b1, b2, ..., bn) if and only if there exists a set of players S such that:
- S is a subset of {1, 2, ..., n}
- S is not empty
- The total in the imputation a of the payouts over all players in S does not exceed the payout v( S ) to the coalition consisting of those players
- For all players i in S, the payouts ai in a strictly exceed the payouts bi in b
Definition: A set of imputations is a solution (also known as a Von Neumann and Morgenstern solution or a stable set solution) to a game with characteristic function v( ) if and only if:
- No imputation in the solution is dominated by another imputation in the solution
- All imputations outside the solution are dominated by some imputation in the solution
Notice that an imputation in a stable set solution can be dominated by some imputation outside the solution. The following set of three imputations is a solution to the three-person zero-sum game in Table 2:
{(-1, 1/2, 1/2), (1/2, -1, 1/2), (1/2, 1/2, -1)}
This solution is constructed by considering all two-person coalitions. In each imputation in the solution, the payouts to the winning coalition are evenly divided.
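The domination relation can also be checked mechanically. A minimal sketch (again my own illustrative code) confirms that no imputation in the above solution dominates another imputation in the solution:

```python
# A minimal sketch of the domination check for the Table 2 game.
from itertools import chain, combinations

def v(coalition):
    """Characteristic function from Table 2, by coalition size."""
    return {0: 0, 1: -1, 2: 1, 3: 0}[len(coalition)]

def dominates(a, b):
    """Does imputation a dominate imputation b?"""
    players = range(len(a))
    # All non-empty coalitions S of players.
    coalitions = chain.from_iterable(
        combinations(players, k) for k in range(1, len(a) + 1))
    for S in coalitions:
        effective = sum(a[i] for i in S) <= v(S)  # S can enforce its payouts in a
        preferred = all(a[i] > b[i] for i in S)   # every member of S strictly gains
        if effective and preferred:
            return True
    return False

solution = [(-1, 0.5, 0.5), (0.5, -1, 0.5), (0.5, 0.5, -1)]
# No imputation in the solution dominates another in the solution:
print(any(dominates(a, b) for a in solution for b in solution if a != b))
# Expect: False
```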
The above is not the only solution to this game. An uncountably infinite number of solutions exist. Another solution is the following uncountable set of imputations:
{(a, 1 - a, -1) | -1 ≤ a ≤ 2}
This solution can be understood in at least two ways:
- Player 3 is being discriminated against.
- The above is a solution to the two-person, non-constant-sum game with the characteristic function in Table 3. A fictitious third player has been appended to allow the game to be analyzed as a three-person zero-sum game.
Table 3: Characteristic Function for the Two-Person Game
Coalition | Value |
{ } | v( { } ) = 0 |
{1} | v( {1} ) = -1 |
{2} | v( {2} ) = -1 |
{1, 2} | v( {1, 2} ) = 1 |
The above has defined the Von Neumann and Morgenstern solution to a game. Mathematicians have defined at least one other solution concept to a cooperative game, the core, in which no imputation in the solution set is dominated by any other imputation. I'm not sure I consider the Shapley value as a solution concept, although it does have the structure, I guess, of an imputation.
References
- W. F. Lucas, "A Game With No Solution", Bulletin of the American Mathematical Society, V. 74, N. 2 (March 1968): 237-239
- John von Neumann and Oskar Morgenstern, Theory of Games and Economic Behavior, Princeton University Press (1944, 1947, 1953)
Tuesday, December 16, 2008
Keynes' General Theory As A Long Period Theory
The following paragraph appears in Chapter 5 of The General Theory of Employment Interest and Money:
"If we suppose a state of expectation to continue for a sufficient length of time for the effect on employment to have worked itself out so completely that there is, broadly speaking, no piece of employment going on which would not have taken place if the new expectation had always existed, the steady level of employment thus attained may be called the long-period employment corresponding to that state of expectation. It follows that, although expectations may change so frequently that the actual level of employment has never had time to reach the long-period employment corresponding to the existing state of expectation, nevertheless every state of expectation has its definite corresponding level of long-period employment."It seems to me that this passage is important in an interpretation of Keynes as claiming that his theory applies in both the long and short periods. Even if the capital equipment in the economy were adjusted to effective demand, Keynes claims, the labor force need not be fully employed.
I think this reading is strengthened by a couple of considerations. One should distinguish between the full utilization of capital equipment and full employment. Distinguishing between these concepts makes most sense if one has dropped the idea of substitution between capital and labor. Likewise, one should drop the idea that the (long run) interest rate equilibrates savings and investment. But the dropping of these ideas is one result of the Cambridge Capital Controversies. On the other hand, it is not clear that Sraffa accepted Keynes' Chapter 17, another important locus for a long period interpretation of Keynes' General Theory.
I conclude with a couple of important Sraffian references on this issue. I could probably find some more recent ones. But I think Milgate (1982) and Eatwell and Milgate (1983) are key texts in this controversial area (even though I haven't read them in years).
References
- John Eatwell and Murray Milgate (editors) (1983) Keynes's Economics and the Theory of Value and Distribution, Duckworth
- Murray Milgate (1982) Capital and Employment: A Study of Keynes's Economics, Academic Press
Sunday, December 14, 2008
Stiglitz the Keynesian
Stiglitz has an article, "Capitalist Fools", in the January issue of Vanity Fair. He argues that the new depression is the result of:
- Firing Volcker after he successfully fought inflation
- Abolishing Glass-Steagall
- Imposing the non-stimulative and regressive Bush tax cuts
- Incentive structure encouraging faulty accounting
- Paulson's faulty October bailout package
And he has an 11 December article in Business Day, a South African newspaper. Stiglitz is interested in how to formulate Keynesian policy effectively.
In local news... Last March, Stiglitz wrote the New York State governor recommending that NY address its deficit by raising taxes on the rich.
Here's a characterization of Stiglitz's economic teaching:
"In his lectures, Stiglitz applied the machinery of neoclassical economics to upturn the standard results. Like a magician drawing rabbits from a hat, he could make demand curves slope up, supply curves slope down, markets in competitive equilibrium fail to clear, cross-subsidies make everyone better off, students over-educate themselves, and farmers produce the wrong quantities of goods. And then he would show how the magic reflected some very human and rational response to imperfect information. The theorem that individual rationality leads to social rationality applies to a special case, not the general case." -- Karla Hoff, in Economics for an Imperfect World: Essays in Honor of Joseph E. Stiglitz (ed. by R. Arnott, B. Greenwald, R. Kanbur, & B. Nalebuff) MIT Press, 2003 (quoted by John Lodewijks, "Review", Review of Political Economy, V. 21, N. 1, 2009)So why is Stiglitz considered a mainstream economist and Ian Steedman a non-mainstream heterodox economist?
Friday, December 12, 2008
Economists Unsuccessful In Teaching
I've previously commented on Philip Ball's opinion of economics. He recently offered an article in Nature. It is behind a paywall, but Ball provides an unedited, longer version on his blog, along with a comment on it.
Ball notices that mainstream economists often defend their discipline against critics by asserting that the critics attack a straw person: of course introductory courses are simplified, but sophisticated research has long since moved beyond such models. Ball's point seems to be that, if so, economists have not been successful in getting the public or policy-makers to realize the introductory nature of simplified models or to be aware of more sophisticated lessons. "Knowledgeable economists and critics of traditional economics are on the same side."
Monday, December 08, 2008
Designing A Keynesian Stimulus Plan
Some version of this New York Times article contains the following passage:
"A blueprint for such spending can be found in a study financed by the Political Economy Research Institute at the University of Massachusetts and the Center for American Progress, a Washington research organization founded by John D. Podesta, who is a co-chairman of Mr. Obama's transition team.I went looking for this study, but was unable to find it. The PERI report, "Green Recovery: A Program to Create Good Jobs and Start Building a Low-Carbon Economy" (by Robert Pollin, Heidi Garrett-Peltier, James Heintz, and Helen Scharber) is dated September 2008. The CAP report, "How to Spend $350 Billion in a First Year of Stimulus and Recovery" (by Will Straw and Michael Ettinger) is dated 5 December 2008.
The study, released in November after months of work, found that a $100 billion investment in clean energy could create 2 million jobs over two years." -- Peter Baker and John M. Broder, New York Times, 7 December 2008 [Links inserted by Robert Vienneau]
Based on the reports I found, I doubt the report referred to by the New York Times article goes into details on the analytical justifications for its figures, which I'd like to see. I know that, in principle, one could create a transactions table from use and make tables. From such a transactions table, one can calculate multipliers by sectors and some measure of environmental impact. But I do not understand all the accounting conventions and approximations one would have to make to get such practical analyses from the national income accounts.
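That said, the multiplier calculation itself, given a completed transactions table, is standard input-output algebra. Here is a minimal sketch with made-up numbers (my illustration, not anything from the PERI or CAP reports):

```python
# A minimal sketch of sector output multipliers from a transactions table,
# using the standard Leontief inverse. All numbers are made up.
import numpy as np

# Inter-industry transactions (rows: selling sector, columns: buying sector)
transactions = np.array([[10., 40.],
                         [30., 20.]])
gross_output = np.array([100., 120.])

# Technical coefficients: a_ij = transactions_ij / gross output of sector j
A = transactions / gross_output
# Leontief inverse: total (direct plus indirect) output per unit of final demand
L = np.linalg.inv(np.eye(2) - A)
# Output multiplier for each sector: the column sums of the Leontief inverse
print(L.sum(axis=0))
```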
Saturday, December 06, 2008
How Individuals Can Choose, Even Though They Do Not Maximize Utility
1.0 Introduction
I think of this post as posing a research question. S. Abu Turab Rizvi re-interprets the primitives of social choice theory to refer to mental modules or subroutines in an individual. He then shows that the logical consequence is that individuals are not utility-maximizers. That is, in general, no preference relation exists for an individual that satisfies the conditions equivalent to the existence of a utility function. I have been reading Donald Saari on the mathematics of voting. What are the consequences for individual choice of interpreting this mathematics in Rizvi's terms?
I probably will not pursue this question, although I may draw on these literatures to present some more interesting counter-intuitive numerical examples.
2.0 Arrow's Impossibility Theorem and Work-Arounds
Consider a society of individuals. These individuals are "rational" in that each individual can rank all alternatives, and each individual ranking is transitive. Given the rankings of individuals, we seek a rule, defined for all individual rankings, to construct a complete and transitive ranking of alternatives for society. This rule should satisfy certain minimal properties:
- Non-Dictatorship: No individual exists such that the rule merely assigns his or her ranking to society.
- Independence of Irrelevant Alternatives (IIA): Consider two countries composed of the same number of individuals. Suppose the same number in each country prefer one alternative to another in a certain pair of alternatives, and the same number are likewise indifferent between these alternatives. Then the rule cannot result in societal rankings for the two countries that differ in the order in which these two alternatives are ranked.
- Pareto Principle: If one alternative is ranked higher than another for all individuals, then the ranking for society must rank the former alternative higher than the latter as well.
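Arrow's impossibility theorem states that, with three or more alternatives, no such rule satisfies all of these conditions. One way to see the difficulty (a sketch of my own, not from Arrow) is the Condorcet paradox: pairwise majority voting, applied to three transitive individual rankings, can produce an intransitive ranking for society:

```python
# A minimal sketch of the Condorcet paradox: majority voting over transitive
# individual rankings yields a cyclic, hence intransitive, social ranking.
rankings = [['a', 'b', 'c'],   # each list is one individual's preference order
            ['b', 'c', 'a'],
            ['c', 'a', 'b']]

def majority_prefers(x, y):
    """True if a strict majority ranks alternative x above alternative y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

for x, y in [('a', 'b'), ('b', 'c'), ('c', 'a')]:
    print(x, 'beats', y, ':', majority_prefers(x, y))
# All three print True: a beats b, b beats c, and c beats a - a cycle.
```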
Arrow's work has generated lots of critical and interesting research. For example, Sen considers choice functions for society, instead of rankings. A choice function selects the best alternative for every subset of alternatives. That is, for any menu of alternatives, a choice function specifies a best alternative. Consider a rule mapping every set of individual preferences to a choice function. All of Arrow's conditions are consistent for such a map from individual preferences to a choice function.
Saari criticizes the IIA property as requiring a collective choice rule not to use all available information. In particular, the rule makes no use of the number of alternatives, if any, that each individual ranks between each pair. The rule does not make use of enough information to check that each individual has transitive preferences. (Apparently, the IIA condition has generated other criticisms, including by Gibbard.) Saari proposes relaxing the IIA condition to use information sufficient for checking the transitivity of each individual's preference.
Saari also describes a collective choice rule that includes each individual numbering their choices in order, with the first choice being assigned 1, the second 2, and so on. With these numerical assignments, the choices are summed over individuals, and the ranking for society is the ranking resulting from these sums. This aggregation procedure is known as the Borda count. Saari shows that Borda count satisfies the relaxed IIA condition and Arrow's remaining conditions.
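Here is a minimal sketch of the Borda count as just described (my own illustrative code; with this scoring convention, a lower total is better):

```python
# A minimal sketch of the Borda count: first choice scores 1, second 2, etc.,
# and society ranks alternatives by their summed scores, lowest first.
def borda(rankings, alternatives):
    """rankings: one list per voter, giving that voter's alternatives in order."""
    scores = {alt: 0 for alt in alternatives}
    for ranking in rankings:
        for position, alt in enumerate(ranking, start=1):
            scores[alt] += position
    return sorted(alternatives, key=lambda alt: scores[alt])

# Three voters ranking alternatives a, b, c:
print(borda([['a', 'b', 'c'], ['b', 'c', 'a'], ['a', 'c', 'b']], ['a', 'b', 'c']))
# Expect ['a', 'b', 'c']: a scores 1+3+1=5, b scores 2+1+3=6, c scores 3+2+2=7
```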
3.0 Philosophy of Mathematics
Above, I have summarized aspects of the theory of social choice in fairly concrete terms, such as "individuals" and "society". The mathematics behind these theorems is formulated in set-theoretic terms. The referent for mathematical terms is not fixed by the mathematics:
"One must be able to say at all times - instead of points, straight lines, and planes - tables, chairs, and beer mugs." - David Hilbert (as quoted by Constance Reid, Hilbert, Springer-Verlag, 1970: p. 57)
"Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true." -- Bertrand Russell
4.0 An Interpretation
Rizvi re-interprets the social choice formalism as applying to another set of referents. A society's ranking, in the traditional interpretation, is now an individual's ranking. An individual's ranking, in the traditional interpretation, is now an influence on an individual's ranking. Rizvi's approach reminds me of Marvin Minsky's society of mind, in which minds are understood to be modular. Rizvi examines the implications of Sen's impossibility of a Paretian liberal for individual preferences under this interpretation of the mathematics of social choice theory.
Constructing natural numbers in terms of set theory allows one to derive the Peano axioms as theorems. Similarly, interpreting social choice theory as applying to decision-making components within an individual allows one to analyze whether the conditions often imposed on individual preferences by mainstream economists can be derived from this deeper structure. And, as follows from Arrow's impossibility theorem, these conditions cannot be so derived in general. Individuals do not and need not maximize utility. On the other hand, Sen's result explains how individuals can choose a best option from the menus with which they may be presented.
References
- Kenneth J. Arrow (1963) Social Choice and Individual Values, Second edition, Cowles Foundation
- Alan G. Isaac (1998) "The Structure of Neoclassical Consumer Theory", working paper (9 July)
- Marvin Minsky (1987) The Society of Mind, Simon and Schuster
- Donald G. Saari (2001) Chaotic Elections! A Mathematician Looks at Voting, American Mathematical Society
- S. Abu Turab Rizvi (2001) "Preference Formation and the Axioms of Choice", Review of Political Economy, V. 13, N. 2 (Nov.): 141-159
- Amartya K. Sen (1969) "Quasi-Transitivity, Rational Choice and Collective Decisions", Review of Economic Studies, V. 36, N. 3 (July): 381-393 (I haven't read this.)
- Amartya K. Sen (1970) "The Impossibility of a Paretian Liberal", Journal of Political Economy, V. 78, N. 1 (Jan.-Feb.): 152-157
Monday, December 01, 2008
On John Maynard Keynes
- Paul Krugman again and again and again
- Brad DeLong basically quotes Krugman
- Tyler Cowen
- Peter Boettke comments on Cowen on Keynes
- Matthew Mueller comments on Boettke on Keynes
Update: I suppose I ought to mention my game, which is closer to hydraulic Keynesianism. I did get the idea for the underlying model from Kalecki.