"... The following example ... I have made, for the sake of clarity, so extreme as to be absurd if taken literally. Imagine the community, during a given short period, to be all asleep, so that in this period neither exchange nor new production takes place, and prices must be supposed to remain where they were when business closed down the previous evening. Suppose that, on waking up the next morning and resuming business, all wealth-owners find that a fit of optimism about the (prospective) price of residential property has come over them. (I have taken this particular asset as typical of an asset having a high degree of durability, a long period of production and a low degree of substitutability, and am ignoring the complications due to the existence of various types of residential houses, selling at different prices and more or less inter-substitutable; that is to say, we assume only one kind of house available to live in or to deal in or to build.) Immediately the normal exchange of residential house-property resumes in the morning, there will be a sellers' market and the price will rise sharply. If we further assume the increase in liquidity-premium attaching to houses owing to the mental revaluations of owners and potential owners to be equal in all cases - that is, the change in opinion to be unanimous - no more and no less buying and selling will take place than on the day before. (More money will be required, other things being equal, to finance this volume of trade in houses at the higher price-level; we assume this to be forthcoming to all who want to deal, e.g. out of bank-loans.) If opinion is not unanimous, additional exchange of houses between the 'bulls' and the 'bears' will take place and will settle the price, but not in general at its former level; we assume for, the sake of the example, that the bulls preponderate, so that the price rises, the necessary money for the dealing, as before, being forthcoming. House-building will, of course, have become an abnormally profitable occupation; and in time the diversion of resources to this industry will come into play and will tend to readjust the relative prices of houses and of assets and people's expectations about them towards their former levels. But before it can do so completely, in general further (similar or opposite) spontaneous changes in the liquidity-premiums attaching to the existing houses will have taken place; obviously the physical production of new houses can never take place fast enough for its effect on prices to catch up with people's purely mental revaluations of existing ones. For the latter operate without any time-lag at all. Of course, in practice, the possibility or prospect of new production bringing down again the money-price of houses is present to people's minds, and operates to diminish optimism or to cause a wave of optimism to be followed by a wave of pessimism. (It is essential to the argument that people think in term of money-prices ...) But there is in fact no reason why new building should ever bring down the money-price of houses at all; if the price of building materials and/or labour is rising rapidly, the new production of houses may operate to reduce their relative price only, the prices of other valuables rising to the necessary degree - or, of course, intermediately to any extent. Or again, all prices may fall, that of houses more than others; or all prices may rise, that of houses less than others. 
The course of the actual money-price of houses is thus quite indeterminate, even in the shortest period, unless we know the course of the money-price of some one single or composite valuable (e.g. labour) - i.e. unless we have a 'convention of stability.' And, even so, the relative price, and therefore in spite of the convention of stability, the actual price of houses is still not precisely determined; it remains indeterminate to the extent to which it may be influenced by unknown changes in liquidity-preferences. This holds even in the shortest period." -- Hugh Townshend (1937) "Liquidity-Premium and the Theory of Value", Economic Journal, V. 47, N. 185 (March): pp. 157-169.Townshend continues by considering this argument as valid for any durable asset, including monetary assets and equitities. He cites, as another example, cotton-mills in Lancashire during the 1920s. According to Townshend, cotton mills were being bought and sold, not with regard to "expectations about the price of cotton goods", but with the intent to "flip" them - to use the jargon of the recent U.S. housing market.
Saturday, December 27, 2008
A Prescient Passage From 1937
One can read Keynes as proposing an alternative theory of value. Hyman Minsky advances this reading to some extent. Classic texts for this reading include chapter 17 of the General Theory and Hugh Townshend's 1937 article on Keynes' book. The following passage is from the latter:
Tuesday, December 23, 2008
Minsky Versus Sraffa
Kevin "Angus" Grier reminisces about Hyman Minsky's dislike for Piero Sraffa. But he doesn't recall points at issue. Minsky expressed his views in print:
"Given my interpretation of Keynes (Minsky, 1975, 1986) and my views of the problems that economists need to address as the twentieth century draws to a close, the substance of the papers in Eatwell and Milgate (1983) and the neoclassical synthesis are (1) equally irrelevant to the understanding of modern capitalist economies and (2) equally foreign to essential facets of Keynes's thought. It is more important for an economic theory to be relevant for an understanding of economies than for it to be true to the thought of Keynes, Sraffa, Ricardo, or Marx. The only significance Keynes's thought has in this context is that it contains the beginning of an economic theory that is especially relevant to understanding capitalist economies. This relevance is due to the monetary nature of Keynes's theory.

Modern capitalist economies are intensely financial. Money in these economies is endogenously determined as activity and asset holdings are financed and commitments of prior contracts are fulfilled. In truth, every economic unit can create money - this property is not restricted to banks. The main problem a 'money creator' faces is getting his money accepted...

...The title of this session, 'Sraffa and Keynes: Effective Demand in the Long Run', puzzles me. Sraffa says little or nothing about effective demand and Keynes's General Theory can be viewed as holding that the long run is not a fit subject for study. At the arid level of Sraffa, the Keynesian view that effective demand reflects financial and monetary variables has no meaning, for there is no monetary or financial system in Sraffa. At the concrete level of Keynes, the technical conditions of production, which are the essential constructs of Sraffa, are dominated by profit expectations and financing conditions." -- Hyman Minsky, "Sraffa and Keynes: Effective Demand in the Long Run", in Essays on Piero Sraffa: Critical Perspectives on the Revival of Classical Theory (edited by Krishna Bharadwaj and Bertram Schefold), Unwin-Hyman (1990)

I gather, from second- or third-hand accounts, that debates along these lines became quite acrimonious at the annual summer school in Trieste during the 1980s. I've always imagined Paul Davidson and Pierangelo Garegnani would be the most vocal advocates of the extremes in these debates. And I think of Jan Kregel, Edward Nell, and Luigi Pasinetti as being somewhere in the middle, going off in different directions. I don't know much about monetary circuit theory, but such theory may provide an approach to integrating money into Sraffianism.

Of course, Minsky's theories and Davidson's proposals for national and international reforms are of great contemporary relevance.
Friday, December 19, 2008
Don't Say "There Must Be Something Common, Or They Would Not Be Called 'Games'"
1.0 Introduction
Von Neumann and Morgenstern posed a mathematical problem in 1944: Does every game have a solution, where a solution is defined in their sense? W. F. Lucas solved this problem in 1967. Not all games have such a solution. (It is known that such a solution need not be unique. In fact, the solution to the three-person game I use below to illustrate the Von Neumann and Morgenstern solution is not unique.)

I may sometime in the future try to explain the game with ten players that Lucas presents as a counterexample, assuming I can grasp it better than I do now. With this post, I try to explain some concepts of cooperative game theory, so as to have this post for reference when and if I do. The Nash equilibrium and its refinements are notions from the distinct theory of non-cooperative games.
2.0 Definition of a Game
Roughly, a game is specified by:
- The number of players
- The strategies available for each player
- The payoffs to each player for each combination of player strategies
2.1 Extensive Form
A game in extensive form is specified as a tree. This is most easily seen for board games, like backgammon or chess. Each node in the tree is a board position, with the root of the tree corresponding to the initial position.
The specification of a node includes which player is to move next, as well as the board position. Each possible move the player whose turn it is can make is shown by a link leading from the node to a node for the board position after that choice of a move. Random moves are specified as moves made by a fictitious player, who might be named "Mother Nature". The roll of a pair of dice or the deal of a randomly selected card are examples of random moves. With a random move, the probability of each move is specified along the line connecting one node to another. Since a move by an actual player is freely chosen, the probabilities of any move by an actual player are not specified in the specification of a game.
The above description of the specification of a game cannot yet handle games like poker. In poker, not every player knows every card that is dealt. Von Neumann and Morgenstern introduce the concept of "information sets" to allow one to specify that, for instance, a player only knows all the cards in his hand and, perhaps, some of the cards in the other players' hands. An information set at a node specifies, for the player whose turn it is, which of the previous choices of moves in the game he has knowledge of. That is, an information set is a subset of the set of links in the tree leading from the initial position to the current node position. Since some of these moves were random, this specification allows for the dealing of hands of cards, for example.
The final element in this specification of a game occurs at the leaves of the tree. These are the final positions in the game. Each leaf is assigned the payouts to each player in the game.

It is easy to see how to define a player's strategy with this specification of a game. A strategy states the player's choice of a move at each node at which it is that player's turn to move. A play of the game consists of each player specifying their strategy and the random selection of a choice from the specified probability distributions at each node at which a random move occurs. These strategies and the random moves determine the leaf at which the game terminates. And one can then see the payouts to all players for the play.
One can get rid of the randomness, in some sense, by considering an infinite number of plays of the game for each combination of players' strategies. This will result in a probability distribution for payouts. The assumption is that each player is interested in the expected value, that is, the mean payout, to be calculated from this probability distribution. (All these descriptions of calculations have abstracted from time and space computational constraints.)
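The data structure just described can be made concrete. The following is a minimal Python sketch (my own illustration, not anything from Von Neumann and Morgenstern) of a game-tree node carrying the player to move, the available moves, probabilities for random moves, an information-set label, and payouts at leaves. The toy coin-guessing game at the end is a hypothetical example.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class Node:
    player: Optional[str] = None                 # "1", "2", ..., or "Nature"; None at a leaf
    moves: Dict[str, "Node"] = field(default_factory=dict)   # move label -> child node
    chance: Dict[str, float] = field(default_factory=dict)   # probabilities of Nature's moves
    information_set: Optional[int] = None        # nodes sharing an id are indistinguishable
    payouts: Optional[Tuple[float, ...]] = None  # payouts to each player, at leaves only

# Toy game: Nature flips a fair coin; player 1, who cannot see the outcome (both
# of his decision nodes share information set 0), then calls "heads" or "tails".
def guess_node(coin):
    payoff = lambda call: (1.0,) if call == coin else (-1.0,)
    return Node(player="1", information_set=0,
                moves={"heads": Node(payouts=payoff("heads")),
                       "tails": Node(payouts=payoff("tails"))})

root = Node(player="Nature",
            chance={"heads": 0.5, "tails": 0.5},
            moves={"heads": guess_node("heads"), "tails": guess_node("tails")})
```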
2.2 Normal Form
One abstracts from the sequence of moves and from random moves in specifying a game in normal form. The extensive form allows for the definition of strategies for each player, and each strategy can be assigned an arbitrary label. A game in normal form consists of a grid or table. A player's strategies are listed along one dimension of the table, and each dimension corresponds to a player. Each entry in the table consists of an ordered tuple, where the elements of the tuple are the expected payouts to the players for the specified combination of strategies.
Table 1 shows a simple example - the children's game, "Rock, Paper, Scissors." The rules specify the winner. Rock crushes scissors, scissors cut paper, and paper covers rock. This is a two-person zero-sum game. The payouts are shown in the table for the player whose strategies are listed for each row to the left. The payouts to the column player are, in this case, the additive inverse of the table entries.
| | Rock | Scissors | Paper |
| --- | --- | --- | --- |
| Rock | 0 | +1 | -1 |
| Scissors | -1 | 0 | +1 |
| Paper | +1 | -1 | 0 |
By symmetry, no pure strategy in Rock, Paper, Scissors is better than any other. A mixed strategy is formed for a player by assigning probabilities to each of that player's pure strategies. Probabilities due to states of nature are removed in the analysis of games by taking mathematical expectations, but probabilities reappear from rational strategizing. I also found interesting Von Neumann and Morgenstern's analysis of an idealized form of poker. One wants one's bluffs to be called on occasion, so that other players will be willing to add more to the pot when one raises on a good hand.
Each player's best mixed strategy in a two-person zero-sum game can be found by solving a Linear Program (LP). Let p1, p2, and p3 be the probabilities that the row player in Table 1 chooses strategies Rock, Scissors, and Paper, respectively. The value of the game to the row player is v. The row player's LP is:
Choose p1, p2, p3, v
To Maximize v
Such that
-p2 + p3 ≥ v
p1 - p3 ≥ v
-p1 + p2 ≥ v
p1 + p2 + p3 = 1
p1 ≥ 0, p2 ≥ 0, p3 ≥ 0

The interest of the column player is to minimize the payout to the row player. The left-hand sides of the first three constraints show the expected value to the row player when the column player plays Rock, Scissors, and Paper, respectively. That is, the coefficients by which the probabilities are multiplied in these constraints come from the columns in Table 1. Given knowledge of the solution probabilities, the column player can guarantee the value of the game does not exceed these expected values by choosing the corresponding column strategy. That is, the column player chooses a pure strategy to minimize the expected payout to the row player.
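This LP is small enough to hand to an off-the-shelf solver. The following is a sketch, assuming SciPy is available; the decision variables are (p1, p2, p3, v), and since linprog minimizes, the objective is -v and each constraint above is rewritten in "less than or equal" form.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([0.0, 0.0, 0.0, -1.0])          # minimize -v, i.e. maximize v

# Each constraint "expected payoff >= v" is rewritten as "-(payoff) + v <= 0".
A_ub = np.array([
    [0.0,  1.0, -1.0, 1.0],   # column plays Rock:     -(-p2 + p3) + v <= 0
    [-1.0, 0.0,  1.0, 1.0],   # column plays Scissors: -(p1 - p3)  + v <= 0
    [1.0, -1.0,  0.0, 1.0],   # column plays Paper:    -(-p1 + p2) + v <= 0
])
b_ub = np.zeros(3)

A_eq = np.array([[1.0, 1.0, 1.0, 0.0]])      # p1 + p2 + p3 = 1
b_eq = np.array([1.0])

bounds = [(0, 1), (0, 1), (0, 1), (None, None)]   # probabilities in [0, 1], v free

result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p1, p2, p3, v = result.x
print(f"mixed strategy: ({p1:.3f}, {p2:.3f}, {p3:.3f}), value of game: {v:.3f}")
# The solver returns the symmetric strategy (1/3, 1/3, 1/3) with value 0.
```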
The column player's LP is the dual of the above LP. As a corollary of duality theory in Linear Programming, a minimax solution exists for all two-person zero-sum games. This existence is needed for the definition of the characteristic function form of a game.
2.3 Characteristic Function Form
The characteristic function form of a game is defined in terms of coalitions of players. An n-person game is reduced to a two-person game, where the "players" consist of a coalition of true players and the remaining players outside the coalition. The characteristic function for a game is the value of the corresponding two-person zero-sum game for each coalition of players. The characteristic function form of the game specifies the characteristic function.
As an illustration, Von Neumann and Morgenstern specify the three-person game in Table 2. In this game, coalitions of exactly two people win a unit.
| Coalition | Value |
| --- | --- |
| { } | v( { } ) = 0 |
| {1} | v( {1} ) = -1 |
| {2} | v( {2} ) = -1 |
| {3} | v( {3} ) = -1 |
| {1, 2} | v( {1, 2} ) = 1 |
| {1, 3} | v( {1, 3} ) = 1 |
| {2, 3} | v( {2, 3} ) = 1 |
| {1, 2, 3} | v( {1, 2, 3} ) = 0 |
3.0 A Solution Concept
Definition: An imputation for an n-person game is an n-tuple (a1, a2, ..., an) such that:
- For all players i, the payout to that player in the imputation does not fall below the amount that that player can obtain without the cooperation of any other player. That is, ai ≥ v( {i} ).
- The total in the imputation of the payouts over all players is the payout v( {1, 2, ..., n} ) to the coalition consisting of all players.
Definition: An imputation a = (a1, a2, ..., an) dominates another imputation b = (b1, b2, ..., bn) if and only if there exists a set of players S such that:
- S is a subset of {1, 2, ..., n}
- S is not empty
- The total in the imputation a of the payouts over all players in S does not exceed the payout v( S ) to the coalition consisting of those players
- For all players i in S, the payouts ai in a strictly exceed the payouts bi in b
Definition: A set of imputations is a solution (also known as a Von Neumann and Morgenstern solution or a stable set solution) to a game with characteristic function v( ) if and only if:
- No imputation in the solution is dominated by another imputation in the solution
- All imputations outside the solution are dominated by some imputation in the solution
Notice that an imputation in a stable set solution can be dominated by some imputation outside the solution. The following set of three imputations is a solution to the three-person zero-sum game in Table 2:
{(-1, 1/2, 1/2), (1/2, -1, 1/2), (1/2, 1/2, -1)}

This solution is constructed by considering all two-person coalitions. In each imputation in the solution, the payouts to the winning coalition are evenly divided.
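As a check on these definitions, here is a small Python sketch (my own, not from Von Neumann and Morgenstern) that encodes the characteristic function of Table 2 and the domination relation defined above. It verifies that no member of this three-imputation set dominates another, and spot-checks, with one imputation from outside the set, the point that members of a stable set can be dominated from outside while external imputations are dominated by some member.

```python
from itertools import combinations

def v(S):
    """Characteristic function of the three-person game in Table 2."""
    if len(S) in (0, 3):
        return 0
    return -1 if len(S) == 1 else 1

def dominates(a, b):
    """Does imputation a dominate imputation b (per the definition above)?"""
    players = range(len(a))
    for r in range(1, len(a) + 1):
        for S in combinations(players, r):
            effective = sum(a[i] for i in S) <= v(S)    # coalition S can enforce a
            preferred = all(a[i] > b[i] for i in S)     # every member of S strictly gains
            if effective and preferred:
                return True
    return False

solution = [(-1, 0.5, 0.5), (0.5, -1, 0.5), (0.5, 0.5, -1)]

# Internal stability: no member of the set dominates another member.
print(all(not dominates(a, b) for a in solution for b in solution if a != b))  # True

# An imputation outside the set can dominate a member of the set...
print(dominates((0.6, 0.4, -1), (0.5, -1, 0.5)))                 # True
# ...but is itself dominated by some member of the set.
print(any(dominates(a, (0.6, 0.4, -1)) for a in solution))       # True
```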
The above is not the only solution to this game. An uncountably infinite number of solutions exist. Another solution is the following uncountable set of imputations:
{(a, 1 - a, -1) | -1 ≤ a ≤ 2}

This solution can be understood in at least two ways:
- Player 3 is being discriminated against.
- The above is a solution to the two-person, non-constant game with the characteristic function in Table 3. A fictitious third player has been appended to allow the game to be analyzed as a three-person zero-sum game.
| Coalition | Value |
| --- | --- |
| { } | v( { } ) = 0 |
| {1} | v( {1} ) = -1 |
| {2} | v( {2} ) = -1 |
| {1, 2} | v( {1, 2} ) = 1 |
The above has defined the Von Neumann and Morgenstern solution to a game. Mathematicians have defined at least one other solution concept to a cooperative game, the core, in which no imputation in the solution set is dominated by any other imputation. I'm not sure I consider the Shapley value as a solution concept, although it does have the structure, I guess, of an imputation.
References
- W. F. Lucas, "A Game With No Solution", Bulletin of the American Mathematical Society, V. 74, N. 2 (March 1968): 237-239
- John von Neumann and Oskar Morgenstern, Theory of Games and Economic Behavior, Princeton University Press (1944, 1947, 1953)
Tuesday, December 16, 2008
Keynes' General Theory As A Long Period Theory
The following paragraph appears in Chapter 5 of The General Theory of Employment Interest and Money:
"If we suppose a state of expectation to continue for a sufficient length of time for the effect on employment to have worked itself out so completely that there is, broadly speaking, no piece of employment going on which would not have taken place if the new expectation had always existed, the steady level of employment thus attained may be called the long-period employment corresponding to that state of expectation. It follows that, although expectations may change so frequently that the actual level of employment has never had time to reach the long-period employment corresponding to the existing state of expectation, nevertheless every state of expectation has its definite corresponding level of long-period employment."It seems to me that this passage is important in an interpretation of Keynes as claiming that his theory applies in both the long and short periods. Even if the capital equipment in the economy were adjusted to effective demand, Keynes claims, the labor force need not be fully employed.
I think this reading is strengthened by a couple of considerations. One should distinguish between the full utilization of capital equipment and full employment. Distinguishing between these concepts makes most sense if one has dropped the idea of substitution between capital and labor. Likewise, one should drop the idea that the (long run) interest rate equilibrates savings and investment. But the dropping of these ideas is one result of the Cambridge Capital Controversies. On the other hand, it is not clear that Sraffa accepted Keynes' Chapter 17, another important locus for a long period interpretation of Keynes' General Theory.
I conclude with a couple of important Sraffian references on this issue. I could probably find some more recent ones. But I think Milgate (1982) and Eatwell and Milgate (1983) are key texts in this controversial area (even though I haven't read them in years).

References
- John Eatwell and Murray Milgate (editors) (1983) Keynes's Economics and the Theory of Value and Distribution, Duckworth
- Murray Milgate (1982) Capital and Employment: A Study of Keynes's Economics, Academic Press
Sunday, December 14, 2008
Stiglitz the Keynesian
Stiglitz has an article, "Capitalist Fools", in the January issue of Vanity Fair. He argues that the new depression is the result of:
- Firing Volcker after he successfully fought inflation
- Abolishing Glass-Steagall
- Imposing the non-stimulative and regressive Bush tax cuts
- Incentive structure encouraging faulty accounting
- Paulson's faulty October bailout package
And he has an 11 December article in Business Day, a South African newspaper. Stiglitz is interested in how to formulate Keynesian policy effectively.
In local news... Last March, Stiglitz wrote the New York State governor recommending that NY address its deficit by raising taxes on the rich.
Here's a characterization of Stiglitz's economic teaching:
"In his lectures, Stiglitz applied the machinery of neoclassical economics to upturn the standard results. Like a magician drawing rabbits from a hat, he could make demand curves slope up, supply curves slope down, markets in competitive equilibrium fail to clear, cross-subsidies make everyone better off, students over-educate themselves, and farmers produce the wrong quantities of goods. And then he would show how the magic reflected some very human and rational response to imperfect information. The theorem that individual rationality leads to social rationality applies to a special case, not the general case." -- Karla Hoff, in Economics for an Imperfect World: Essays in Honor of Joseph E. Stiglitz (ed. by R. Arnott, B. Greenwald, R. Kanbur, & B. Nalebuff) MIT Press, 2003 (quoted by John Lodewijks, "Review", Review of Political Economy, V. 21, N. 1, 2009)So why is Stiglitz considered a mainstream economist and Ian Steedman a non-mainstream heterodox economist?
Friday, December 12, 2008
Economists Unsuccessful In Teaching
I've previously commented on Philip Ball's opinion of economics. He's recently offered an article in Nature. This is behind a pay wall, but Ball provides an unedited, longer version on his blog, and a comment on it.
Ball notices that mainstream economists often defend their discipline against critics by asserting that critics attack a straw person. Of course, introductory courses are simplified, but sophisticated research has long since moved beyond such models. Ball's point seems to be that, if so, economists have not been successful in getting the public or policy-makers to realize the introductory nature of simplified models or to be aware of more sophisticated lessons. "Knowledgable economists and critics of traditional economics are on the same side."
Monday, December 08, 2008
Designing A Keynesian Stimulus Plan
Some version of this New York Times article contains the following passage:
"A blueprint for such spending can be found in a study financed by the Political Economy Research Institute at the University of Massachusetts and the Center for American Progress, a Washington research organization founded by John D. Podesta, who is a co-chairman of Mr. Obama's transition team.I went looking for this study, but was unable to find it. The PERI report, "Green Recovery: A Program to Create Good Jobs and Start Building a Low-Carbon Economy" (by Robert Pollin, Heidi Garrett-Peltier, James Heintz, and Helen Scharber) is dated September 2008. The CAP report, "How to Spend $350 Billion in a First Year of Stimulus and Recovery" (by Will Straw and Michael Ettinger) is dated 5 December 2008.
The study, released in November after months of work, found that a $100 billion investment in clean energy could create 2 million jobs over two years." -- Peter Baker and John M. Broder, New York Times, 7 December 2008 [Links inserted by Robert Vienneau]
Based on the reports I found, I doubt the report referred to by the New York Times article goes into details on the analytical justifications for its figures, which I'd like to see. I know that, in principle, one could create a transactions table from the use and make tables. From such a transactions table, one can calculate multipliers by sector and some measure of environmental impact. But I do not understand all the accounting conventions and approximations one would have to make to get such practical analyses from the national income accounts.
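As a sketch of the kind of calculation alluded to here (with made-up numbers, and ignoring the accounting subtleties just mentioned), one can form the matrix of input coefficients from a transactions table and read sectoral output multipliers off the column sums of the Leontief inverse:

```python
import numpy as np

# Hypothetical two-sector transactions table: flows[i, j] is the input from sector i to sector j.
flows = np.array([[20.0, 30.0],
                  [40.0, 10.0]])
gross_output = np.array([100.0, 120.0])

A = flows / gross_output                      # input coefficients a_ij = flows_ij / x_j
leontief_inverse = np.linalg.inv(np.eye(2) - A)
multipliers = leontief_inverse.sum(axis=0)    # output multiplier for each sector

print(multipliers)   # total output generated per unit of final demand in each sector
```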
Saturday, December 06, 2008
How Individuals Can Choose, Even Though They Do Not Maximize Utility
1.0 Introduction
I think of this post as posing a research question. S. Abu Turab Rizvi re-interprets the primitives of social choice theory to refer to mental modules or subroutines in an individual. He then shows that the logical consequence is that individuals are not utility-maximizers. That is, in general, no preference relation exists for an individual that satisfies the conditions equivalent to the existence of a utility function. I have been reading Donald Saari on the mathematics of voting. What are the consequences for individual choice from interpreting this mathematics in Rizvi's terms?
I probably will not pursue this question, although I may draw on these literatures to present some more interesting counter-intuitive numerical examples.
2.0 Arrow's Impossibility Theorem and Work-Arounds
Consider a society of individuals. These individuals are "rational" in that each individual can rank all alternatives, and each individual ranking is transitive. Given the rankings of individuals, we seek a rule, defined for all individual rankings, to construct a complete and transitive ranking of alternatives for society. This rule should satisfy certain minimal properties:
- Non-Dictatorship: No individual exists such that the rule merely assigns his or her ranking to society.
- Independence of Irrelevant Alternatives (IIA): Consider two countries composed of the same number of individuals. Suppose the same number in each country prefer one alternative to another in a certain pair of alternatives, and the same number are likewise indifferent between these alternatives. Then the rule cannot result in societal rankings for the two countries that differ in the order in which these two alternatives are ranked.
- Pareto Principle: If one alternative is ranked higher than another for all individuals, then the ranking for society must rank the former alternative higher than the latter as well.
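A standard example (mine, not Arrow's) shows why finding such a rule is not trivial: with three voters and three alternatives, pairwise majority voting, perhaps the most natural candidate rule, produces an intransitive ranking for society (the Condorcet paradox).

```python
from itertools import combinations

voters = [
    ["x", "y", "z"],   # voter 1: x > y > z
    ["y", "z", "x"],   # voter 2: y > z > x
    ["z", "x", "y"],   # voter 3: z > x > y
]

for a, b in combinations(["x", "y", "z"], 2):
    a_wins = sum(1 for ranking in voters if ranking.index(a) < ranking.index(b))
    winner, loser = (a, b) if a_wins >= 2 else (b, a)
    print(f"majority prefers {winner} to {loser}")
# The pairwise majorities form a cycle (x over y, y over z, z over x), so the
# "societal ranking" produced by pairwise majorities is not transitive.
```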
Arrow's work has generated lots of critical and interesting research. For example, Sen considers choice functions for society, instead of rankings. A choice function selects the best alternative for every subset of alternatives. That is, for any menu of alternatives, a choice function specifies a best alternative. Consider a rule mapping every set of individual preferences to a choice function. All of Arrow's conditions are consistent for such a map from individual preferences to a choice function.
Saari criticizes the IIA property as requiring a collective choice rule not to use all available information. In particular, the rule makes no use of the number of alternatives, if any, that each individual ranks between each pair. The rule does not make use of enough information to check that each individual has transitive preferences. (Apparently, the IIA condition has generated other criticisms, including by Gibbard.) Saari proposes relaxing the IIA condition to use information sufficient for checking the transitivity of each individual's preference.
Saari also describes a collective choice rule that includes each individual numbering their choices in order, with the first choice being assigned 1, the second 2, and so on. With these numerical assignments, the choices are summed over individuals, and the ranking for society is the ranking resulting from these sums. This aggregation procedure is known as the Borda count. Saari shows that Borda count satisfies the relaxed IIA condition and Arrow's remaining conditions.
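A minimal sketch of the Borda count as just described (the three-voter profile is a made-up illustration):

```python
from collections import defaultdict

def borda_ranking(rankings):
    """rankings: a list of individual preference orders, best alternative first."""
    scores = defaultdict(int)
    for ranking in rankings:
        for position, candidate in enumerate(ranking, start=1):
            scores[candidate] += position          # 1 for first choice, 2 for second, ...
    return sorted(scores, key=scores.get)          # society's ranking, lowest sum first

voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["B", "A", "C"],
]
print(borda_ranking(voters))   # ['B', 'A', 'C']: B scores 4, A scores 6, C scores 8
```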
3.0 Philosophy of Mathematics
Above, I have summarized aspects of the theory of social choice in fairly concrete terms, such as "individuals" and "society". The mathematics behind these theorems is formulated in set-theoretic terms. The referent for mathematical terms is not fixed by the mathematics:
"One must be able to say at all times - instead of points, straight lines, and planes - tables, chairs, and beer mugs." - David Hilbert (as quoted by Constance Reid, Hilbert, Springer-Verlag, 1970: p. 57)
"Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true." -- Bertrand Russell
4.0 An Interpretation
Rizvi re-interprets the social choice formalism as applying to another set of referents. A society's ranking, in the traditional interpretation, is now an individual's ranking. An individual's ranking, in the traditional interpretation, is now an influence on an individual's ranking. Rizvi's approach reminds me of Marvin Minsky's society of mind, in which minds are understood to be modular. Rizvi examines the implications of Sen's impossibility of a Paretian liberal for individual preferences under this interpretation of the mathematics of social choice theory.

Constructing natural numbers in terms of set theory allows one to derive the Peano axioms as theorems. Similarly, interpreting social choice theory as applying to decision-making components for an individual allows one to analyze whether the conditions often imposed on individual preferences by mainstream economists can be derived from this deeper structure. And, it follows from Arrow's impossibility theorem, these conditions cannot be so derived in general. Individuals do not and need not maximize utility. On the other hand, Sen's result explains how individuals can choose a best alternative from the menus with which they may be presented.
References
- Kenneth J. Arrow (1963) Social Choice and Individual Values, Second edition, Cowles Foundation
- Alan G. Isaac (1998) "The Structure of Neoclassical Consumer Theory", working paper (9 July)
- Marvin Minsky (1987) The Society of Mind, Simon and Schuster
- Donald G. Saari (2001) Chaotic Elections! A Mathematician Looks at Voting, American Mathematical Society
- S. Abu Turab Rizvi (2001) "Preference Formation and the Axioms of Choice", Review of Political Economy, V. 13, N. 2 (Nov.): 141-159
- Amartya K. Sen (1969) "Quasi-Transitivity, Rational Choice and Collective Decisions", Review of Economic Studies, V. 36, N. 3 (July): 381-393 (I haven't read this.)
- Amartya K. Sen (1970) "The Impossibility of a Paretian Liberal", Journal of Political Economy, V. 78, N. 1 (Jan.-Feb.): 152-157
Monday, December 01, 2008
On John Maynard Keynes
- Paul Krugman again and again and again
- Brad DeLong basically quotes Krugman
- Tyler Cowen
- Peter Boettke comments on Cowen on Keynes
- Matthew Mueller comments on Boettke on Keynes
Update: I suppose I ought to mention my game, which is closer to hydraulic Keynesianism. I did get the idea for the underlying model from Kalecki.
Saturday, November 29, 2008
Larry Summers As Dissenting Economist
"... formal econometric work, where elaborate technique is used to apply theory to data or isolate the direction of causal relationships when they are not obvious a priori, virtually always fails. The only empirical research that has contributed to thinking about substantive issues and the development of economics is pragmatic empirical work, based on methodological principles directly opposed to those that have become fashionable in recent years." - Lawrence H. Summers (1991) "The Scientific Illusion in Empirical Macroeconomics", Scandinavian Journal of Economics, V. 93, N. 2: 129-148
Friday, November 28, 2008
Everything Old Is New Again
An orthodox response to the Cambridge Capital Controversy (CCC) is to assert that mainstream price theory is centered around General Equilibrium theory. Very short run models of intertemporal and temporary equilibrium are, it is claimed, unaffected by the CCC. I think that capital reversing is manifested in such models by dynamic equilibrium paths with counter-intuitive behavior.
For example, suppose the labor supply increases along such a path in that later generations increasingly prefer to consume commodities, not leisure. A dynamic equilibrium path can be constructed in which this increasing labor supply is associated with an increasing wage. Likewise, suppose instead that the supply of capital increases in that later consumers increasingly prefer to defer present consumption in favor of future consumption. Here, too, such increased savings can be associated with an increasing interest rate.
I have expressed this view before. I have decided that I am not going to develop concrete numerical examples any time soon. So I have put up what I have so far as a paper over on the Social Sciences Research Network (SSRN).
If I were to continue, the next step in the analysis is conceptually straightforward, although tedious. One would linearize the dynamics around the stationary-state solutions, which are limit points of the dynamic equilibrium paths. An examination of the eigenvalues of the resulting matrix reveals local stability properties. I expect at least some equilibria will have, at best, the stability of saddle points. But multiple equilibria arise, and perhaps the desired dynamic equilibrium paths can be constructed. One might consider other forms of the utility maximization problem, if different stability properties are desired.
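To make the eigenvalue check concrete, here is a generic sketch (a hypothetical two-dimensional system, not the model in the paper): linearize a discrete-time system x(t+1) = f(x(t)) numerically around a stationary state and inspect the eigenvalues of the Jacobian.

```python
import numpy as np

def f(x):
    # A hypothetical linear example with a stationary state at the origin.
    return np.array([0.5 * x[0] + 0.2 * x[1], -0.3 * x[0] + 1.1 * x[1]])

def numerical_jacobian(f, x_star, h=1e-6):
    """Central-difference approximation to the Jacobian of f at x_star."""
    n = len(x_star)
    J = np.zeros((n, n))
    for j in range(n):
        step = np.zeros(n)
        step[j] = h
        J[:, j] = (f(x_star + step) - f(x_star - step)) / (2 * h)
    return J

x_star = np.zeros(2)                      # stationary state of the example system
eigenvalues = np.linalg.eigvals(numerical_jacobian(f, x_star))
print(eigenvalues)
# For a discrete-time system, the stationary state is locally stable if all
# eigenvalues lie inside the unit circle; moduli on both sides of one indicate
# saddle-point (in)stability.
```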
Sunday, November 23, 2008
Let The Sunshine In
I was on travel last week. I find intriguing the application of solar, wind, geothermal, hydroelectric, and tide power generation on an industrial scale. In the median of a highway near the Denver airport, they have an array, of which I took a photo.

It seems solar and wind are becoming quite practical as a matter of dollars and cents, even without having internalized the externalities of various power generation sources. From Richard Stevenson ("First Solar: Quest for the $1 Watt", IEEE Spectrum, V. 45, N. 8 (August 2008): 26-31), I learn that, to be competitive for off-peak power generation, solar cells need to be priced to generate electricity at $1 per watt. Apparently, a number of companies are developing competing technologies that may attain this goal in a couple of years.
Solar Power Generation, Denver
Disclaimer: In mentioning First Solar and their Cadmium Telluride (CdTe) technology, I am "talking my book" - a phrase I learned from D-Squared - in a small way.
Saturday, November 22, 2008
Ricardo On Profits
Sraffa started a controversy on the interpretation of Ricardo's theory of value and distribution. Or perhaps Hollander did in his reaction against Sraffa.
In Sraffa's view, important developments in Ricardo's understanding happened the year before Ricardo's 1815 publication of "An Essay on the Influence of a low Price of Corn on the Profits of Stock". A fortiori, these developments precede the 1817 first edition of On the Principles of Political Economy and Taxation.
Hutches Trower, in his 2 March 1814 letter to Ricardo, states he is returning Ricardo's now lost "papers on the profits of Capital". Ricardo's response is important evidence for Sraffa's interpretation. Here is Ricardo's letter in full:
Upper Brook Street
8th March 1814
Dear Trower
I called at your house yesterday; I wished to tell you that though well disposed to enter into the defence of my opinions, I was now so much occupied by business, that I could not devote the necessary time to it. Not having found you at home I must tell you so by “these present”. At the same time I must observe that what I feared, I believe, has happened. To one not aware of the whole difference between Mr. Malthus and me, the papers you read were not clear, and I think you have not entirely made out the subject in dispute.
Without entering further into the question I will endeavor to state the question itself. When Capital increases in a country, and the means of employing Capital already exists, or increases, in the same proportion, the rate of interest and of profits will not fall.
Interest rises only when the means of employment for Capital bears a greater proportion than before to the Capital itself, and falls when the Capital bears a greater proportion to the arena, as Mr. Malthus has called it, for its employment. On these points I believe we are all agreed, but I contend that the arena for the employment of new Capital cannot increase in any country in the same or greater proportion than the Capital itself, [footnote:] the following to be inserted: unless Capital be withdrawn from the land [end footnote] unless there be improvements in husbandry, - or new facilities be offered for the introduction of food from foreign countries; - that in short it is the profits of the farmer which regulate the profits of all other trades, - and as the profits of the farmer must necessarily decrease with every augmentation of Capital employed on the land, provided no improvements be at the same time made in husbandry, all other profits must diminish and therefore the rate of interest must fall. To this proposition Mr. Malthus does not agree. He thinks that the arena for the employment of Capital may increase, and consequently profits and interest may rise, altho' there should be no new facilities, either by importation, or improved tillage, for the production of food; - that the profits of the farmer no more regulate the profits of other trades, than the profits of other trades regulate the profits of the farmer, and consequently if new markets are discovered, in which we can obtain a greater quantity of foreign commodities in exchange for our commodities, than before the discovery of such markets, profits will increase and interest will rise.
In such a state of things the rate of interest would rise as well as the profits of the farmer, he thinks even if more Capital were employed on the land. Do you understand?
Nothing, I say, can increase the profits permanently on trade, with the same or an increased Capital, but a really cheaper mode of obtaining food. A cheaper mode of obtaining food will undoubtedly increase profits says Mr. Malthus but there are many other circumstances which may also increase profits with an increase of Capital. The discovery of a new market where there will be a great demand for our manufactures is one.
Believe me
Yrs very faithfully
David Ricardo
I have written this in great haste after devoting the necessary time to my accounts. You must excuse the scrawl, and corrections.
For Sraffa, Ricardo can be understood as claiming that wages are spent entirely on corn and that corn is the only basic commodity in the system. Sraffa defines "basic commodities" in his book. According to this interpretation, the rate of profits is a physical ratio in agriculture. The prices of manufactured commodities adjust, under the Classical understanding of supply and demand, until this same rate of profit prevails, both in agriculture and in manufacturing.
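A toy calculation (my own numbers, purely to illustrate this corn-model interpretation) shows the sense in which the rate of profits is a physical ratio in agriculture, with manufactured prices adjusting to yield the same rate:

```python
# Agriculture: corn is both output and the only input advanced (seed plus corn-wages).
corn_output = 500.0          # quarters of corn harvested
seed_corn = 200.0            # corn advanced as seed
corn_wages = 150.0           # corn advanced as wages

corn_advanced = seed_corn + corn_wages
rate_of_profits = (corn_output - corn_advanced) / corn_advanced
print(f"rate of profits: {rate_of_profits:.2%}")            # about 42.86%, a ratio of physical quantities

# A manufactured good produced with, say, 20 quarters of corn advanced as wages
# must sell (measured in corn) for the advance plus the same rate of profits.
cloth_corn_advanced = 20.0
cloth_price_in_corn = cloth_corn_advanced * (1 + rate_of_profits)
print(f"price of the cloth output, in corn: {cloth_price_in_corn:.2f}")
```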
Monday, November 17, 2008
Militant Voting
This post illustrates a phenomenon that is a possibility in pairwise voting. Consider a constituency of 30 voters deciding among six candidates for a given vacancy. (I take this example from Donald Saari.) Table 1 describes the preferences of these voters. (For example, the first row shows that ten voters prefer Anne to Barb, Barb to Carol, and so on.) The voters are asked to choose between successive pairs of candidates, as shown in Figure 1. In the first election, Debra defeats Elaine. But Carol defeats Debra in the next choice. And so on, until Flicka is the only choice standing, after a landslide victory. It seems clear that Flicka is the consensus choice. Strangely enough, though, every voter prefers Carol, Debra, and Elaine to Flicka. The voting system doesn't seem to allow for a true expression of the preferences of the members of the electorate.
Table 1: Voter Preferences |
Number | Preference Ranking |
10 | A > B > C > D > E > F |
10 | B > C > D > E > F > A |
10 | C > D > E > F > A > B |
Figure 1: Pairwise Elections in Example |
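A minimal sketch, not from the original post, of how the tally works. Since Figure 1 is not reproduced here, the code assumes an agenda in which the survivor of Elaine versus Debra successively meets Carol, Barb, Anne, and finally Flicka, consistent with the description above.

```python
# Pairwise agenda voting for the example in Table 1 (the exact agenda in Figure 1
# is an assumption, not taken from the original figure).

# Number of voters holding each ranking, most preferred candidate first
profile = [
    (10, ['Anne', 'Barb', 'Carol', 'Debra', 'Elaine', 'Flicka']),
    (10, ['Barb', 'Carol', 'Debra', 'Elaine', 'Flicka', 'Anne']),
    (10, ['Carol', 'Debra', 'Elaine', 'Flicka', 'Anne', 'Barb']),
]

def pairwise_winner(x, y):
    """Return the majority winner when only x and y are on the ballot."""
    votes_x = sum(n for n, ranking in profile if ranking.index(x) < ranking.index(y))
    votes_y = sum(n for n, _ in profile) - votes_x
    return x if votes_x > votes_y else y

# Run the agenda: the survivor of each round meets the next candidate.
agenda = ['Elaine', 'Debra', 'Carol', 'Barb', 'Anne', 'Flicka']
survivor = agenda[0]
for challenger in agenda[1:]:
    survivor = pairwise_winner(survivor, challenger)
print("Winner of the sequence of pairwise elections:", survivor)  # Flicka

# Yet every voter ranks Carol, Debra, and Elaine above Flicka.
for name in ['Carol', 'Debra', 'Elaine']:
    assert all(r.index(name) < r.index('Flicka') for _, r in profile)
print("Every voter prefers Carol, Debra, and Elaine to Flicka.")
```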
Militant was a Trotskyite tendency practicing entryism within the British Labour Party. They seemed to have figured out how to use party discipline to have their way in Liverpool in the early 1980s:
"All political parties on the Monday before the Council meeting on a Wednesday, have a caucus meeting to decide the line of approach at the Council. The agenda for the meeting comes out on a Friday. The ten or twelve Militant members ... meet on either the Friday or the Saturday and go through the agenda to look for important policy decisions and for important vacancies. They then have a meeting with the broad left of the Labour group on Sunday morning at Pirrie Labour Club ... those Militants turn up in their full strength. There are generally about twenty people and the ten or eleven Militants there. They carry the majority vote there that commits the broad left for the meeting of the Labour group. On the Monday night at the Labour group meeting of forty-two members the commitment that Militant have made themselves, plus other people they've taken along at the meeting on Sunday morning, gives them a majority. ... so you find that of forty-two Labour councilors, ten Militants control the policy of the Labour group." -- Eddie Roderick, as quoted by Michael CrickCrick says, "Roderick's analysis may be a rather simplified version." At any rate, Figure 2 illustrates how the Council seems to have made their decisions. Figures 1 and 2 look quite similar. Maybe Saari’s math describes more than a theoretical possibility.
Figure 2: Pairwise Voting on the Liverpool City Council |
References
- Michael Crick (1984). Militant, Faber and Faber
- Donald G. Saari (2004). "Geometry of Chaotic and Stable Discussion", American Mathematical Monthly, V. 111, N. 5: 377-393
Sunday, November 16, 2008
Elsewhere
Gualra has a blog, titled "Controversias Del Capital". It seems like the kind of thing I would read if I were able to read Spanish. Gualra kindly alerts me to aspects of Kaldor's thought I neglected.
I don't understand Matthew Mueller. One reason I expect professors to know more than most students is that typically students have simply not lived enough years to have been able to read as much. But this does not seem to apply to Matthew. I think this post from Matthew agrees with my previous post on the same topic. But, as I recall, I was alerted to Horwitz's controversy with Post Keynesianism by Matthew's comments on some post over at The Austrian Economists blog. (Here's a short recommendation of both Matthew's blog and mine.)
I laugh at Paul Walker's display of ignorance, to put it kindly. Gabriel Mihalache already points out some major problems with Paul's statements. General Equilibrium does not address a research problem in which Adam Smith was interested; tâtonnement describes a centralized - not decentralized - process for setting prices; and general results do not show the tâtonnement process converging to a stable equilibrium. In his last comment, Paul makes mistaken claims about Walras. He does not seem to realize that Walras's system is mathematically contradictory. Walras takes endowments of all goods, including individual capital goods, as givens in solving for steady-state prices.
By the way, the email discussion list for the Societies for the History of Economics (SHOE) has replaced the list for the History of Economics Societies (HES).
Thursday, November 13, 2008
A Minsky Snapshot
Figure 1 is a screen snapshot of normalized data from Google Trends. Terms such as "Austrian Business Cycle" and "Austrian Business Cycle Theory", according to Google, "do not have enough search volume to show graphs." Mayhaps Michael is correct, at least for the incorrect Austrian Business Cycle Theory.
Figure 1: Trends in Popular Minsky Citations |
Tuesday, November 11, 2008
Keynes As "A Puzzling Mathematician"
Where do I recall the quoted phrase in the title from? It's not in Skidelsky:
"Keynes finally saw Roosevelt for an hour at 5:16 p.m. on Monday, 28 May [1934]. No one knows what they talked about. Keynes found the tête-a-tête 'fascinating and illuminating'; Roosevelt told Frankfurter that he had had a 'grand talk with Keynes and liked him immensely'. As always, Keynes paid particular attention to Roosevelt's hands - 'Rather disappointing. Firm and fairly strong, but not clever or with finesse'." -- Robert Skidelsky, John Maynard Keynes: The Economist as Savior: 1920-1937, Penguin (1992)And it's not in Galbraith, at least here:
"The following year he visited FDR, but the letter had been a better means of communication. Each man was puzzled by the face-to-face encounter. The President thought Keynes some kind of 'a mathematician rather than a political economist.' Keynes was depressed; he had 'supposed the President was more literate, economically speaking.'" -- John Kenneth Galbraith, The Age of Uncertainty
Monday, November 10, 2008
SDM: Path-Dependence and Instability
I recently read Peter Dorman's "Waiting for an Echo: The Revolution in General Equilibrium Theory and The Paralysis in Introductory Economics" (Review of Radical Political Economics, V. 33 (2001): pp. 325-333). Dorman claims that, in teaching introductory microeconomics, General Equilibrium Theory (GET) is "one of the back-of-the-book chapters we rarely get to." And if GET is taught, the teaching fails to reflect a "virtual revolution in GET during the past quarter-century". His thesis is that these developments in GET can and should be taught in introductory microeconomics classes.
The Sonnenschein-Debreu-Mantel theorem is one of these developments. This theorem states that almost any excess demand curves in markets for individual goods can be justified by aggregating over individual excess demands. Theory imposes only Walras' law, homogeneity of degree zero, and a technical continuity condition. No other restrictions need arise on the shape of aggregate demand curves.
Why are the SDM results exciting? They imply the general possibility of multiple equilibria and instability. Or at least, that's what I have taken from the literature. At first, I thought Dorman's take on the SDM results idiosyncratic. He says that they show the "path-dependence instability of general equilibrium" and the indeterminacy of equilibrium:
"In general equilibrium, each action that alters the distribution of resources among agents (and that would be just about anything) also alters the equilibrium vector of prices. It is not possible to identify an equilibrium seperate from the actions individuals take either in pursuit of in utter ignorance of it."And he writes:
"The first task facing a principles instructor is to ignore the scholarly debate that has surrounded S-D-M. The original authors demonstrated that out-of-equilibrium exchanges altered the distribution of resources, and, since different individuals have different preferences, also altered the general equilibrium itself. Since then, researchers have been investigating the exact extent of preference differentiation under which this result would hold. This, it seems to me, is an utterly arid line of investigation, and it has no meaningful implications for nonspecialists."
I have heard of indeterminacy, but had not thought of it in the context of the SDM. As I understand the instability implications of the SDM results, they are explored in the context of tâtonnement dynamics. How then, can one talk about path dependence here?
I did come up with some justification after some thought. The SDM results show that any dynamics is possible in GET. And I know of an interesting example of chaos in which the sensitive dependence on initial conditions is connected to a particular fractal structure. Newton's method is a numerical method for solving non-linear equations. One can think of Newton's method as a dynamical system for iteratively mapping a point in the complex plane to a root of an equation, when the method converges. Polynomials, for example, have multiple roots. Color the plane by the roots to which Newton's method maps each point. All points that map to a given root are the same color. For certain simple polynomials, you will have drawn a fractal. (Google also gave me this.) Thus, in certain regions, any infinitesimal change in the initial conditions can cause this dynamical method to tend towards a different equilibrium. This property is independent of any claim that multiple equilibria lie along a continuum.
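The coloring procedure just described is easy to sketch in a few lines of code. This is my own illustration, not anything from Dorman: it labels each point of a coarse grid by the root of z^3 - 1 = 0 that Newton's method reaches from it. At fine resolutions the boundaries between the three basins form the familiar Newton fractal, which is where the sensitive dependence on initial conditions lives.

```python
# Newton's method for z**3 - 1 = 0 viewed as a dynamical system on the complex
# plane: label each starting point by the root it converges to (an illustrative
# sketch, not from the original post).
import cmath
from math import pi

ROOTS = [cmath.exp(2j * pi * k / 3) for k in range(3)]  # the three cube roots of unity

def basin(z, max_iter=200, tol=1e-9):
    """Return the index of the root reached from z, or None if the iteration fails."""
    for _ in range(max_iter):
        if abs(z) < tol:                 # derivative vanishes at the origin
            return None
        z -= (z**3 - 1) / (3 * z**2)     # Newton step
        for k, root in enumerate(ROOTS):
            if abs(z - root) < tol:
                return k
    return None

# Coarse ASCII rendering of the basins over the square [-2, 2] x [-2, 2].
chars = {0: '.', 1: '+', 2: '#', None: ' '}
for row in range(40):
    y = 2.0 - 0.1 * row
    print(''.join(chars[basin(complex(-2.0 + 0.1 * col, y))] for col in range(40)))
```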
Since, according to the SDM results, any dynamics is possible, I guess that some sort of dynamics like I have described for Newton's method is possible in GET. And so one can say that the SDM results show the possibility of path-dependence in economics.
Sunday, November 09, 2008
Sraffa at Corfu
"[O]ne should emphasize the distinction between two types of measurement. First, there was the one in which the statisticians were mainly interested. Second, there was measurement in theory. The statisticians' measures were only approximate and provided a suitable field for work in solving index number problems. The theoretical measures required absolute precision. Any imperfections in these theoretical measures were not merely upsetting, but knocked down the whole theoretical basis. One could measure capital in pounds or dollars and introduce this into a production function. The definition in this case must be absolutely water-tight, for with a given quantity of capital one had a certain rate of interest so that the quantity of capital was an essential part of the mechanism. One therefore had to keep the definition of capital separate from the needs of statistical measurement, which were quite different. The work of J. B. Clark, Bohm-Bawerk and others was intended to produce pure definitions of capital, as required by their theories, not as a guide to actual measurement. If we found contradictions, then these pointed to defects in the theory, and an inability to define measures of capital accurately. It was on this - the chief failing of capital theory - that we should concentrate rather than on problems of measurement." -- Piero Sraffa, Interventions in the debate at the Corfu Conference on the "Theory of Capital", 4-11 September 1958.
Tuesday, November 04, 2008
The Route To Normal Science
"Compared with physics, it seems fair to say that the quantitative success of the economic sciences has been disappointing. Rockets fly to the Moon; energy is extracted from minute changes of atomic mass. What is the flagship achievement of economics? Only its recurrent inability to predict and avert crises, including the current worldwide credit crunch...
...Crucially, the mindset of those working in economics and financial engineering needs to change. Economics curricula need to include more natural science. The prerequisites for more stability in the long run are the development of a more pragmatic and realistic representation of what is going on in financial markets, and to focus on data, which should always supersede perfect equations and aesthetic axioms" -- Jean-Philippe Bouchaud, "Economics Needs a Scientific Revolution", Nature, V. 455, 30 October 2008
Sunday, November 02, 2008
Peter Boettke With Nothing To Say
Peter Boettke goes on and on and on in this post. I'll confine my comments to Boettke's first numbered "point", even though I have views on some others. This point deals with Paul Davidson's review essay on O'Driscoll and Rizzo's The Economics of Time and Ignorance.
I believe Boettke's first point is an inadequate response to Matthew Mueller here and here. Boettke points out a verbal statement from Mises rejecting the claim that money is neutral. It never seems to occur to Boettke that Davidson's point might be that the logic of Mises and Hayek's positions requires them to accept the axioms of ergodicity (logical time), money neutrality, and gross substitution. Boettke cannot rationally challenge that view by citing statements where Mises or Hayek refuse to accept the first two axioms and therefore become inconsistent.
I am of the opinion that Austrian Business Cycle Theory is consistent with money non-neutrality only in the short run. Furthermore, Hayek's notion of intertemporal equilibrium is closely related to J. R. Hicks's Value and Capital model of temporary equilibrium and to the Arrow-Debreu model of intertemporal equilibrium. Hicks came to recognize late in his life that his model could not be set in historical time. Hahn has pointed out the difficulties of introducing money in any essential way into the Arrow-Debreu model. I don't know that Austrians have ever grappled with these decades-old developments. I don't see that Boettke engages in arguments.
Friday, October 31, 2008
The 4,827th Reexamination of Hayek's System
In various blogs, John Holbo, Julian Sanchez, and Matthew Yglesias comment on Hayek. In commenting on Jesse Larner's views on Hayek, I have already mentioned my opinion that The Road to Serfdom can be read more as a jeopardy argument than a slippery slope argument. I've also noted Hayek's difficulties in analyzing mixed, mainly capitalist economies.
Update: Brad DeLong says that The Road to Serfdom was too a slippery slope argument, and Matthew Yglesias reacts.
Thursday, October 30, 2008
Read Colander and Klamer Before Applying to Graduate School
I am amused by the thread created by this polemic, titled "What I thought a PhD was about":
"When I first envitioned the dream of having a PhD one day, I had really wrong ideas about it. At first, I thought a PhD was a synonimous of erudition in economics. I believed that someone who had a PhD would have run trhough all the economic theory. But I was mistaken.
It really amaze me that it exists some people who have a PhD in economics and who have never read the wealth of Nations. Right now, a PhD in Economics looks more like an Applied Math PhD. Which is something that really annoys me.
Today, a PhD in economics is not about mastery of economic science but about mastery of statistics and mathematics, in respect to their applications to economic theory.
If I were a graduate dean, I wouldn't admit anyone without undergrad training in economics to persuit graduate studies. In my opinion the profession is too much full with frustrated mathematician and physist who, after they realized they wouldn't do anything worth in their field turned to economics to corrupt that beautiful science with their arcane mathematics.
I hate that. And I hate people who allows that..."
Monday, October 27, 2008
A Characteristic In Common Between the New Palgrave and Wikipedia
Mark Blaug reviewed the 1987 edition of the New Palgrave, apparently for some right-wing outfit. I find much in his review to disagree with. But I think he has a point here:
"The Eatwell-Milgate-Newman policy of publishing multiple entries with slightly different titles for identical subjects constantly produces curious results... On balance, a policy of presenting competing opinions under the same title would have been vastly preferable to the Eatwell-Milgate-Newman policy of several entries under different titles on what is in fact one and the same topic." -- Mark Blaug, Economics Through the Looking Glass: The Distorted Perspective of the New Palgrave Dictionary of Economics, Institute of Economic Affairs, 1988.An example I found would be Walter Eltis' article "Falling Rate of Profit" and N. Okishio on "Choice of Technique and Rate of Profit". Neither references the other. Sometimes the Eatwell-Milgate-Newman policy makes no difference, e.g. in successive articles on "Competition," "Competition: Austrian Conceptions," "Competition: Classical Conceptions," and "Competition: Marxian Conceptions".
Wikipedia also has closely related articles with different names. Here are some examples in economics:
- Arrow-Debreu Model and General Equilibrium
- Comparative Advantage and Heckscher-Ohlin Model (Only the former is part of the International Trade template)
- Labor Theory of Value and Law of Value (Only the latter appears in the Marxist Theory template)
- Marginalism and Neoclassical Economics
The Law of Value/Labor Theory of Value and Marginalism/Neoclassical Economics pairs closely follow Blaug's complaint. The members of each pair are written from very different perspectives. (I've been in edit wars with the crank maintaining the marginalism entry.)
By the way, both the comparative advantage and Heckscher-Ohlin entries, including related entries on HO theorems, contain the usual errors about capital. That is, these entries are simply incorrect.
Friday, October 24, 2008
Edward, You Ignorant Arse
In an email exchange with a graduate student, Edward Prescott proves himself to be an impolite, ignorant, arrogant fool. Prescott's correspondent, Leonid Teytelman, doubted that this could really be Prescott in full possession of his faculties - maybe an adolescent niece or nephew had somehow gotten ahold of Prescott's account. He could not be drunk, since the interchange took place over several days. Myself, I have no problem in believing that Prescott understands neither the science of economics nor basic facts about the United States economy.
Apparently Prescott, in his professional work with Kydland, is equally incoherent. Jim Hartley documents that
"In five different programmatic manifestos over a span of 15 years, Kydland and Prescott have offered five different—and in many ways mutually incompatible—justifications for the models they were advocating." -- James E. Hartley (2006) "Kyland and Prescott's Nobel Prize: the Methodology of Time Consistency and Real Business Cycle Models", Review of Political Economy, V. 18, N. 1 (January): 1-28At one point, business cycles are caused by the time-to-build capital equipment. No, they are caused by technology shocks in a growth model. You should believe this because "the smoothed series and the derivations from the smoothed series are quantitatively consistent with the observed behavior". No, only because the model explains co-movements of the deviations. No, because the newly interpreted model follows from "standard" theory (Solow-Swan growth modeling). And deviations from the model are because the measurements are bad. No, the model was built to explain previously observed facts, which are observed not by looking at deviations from the Solow-Swan growth model, but from deviations from the output of the Hodrick-Prescott filter. In particular, the model explains the observed acyclical nature of movements in real wages. That is, the model explains the observed strong procyclical nature of movements in real wages. And the model is merely an application of Computable General Equilibrium modeling. The parameters of the model are based on observed values. No, they are chosen so the model outputs "mimic the world".
Wednesday, October 22, 2008
A SF Fraction Or Faction
I sometimes wonder if Ken MacLeod writes science fiction novels just for me. What other novelist has characters refer to Leontief matrices?
Anyway, he has a blogroll I find quite interesting to explore. I here skip over commentators on science and on current events. The suggested reading off Kevin Carson's mutualist site makes available all sorts of old works. I trust William Godwin's Enquiry Concerning Political Justice is the pamphlet Malthus reacted to. I am interested in how the Ricardian socialists argued, on the basis of classical political economy, that the source of returns to capital is the exploitation of labor. I expect to find that that is an aspect of Thomas Hodgskin's Labour Defended against the Claims of Capital Or the Unproductiveness of Capital proved with Reference to the Present Combinations amongst Journeymen.
Another site on MacLeod's blogroll that will take me years to explore gathers essays refuting myths & legends about Marx. I suspect I will be more open to some of these arguments than others.
I have read compliments on David Schweickart. I want to remember to look up his "Economic Democracy: A Worthy Socialism that Would Really Work" (Science & Society, V. 56, N. 1, Spring 1992: 9-38) the next time I am in a university library; it is also available at SolidarityEconomy.net, though that site seems defunct.
Monday, October 20, 2008
Kaldor's Contributions: An Impressionistic Survey
Introduction
This post gives a quick overview of my impressions of the contributions to economics of Nicholas Kaldor. In writing this post, I deliberately did not review the entries on him at Gonçalo Fonseca's site on the history of economic thought, in the New Palgrave, or at Wikipedia. I did use Turner (1993) for the biographical details.
Biography
I will be brief on the biography of Lord Nicholas Kaldor (12 May 1908 - 1986). Born in Budapest, he later studied at Berlin. He transferred to the London School of Economics (LSE) as an undergraduate in the Fall of 1927. Kaldor visited the United States, including Harvard and Princeton, in 1935. He moved to Cambridge in 1939, with the evacuation of the LSE to Cambridge, and stayed on at Cambridge (King's College) after the war. He joined the United States Strategic Bombing Survey under the direction of John Kenneth Galbraith. He became a Baron in 1974 and was the president of the Royal Economic Society in 1976. Kaldor's wife was named Clarissa, and they had four daughters. Anthony P. Thirlwall was named his literary executor.
1930s
Economists in the 1930s had, once again, a controversy on the theory of capital, with Frank Knight on one side and Friedrich A. Hayek and Fritz Machlup on the other. Early in his career, Kaldor (1937) surveyed that dispute. He followed up with investigations (1939a, 1942) of Hayek's capital theory and exposition of the Austrian Business Cycle Theory. Although Kaldor's judgments are sharp, I think these articles might have been more convincing if the standards of the time had allowed for more mathematics.
I don't recall ever reading Kaldor's original contributions to welfare economics. Apparently, he had an article in the 1939 volume of the Economic Journal. This article and one by J. R. Hicks are the primary source of the famous Hicks-Kaldor compensation principle.
Apparently the younger economists at Cambridge and LSE, such as Robinson and Kaldor, respectively, met once a month to debate macroeconomics even before the publication of Keynes' General Theory. Kaldor became a convert to Keynes, as can be seen in Kaldor (1939b). Barkley Rosser, Jr., tends to cite Goodwin and Kaldor as early explorations of non-linear dynamics in economic models. Maybe Kaldor (1940) is important here, which I have not read in at least a decade, if ever.
Later Work on Growth and Distribution Theory
Kaldor's later work on growth and distribution is more clearly Post Keynesian, in my opinion. His 1956 paper compares and contrasts three theories of distribution: a neoclassical theory which makes most sense with a now exploded scarcity theory of value, a classical theory in which wages are exogenous in the theory of value and distribution, and a Post Keynesian theory in which the distribution of income depends on macroeconomic savings propensities. I think this paper led to the souring of his relationship with Joan Robinson; she was, I guess, worried about priority in publication. Luigi Pasinetti disputed the logical consistency of Kaldor's presentation, in which workers obtain income from capital but save that portion of their income at the higher rate characteristic, in Kaldor's model, of savings out of profits. In a later seminar with Pasinetti, Robinson, and Samuelson & Modigliani, Kaldor (1966) clarified that he thought of the savings rate as pertaining to the source of income, not the individual savers. This ties into the idea that savings out of retained earnings is not transparent to those holding stock in corporations. Kaldor suggested these ideas can explain how the market value of corporate stock relates to the book value of the assets owned by corporations. Later work by others demonstrates that for two classes to persist in Kaldor's model, the rate of profits must exceed the rate of interest (i.e., the return to capital obtainable by workers in the financial markets they have access to). This may not be a good idea, but perhaps it would be an interesting idea to synthesize this literature with literature related to De Long et al (1990) - and I would prefer not to reference Shleifer.
Kaldor developed a related series of growth models. He presented one at the famous September 1958 Corfu conference. I guess it was in this paper he presented his "stylized facts". He presented another model in this series (Kaldor and Mirrlees 1962) in the same issue of the Review of Economic Studies in which he welcomed (1962) Arrow to the band of heretics for his "Learning by Doing" paper. Kaldor's models use a technical progress function, which, I gather, is empirically indistinguishable from a Cobb-Douglas production function with technical progress.
Kaldor emphasized increasing returns in manufacturing in these models, and he championed Verdoorn's law. Thirlwall (e.g., 1986) applies these ideas to developing economies. I gather a policy recommendation in this literature is for export-led growth. An emphasis on increasing returns underlies Kaldor's (1972, 1975, and 1985) mature criticisms of neoclassical economics.
Finally, I want to mention Kaldor's theory of endogenous money. Kaldor described both the inability of monetary authorities to control the supply of money under some given definition and the ability of financial institutions to continually evolve new instruments to serve as money. He used these ideas to refute monetarism (1986, first edition 1982).
References
- J. Bradford De Long, Andrei Shleifer, Lawrence H. Summers, and Robert J. Waldmann (1990) "Noise Trader Risk in Financial Markets", Journal of Political Economy, V. 98, N. 4 (August): 703-738
- Nicholas Kaldor (1937) "Annual Survey of Economic Theory: The Recent Controversy on the Theory of Capital", Econometrica, V. 5, N. 3 (July): 201-233
- -- (1939a) "Capital Intensity and the Trade Cycle", Economica, New Series, V. 6, N. 21 (February): 40-66
- -- (1939b) "Speculation and Economic Stability", Review of Economic Studies, V. 7, N. 1 (October): 1-27
- -- (1940) "A Model of the Trade Cycle", Economic Journal, V. 50, N. 197 (March): 78-92
- -- (1942) "Professor Hayek and the Concertina-Effect", Economica, New Series, V. 9, N. 36 (November): 359-382
- -- (1956) "Alternative Theories of Distribution", Review of Economic Studies, V. 23: 83-100
- -- (1962) "Comment", Review of Economic Studies V. 29, N. 3 (June): 246-250
- -- (1966) "Marginal Productivity and Macro-Economic Theories of Distribution: Comment on Samuelson and Modigliani", Review of Economic Studies, V. 33, N. 4 (October): 309-319
- -- (1972) "The Irrelevance of Equilibrium Economics", Economic Journal, V. 82, N. 328 (December): 1237-1255
- -- (1975) "What is Wrong with Economic Theory", Quarterly Journal of Economics, V. 89, N. 3 (August): 347-357
- -- (1985) Economics without Equilibrium, M. E. Sharpe
- -- (1986) The Scourge of Monetarism, Second Edition, Oxford University Press
- Nicholas Kaldor and James A. Mirrlees (1962) "A New Model of Economic Growth", Review of Economic Studies V. 29, N. 3 (June): 174-192
- A. P. Thirlwall (1986) "A General Model of Growth and Development on Kaldorian Lines", Oxford Economic Papers (July)
- Marjorie S. Turner (1993) Nicholas Kaldor and the Real World, M. E. Sharpe
"Macroeconomics as an Autonomous Discipline"
"Paradoxically, the main result obtained by the new classical economists is the demonstration - against their wishes and expectations - that a satisfactory synthesis of macroeconomics and microeconomics is not yet mature. As a matter of fact, the micro-foundations of macroeconomics which they suggest are by now far from satisfactory. They rely on the heroic assumption that the decision-makers of the models are representative agents, whose behavior fairly well approximates the aggregate behavior of the economy. Unfortunately this assumption surreptitiously eliminates the main object that should be studied by macroeconomics: aggregation problems and failures of coordination between the behavior of individuals. Even so, the suggested micro-foundations work only under very special assumptions which actually deny any importance to the main problems considered by Keynes's macroeconomics: uncertainty, disequilibrium, instability, structural change, etc. As we have seen, disequilibria are assumed to be non-intelligible and are therefore ignored; uncertainty is emasculated by the 'certainty equivalence' hypothesis; instability is defined away by arbitrarily restricting the analysis to stationary and ergodic processes and taking account only of the subset of stable solutions...
The failure of this reductionist research programme may be due to the immaturity of current macroeconomics, but it may also be due to weaknesses in existing microeconomic theory. Notwithstanding the widespread belief in its intrinsic solidity, the micro theory currently accepted by the new classical economists may prove on closer examination to be insufficiently powerful to provide solid foundations for a satisfactory macroeconomics. To take the preliminary steps towards a real synthesis between macro and micro theories, it is necessary to consider not only the micro-foundations of macroeconomics but also the macro-foundations of microeconomics (Hicks 1983).
The history of scientific thought shows that whenever a synthesis between different disciplines has been successfully accomplished, the result has been a new discipline with features not reducible to those of the original disciplines. Such a synthesis between micro and macroeconomics, if it is possible, is still far away. In the meantime the reciprocal autonomy of disciplines should be carefully safeguarded. It is particularly important to defend the autonomy of macroeconomics, as today this is greatly jeopardized by views like those mentioned above. Therefore we should revert to the original Keynesian concept of macroeconomics as an autonomous discipline. This does not imply that we should give up making serious efforts to provide rigorous micro-foundations for our macroeconomic statements, if that means searching for greater consistency between the two disciplines. In other words we should continue to pursue a full synthesis between microeconomics and macroeconomics. Many things have been learned from past attempts, unsuccessful as they were, and many others may be learned through future efforts.
But in the meantime one should not reject as non-scientific any contribution that lacks proper 'micro-foundations,' particularly in the restricted sense of a 'reduction to current Walrasian microeconomics.' As a matter of fact, though it may be found impossible to provide proper micro-foundations to a given macroeconomic statement, this might become possible in the future. Such developments have occurred many times in the past and it could happen again, especially if microeconomics extends its range well beyond its Walrasian bounds. To reject this view would be as irrational as to reject as non-scientific any biological statement not yet reducible to chemical statements. Unfortunately, as has been wisely remarked, the only known way to reduce biology to chemistry is murder." -- Alessandro Vercelli, Methodological Foundations of Macroeconomics: Keynes & Lucas, Cambridge University Press, 1991.
Thursday, October 16, 2008
You Got Me Babe
Here are two books one can read on-line and that I may read:
- Protecting Individual Privacy in the Struggle Against Terrorists: A Framework for Program Assessment, National Academies Press (2008)
- Nicholas Kaldor, The Scourge of Monetarism, Second Edition, Oxford University Press (1986)
The second book is Nicholas Kaldor's demonstration that monetarism does not work. On this blog, Kaldor should need no introduction.
Both books are in a freely readable on-line format that I find annoying. I suppose the format of the free version of the first is a business decision to encourage the purchase of the PDF version. And I blame copyright law for the format of the second.
Saturday, October 11, 2008
"Just Look At The Marginal Product Of Capital"
Friday's New York Times has an editorial by Casey Mulligan, a professor of economics at the University of Chicago. Mulligan says that the U.S. economy will keep on doing fine, as shown by "the profitability of non-financial capital, what economists call the marginal product of capital." Mulligan is, of course, incorrect. Economists do not call the rate of profits "the marginal product of capital". Even the proposition that, in equilibrium, the rate of profits and the marginal product of capital are equal is without any theoretical or empirical justification. Casey Mulligan is, at best, ignorant and incompetent.
Post Keynesians and others would also tend to be skeptical of other aspects of Mulligan’s editorial. It is a Post Keynesian belief that money is not a veil, neither in the long run nor the short run. Finance can cause the real economy to become discoordinated. Mulligan looks at trends in the rate of profits from before the Great Depression to now. One could assert that one aspect of such trends is a class struggle over the distribution of the surplus. Perhaps workers are sufficiently cowed today that, unlike in the 1970s, no danger exists of a profitability crisis. (Given the Okishio theorem, I do not think that a law of the tendency of the rate of profits to fall follows from Marx’s assumptions, at least in my favorite formalizations of Marx’s approach.) A realization crisis might still arise. When income distribution is so unequal, one might expect effective demand to be weak.
Thursday, October 09, 2008
Who Should Win The "Nobel" Prize In Economics?
I say Paul Davidson and Luigi Pasinetti should win it.
Sunday, October 05, 2008
For Whatever Can Walk - It Must Walk Once More
1.0 Introduction
This post presents a simple macroeconomic model that combines trend and cycle. It presents some possible aspects of economic growth and business cycles. This model has some features that I find objectionable, but I find it interesting nonetheless. It is a non-linear model of dynamics presenting a formalization of some ideas to be found in Marx's Capital. And it is a model that does not impose equilibrium, but allows for the stability of equilibrium to be analyzed.
2.0 Technology
Assume a Leontief (fixed coefficients) production function:
q = min(a l, k/σ)
where q is gross output, l is the labor employed, k is the value of capital, a is labor productivity, and σ is the capital-output ratio. Both constraints in the production function are always met with equality:
l = q/a
σ = k/q
The capital stock is always employed, but sometimes employment can fall short of the entire labor force, as explained below.
The capital stock depreciates at a rate of 100 δ percent. That is, output can either be consumed or added to a capital stock that experiences a force of mortality of δ. Technical progress is disembodied, and labor productivity increases at a constant rate:
a = a0 exp(α t)
The labor force also grows at a constant rate:
n = n0 exp(β t)
where n is the labor supply. Hence, v is the employment rate, where the employment rate is defined as follows:
v = l/n
When v is unity, the labor force is fully employed. v ranges from (a subinterval of) zero to unity in this model.
3.0 Wages, Profits, Investment
Let w be the wage rate. Then (w l) or (w q/a) are total wages. Define u to be the workers share of the gross product:
u = w/a
Then (1 - u) is the capitalists' share of the product. Assume that wages are entirely consumed and that a fixed proportion of profits is saved and invested:
dk/dt = s (1 - u) q - δ k
where s is the savings rate out of profits.
Finally, assume that the rate of growth of wages is a (linear) increasing function of the employment rate:
(1/w) dw/dt = -γ + ρ v
The above equation could also be written as:
(1/w) dw/dt = ρ [v - (γ/ρ)]
In words, wages grow faster in a tight labor market. The marginal productivity of labor is beside the point in this model.
4.0 Derivation of the Model
The rate of growth of the employment rate is the difference between the rate of growth of employment and the rate of growth of the labor force:
(1/v) dv/dt = (1/l) dl/dt - β
By similar manipulations, one can show that the rate of growth of employment is the difference between the rate of growth of output and the rate of growth of productivity:
(1/l) dl/dt = (1/q) dq/dt - α
Combining these two equations yields an equation relating the rate of growth of the employment rate to the rate of growth in output:
(1/v) dv/dt = (1/q) dq/dt - (α + β)
The derivation of the following equation from the definition of the capital-output ratio and the equation for the rate of change in the value of capital is simpler:
(1/q) dq/dt = (1 - u) s/σ - δ
Hence,
(1/v) dv/dt = (1 - u)(s/σ) - (α + β + δ)
The rate of growth of workers' share in gross output is the difference between the rate of growth of wages and the rate of growth of productivity:
(1/u) du/dt = (1/w) dw/dt - α
Substitute from the postulated relation between the rate of growth in wages and the employment rate:
(1/u) du/dt = ρ v - (α + γ)
The following pair of equations, the fundamental equations of this model, restate equations derived above:
dv/dt = (s/σ - α - β - δ) v - (s/σ) u v
du/dt = -(α + γ) u + ρ u v
Here's the cool part - this is the Lotka-Volterra predator-prey model. It is a canonical non-linear dynamical system used to model, say, lynx and rabbits.
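Since the fundamental equations are a Lotka-Volterra system, a few lines of code can trace a trajectory numerically. The parameter values and the starting point below are illustrative assumptions, not taken from the post; the point is only the qualitative behavior, namely that the workers' share and the employment rate chase each other around the limit point.

```python
# A minimal numerical sketch of the fundamental equations of the model above.
# Parameter values and the initial condition are illustrative assumptions.
alpha, beta, delta, gamma = 0.02, 0.01, 0.05, 0.5  # productivity growth, labor force growth, depreciation, wage-equation intercept
s, sigma, rho = 0.8, 2.0, 0.6                      # savings rate out of profits, capital-output ratio, wage-equation slope

def rates(u, v):
    """Right-hand sides of the fundamental equations: du/dt and dv/dt."""
    dv = (s / sigma - alpha - beta - delta) * v - (s / sigma) * u * v
    du = -(alpha + gamma) * u + rho * u * v
    return du, dv

u, v = 0.7, 0.9            # workers' share of output and the employment rate
dt, steps = 0.001, 60_000  # crude Euler integration over 60 time units
for step in range(steps):
    if step % 6_000 == 0:
        print(f"t = {step * dt:5.1f}: u = {u:.3f}, v = {v:.3f}")
    du, dv = rates(u, v)
    u, v = u + dt * du, v + dt * dv

# The limit point around which the trajectory cycles (derived in Section 5.0 below)
print(f"u* = {1.0 - (alpha + beta + delta) * sigma / s:.3f}, v* = {(alpha + gamma) / rho:.3f}")
```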
5.0 Solution of the Model
For what it's worth, a trajectory in phase space has the equation:
(u^ν1) exp(-θ1 u) = H (v^(-ν2)) exp(θ2 v)
where
θ1 = s/σ
θ2 = ρ
ν1 = s/σ - (α + β + δ)
ν2 = (α + γ)
and H is an arbitrary integrating constant. All these trajectories consist of cycles, as illustrated in Figure 1. The limit point around which these trajectories cycle is given by:
u* = ν1/θ1
v* = ν2/θ2 = (α + γ)/ρ
(The origin in phase space is also a limit point; the origin has the stability of a saddle-point.)
Figure 1: Phase Space |
Figure 2: A Trajectory |
6.0 Discussion
I think it interesting to note that the rate of growth of wages at the limit point is positive due to growth in productivity; in fact, the rate of growth of wages at the limit point is equal to the rate of growth in productivity. If productivity did not grow, if the labor force were stationary, and if there were no depreciation, wages would consume the entire product at the limit point; the capitalists would receive no profits.
A single cycle can easily be described in intuitive terms. Start with low unemployment. Wages will increase as a share in output. Consequently, saving and investment will decline. The growth of output will slow. Eventually, the "reserve army of the unemployed" will be recreated. Wages will decrease as a share in output, although they may still be increasing in absolute terms. Eventually, investment will pick back up. When the growth of output exceeds the growth in productivity by more than the growth of the labor force, the employment rate will increase.
Clearly, this model can reproduce a qualitative resemblance to some empirical properties of some economic time series. If one plots empirical data in the illustrated phase space, one may see a suggestion of motion in the indicated directions, but one will not find a single cycle. Perhaps shocks change the parameters of the model on some occasions. Or perhaps important considerations are not embodied in the model. This model is Classical in important respects, where I mean by "Classical" to refer to the economics of Smith and Ricardo. Richard Goodwin, the inventor of this model, has done important work attempting to integrate this model with Keynesian and Schumpeterian themes.
References
There was a conference in Siena a number of years back devoted to this model. There's also discussion of this model in a Festschrift volume devoted to Richard Goodwin.
- Richard Goodwin, "A Growth Cycle," in Socialism, Capitalism, & Economic Growth: Essays Presented to Maurice Dobb, (edited by C. H. Feinstein), Cambridge University Press, 1967.
- Richard Goodwin, Chaotic Economic Dynamics, Oxford University Press, 1990.
- Paul Ormerod, The Death of Economics, St. Martins, 1994.
Update: Serena Sordi has a recent generalization of this model to four dimensions, presented at a sort of festschrift for Barkley Rosser, Jr.