Showing posts with label History vs. Equilibrium. Show all posts

Saturday, October 26, 2013

An Alternative Economics

Who are the greatest economists of, say, the last fifty years? I suggest the shortlist for many academic economists would include Kenneth Arrow, Milton Friedman, and Paul A. Samuelson. Imagine[1] a world in which the typical academic economist would be inclined to name the following as exemplars:

  • Fernand Braudel
  • John Kenneth Galbraith
  • Albert Hirschman
  • Gunnar Myrdal
  • Karl Polanyi

These economists did not insist on using closed, formal, mathematical models[2] everywhere[3]. They tended to present their findings in detailed historical and qualitative accounts[4].

Footnotes
  1. Maybe the history of political economy is overdetermined, in some sense. So I am not sure what this counterfactual would entail.
  2. They did use or, at least, comment on small mathematical models, where appropriate. For example, both Hirschman and Myrdal had cautions about the Harrod-Domar model.
  3. I have been reading a little of Tony Lawson.
  4. As far as I am concerned, these are accounts of empirical research.

Saturday, October 19, 2013

Some Aspects Of A Theory Of Finance

I consider this post to be a complement to some recent comments by Lars Syll on this year's Nobel prize. I here outline an (unoriginal) theory to contrast with the Efficient Market Hypothesis.

Keynes described the prices of bonds, shares, and other financial instruments as, at any time, reflecting a balance of Bulls and Bears. As G. L. S. Shackle points out, if this is an equilibrium, it is an "inherently restless" equilibrium. If a price continues unchanged, Bulls, who expect the price to rise, will eventually be disappointed. Likewise, a symmetrical condition is true for Bears. Furthermore, news from outside the stock market will be changing market expectations, causing some Bulls to become Bears and vice versa.

For this account to make sense, it seems to me, the market must be populated with decision makers who have vastly different ontological and epistemological beliefs. It is not a case of all agents having one model, with different parameter estimates being updated in light of experience. Shackle, in modeling such decision makers, introduces the concept of focus points. If profits exceed the upper focus point at some time, the decision maker will realize that more profit can be gained than is accounted for in his theory. And so he will adopt some other theory, in some sense. Likewise, losses or too little profit will, outside of some range, lead to another change of mind, with consequent changes in plans and actions.

Stock prices are numbers. One might initially think they could be described by probability distributions. One could think of the price of a stock at a given time as a random variable. And, in this special case, perhaps Shackle's model of decision-making reduces to the theory of sequential statistical hypothesis testing.
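
Wald's sequential probability ratio test is the canonical formalization of sequential hypothesis testing. As a hedged sketch of the analogy (my illustration, not the post's), a decision maker accumulates a log-likelihood ratio and abandons a hypothesis only when the evidence crosses an upper or lower threshold, loosely like profits crossing Shackle's focus points:

```python
import math
import random

def sprt(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for the mean of a
    normal distribution with known sigma: H0: mu = mu0 vs H1: mu = mu1.
    Observations are processed one at a time until a threshold is hit."""
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log of the likelihood ratio f1(x) / f0(x) for one observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr <= lower:
            return "accept H0", n
        if llr >= upper:
            return "accept H1", n
    return "undecided", len(samples)

rng = random.Random(1)
data = [rng.gauss(1.0, 1.0) for _ in range(1000)]  # true mean is 1.0
decision, n_used = sprt(data, mu0=0.0, mu1=1.0, sigma=1.0)
print(decision, n_used)  # typically accepts H1 after a handful of draws
```

The contrast with Shackle is that the SPRT chooses between two hypotheses fixed in advance, whereas his decision maker, surprised by results outside the focus range, reaches for a theory not on any prior list.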

A random variable is a function from a sample space to the real numbers, where a sample space is the set of all possible outcomes of a random experiment. An event is a (measurable) subset of the sample space. So a stock price is not an event in the sample space. For example, the recent brouhaha in the United States over the debt limit is presumably an aspect of an event that has had some impact on stock prices. Can one assume that all possible events are known beforehand? That all agents who might bet on stock prices know all events? If not, how does it make sense to model stock prices as coming from a probability distribution? Paul Davidson suggests extending the concept of non-ergodicity to cover these cases, where the complete sample space is not known and the mere possibility of some future events is capable of surprising someone:

"In expected utility theory, according to Sugden..., 'A prospect is defined as a list of consequences with an associated list of probabilities, one for each consequence, such that these probabilities sum to unity. Consequences are to be understood to be mutually exclusive possibilities: thus a prospect comprises an exhaustive list of the possible consequences of a particular course of action... An individual's preferences are defined over the set of all conceivable prospects.' Using these definitions, an environment of true uncertainty (that is, one which is nonergodic) occurs whenever an individual cannot specify and/or order a complete set of prospects regarding the future, because the decision maker either cannot conceive of a complete list of consequences that will occur in the future; or cannot assign probabilities to all consequences because 'the evidence is insufficient to establish a probability' so that possible consequences 'are not even orderable' (Hicks)... Hicks associates a violation of the ordering axiom of expected utility theory with 'Keynesian liquidity' ..., since, for Hicks..., 'liquidity is freedom' to delay action that commits claims on real resources whenever the decision maker is ignorant regarding future consequences." -- Paul Davidson (1991, p. 134)

In the above quote, Davidson is contrasting what he calls the True Uncertainty Environment with the Subjective Probability Environment. Davidson argues that my favorite definition of ergodicity, as applying to stochastic processes in which time averages converge to ensemble averages, characterizes the Objective Probability Environment.
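
The time-average versus ensemble-average distinction can be made concrete with a small simulation (my sketch, not Davidson's):

```python
import random

random.seed(0)

# Ergodic case: i.i.d. draws. The time average along one long
# realization settles down to the ensemble mean (zero here).
one_path = [random.gauss(0.0, 1.0) for _ in range(100_000)]
time_avg = sum(one_path) / len(one_path)
print(round(time_avg, 3))  # close to 0.0

# Nonergodic case: a level is drawn once, at the start of each
# realization. The time average still converges, but to a limit that
# differs from realization to realization, so averaging over time and
# averaging across realizations give different answers.
def nonergodic_path(n):
    level = random.choice([-5.0, 5.0])  # fixed for the whole path
    return [level + random.gauss(0.0, 1.0) for _ in range(n)]

time_avgs = [sum(p) / len(p)
             for p in (nonergodic_path(10_000) for _ in range(50))]
print(round(min(time_avgs), 2), round(max(time_avgs), 2))  # near -5 and +5
```

Davidson's point goes further than this toy: in true uncertainty, it is not merely that the averages disagree, but that the list of possible outcomes itself cannot be written down in advance.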

One might argue that a more mainstream approach to finance is more empirically applicable than the Post Keynesian approach outlined above. It seems to me that if one wants to argue this, one needs to establish some reason why a diversity of agent views could not persist. I am unaware of any such argument, and I think the literature on noise trading should lead economists to think otherwise.

Traders in stocks and bonds constitute one audience for theories of finance. I suppose they prefer mathematical theories that they can use to price financial instruments. I am not sure that the above theory can be cast into mathematics further than Davidson and Shackle have already accomplished. I will note that when Royal Dutch Shell found themselves wrong-footed in the 1970s by the rise of OPEC and the oil crisis, they set up a group to do scenario planning. These are the sorts of events whose existence in the sample space might not have been long foreseen. As I understand it (though I forget my source), some members of that group explicitly took inspiration from Shackle's work.

References
  • Davidson, Paul (1991). Is Probability Theory Relevant for Uncertainty? A Post Keynesian Perspective, Journal of Economic Perspectives, V. 5, N. 1: pp. 129-143.
  • DeLong, J. Bradford, Andrei Shleifer, Lawrence H. Summers, and Robert Waldmann (1990). Noise Trader Risk in Financial Markets, Journal of Political Economy, V. 98, N. 4: pp. 703-738.
  • Hogg, Robert V. and Allen T. Craig (1978). Introduction to Mathematical Statistics, Fourth edition, Macmillan.
  • Shackle, G. L. S. (1940). The Nature of the Inducement to Invest, Review of Economic Studies, V. 8, N. 1: pp. 44-48.
  • Shackle, G. L. S. (1988). Treatise, Theory, and Time, in Business, Time and Thought: Selected Papers of G. L. S. Shackle, New York University Press.
  • Shackle, G. L. S. (1988). On the Nature of Profit, in Business, Time and Thought: Selected Papers of G. L. S. Shackle, New York University Press.

Wednesday, July 13, 2011

Some More On Hayek And Sraffa

1.0 Introduction
I have previously discussed Sraffa's review of Prices and Production, Hayek's reply, and Sraffa's rejoinder. I thought I would bring up today a couple of other aspects of that debate.

2.0 Hayek Changes His Notion Of Equilibrium
The traditional neoclassical equilibrium concept, in the period from roughly 1870 to 1930, is that of a stationary state. Neoclassical economists in this period erroneously thought that one could define such an equilibrium, given tastes, technology, and endowments, including the endowment of capital, by some or another definition of capital. As Walras recognized, such an equilibrium can never be expected to be established. At most, actual capitalist economies can be expected to be tending towards this kind of equilibrium at any point in time.

This was Hayek's equilibrium concept in Prices and Production. He put forth another equilibrium concept in The Pure Theory of Capital, a concept he had been developing for some time. (Hayek is very clear on this in Chapter 2.) This equilibrium concept is of plan compatibility, a concept formally equivalent, in some sense, to the Arrow-Debreu model of intertemporal equilibrium.

Given spot prices, a monetary interest rate, and the past history of an economy, entrepreneurs, based on their economic theories, form expectations about future prices and the expectations and plans of others. They form their plans based on these expectations. Equilibrium exists if all these plans are mutually compatible. Under this equilibrium concept, no need exists for entrepreneurs to plan to produce the same quantities period after period. Likewise, consumers might plan to consume different quantities in different periods. Furthermore, entrepreneurs and consumers will generally expect spot prices to vary over time.


Hayek's equilibrium concept of plan compatibility, as I understand it, cannot be used to ground Austrian Business Cycle Theory.

3.0 Austrian Business Cycle Theory Outside Of Historical Time
Sraffa destroyed Hayek's version of Austrian Business Cycle Theory, as Robert Skidelsky notes.

I am amused in noting part of Sraffa's critique. Keynes sets his General Theory in historical time, not logical time. I read Sraffa as pointing out that Hayek's theory, like neoclassical theory on Keynes's reading, is set in logical time:
"That the position reached as the result of 'voluntary saving' will be one of equilibrium... is clear enough; though the conclusion is not strengthened by the curious reason he gives for it.13

But equally stable would be that position if brought about by inflation; and Dr. Hayek fails to prove the contrary. In the case of inflation, just as in that of saving, the accumulation of capital takes place through a reduction in consumption. 'But now this sacrifice is not voluntary, and is not made by those who will reap the benefit from the new investments... There can be no doubt that, if their money receipts should rise again [and this rise is bound to happen as Dr. Hayek promises to prove] they would immediately attempt to expand consumption to the usual proportion', that is to say, capital will be reduced to its former amount; 'such a transition to less capitalistic method of production necessarily takes the form of an economic crisis'...

As a moment's reflection will show, 'there can be no doubt' that nothing of the sort will happen. One class has, for a time, robbed another class of a part of their incomes; and has saved the plunder. When the robbery comes to an end, it is clear that the victims cannot possibly consume the capital which is now well out of their reach. If they are wage-earners, who have all the time consumed every penny of their income, they have no wherewithal to expand consumption. And if they are capitalists, who have not shared in the plunder, they may indeed be induced to consume now a part of their capital by the fall in the rate of interest; but not more so than if the rate had been lowered by the 'voluntary savings' of other people.

13The reason given is that 'since, after the change had been completed, these persons [i.e., the savers] would get a greater proportion of the total real income, they would have no reason' to consume the newly acquired capital... But it is not necessarily true that these persons will get a greater proportion of the total real income, and if the fall in the rate of interest is large enough they will get a smaller proportion; and anyhow it is difficult to see how the proportion of total income which falls to them can be relevant to the 'decisions of individuals'. Dr. Hayek, who extols the imaginary achievements of the 'subjective method' in economics, often succeeds in making patent nonsense of it." -- Piero Sraffa (March 1932)
And again:
"The first question is whether, as Dr. Hayek asserts, the capital accumulated by 'forced saving' will be, 'at least party' dissipated as soon as inflation comes to an end: 'It is upon the truth of this point that my [Dr. H's] theory stands or falls'. My simple-minded objection was that forced saving being a misnomer for spoliation, if those who had gained by the inflation chose to save the spoils, they had no reason at a later stage to revise the decision; and at any rate those on whom forced saving had been inflicted would have no say in the matter. This appeal to common sense has not shaken Dr. Hayek: he describes it as 'surprisingly superficial', though unfortunately he forgets to tell me where it is wrong." -- Piero Sraffa (June 1932)
The distribution of endowments - who owns what - is a datum for traditional neoclassical theory. Disequilibrium employment, production, and purchases will change these data. So one cannot expect, contrary to Hayek, the previous equilibria corresponding to the previous data to be restored after the economy has been on some disequilibrium path for some extended time.

Sraffa, like later Post Keynesians, suggested that a coherent economic theory must be set in historical time.

References
  • P. Garegnani (1976) "On a Change in the Notion of Equilibrium in Recent Work on Value and Distribution", reprinted in Keynes's Economics and Theory of Value and Distribution (edited by J. L. Eatwell and M. Milgate, 1983), Oxford University Press.
  • F. A. Hayek (1935) Prices and Production, 2nd. Edition, Routledge and Sons.
  • F. A. Hayek (June 1932) "Money and Capital: A Reply", Economic Journal, V. 42: 237-249.
  • F. A. Hayek (1941) The Pure Theory of Capital, University of Chicago Press.
  • M. Milgate (1979) "On the Origin of the Notion of 'Intertemporal Equilibrium'", Economica, V. 46, N. 1: 1-10.
  • P. Sraffa (March 1932) "Dr. Hayek on Money and Capital" Economic Journal, V. 42: 42-53.
  • P. Sraffa (June 1932). "A Rejoinder", Economic Journal, V. 42: 249-251.

Monday, February 14, 2011

Stephen Smale Presciently On Global Financial Crisis?

I have argued before that weaknesses in mainstream economics exposed by our current macro-economic problems have been known for decades. I here note another example.

Stephen Smale is a Fields medal-winning mathematician who has advanced our understanding of chaotic dynamical systems. Smale has also contributed to mathematical economics. He wrote the following in 1976:
"A criticism commonly made of economic theory is its failure to make predictions of crises in the country or anticipate correctly unemployment or inflation. One must be cautious in the social sciences about looking towards physics for answers. However, some comparisons with the physical sciences seem profitable in connection with the above criticism. In those sciences, where theory itself is in a far more advanced state, limitations can be seen in a similar way. For example a given individual human body functions according to physical principles; however no physical scientist would predict a heart attack. The physical theory gives understanding of aspects of what goes on in the human body only under very idealized conditions. The physical theories eventually play some role in the education of medical doctors, who can then say some things, some times about a patient's susceptibility to a heart attack, preventive measures, and cures.

The economy of the world or even a nation is a very complex phenomenon, like a human body, involving a number of factors, both economic and political. It is no more reasonable to expect economic theorists to predict a nation's economic future than for a theoretical scientist to predict the future health of an individual...

...I would like to give some reasons why I feel equilibrium theory is far from satisfactory. For one thing the theory has not successfully confronted the question, 'How is equilibrium reached?' Dynamic considerations would seem necessary to resolve this problem. Another is the reliance of the theory on long range optimization.

In the main model of equilibrium theory, say as presented in Gerard Debreu's Theory of Value, economic agents make one life-long decision, optimizing some value. With future dating of commodities, time has almost an artificial role." -- Stephen Smale. "Dynamics in General Equilibrium Theory." American Economic Review V. 66, N. 2 (1976): pp. 288-294.

Saturday, November 27, 2010

Continued Balderdash From Liebowitz And Margolis

Joan Robinson, drawing on John Maynard Keynes, famously distinguished between models set in historical and logical time. Geoffrey Hodgson, now an institutionalist economist, has written extensively about evolutionary models in economics. Given my interest in such economists, I also care about how to formalize the notion that "history matters". I think mathematical models of nonergodic processes are one way of formally setting a model in historical time.

Brian Arthur and Paul David, two economists, have developed a parallel idea, that of path dependence. This post is about false statements Stan Liebowitz and Stephen Margolis like to make about this work. Liebowitz and Margolis quote Paul David:
"The foregoing account of what the term 'path dependence' means may now be compared with the rather different ways in which it has come to be explicitly and implicitly defined in some parts of the economics literature. For the moment we may put aside all of the many instances in which the phrases 'history matters' and 'path dependence' are simply interchanged, so that some loose and general connotations are suggested without actually defining either term. Unfortunately much of the non-technical literature seems bent upon avoiding explicit definitions, resorting either to analogies, or to the description of a syndrome - the set of phenomena with whose occurrences the writers associate path dependence. [Rather than telling you what path dependence is, they tell you some of the symptomology - things that may, or must happen when the condition is present. It is rather like saying that the common cold is sneezing, watering eyes and a runny nose.]" -- Paul David
Liebowitz and Margolis somehow think you will be persuaded to believe the following:
"So here we see David disqualifying, at least from others, any efforts to connect path dependence to observable phenomena. David would have path dependence discussed only in the context of the most severe abstraction, an immaculate concept immune from criticism: it is a dynamic stochastic process that is non-ergodic." -- Stan Liebowitz and Stephen Margolis
Notice Paul David never says that path dependence, under a rigorous definition, never will be manifested in observable empirical phenomena. Elsewhere Paul David notes that Markov processes can be non-ergodic, that is, path dependent. And he notes that economists have connected Markov processes, not all of which need be path-dependent, to observable phenomena:
"Homogeneous Markov chains are familiar constructs in economic models of the evolving distribution of workers among employment states, firms among size categories, family lineages among wealth-classes or socio-economic (occupational) strata, and the rankings of whole economies among in the international distribution of per capita income levels." -- Paul David
Why are certain economists so willing to tell untruths?

References
  • W. Brian Arthur (1989) "Competing Technologies, Increasing Returns, and Lock-In by Historical Events", Economic Journal, V. 99, N. 1: pp. 116-131.
  • W. Brian Arthur (2009) The Nature of Technology: What It Is and How It Evolves, The Free Press. [I haven’t read this]
  • Paul A. David (1985) "Clio and the Economics of QWERTY", American Economic Review. V. 75, N. 2 (May): pp. 332-337.
  • Paul A. David (2000) "Path Dependence, Its Critics and the Quest for 'Historical Economics'"
  • Paul A. David (2007) "Path Dependence - A Foundational Concept for Historical Social Science", Cliometrica, V. 1, N. 2: pp. 91-114 (working copy)
  • Stan J. Liebowitz and Stephen E. Margolis (2010) "How the Lock-In Movement Went off the Tracks"

Monday, August 30, 2010

Stephen Williamson, Fool or Knave?

Stephen Williamson quotes Narayana Kocherlakota, apparently a very stupid person:
"Kocherlakota says this...:
'But over the long run, money is, as we economists like to say, neutral. This means that no matter what the inflation rate is and no matter what the FOMC does, the real return on safe short-term investments averages about 1-2 percent over the long run.'
Again, uncontroversial." -- Stephen Williamson
This, of course, is false. Communities of economists exist who set their theories in historical time and dispute that money is neutral in any run. I prefer to point to Post Keynesians, but Austrian School economists satisfy these criteria also. Furthermore, economists within such schools surpassed mainstream economists in the current historical conjuncture by having pointed out the possibility of the global financial crisis before its occurrence.

I think economists should strive not to tell untruths about what economists believe.

Wednesday, August 04, 2010

Phenomenology

"One of the embarrassing dirty little secrets of economics is that there is no such thing as economic theory properly so-called. There is simply no set of foundational bedrock principles on which one can base calculations that illuminate situations in the real world." -- Brad DeLong

My title does not refer to an approach in continental philosophy associated with Husserl and Heidegger. Rather, I refer to a term used in physics and engineering by practitioners who know they are not trying to develop models derived from fundamental laws, but only modeling the phenomena.

I find it of interest that Brad DeLong has recently described economics as phenomenology. A noted "rocket scientist" on Wall Street came to the same conclusion:
"The techniques of physics hardly ever produce more than the most approximate truth in finance because 'true' financial value is itself a suspect notion. In physics, a model is right when it correctly predicts the future trajectories of planets or the existence and properties of new particles, such as Gell-Mann's Omega Minus. In finance, you cannot easily prove a model right by such observation. Data are scarce and, more importantly, markets are arenas of action and reaction, dialectics of thesis, antithesis, and synthesis. People learn from past mistakes and go on to make new ones. What's right in one regime is wrong in the next.

As a result, physicists turned quants don't expect too much from their theories, though many economists naively do. Perhaps this is because physicists, raised on theories capable of superb divination, know the difference between a fundamental theory and a phenomenological toy, useful though the latter may be. Trained economists have never seen a really first-class model. It's not that physics is 'better', but rather that finance is harder. In physics you're playing against God, and He doesn't change his laws very often. When you've checkmated Him, He'll concede. In finance, you're playing against God's creatures, agents who value assets based on their ephemeral opinions. They don't know when they've lost, so they keep trying." -- Emanuel Derman (2004) My Life as a Quant: Reflections on Physics and Finance, John Wiley & Sons.
I think one can read intimations of Soros' reflexivity or Joan Robinson's historical time in the above quote. Derman is even more direct about a Post Keynesian concept elsewhere:
"Slowly it began to dawn on me that what we faced was not so much risk as uncertainty. Risk is what you bear when you own, for example, 100 shares of Microsoft - you know exactly what those shares are worth because you can sell them in a second at something very close to the last traded price. There is no uncertainty about their current value, only the risk that their value will change in the next instant. But when you own an exotic illiquid option, uncertainty precedes its risk - you don't even know exactly what the option is currently worth because you don't know whether the model you are using is right or wrong. Or, more accurately, you know that the model you are using is both naive and wrong - the only question is how naive and how wrong." -- Emanuel Derman (2004)

Saturday, May 08, 2010

A Nonergodic, Stationary Random Process

1.0 Introduction
Joan Robinson famously distinguished between economic models set in logical and historical time. According to Robinson, the distinguishing feature of Keynes’ General Theory is its setting in historical time. Building on Paul Davidson, one might say that one sign that a model is set in historical time is that it generates nonergodic stochastic processes.

This post explains that claim somewhat by presenting a simple example of a stationary nonergodic stochastic process, namely a Spherically Invariant Random Process (SIRP).

2.0 Spherically Invariant Random Processes (SIRPs)
A stochastic process, {X(i), i = 0, 1, ..., n - 1}, is an indexed set of random variables. Typically, the index is taken to be time. Each random variable X(i) has an associated probability distribution, which can be specified by a Cumulative Distribution Function (CDF):
F_i(x) = Prob(X(i) ≤ x),
where F_i is the CDF. The derivative of the CDF is the Probability Density Function (PDF). (I guess differentiation, in this sense, is the inverse operation of Lebesgue-Stieltjes integration.)

If the stochastic process {X(i), i = 0, 1, ..., n - 1} is a SIRP, it can be represented as the product
X(i) = Y Z(i), i = 0, 1, ..., n - 1,
where Y is a random variable not indexed on time and Z(i) is from a Gaussian distribution with a mean of zero.

2.1 A Single Realization
To consider an example, I picked a distribution for Y, namely the Chi distribution. (A random variable is from a Chi distribution if it is the square root of a random variable from a Chi Squared distribution.) Arbitrarily, I set the degrees of freedom of the corresponding Chi Squared distribution to be 2. For simplicity, let the variance of the normally distributed random variables {Z(i), i = 0, 1, ..., n - 1} be unity.

Figure 1 shows 100 time samples generated from this stochastic process. A single realization y of the Chi distribution is generated for each realization of the SIRP. The time samples consist of the product of this value and 100 realizations generated from a standard normal distribution. Figure 2 is a histogram formed from these 100 time samples. Does the histogram look bell-shaped?
Figure 1: A Realization of a Random Process

Figure 2: Distribution Over Time
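
The construction in Section 2.1 can be sketched as follows (a reconstruction, not the author's original code; it uses only the Python standard library, and the Chi draw exploits the fact that a chi-squared variate with two degrees of freedom is exponentially distributed with mean 2):

```python
import math
import random

rng = random.Random(2010)

def chi_2df(rng):
    """A Chi draw with 2 degrees of freedom: the square root of a
    chi-squared(2) variate, which is exponential with mean 2."""
    return math.sqrt(rng.expovariate(0.5))

def sirp_realization(n, rng):
    """One realization of the SIRP X(i) = Y * Z(i): a single Chi draw Y
    scales an entire path of independent standard normal draws Z(i)."""
    y = chi_2df(rng)  # drawn once per realization, constant over time
    return [y * rng.gauss(0.0, 1.0) for _ in range(n)]

x = sirp_realization(100, rng)
print(len(x), round(sum(x) / len(x), 3))
```

Because the single draw of Y multiplies every time sample, the histogram over time is exactly Gaussian in shape (with standard deviation equal to the realized y), which is why Figure 2 can look bell-shaped even though the process as a whole is not Gaussian.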

2.2 Many Realizations
I generated 100 realizations of this SIRP, each consisting of 100 time samples. Consider a fixed time sample, say i = 4. The 100 realizations of the SIRP allow one to create a sample of the value of the SIRP at this time sample. Figure 3 shows the resulting distribution. The distribution shown reflects variation resulting from the Chi distribution, as well as the variation in the Gaussian distribution. I don’t find it obvious to the eye that this distribution is peaked differently (has a different kurtosis) from a Gaussian distribution.
Figure 3: Distribution Across Realizations

2.3 Nonergodic Stochastic Processes
Figures 2 and 3 are constructed from two random samples, each of 100 points. These samples can each be used to estimate parameters of the stochastic process – for example, the CDF at specified values. If the stochastic process were ergodic, such estimates would converge as the sample sizes increased. That is, an estimator based on a large enough number of time samples from a single realization would be as good, in some sense, as an estimator based on data across a large enough number of realizations at a specified time sample.

But this SIRP is nonergodic. Figure 4 shows the CDFs estimated from the two random samples. The Kolmogorov-Smirnov statistic provides a formal statistical test for deciding whether the difference between these two estimates of the CDF can be explained by random variation. And that test rejects the null hypothesis at a 5% level of statistical significance. To summarize – this post has presented a Monte Carlo demonstration that a SIRP can be nonergodic. The question raised for the economist is whether their theories apply if stochastic processes observed in actual economies (for example, the prices of stocks) are nonergodic.
Figure 4: Empirical Cumulative Distribution Functions (CDF)
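
The Monte Carlo above can be sketched in a few lines (my reconstruction under the stated assumptions, with a hand-rolled two-sample Kolmogorov-Smirnov statistic rather than a library routine):

```python
import bisect
import math
import random

rng = random.Random(1982)

def sirp_realization(n, rng):
    """One realization of X(i) = Y * Z(i): Y is a single Chi draw
    (2 degrees of freedom), fixed for the whole path."""
    y = math.sqrt(rng.expovariate(0.5))  # chi-squared(2) is exponential, mean 2
    return [y * rng.gauss(0.0, 1.0) for _ in range(n)]

# 100 realizations of 100 time samples each.
paths = [sirp_realization(100, rng) for _ in range(100)]
time_sample = paths[0]                   # across time, one realization
ensemble_sample = [p[4] for p in paths]  # across realizations, at i = 4

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical
    gap between the two empirical CDFs."""
    sa, sb = sorted(a), sorted(b)
    ecdf = lambda s, x: bisect.bisect_right(s, x) / len(s)
    return max(abs(ecdf(sa, x) - ecdf(sb, x)) for x in sa + sb)

d = ks_statistic(time_sample, ensemble_sample)
d_crit = 1.36 * math.sqrt((100 + 100) / (100 * 100))  # 5% level, n = m = 100
print(round(d, 3), round(d_crit, 3))
```

Whether the test rejects on a particular run depends on the Y drawn for the first realization: conditional on that draw, the time sample is Gaussian with standard deviation y, while the ensemble sample mixes over many values of Y, so the two empirical CDFs generally fail to converge to a common limit, which is the failure of ergodicity.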

References
  • Paul Davidson, "Rational Expectations: A Fallacious Foundation for Studying Crucial Decision-Making Processes", Journal of Post Keynesian Economics, V. 5, N. 2 (Winter 1982-83): pp. 182-198.
  • Joan Robinson, "History versus Equilibrium", in Contributions to Modern Economics, Blackwell (1978).

Saturday, March 27, 2010

Lo and Mueller's Need for More Scholarship

Consider Andrew W. Lo and Mark T. Mueller's draft paper "WARNING: Physics Envy May Be Hazardous To Your Wealth!", to appear in the Journal of Investment Management. They argue that economists' "physics envy" has led to "a false sense of mathematical precision in some cases", and they illustrate their argument by pointing to Paul Samuelson. They distinguish between uncertainty and risk and offer a checklist to assess the degree of uncertainty in your decision-making environment. They mention chaotic dynamics.

I find their references lacking. They don't reference Philip Mirowski, particularly his book More Heat Than Light in which he considers Samuelson. They do reference Frank Knight's Risk, Uncertainty, and Profit, but not Keynes. They do not reference G. L. S. Shackle and his role in the development of scenario planning. No reference to Joan Robinson appears. I think her 1974 paper "History vs. Equilibrium", with its distinction between logical time and historical time, is particularly apropos. Paul Davidson's 1982 JPKE paper, "Rational Expectations: A Fallacious Foundation for Studying Crucial Decision-Making Processes" is also of some importance, with its emphasis on the mainstream economist's assumption of ergodicity. J. Barkley Rosser, Jr., with his treatment of insights from complex dynamics, is also unreferenced.

I don't see why one would want to read Lo and Mueller until they engage with some of this literature on their point.

Monday, June 29, 2009

Elsewhere

I have added the blog of some economists at the University of Missouri-Kansas City to my blogroll. That blog is more policy-oriented than this. Bill Mitchell blogs from Australia, also more about policy than I do. Grupo Lujan-Circus seems like a blog of interest to me, but I can read only the names. The same remark applies to the blog of the Italian Association for the History of Political Economy.

Occasionally I stumble across curious articles in Wikipedia. The one on Surplus economics references Paul Baran and Paul Sweezy. It doesn't describe their ideas very well, and could do with some reference to Sraffa too. The entry on Newtonian time in economics seems to have been written by Austrian fanboys who, typically, know about neither Joan Robinson's distinction between logical and historical time nor Paul Davidson's attack on the Austrian school.

Sunday, June 14, 2009

By His Bootstraps

1.0 Introduction
One can hold savings in various forms of assets. In effect, savings is a time machine for transferring purchasing power into the future. A debt, when purchased - that is, a bond - is one such asset in which one can store savings. The relationships between bonds of various maturities and the existence of well-developed markets in which to trade bonds allow the determination of interest rates without relying on the theory of time preference, a theory which is a lot of utter hogwash anyhow.

The point of this post is to explain how, under certain institutions for selling second-hand debt, a relatively stable long-term interest rate can be maintained by beliefs in its stability. No need arises to call on the forces of thrift and productivity. This post might even be relevant to current events in the USA.

2.0 Institutions Providing a Setting in Which a Second Decision Must be Made
The model I outline here is based on Keynes' account of the two decisions a saver must make:
"The psychological time-preferences of an individual require two distinct sets of decisions to carry them out completely. The first ... determines for each individual how much of his income he will consume and how much he will reserve in some form of command over future consumption. But this decision having been made, there is a further decision which awaits him, namely, in what form he will hold the command over future consumption which he has reserved, whether out of his current income or from previous savings." -- J. M. Keynes, The General Theory of Employment, Interest and Money (1936): p. 166
Assume that the debts of the best quality available for purchase consist of Treasury bills (T-bills) that mature in three months, T-bills that mature in a year, and Treasury notes (T-notes) that mature in 10 years. These are all available in the U.S.A., along with T-bills, T-notes, and T-bonds of other maturities. In this exposition, I abstract from the existence of these other maturities. By including debts of these three maturities, the model incorporates the decision to hold money, assets that pay the short-term interest rate, or assets that pay the long-term interest rate.

In describing three-month T-bills as money, I again follow Keynes:
"...we can draw the line between 'money' and 'debts' at whatever point is most convenient for handling a particular problem. For example, we can treat as money any command over general purchasing power which the owner has not parted with for a period in excess of three months, and as debt what cannot be recovered for a longer period than this; or we can substitute for 'three months' one month or three days or three hours or any other period; or we can exclude from money whatever is not legal tender on the spot. It is often convenient to include in money time-deposits with banks and, occasionally, even such instruments as (e.g.) treasury bills." -- J. M. Keynes, The General Theory of Employment, Interest and Money (1936): p. 167
Suppose, contrary to fact, that the short term interest rate, r, was known to be constant for the next ten years, where 100 r is stated as an annual percentage. Then the long term interest rate would be established in the market at the start of the year as 100 [(1 + r)^10 - 1] percent for 10 years, and the interest rate on money would be 100 [(1 + r)^(1/4) - 1] percent for three months. A higher price on a bond corresponds to a lower interest rate. For example, the price of a T-bill with a face value of $1000 to be paid in a year is 1000/(1 + r) dollars.
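This arithmetic can be sketched in a few lines of Python. The 4 percent annual short rate below is my own assumed figure, purely for illustration:

```python
# Assume a constant annual short-term rate r. Then a three-month
# bill, a one-year bill, and a ten-year note must all embody the
# same annual rate, or arbitrage would be possible.
r = 0.04  # assumed 4% annual short-term rate (illustrative only)

ten_year_pct = 100 * ((1 + r) ** 10 - 1)       # total return over 10 years
three_month_pct = 100 * ((1 + r) ** 0.25 - 1)  # total return over 3 months

# Price of a $1000 face-value one-year T-bill; a higher price
# corresponds to a lower interest rate.
price = 1000 / (1 + r)

print(f"10-year return: {ten_year_pct:.2f}%")
print(f"3-month return: {three_month_pct:.3f}%")
print(f"1-year bill price: ${price:.2f}")
```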

3.0 The Individual
In this model, federal authorities set the interest rate on money. The short term interest rate provides a market consensus on what monetary policy is likely to be over the next year. If the annual interest rate embodied in the price of one-year T-bills is higher than the annualized interest rate on money, the market price of T-bills is predicting a tightening of monetary policy. The individual allocates his savings partly on his opinion of this consensus. If he thinks, for example, that the monetary authority is not going to tighten that much, he would sell three-month T-bills and buy one-year T-bills, so as to make a profit from speculation when the price of the latter rises.

The individual, one assumes, has some idea of what is a normal long-term interest rate. He expects that over a long enough period, the federal authority's monetary policy will average out, thereby achieving this normal rate. The individual expects the price of T-notes to eventually rise when the current long-term interest rate is above that normal long-term rate and to fall when the current long-term rate is below that normal rate. Here, too, the possibility for speculative gains influences the individual in his allocation of his savings between T-notes and T-bills.

4.0 Markets
Consider a range of the price of T-notes. For a high enough price, those who are bears on this market (who expect the long term interest rate to rise) would dominate the bulls (who expect the long term interest rate to fall). More would be selling than buying, and the price would fall. The opposite is true for a low enough price. The equilibrium price at an instant of time balances bulls and bears:
"In the Treatise [Keynes] pictures the Bulls and Bears of the gilt-edged market going into and out of bonds as they individually come to think that the next price movement will be up or down. In this speculative market the price of bonds and thus their yield, the interest rate, can only settle if opinion is divided, so that those who wish to sell for fear of a fall find their offers matched by the bids of those who wish to buy in hope of a rise. It is thus, as Keynes says, a variety of opinion in the gilt-edged market which gives stability to the interest rate and some control over it to the monetary authorities." -- G. L. S. Shackle, "Simplicity in Keynes's Theory of Money and Employment", The South African Journal of Economics, v. 51, n. 3 (1983): 357-367
Elsewhere Shackle talks about equilibrium in such a speculative market as inherently restless.

5.0 Conclusions and a Policy Implication
I suppose one could express the above model in mathematics, if one were so inclined. One might start with some distribution of agents' beliefs about the conventional long term interest rate, and allow each agent to slowly update their view, maybe with the addition of random noise. (One might draw on Shackle's "The Bounds of Unknowledge" (in Beyond Positive Economics (ed. by J. Wiseman) Macmillan, 1983) in specifying this updating.) And the agents would decide on the distribution of their savings based on their views. Maybe the model should have more types of assets. One would want a model in which a diversity of opinion is maintained among agents, and in which time series for stock equilibria exhibit hysteresis and non-ergodicity. It wouldn't surprise me if somebody has already published such a model.

Keynes had something to say about policy based on this sort of analysis:
"Thus a monetary policy which strikes public opinion as being experimental in character or easily liable to change may fail in its objective of greatly reducing the long-term rate of interest... The same policy, on the other hand, may prove easily successful if it appeals to public opinion as being reasonable and practicable and in the public interest, rooted in strong conviction, and promoted by an authority unlikely to be superseded." -- J. M. Keynes, The General Theory of Employment, Interest and Money (1936): p. 203

Wednesday, June 03, 2009

An Experiment Protocol

1.0 Introduction
The point of the experiment described here is to offer empirical evidence for the importance of the distinction between uncertainty and risk, as put forth by Frank Knight and by John Maynard Keynes. People are not "rational", as "rationality" is defined by neoclassical economists.

As usual, I don't claim much originality except, maybe, in details. Daniel Ellsberg described the experiment below, as well as another. He references Chipman as having conducted experiments much like these. (Although Ellsberg's paper is oft cited and has been republished, Daniel Ellsberg is probably best known for having leaked The Pentagon Papers to the New York Times and others. Nixon's "plumbers" illegally broke into and searched Ellsberg's psychiatrist's office.)

2.0 The Protocol
The experimenter shows the test subject two urns, urn I and urn II. The test subject is shown that urn I is empty. The experimenter truthfully assures the test subject that urn II contains 8 balls, with some or none of them red and the remainder black. The test subject sees the experimenter put one red and one black ball in urn II. The experimenter also puts five red and five black balls into urn I in the test subject's presence. The urns are shaken.

So the test subject knows that urn number I contains 5 red and 5 black balls. Urn number II contains 10 balls. All are either red or black. At least one is black, and at least one is red.

The experimenter flips two coins so as to offer a gamble to the test subject. The coin flipping ensures the probability of offering each gamble is one in four. The gambles are described to the test subject:
  • Gamble A: You pay $5 for a draw from urn number I. You choose before the draw whether to play red or black. If a ball is drawn of your color, you receive a payout of $10.
  • Gamble B: You pay $5 for a draw from urn number II. You choose before the draw whether to play red or black. If a ball is drawn of your color, you receive a payout of $10.
  • Gamble C: You pay $5. You choose urn number I or urn number II. A ball is drawn from the urn you selected. If the ball is red, you receive $10.
  • Gamble D: You pay $5. You choose urn number I or urn number II. A ball is drawn from the urn you selected. If the ball is black, you receive $10.

Each test subject goes exactly once, and no test subject is able to observe previous plays by other test subjects (so urn number II cannot be sampled by a test subject).

The hypothesis is that in gambles A and B, statistically equal numbers of people will choose each color, while in gambles C and D, people will prefer to choose urn number I.
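A preference for urn I in both gamble C and gamble D cannot be rationalized by any single subjective probability of drawing red from urn II. A few lines of code (a sketch of my own, not part of the protocol) confirm the point:

```python
from fractions import Fraction as F

# Urn II holds 10 balls with at least one red and one black, so
# any candidate subjective probability p = Pr[red from urn II]
# ranges over 1/10, ..., 9/10. Urn I pays red or black each with
# probability 1/2. Strictly preferring urn I when betting on red
# (gamble C) requires p < 1/2; strictly preferring urn I when
# betting on black (gamble D) requires 1 - p < 1/2. No p does both.
p_urn_I = F(1, 2)

consistent = []
for k in range(1, 10):
    p = F(k, 10)
    prefers_I_on_red = p_urn_I > p        # gamble C
    prefers_I_on_black = p_urn_I > 1 - p  # gamble D
    if prefers_I_on_red and prefers_I_on_black:
        consistent.append(p)

print(consistent)  # -> [] : no such p exists
```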

3.0 To Do
  • Demonstrate mathematically that no assignments of probability in urn number II are compatible with the hypothetical behavior.
  • Decide on a sample size. Perhaps a sequential test can be defined in which the sample size is not known beforehand.
  • Read Fox and Tversky (1995) and Chipman (1960). Where else is Ellsberg referenced?

References
  • J. S. Chipman, "Stochastic Choice and Subjective Probability", in Decisions, Values and Groups (edited by D. Willner), Pergamon Press (1960)
  • Daniel Ellsberg, "Risk, Ambiguity, and the Savage Axioms", Quarterly Journal of Economics, V. 75, N. 4 (Nov. 1961): 643-669
  • Craig R. Fox and Amos Tversky, "Ambiguity Aversion and Comparative Ignorance", Quarterly Journal of Economics, V. 110, N. 3 (1995): 585-603

Tuesday, December 23, 2008

Minsky Versus Sraffa

Kevin "Angus" Grier reminisces about Hyman Minsky's dislike for Piero Sraffa. But he doesn't recall the points at issue. Minsky expressed his views in print:
"Given my interpretation of Keynes (Minsky, 1975, 1986) and my views of the problems that economists need to address as the twentieth century draws to a close, the substance of the papers in Eatwell and Milgate (1983) and the neoclassical synthesis are (1) equally irrelevant to the understanding of modern capitalist economies and (2) equally foreign to essential facets of Keynes's thought. It is more important for an economic theory to be relevant for an understanding of economies than for it to be true to the thought of Keynes, Sraffa, Ricardo, or Marx. The only significance Keynes's thought has in this context is that it contains the beginning of an economic theory that is especially relevant to understanding capitalist economies. This relevance is due to the monetary nature of Keynes's theory.

Modern capitalist economies are intensely financial. Money in these economies is endogenously determined as activity and asset holdings are financed and commitments of prior contracts are fulfilled. In truth, every economic unit can create money - this property is not restricted to banks. The main problem a 'money creator' faces is getting his money accepted...

...The title of this session, 'Sraffa and Keynes: Effective Demand in the Long Run', puzzles me. Sraffa says little or nothing about effective demand and Keynes's General Theory can be viewed as holding that the long run is not a fit subject for study. At the arid level of Sraffa, the Keynesian view that effective demand reflects financial and monetary variables has no meaning, for there is no monetary or financial system in Sraffa. At the concrete level of Keynes, the technical conditions of production, which are the essential constructs of Sraffa, are dominated by profit expectations and financing conditions." -- Hyman Minsky "Sraffa and Keynes: Effective Demand in the Long Run", in Essays of Piero Sraffa: Critical Perspectives on the Revival of Classical Theory (edited by Krishna Bharadwaj and Bertram Schefold), Unwin-Hyman (1990)
I gather, from second or third-hand accounts, that debates along these lines became quite acrimonious at the annual summer school in Trieste during the 1980s. I've always imagined Paul Davidson and Pierangelo Garegnani would be the most vocal advocates of the extremes in these debates. And I think of Jan Kregel, Edward Nell, and Luigi Pasinetti as being somewhere in the middle, going off in different directions. I don't know much about monetary circuit theory, but such theory may provide an approach to integrating money into Sraffianism.

Of course, Minsky's theories and Davidson's proposals for national and international reforms are of great contemporary relevance.

Monday, November 10, 2008

SDM: Path-Dependence and Instability

I recently read Peter Dorman's "Waiting for an Echo: The Revolution in General Equilibrium Theory and The Paralysis in Introductory Economics" (Review of Radical Political Economics, V. 33 (2001): pp. 325-333). Dorman claims that, in teaching introductory microeconomics, General Equilibrium Theory (GET) is "one of the back-of-the-book chapters we rarely get to." And if GET is taught, the teaching fails to reflect a "virtual revolution in GET during the past quarter-century". His thesis is that these developments in GET can and should be taught in introductory microeconomics classes.

The Sonnenschein-Debreu-Mantel theorem is one of these developments. This theorem states that almost any excess demand curves in markets for individual goods can be justified by aggregating over individual excess demands. Theory imposes only Walras' law, homogeneity of degree zero, and a technical continuity condition. No other restrictions need arise on the shape of aggregate demand curves.

Why are the SDM results exciting? They imply the general possibility of multiple equilibria and instability. Or at least, that's what I have taken from the literature. At first, I thought Dorman's take on the SDM results idiosyncratic. He says that they show the "path-dependence instability of general equilibrium" and the indeterminacy of equilibrium:
"In general equilibrium, each action that alters the distribution of resources among agents (and that would be just about anything) also alters the equilibrium vector of prices. It is not possible to identify an equilibrium separate from the actions individuals take either in pursuit or in utter ignorance of it."
And he writes:
"The first task facing a principles instructor is to ignore the scholarly debate that has surrounded S-D-M. The original authors demonstrated that out-of-equilibrium exchanges altered the distribution of resources, and, since different individuals have different preferences, also altered the general equilibrium itself. Since then, researchers have been investigating the exact extent of preference differentiation under which this result would hold. This, it seems to me, is an utterly arid line of investigation, and it has no meaningful implications for nonspecialists."

I have heard of indeterminacy, but had not thought of it in the context of the SDM. As I understand the instability implications of the SDM results, they are explored in the context of tâtonnement dynamics. How then, can one talk about path dependence here?

I did come up with some justification after some thought. The SDM results show that any dynamics is possible in GET. And I know of an interesting example of chaos in which the sensitive dependence on initial conditions is connected to a particular fractal structure. Newton's method is a numerical method for solving non-linear equations. One can think of Newton's method as a dynamical system for iteratively mapping a point in the complex plane to a root of an equation, when the method converges. Polynomials, for example, have multiple roots. Color the plane by the roots to which Newton's method maps each point. All points that map to a given root are the same color. For certain simple polynomials, you will have drawn a fractal. Thus, in certain regions, any infinitesimal change in the initial conditions can cause this dynamical method to tend towards a different equilibrium. This property is independent of any claim that multiple equilibria lie along a continuum.

Since, according to the SDM results, any dynamics is possible, I guess that some sort of dynamics like I have described for Newton's method is possible in GET. And so one can say that the SDM results show the possibility of path-dependence in economics.
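The Newton's-method example can be sketched concretely (my own illustration of the point in the text) for f(z) = z^3 - 1: color a coarse grid of starting points by the root each converges to, and count adjacent grid cells lying in different basins, the places where an infinitesimal change in initial conditions changes the equilibrium reached:

```python
import cmath

def newton_root(z, max_iter=200, tol=1e-10):
    # Newton's method for f(z) = z^3 - 1: z -> z - f(z)/f'(z)
    for _ in range(max_iter):
        if abs(z) < tol:  # derivative vanishes at the origin; bail out
            return None
        z_next = z - (z**3 - 1) / (3 * z**2)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return None

# The three cube roots of unity.
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def basin(z):
    r = newton_root(z)
    if r is None:
        return None
    return min(range(3), key=lambda k: abs(r - roots[k]))

# Color a coarse grid over [-2, 2] x [-2, 2] by basin. Adjacent
# cells with different colors lie near the fractal basin boundary.
n = 40
grid = {}
for i in range(n):
    for j in range(n):
        z = complex(-2 + 4 * i / n, -2 + 4 * j / n)
        grid[(i, j)] = basin(z)

boundary_pairs = sum(
    1
    for i in range(n - 1)
    for j in range(n)
    if grid[(i, j)] is not None
    and grid[(i + 1, j)] is not None
    and grid[(i, j)] != grid[(i + 1, j)]
)
print("basins seen:", sorted(set(v for v in grid.values() if v is not None)))
print("horizontally adjacent cells in different basins:", boundary_pairs)
```

Plotting the grid colors (e.g. with matplotlib) draws the familiar three-lobed Newton fractal.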

Thursday, August 07, 2008

Hayek and Myrdal Quotations

Hayek had a good argument about the difficulties of central planning. The planners do not have a mechanism for using tacit knowledge distributed among agents. But when it came to describing how contemporary western economies work, Hayek gave up:
"It is important to realize in any investigation of the possibilities of planning that it is a fallacy to suppose capitalism as it exists today is the alternative. We are certainly as far from capitalism in its pure form as we are from any system of central planning. The world of today is just interventionist chaos." -- F. A. Hayek (1948). "Socialist Calculation", in Individualism and Economic Order

On a different topic entirely - I am amused by this Myrdal quote:
"It has been suggested that if one tried to construct a consistent system from Marshall's footnotes and reservations, one would arrive at something very different from the Marshallian system. But it seems to me that if the job were done critically, one would not arrive at any system at all." -- Gunnar Myrdal, The Political Element in the Development of Economic Theory (Trans. by Paul Streeten) pp. 127-128
One such reservation is Appendix H in the eighth edition of Principles of Economics, titled "Limitations of the use of statical assumptions in regard to increasing returns".

Sunday, July 27, 2008

Two Roads Diverged In A Yellow Wood, And Sorry I Could Not Travel Both And Be One Traveler

1.0 Introduction
Brian Arthur and Paul David, two teachers at Stanford about a decade ago, have attracted a certain amount of popular attention with the concept of path dependence. Arthur, for example, has had a certain amount of influence on policy. This post is an attempt to explain the concept, primarily as it applies to stochastic processes. Path dependence is one way of formalizing the idea that history matters.

2.0 A Stochastic Process
Path dependence relates to events economists choose to model as random. This modeling choice does not imply that economists think such events are necessarily the result of the modeled agents acting capriciously, irrationally, or mistakenly. Consider such childhood games as Odds and Evens or Paper-Rock-Scissors. The optimal strategy for each player is to choose their move randomly. The winner of such games will vary randomly. Notice that apart from the players' choices, these games are deterministic. No dice are being rolled or cards shuffled.

A stochastic process is merely an indexed set of random variables:
{ X( 1 ), X( 2 ), X( 3 ), ... }
The index often represents time. The value of a given one of these random variables is frequently referred to as the state of the process at that time.

I consider an example of a path-dependent stochastic process that does not exhibit certain other properties. This stochastic process can be in one of eight states, {Start, B, C, D, E, F, G, H}. It begins in the Start state. State transition probabilities are indicated by the fractions in Figure 1. For example, consider the probability distribution for the state of the process at the second time step, X( 2 ). Figure 1 shows that if the process is in the Start state, the probability that it will transition to state B is 1/2. Likewise, given that it is in the Start state, the probability that it will transition to state C is 1/2. Hence the probability distribution of X( 2 ) is:
Pr[ X( 2 ) = B ] = 1/2
Pr[ X( 2 ) = C ] = 1/2

Notice that the probabilities leading out from each state total unity. It is left as an exercise for the reader to confirm that the probability distribution of X( 3 ) is as follows:
Pr[ X( 3 ) = Start ] = 1/3
Pr[ X( 3 ) = B ] = 1/6
Pr[ X( 3 ) = C ] = 1/6
Pr[ X( 3 ) = D ] = 1/6
Pr[ X( 3 ) = G ] = 1/6
I deliberately created this example to exhibit a certain symmetry for the transient states (defined below).

Figure 1: Markov Process State Space
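As a check on the arithmetic, the exercise can be done in code. Only the transitions out of the Start state are stated explicitly in the text; every other row of the matrix below is a hypothetical completion of Figure 1, chosen to respect the symmetry just mentioned:

```python
from fractions import Fraction as F

# States indexed: Start, B, C, D, E, F, G, H. The Start row is
# given in the text; the remaining rows are assumed, picked so
# the transient states are symmetric and {D, E, F} and {G, H}
# form closed sets.
states = ["Start", "B", "C", "D", "E", "F", "G", "H"]
h, t = F(1, 2), F(1, 3)
P = [
    [0, h, h, 0, 0, 0, 0, 0],  # Start (given in the text)
    [t, 0, t, t, 0, 0, 0, 0],  # B (assumed)
    [t, t, 0, 0, 0, 0, t, 0],  # C (assumed)
    [0, 0, 0, 0, h, h, 0, 0],  # D (assumed)
    [0, 0, 0, h, 0, h, 0, 0],  # E (assumed)
    [0, 0, 0, h, h, 0, 0, 0],  # F (assumed)
    [0, 0, 0, 0, 0, 0, h, h],  # G (assumed)
    [0, 0, 0, 0, 0, 0, h, h],  # H (assumed)
]

def step(dist):
    # One step of the chain: dist_next[j] = sum over i of dist[i] * P[i][j]
    return [sum(dist[i] * P[i][j] for i in range(8)) for j in range(8)]

x2 = step([1, 0, 0, 0, 0, 0, 0, 0])  # distribution of X(2)
x3 = step(x2)                        # distribution of X(3)
for s, p in zip(states, x3):
    if p:
        print(f"Pr[X(3) = {s}] = {p}")
```

Running this reproduces the distribution of X( 3 ) given above.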

This process exhibits certain properties that are particularly simple, as well as some properties that complicate analysis. Notice that the state transition properties are invariant across time. Given that the process is in the Start state, the probability that it will transition to state B is 1/2, no matter at what time the process may be in the Start state. It does not matter whether we are considering the initial time step or some later time when the process happens to have returned to the Start state.

Furthermore, the process is memoryless. State transition probabilities depend only on the current state, not the history with which the process reached the current state. This property of memorylessness is known as the Markov property. This example is a Markov process.

The Markov property and the assumption of time-invariant state transition probabilities are simplifying assumptions. One might think that relaxing these assumptions is one way of showing that "history matters." Since, as will be explained, this example exhibits path dependence, violations of these assumptions are clearly not necessary for path dependence. And Margolis and Liebowitz are incorrect:
"In probability theory, a stochastic process is path dependent if the probability distribution for period t+1 is conditioned on more than the value of the system in period t. ... path independence means that it doesn't matter how you get to a particular point, only that you got there." -- Stephen E. Margolis and S. J. Liebowitz (1988)

An interesting classification of sets of states is available for Markov processes, that of transient and absorbing states. Consider the states {Start, B, C}. By assumption, the process starts in a state within this set. But eventually the process will lie in a state outside this set. Once this happens, the process will never return to this set. States Start, B, and C are known as transient states. On the other hand, consider the states {D, E, F}. Once the process is in a state in this set, the process will never depart from a state in the set. Furthermore, if the process is in a state in this set, it will eventually visit all other states in the set. {D, E, F} is a set of absorbing states. This is not the unique set of absorbing states for this process. {G, H} is also a set of absorbing states.

Consider the problem of estimating the probability distribution over the states at some large number of time steps, say X(10,000), after the start. The probability that the process is in a transient state is negligible. One might be tempted to estimate X(10,000) by the proportion of time steps that a single realization of the process spends in each state. A realization might be:
Start, B, D, E, F, D, E, F, E, F, D
The column for the first realization in Table 1 shows the proportion of time spent in each state in this realization as the number of time steps in the realization increases without bound. (Although transient states are observed in a realization, the proportion of time spent in transient states in such an infinite sequence is zero.)
Table 1: Limiting State Probabilities

State     From One       From Another    Over All
          Realization    Realization     Realizations
Start     0              0               0
B         0              0               0
C         0              0               0
D         1/3            0               1/6
E         1/3            0               1/6
F         1/3            0               1/6
G         0              1/2             1/4
H         0              1/2             1/4

If the process were ergodic, the limiting distribution in the first column in Table 1 would be a good estimate of the probability distribution for the state of the process at some time after transient behavior was likely to be completed. In this case, though, another realization might yield the limiting distribution shown for the second realization in Table 1. The probability distribution, in fact, at a given time, as that time increases without bound would have positive probabilities for all non-transient states. The last column in Table 1 shows this limiting probability distribution.

In general, estimates of parameters of the underlying probability distributions can be made across multiple realizations of the process or from a single realization. In a nonergodic process, such estimates will not converge as the number of realizations or the length of the single realization increases.

Another definition of ergodicity involves what states can be eventually reached from each state. In an ergodic process, each state can be eventually reached, with positive probability. In the example, all states can be reached from the transient states Start, B, and C. But states Start, B, C, G and H cannot be reached from states D, E, and F. Suppose an external occurrence changes the state transition probabilities. If the process were previously ergodic, this change could not possibly result in states arising that were previously unobserved.

To re-iterate, a nonergodic process does not have an unique limiting probability distribution. The applicable limiting distribution of any realization of the process depends on the history of that particular realization. Thus, the process exhibits path dependence.
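A simulation makes the point concrete. The transition probabilities below are, again, a hypothetical completion of Figure 1 (only the transitions out of Start are given in the text). Each long realization ends up trapped in one closed set, so time averages from different realizations need not agree:

```python
import random

random.seed(2)

# Transient states Start, B, C and two closed sets {D, E, F}
# and {G, H}. Only the Start row is from the text; the rest is
# an assumed completion of Figure 1.
P = {
    "Start": [("B", 0.5), ("C", 0.5)],
    "B": [("Start", 1 / 3), ("C", 1 / 3), ("D", 1 / 3)],
    "C": [("Start", 1 / 3), ("B", 1 / 3), ("G", 1 / 3)],
    "D": [("E", 0.5), ("F", 0.5)],
    "E": [("D", 0.5), ("F", 0.5)],
    "F": [("D", 0.5), ("E", 0.5)],
    "G": [("G", 0.5), ("H", 0.5)],
    "H": [("G", 0.5), ("H", 0.5)],
}

def draw(state):
    # Sample the next state from the current state's transition row.
    r, acc = random.random(), 0.0
    for nxt, p in P[state]:
        acc += p
        if r < acc:
            return nxt
    return P[state][-1][0]

def time_average(n_steps):
    # Fraction of time one realization spends in each state.
    counts = {s: 0 for s in P}
    state = "Start"
    for _ in range(n_steps):
        state = draw(state)
        counts[state] += 1
    return {s: c / n_steps for s, c in counts.items()}

a = time_average(100_000)
b = time_average(100_000)
# Each realization puts essentially all its mass on {D, E, F} or
# on {G, H}; which one depends on the realization's history.
print({s: round(p, 3) for s, p in a.items() if p > 0.001})
print({s: round(p, 3) for s, p in b.items() if p > 0.001})
```

Averaging over many independent realizations instead would recover the last column of Table 1, which no single realization's time average matches.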

Another branch of mathematics deals with deterministic dynamical systems. Such systems are typically defined by systems of differential or difference equations. Sometimes the solutions of such systems can be such that trajectories in phase space diverge, no matter how close they start. This is known as sensitive dependence on initial conditions, popularly known as the "butterfly effect." (The irritating mathematician character in Jurassic Park mumbles about this stuff.) Notice that path dependence is defined above without drawing on this branch of mathematics. Here, too, Margolis and Liebowitz are mistaken:
"Path dependence is an idea that spilled over to economics from intellectual movements that arose elsewhere. In physics and mathematics the related ideas come from chaos theory. One potential of the non-linear models of chaos theory is sensitive dependence on initial conditions: Determination, and perhaps lock-in, by small, insignificant events." -- Stephen E. Margolis and S. J. Liebowitz (1988)

3.0 Conclusion
Why should economists care about the mathematics of path dependence? First, models of path dependence suggest how to construct economic models that overcome frequently criticized characteristics of neoclassical economics. Neoclassical models often depict equilibria. These equilibria are end states of path independent processes. Ever since Veblen, some economists have objected to such models as being teleological and acausal. Models in which path dependence can arise are causal and show neoclassical economics to be a special case.

This claim that neoclassical economics is merely a special case, dependent on a special case assumption of ergodicity, may lead one to wonder about connections with a theory claimed to be the "General Theory." As a matter of fact, Paul Davidson claims that a consideration of nonergodicity is useful in explicating the economics of Keynes. So a second reason economists should be concerned with nonergodicity and path dependence is to further understand possible approaches to macroeconomics.

Third, some economists, e.g. Brian Arthur, have developed specific models of technological change that exhibit nonergodicity. These models, including Polya urn models, show how increasing returns can act as positive feedback and lead to path dependence. Inasmuch as these models cast light on economic history, path dependence can be useful for empirical work.
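A minimal sketch of the classic Polya urn (the standard model in this literature; I am not claiming this is Arthur's exact specification) shows the mechanism:

```python
import random

random.seed(42)

# Polya urn: start with one ball of each color; each period,
# draw a ball at random and return it along with another of the
# same color. Early draws are decisive: the share of red balls
# converges, but to a limit that differs from run to run.
def polya_share(n_draws):
    red, black = 1, 1
    for _ in range(n_draws):
        if random.random() < red / (red + black):
            red += 1
        else:
            black += 1
    return red / (red + black)

shares = [round(polya_share(10_000), 3) for _ in range(5)]
print(shares)  # five runs; the limiting shares scatter across [0, 1]
```

Each run is a self-reinforcing history: the more a color has been drawn, the more likely it is to be drawn again, a simple image of increasing returns locking in one technology among several.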

The theory of path dependence raises an empirical question. Are nonergodic stochastic processes useful for modeling any, or any important, economic time series? The evolution of the QWERTY keyboard seems to be an example of a path dependent process. Apparently there were several 19th century typewriter keyboard layouts. QWERTY became the dominant one, seemingly even after the jamming problem that was the rationale for its introduction had been overcome. The historical evidence suggests that with different early choices, one of the other arrangements could have become dominant. The chances that some one or other of these early arrangements can now become dominant seem quite negligible.

This discussion has been carried out without mentioning efficiency.

References
  • W. Brian Arthur (1989) "Competing Technologies, Increasing Returns, and Lock-In by Historical Events", Economic Journal, V. 99: 116-131.
  • W. Brian Arthur (1990) "Positive Feedbacks in the Economy", Scientific American, 262 (February): 92-99.
  • W. Brian Arthur (1996) "Increasing Returns and the New World of Business", Harvard Business Review.
  • Paul A. David (1985) "Clio and the Economics of QWERTY", American Economic Review, V. 75, N. 2 (May)
  • Paul A. David "Path Dependence, Its Critics and the Quest for 'Historical Economics'"
  • Paul Davidson (1982-1983) "Rational Expectations: A Fallacious Foundation for Studying Crucial Decision-Making Processes", Journal of Post Keynesian Economics, V. 5 (Winter): 182-197.
  • Stephen E. Margolis and S. J. Liebowitz (1988) "Path Dependence", The New Palgrave Dictionary of Economics and Law, MacMillan.

Sunday, June 29, 2008

It's Herd Behavior, Uh Huh, It's Evolution, Baby

Anybody interested in institutional economics should be interested in evolutionary theory, a theory that I can stand to learn more about. The interest in evolution among institutionalists goes back to Veblen. A more-or-less mid-twentieth-century expression of interest can be seen in Ayres's biography of Thomas Huxley, also known as "Charles Darwin's bulldog". Geoffrey Hodgson is a current institutionalist interested in evolution. What evolves in economies? I suggest organizational forms, business processes, and technology, at least.

I never saw anything interesting when operating Tierra on my old computer. Perhaps I understand neither the assembly language nor the visualization well-enough. Or perhaps I should have designed experiments and let it operate for more generations. I was never into Core Wars either. John Conway's Game of Life was more my speed. I don't seem to have executables for any of these for my OS X Macintosh.

But I think Thearling and Ray (1994) describe a neat idea. In Tierra, programs composed of machine instructions reproduce, perhaps with mutations. Memory is not protected, and programs can overwrite one another's code. An ability to more successfully protect one's own code and data and overwrite others is selected for.

One can do repeatable experiments with a computer program. Each generation can be saved, and the simulation can be rerun from any point in time, with random number generators restarted with new seeds. Lenski et al (2003) report such experiments with Avida, a computer simulation much like Tierra. In Avida, evolving computer programs collect energy to run their code. Programs that can do advanced logical operations are more fit. Lenski et al show that the evolution of a complex feature may depend on the prior evolutionary history of an organism providing the potential of the last few steps, even if previous mutations do not increase fitness.

I was surprised to find last week not only that repeatable experiments with evolution have been performed on simulations, but that Richard Lenski has been performing such a repeatable experiment on real-world organisms - namely, E. coli - since 1988. (I must have missed Carl Zimmer's article of 26 June 2008, in The New York Times, on the Long-Term Evolution Experiment (LTEE).) Anyway, Blount, Borland, and Lenski's 2008 article reports on recent results.

In the LTEE, populations of each generation are isolated in a solution containing glucose for the E. coli to eat. The isolated solution, I guess, acts like the simulated core memory in Tierra. And the E. coli of any generation and evolutionary history can be frozen and restarted, just as an image of the computer core memory in a simulation run can be saved and reloaded. One run yielded a mutation that seems to have surprised Lenski. This mutation allows the E. coli to thrive on citrate, whatever that is, in the solution even "under oxic conditions". The ability to sample previous generations and to look at other isolated population histories starting from the same initial conditions allows Lenski and his colleagues to understand something about this mutation even before genetic sequencing. It is not the result of a single gene mutating, but is dependent on prior mutations in the history. These prior mutations may not have increased fitness themselves, but they prepared the E. coli to become dramatically better adapted to their specific environment after a couple more mutations. History matters.

References
  • Zachary D. Blount, Christina Z. Borland, and Richard E. Lenski (2008). "Historical Contingency and the Evolution of a Key Innovation in an Experimental Population of Escherichia coli", Proceedings of the National Academy of Sciences, V. 105, N. 23: 7899-7906.
  • Richard E. Lenski, Charles Ofria, Robert T. Pennock, and Christoph Adami (2003). "The Evolutionary Origin of Complex Features", Nature, V. 423: 139-144.
  • Kurt Thearling and Thomas S. Ray (1994). "Evolving Multi-cellular Artificial Life", in Artificial Life IV (ed. by Rodney A. Brooks and Pattie Maes), MIT Press.

Sunday, May 04, 2008

Letters From Soros

Last month, I noted resemblances between Soros' concept of "reflexivity" and Davidson's use of non-ergodicity to formalize the notion of a model economy set in historical time. Davidson drew this point to Soros' attention over a decade ago. Soros has commented on this resemblance.

The following letter has an Open Society Institute letterhead:
February 28, 1997

Professor Paul Davidson
Holly Chair of Excellence in Political Economy
The University of Tennessee Knoxville
College of Business Administration
Department of Economics
Stokely Management Center
Knoxville, Tennessee 37996-0550

Dear Professor Davidson,

Thank you for sending me your book Economics for a Civilized Society. I found your comments on Samuelson's ergodic hypothesis very pertinent.

Yours Sincerely,

George Soros
From the 15-21 March 1997 issue of The Economist:
Sir - In "Palindrome repents" (January 25th) you accuse me of ignorance of economic theory. In particular, you say that my "claim that economics is inherently flawed on some deep epistemological level is just embarrassing." Is it?

Economics aspires to the status of a hard science. Specifically, it seeks to establish universally valid laws similar to 19th-century physics. For this purpose it relies on the concept of equilibrium, similar to the resting place of the pendulum, which is the same irrespective of any temporary perturbation. Paul Samuelson, an economist, called this the "ergodic hypothesis" and considered it indispensable to making economics a hard science.

The trouble is that economics cannot be made into a hard science, because of the reflexive interaction between the participants' thinking and the actual state of affairs. The interaction does not have a determinate outcome, because the outcome is contingent on the participants' expectations, and the participants' decisions do not merely passively discount the future but also actively help to shape it. There is a two-way feedback mechanism that does not lead to a predetermined resting place, but keeps a historical process in motion. Economic theory can protect the false analogy with 19th-century physics only by eliminating reflexivity. It does so by assuming demand and supply as independently given. The result is an axiomatic system that has little relevance to the real world.

You are correct to claim that, in practice, economists have learnt this, in order to deal with the real world. Alan Greenspan's recent Humphrey-Hawkins testimony is a brilliant exercise in reflexivity. But the theory has never been discarded and it serves as the scientific underpinning for the prevailing belief in the magic of the marketplace.

You are also right to claim that markets do not reign supreme; but you cannot deny that there is a powerful body of opinion that passionately believes that they should. You are plain wrong in asserting that I do not know the "big difference" between laisser-faire and totalitarian ideologies. I stated it explicitly in my Atlantic Monthly article and have been guided by it in my philanthropic activities. I can tolerate personal attacks but I must object when they are used to obfuscate valid arguments.

New York
George Soros

Wednesday, April 23, 2008

Soros on Historical Time

The New York Times a couple of weeks ago profiled George Soros, the billionaire financial speculator, philanthropist, and student of Karl Popper's ideas. According to this profile, Soros would like to have an impact on the discipline of economics:
"Now in his eighth decade, [Soros] yearns to be remembered not only as a great trader but also as a great thinker. The market theory he has promoted for two decades and espoused most of his life - something he calls 'reflexivity' - is still dismissed by many economists. The idea is that people's biases and actions can affect the direction of the underlying economy, undermining the conventional theory that markets tend toward some sort of equilibrium." -- Louise Story (2008)
Stiglitz is quoted as saying that Soros might become successful at his goal:
"But Joseph E. Stiglitz, a professor at Columbia who won the Nobel for economics in 2001, said Mr. Soros might still meet success. 'With a slightly different vocabulary these ideas, I think, are going to become more and more part of the center,' said Mr. Stiglitz, a longtime friend of Mr. Soros." -- Louise Story (2008)
I suppose one could debate about whether mainstream economics is as open to these ideas as Stiglitz suggests.

Soros first put these ideas into print about a decade ago, as far as I am aware. Here is his definition of reflexivity from that time:
"In the case of scientists, there is only a one-way connection between statements and facts. The facts about the natural world are independent of the statements that scientists make about them... If a statement corresponds to the facts, it is true; if not, it is false. Not so in the case of thinking participants. There is a two-way connection. On the one hand, participants seek to understand the situation in which they participate. They seek to form a picture that corresponds to reality. I call this the passive or cognitive function. On the other hand, they seek to make an impact, to mold reality to their desires. I call this the active or participating function. When both functions are active at the same time, I call the situation reflexive...

...When both functions are at work at the same time, they may interfere with each other. Through the participating function, people may influence the situation that is supposed to serve as an independent variable for the cognitive function. Consequently, the participants' understanding cannot qualify as objective knowledge...

...Our expectations about future events do not wait for the events themselves; they may change at any time, altering the outcome. That is what happens in financial markets all the time... But reflexivity is not confined to financial markets; it is present in every historical process. Indeed, it is reflexivity that makes a process truly historical...

A truly historical event does not just change the world; it changes our understanding of the world - and that new understanding, in turn, has a new and unpredictable impact on how the world works." -- George Soros (1998: 6-8)
And Soros uses these ideas to criticize the use of equilibrium models in economics.
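One can illustrate the contrast with equilibrium in a deliberately crude simulation. In the hypothetical difference equation below (my invention for illustration, not a model Soros himself writes down), participants extrapolate the recent price trend, and the price then moves partway toward their expectation. Beliefs thus help shape the outcome they try to forecast, and, depending on the strength of the feedback, the price either settles down or never finds a resting place.

```python
# A toy two-way feedback between expectations and prices. The parameter
# values are made-up numbers chosen only to show the two regimes.
def simulate(steps, momentum, pull, p0=100.0, p1=101.0):
    """Participants expect the recent trend, amplified by 'momentum',
    to continue; the price moves a fraction 'pull' toward that
    expectation each period."""
    prices = [p0, p1]
    for _ in range(steps):
        expected = prices[-1] + momentum * (prices[-1] - prices[-2])
        prices.append(prices[-1] + pull * (expected - prices[-1]))
    return prices

# Weak feedback: the price change dies away and the price settles down.
damped = simulate(50, momentum=1.0, pull=0.5)

# Strong feedback: the trend feeds on itself; there is no resting place.
explosive = simulate(50, momentum=3.0, pull=0.5)
```

In this sketch the period-to-period price change is scaled by the factor `momentum * pull` each period, so the process converges when that product is below one and diverges when it exceeds one.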

To me, Soros is expressing the same concept that Post Keynesians call historical time. Post Keynesians reject neoclassical economics. In the failed neoclassical approach, the economy tends towards an economic equilibrium pre-determined by the objective data of tastes, technology, and endowments. Both Soros and the Post Keynesians, with their emphases on expectations, question the objectivity of the data. Today, I turn to Jan Kregel for a statement of the Post Keynesian position:
"There can be no tendency to equilibrium based on a relation between expectations and the objective data of what the consumer will demand and the price he will pay which describes the conditions of equilibrium because the incomes available to consumers will be determined ultimately by the very decisions taken by entrepreneurs on the basis of these expectations.

The post Keynesian approach is thus influenced by Keynes' insistence that the level of output and employment cannot be considered as objective data determining the conditions of equilibrium because they will be endogenously determined by entrepreneurs' decisions... Keynes is concerned with the role of expectations in the coordination of individual production plans in a society consisting of several independent producers whose expectations determine the means available to satisfy an uncertain multiplicity of future demands. Expectations themselves determine the objective facts of the conditions of equilibrium... The problem is not whether the objective data necessary to achieve equilibrium will be reflected in subjective data available to the individual, but the very definition of the objective data. Indeed, even its objectivity is questioned." -- Jan Kregel (1986).
And these ideas accompany a concern with processes set in history. (I have previously mentioned Paul Davidson's use of the concept of non-ergodicity in a formalization of the idea of a process set in history.)

Soros and Post Keynesians like Davidson draw similar practical conclusions from developments of these ideas. The financial system, including internationally, can be a source of economic instability. We need to design new international institutions and conventions to govern finance. The system that has evolved since Richard Nixon abolished the Bretton Woods system doesn't work, as international economic crisis succeeds international economic crisis.

References
  • J. A. Kregel (1986). "Conceptions of Equilibrium: The Logic of Choice and the Logic of Production", in Subjectivism, Intelligibility, and Economic Understanding: Essays in Honor of Ludwig M. Lachmann on his Eightieth Birthday (ed. by Israel M. Kirzner), New York University Press.
  • George Soros (1998). The Crisis of Global Capitalism: Open Society Endangered, Public Affairs.
  • Louise Story (2008). "The Face of a Prophet: Soros Craves Respect for His Theories, Not Just His Money", New York Times (11 April), Business, p. 1.

Monday, March 03, 2008

Goodbye to "Rational Expectations"

Consider an economic model in which the agents make decisions based on their understanding of the model. For ease of exposition, assume the models within the heads of the agents all have the same form, and that that form matches the actual model. The agents must estimate the parameters of the model.

Suppose the agents have made some estimate of the model parameters. Their decisions result in the parameters of the actual model being set. And the agents use the data generated by the actual model to make their estimates. A rational expectations equilibrium is said to result when the agents' estimates match the actual parameters. A rational expectations equilibrium can thus be thought of as a fixed point of the function mapping the agents' estimated parameters to the actual parameters.
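The fixed-point idea can be shown with a deliberately simple sketch. The linear map below is hypothetical (the coefficients 0.5 and 1.0 are invented for illustration); iterating it from any initial guess converges to the rational expectations equilibrium, the parameter value at which estimates and outcomes agree.

```python
def actual_parameter(estimate):
    """Hypothetical map from the agents' estimated parameter to the
    parameter their decisions actually generate. The coefficients are
    made-up numbers, not taken from any particular economic model."""
    return 0.5 * estimate + 1.0

def find_equilibrium(guess=0.0, tolerance=1e-10):
    """Iterate estimate -> actual parameter until the two agree:
    a rational expectations equilibrium is a fixed point of the map."""
    while abs(actual_parameter(guess) - guess) >= tolerance:
        guess = actual_parameter(guess)
    return guess

# Here the fixed point solves b = 0.5*b + 1.0, that is, b = 2.
ree = find_equilibrium()
```

The iteration converges here only because the map is a contraction; nothing in the fixed-point definition itself guarantees that agents' revisions would actually lead them to the equilibrium.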

Rational expectations is often applied to models of economic time series, considered as stochastic processes. An important parameter of a stochastic process is the population mean at a given point in time. One can conceptually describe two types of sample means for a stochastic process:
  • At a single point in time across many realizations of a stochastic process
  • Across time samples for a single realization of a stochastic process
If and only if a stochastic process is ergodic, these two types of sample means converge to the same limit as the number of realizations and the number of time samples increase.

Some stochastic processes observed in real-world economies are non-stationary, for example, those with a component growing at a constant rate. Non-stationarity is sufficient for non-ergodicity, but not necessary. (For an example of a stationary but non-ergodic process, consider a Spherically Invariant Random Process (SIRP).) Hence, some real-world processes are non-ergodic.

Agents only have access to a single realization of some processes. They therefore cannot form a sample spatial average for such a process. They can only take statistics, such as a time average, from a single time series. And, if that process is non-ergodic, such a sample average will have no tendency to converge to the true model parameter, which is an average across the population of all realizations.
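This contrast can be illustrated numerically. The sketch below (a made-up example, with parameters chosen only for illustration) compares a stationary, ergodic AR(1) process with a random walk, which is non-stationary and hence non-ergodic.

```python
import random

def simulate(phi, steps, rng):
    """Generate x[t] = phi * x[t-1] + noise. With phi < 1 this is a
    stationary AR(1) process; with phi = 1 it is a random walk."""
    x, path = 0.0, []
    for _ in range(steps):
        x = phi * x + rng.gauss(0.0, 1.0)
        path.append(x)
    return path

rng = random.Random(0)
steps, runs = 20000, 2000

# Ergodic case: the time average of one long AR(1) realization and the
# ensemble average at a fixed date both approach the population mean, 0.
time_avg = sum(simulate(0.5, steps, rng)) / steps
ensemble_avg = sum(simulate(0.5, 200, rng)[-1] for _ in range(runs)) / runs

# Non-ergodic case: a single random-walk realization wanders, so its
# time average need not be anywhere near the ensemble average of 0,
# and it does not settle down as the sample lengthens.
walk = simulate(1.0, steps, rng)
walk_time_avg = sum(walk) / steps
```

An agent observing only `walk` could compute `walk_time_avg`, but that statistic tells the agent little about the population mean across realizations, which is the point of the argument above.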

So much for "rational expectations".

Reference
  • Paul Davidson (1982-1983). "Rational Expectations: A Fallacious Foundation for Studying Crucial Decision-Making Processes", Journal of Post Keynesian Economics, 5 (Winter): 182-197.