Wednesday, January 29, 2014

Economics Too Hard For Kartik Athreya

I have been trying to read Kartik Athreya's Big Ideas in Macroeconomics: A Nontechnical View. I find it quite dry. So far, it is all theory. (I guess some might quibble with that, given the overview of results from experimental economics.) There is no history of ideas and no context suggesting that those who might have developed these ideas were any more than disembodied consciousnesses. And no hint is given that whole groups of economists would find these views controversial. (Caveat: he does mention, for example, Ariel Rubinstein and Ricardo Caballero.) For Athreya, Paul Davidson, Wynne Godley, Alan Kirman, and Lance Taylor, for example, just do not exist.

I think Athreya might have misjudged his audience. He says that he is attempting to target two audiences:

  • Advanced undergraduates considering graduate school and beginning graduate students.
  • Popular readers with an interest in macroeconomics.

But a presentation of theoretical details, unleavened by history or context, will make this book a hard sell to the second audience. Maybe my opinion will change as I read further.

But I want to point out a display of ignorance of the logic of prices in general equilibrium:

"...notice there are likely to be many types of laborers involved in the production of barstools... There are also many possible input materials, and different possible production processes. Importantly, the myriad ways in which various inputs can be substituted for each other in barstool production is knowledge that can only be acquired through experience in the field.

In our W[alrasian] C[learing]H[ouse], each furniture maker will, at various prices, carefully consider all the ways in which inputs can be substituted for each other. If, for example, walnut is particularly expensive relative to oak, and oak can easily be substituted for walnut because it won't also necessitate the use of harder-tipped and more expensive saw blades, for instance, the oak will be used. In this way, the experience and almost-inevitably accumulated wisdom of those who have specialized in the production of any given product are brought to bear fully in the industry's use of inputs even though no firms are assumed to communicate with any others within the industry..." [emphasis in original]

As I have pointed out many times, prices are not indices of relative scarcity, and neoclassical economists, such as Christopher Bliss, Frank Hahn, and Paul Samuelson, have noted that the logic of general equilibrium is not that of substitution. (Andreu Mas-Colell has an accessible overview of capital theory.) When will (some) mainstream economists accept their own logic?

Monday, January 27, 2014

Impact Of Piero Sraffa On Industrial Organization

1.0 Introduction

Piero Sraffa, with his 1926 Economic Journal article on the laws of returns, had a great impact on the emerging field of Industrial Organization (I/O). For the purposes of this post, Sraffa's paper can be said to have made two major contributions:

  1. An internal critique of Marshall's theory of partial equilibrium, showing it holds only under very special conditions.
  2. Suggestions for how to analyze the wide range of markets between perfect competition and monopoly.

The first contribution is still relevant today, given how the theory of the perfectly competitive firm is still presented in introductory textbooks. One might also argue that, as the theories of imperfect and monopolistic competition were developed, they remain vulnerable to Sraffa's critique. In this post, however, I concentrate on a broad historical overview focused on the second contribution above. But Cameron Murray shows that some still find Sraffa's 1920s work of importance for contemporary theorizing about the theory of the firm.

I apologize for lacking references to the recent secondary literature. I do not think, however, that the thesis of this post will be news to historians of economics or the authors of that literature.

2.0 Selected Quotes

Sraffa articulated the need for and possibility of theories of market forms between monopoly and perfect competition:

"...when we are supplied with theories in respect to the two extreme cases of monopoly and competition as part of the equipment required in order to undertake the study of the actual conditions in the different industries, we are warned that these generally do not fit exactly one or other of the categories, but will be found scattered along the intermediate zone, and that the nature of an industry will approximate more closely to the monopolist or the competitive system according to its particular circumstances, such as whether the number of autonomous undertakings in it is larger or smaller, or whether or not they are bound together by partial agreements, etc. We are thus led to believe that when production is in the hands of a large number of concerns entirely independent of one another as regards control, the conclusions proper to competition may be applied even if the market in which the goods are exchanged is not absolutely perfect, for its imperfections are in general constituted by frictions which may simply retard or slightly modify the effects of the active forces of competition, but which the latter ultimately succeed in substantially overcoming. This view appears to be fundamentally inadmissible. Many of the obstacles which break up that unity of the market which is the essential condition of competition are not of the nature of 'frictions,' but are themselves active forces which produce permanent and even cumulative effects. They are frequently, moreover, endowed with sufficient stability to enable them to be made the subject of analysis based on statical assumptions." -- p. 542

He stated some basic ideas developed in the theory of monopolistic competition:

"The causes of the preference shown by any group of buyers for a particular firm are of the most diverse nature, and may range from long custom, personal acquaintance, confidence in the quality of the product, proximity, knowledge of particular requirements and the possibility of obtaining credit, to the reputation of a trade-mark, or sign, or a name with high traditions, or to such special features of modelling or design in the product as-without constituting it a distinct commodity intended for the satisfaction of particular needs-have for their principal purpose that of distinguishing it from the products of other firms.

What these and the many other possible reasons for preference have in common is that they are expressed in a willingness (which may frequently be dictated by necessity) on the part of the group of buyers who constitute a firm's clientele to pay, if necessary, something extra in order to obtain the goods from a particular firm rather than from any other." -- p. 544

He described what could be seen as a forerunner of the theory of kinked demand curves:

"...the forces which impel producers to raise prices are much more effective than those which impel them to reduce them; and this not merely owing to the fear which every seller has of spoiling his market, but mainly because an increase of profit secured by means of a cut in price is obtained at the cost of the competing firms, and consequently it impels them to take such defensive action as may jeopardise the greater profits secured; whereas an increase of profit obtained by means of a rise in prices not only does not injure competitors but brings them a positive gain, and it may therefore be regarded as having been more durably acquired. An undertaking, therefore, when confronted with the dual possibility of increasing its profits by raising its selling prices, or by reducing them, will generally adopt the first alternative unless the additional profits expected from the second are considerably greater." -- p. 548

Sraffa is also a forerunner of the theory of contestable markets, in which one analyzes the effects on existing firms of potential entrants into their markets.

"It should be noted that in the foregoing the disturbing influence exercised by the competition of new firms attracted to an industry the conditions of which permit of high monopolist profits has been neglected. This appeared justified, in the first place because the entrance of new-comers is frequently hindered by the heavy expenses necessary for setting up a connection in a trade in which the existing firms have an established goodwill - expenses which may often exceed the capital value of the profits obtainable; in the second place, this element can acquire importance only when the monopoly profits in a trade are considerably above the normal level of profits in the trade in general, which, however, does not prevent the prices from being determined up to that point in the manner which has been indicated."-- p. 549

I suppose I could also quote Sraffa's suggestion that developments along some of these lines would lead to models with determinate solutions. To summarize, you can see in this paper an outline of a program for the I/O field.

3.0 Impact on Economists Developing I/O

Sraffa was not a voice crying in the wilderness, ignored by economists of his day and thereafter. His paper was one contribution, among many in the 1920s, attempting to articulate the logical requirements for a theory of perfect competition. Sraffa was not even alone in expressing skepticism that one could confidently connect Marshall's theory to the empirical facts. I think of, for example, what has come to be known as the "empty economic boxes" debate.

Edward Chamberlin and Joan Robinson, with their 1933 books on, respectively, monopolistic and imperfect competition, provide an example of simultaneous discovery in I/O. Richard Kahn provided Robinson quite a bit of help with her book and was also working on the theory of imperfect competition, if I recall correctly, in his thesis. Kahn and Robinson were directly inspired by Sraffa and interacted with him in Cambridge.

Joe S. Bain and Paolo Sylos Labini provide a later example of simultaneous discovery in I/O. They develop what has become known as "old" I/O, as opposed to more game-theoretic approaches. Sylos Labini, at least, thought of himself as following a Sraffian tradition inasmuch as he was attempting to develop I/O in keeping with a revival of classical political economy. But this observation takes me into Sraffa's later work and beyond the scope of this post.

References
  • Franco Modigliani (1958). New Developments on the Oligopoly Front, Journal of Political Economy, V. 66, no. 3 (Jun.): pp. 215-232.
  • Piero Sraffa (1926). The Laws of Returns under Competitive Conditions, Economic Journal, V. 36, no. 144 (Dec.): pp. 535-550.
  • Paolo Sylos Labini (1995). Why the interpretation of the Cobb-Douglas production function must be radically changed, Structural Change and Economic Dynamics, V. 6, no. 4 (Dec.): pp. 485-504.

Monday, January 13, 2014

Dennis Robertson's "Wage Grumbles"

To simplify, the factors of production are land, labor, and capital. The marginal productivity of labor is the extra output produced by an infinitesimal increase in labor, holding the quantity of all other factors constant. What does it mean to hold the quantity of capital constant? Dennis Robertson thought about this:

"If ten men are to be set to dig a hole instead of nine, they will be furnished with ten cheaper spades instead of nine more expensive ones; or perhaps if there is no room for him to dig comfortably, the tenth man will be furnished with a bucket and sent to fetch beer for the other nine." -- Dennis Robertson (1931).

I do not know that I have ever read Robertson. But I have seen the above passage often quoted (e.g., in Miller 2000) or alluded to (e.g. in Harcourt 2014).

Anyways, here we see, in a micro-economic context, a constant quantity of capital, measured in numeraire units, with a variable form. The Cambridge Capital Controversy showed this notion to be untenable. And this quote is another demonstration that the CCC was about more than macroeconomic models with aggregate production functions, such as the Solow model of economic growth. We also see that, once, some did not find odd the idea of beer breaks.

References
  • G. C. Harcourt (2014). Cambridge-Style Criticism of the Marginal Productivity Theory of Distribution, Proceedings of the American Economic Association. Philadelphia, PA (3-5 January).
  • Richard A. Miller (2000). Ten Cheaper Spades: Production Theory and Cost Curves in the Short Run, Journal of Economic Education (Spring): pp. 119-130.
  • Dennis W. Robertson (1931). Wage-grumbles, Economic Fragments. [I DON'T KNOW THAT I EVER READ THIS.]

Saturday, January 11, 2014

Economics And Physics: A Disanalogy

John Scalzi's Cat Looks At A Hugo Award

Suppose you criticize current neoclassical teaching in introductory microeconomics. A defender might reply that it is common to teach incorrect models in introductory classes. Only when the students have that background can they go on to the more sophisticated teaching. For example, physicists teach Newtonian mechanics. They do not start with quantum dynamics and the special and general theories of relativity.

I doubt this defense. Physics can roughly state the domain of applicability of Newtonian physics - medium-sized dry goods. Physicists have had empirical success within that domain to an astonishing degree of precision. I doubt economists can point to a success where they can notice tiny variations from their predictions analogous to the deviation of the perihelion of the orbit of Mercury of 43 seconds of arc per century or the deflection of the location of stars observed during a solar eclipse. But I do not want to argue about the relative empirical success of simplified theories in economics and physics.

Rather, I want to point out a large difference in the status of such introductory theories in popular culture. A large body of literature exists, popular among schoolchildren and others, pointing out the limitations of Newtonian mechanics. I refer to Science Fiction. The speed limit imposed by the speed of light, for example, is a common trope in SF. It is true that some authors do not hold to it. But they often insist that when their characters violate the constraints of the theory, they utter some jargon: "tachyon nexus", "wormhole", "space warp". Furthermore, SF authors tell many stories where these limits play out, for example, with generation ships that may accelerate and decelerate and in which the crew are quite aware of the difference between their slowed-down local time and the time of the folks back on planet earth. I think that SF back in the forties and fifties may have been more prone to contain lectures about these matters. Nevertheless, I think many students of introductory physics are aware that the truths of Newtonian theory, which they can see played out in the laboratory, do not extend to all of the universe of physics.

I suggest even if you can find accounts in popular culture of, say, implications of game theory inconsistent with introductory microeconomics teaching, you cannot find something as popular and pervasive for economics as SF is for physics. The culture resists somebody preaching physics 101 as an explanation for everything physical.

Monday, January 06, 2014

Noah Smith Tells Us Academic Economists Are (Mostly) Ignoramuses

Noah Smith writes, "...An audience of academics wouldn't know an Austrian from a Post-Keynesian." So he creates a presentation organized around the pettiness and cronyism of economists. (His post from last year had some problems. For example, he cannot tell the difference between a Marxist and an anarchist.) If Noah wanted, he could have attended a session in which Geoff Harcourt introduced him to some Post Keynesian ideas. But maybe such knowledge would be of no help to his career.

Wednesday, January 01, 2014

Updated Paper: "On the Loss From Trade"

I have updated my paper, On the Loss From Trade, available for download from the Social Science Research Network (SSRN). Changes include:

  • Correction of a mistake in calculating price Wicksell effects.
  • Improvement in the modeling of utility-maximization so as to rationalize specific values of the interest rates on Neoclassical premises.
  • Expanded literature review, just so the reader knows that, for example, I am aware of other theories for explaining the pattern of trade, beyond or opposed to the theory of comparative advantage.
  • Lots of minor changes in wording and exposition, most of which I cannot recall offhand.

Friday, December 27, 2013

Steve Keen: Economists Are "Insufficiently Numerate"

"Curiously, though economists like to intimidate other social scientists with the mathematical rigor of their discipline, most economists do not have this level of mathematical education...

...One example of this is the way economists have reacted to 'chaos theory'... Most economists think that chaos theory has had little or no impact - which is generally true in economics, but not at all true in most other sciences. This is partially because, to understand chaos theory, you have to understand an area of mathematics known as 'ordinary differential equations.' Yet this topic is taught in very few courses on mathematical economics - and where it is taught, it is not covered in sufficient depth. Students may learn some of the basic techniques for handling what are known as 'second-order linear differential equations,' but chaos and complexity begin to manifest themselves only in 'third order nonlinear differential equations.'" -- Steve Keen (2011). Debunking Economics: The Naked Emperor Dethroned?, Revised and expanded ed. Zed Books, p. 31

The above quotes are also in the first edition. Before commenting on this passage, I want to reiterate my previously expressed belief that some economists, including some mainstream economists, understand differential and difference equations.

I misremembered this comment as being overstated for polemical purposes. But, in context, I think the claim is clear to those who know the mathematics.

I took an introductory course in differential equations decades ago. Our textbook was by Boyce and DiPrima. As I recall, we were taught fairly cookbook techniques to solve linear differential equations. These could be first order or second order and homogeneous or non-homogeneous. They could also be systems of linear differential equations. I recall some statement of an existence theorem for Initial Value Problems (IVPs), although I think I saw a more thorough proof of some such theorem in an introductory1 real analysis course. We might have also seen some results about the stability of limit points for dynamical systems. Keen is not claiming that economists do not learn this stuff; this kind of course is only a foundation for what he is talking about.

I also took a later applied mathematics course, building on this work. In this course, we were taught how to linearize differential equations. We definitely were taught stability conditions here. If I recall correctly, the most straightforward approach looked only at sufficient, not necessary conditions. We also learned perturbation theory, which can be used to develop higher order approximations to nonlinear equations around the solutions to the linearized equations. One conclusion that I recall is that the period of an unforced pendulum depends on the initial angle, despite what is taught in introductory physics classes2. I do not recall much about boundary layer separations, but maybe that was taught only in the context of Partial Differential Equations (PDEs), not Ordinary Differential Equations (ODEs). This is still not the mathematics that Keen is claiming that economists mostly do not learn, although it is getting there.
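As an aside, the pendulum result is easy to check numerically. The sketch below is my own illustration, not anything from the course; it uses the standard formula T = 4 sqrt(L/g) K(sin(θ0/2)), with the complete elliptic integral K evaluated by the arithmetic-geometric mean:

```python
import math

def pendulum_period(theta0, length=1.0, g=9.81):
    """Period of an undamped pendulum released from rest at angle theta0
    (radians), via T = 4*sqrt(L/g)*K(k) with k = sin(theta0/2).

    K, the complete elliptic integral of the first kind, is evaluated
    with the arithmetic-geometric mean: K(k) = pi/(2*agm(1, sqrt(1 - k^2))).
    """
    k = math.sin(theta0 / 2.0)
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return 4.0 * math.sqrt(length / g) * math.pi / (2.0 * a)

small_angle = 2.0 * math.pi * math.sqrt(1.0 / 9.81)  # textbook small-angle period
print(pendulum_period(math.radians(1.0)) / small_angle)   # barely above 1
print(pendulum_period(math.radians(90.0)) / small_angle)  # about 1.18
```

For a release angle of 90 degrees, the period is roughly 18 percent longer than the small-angle approximation predicts, despite what introductory physics classes might suggest.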

You might also see ordinary differential equations in a numerical analysis course. Here you could learn about, say, the Runge-Kutta method. And the methods here can apply to IVPs for systems of non-linear equations3. I believe in the course that I took, we had a project that began to get at the rudiments of complex systems. I think we had to calculate the period of a non-linear predator-prey system. I believe we might have been tasked with constructing a Poincaré return map.
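The flavor of such a project can be conveyed by a sketch like the following, which is merely my illustration with arbitrary parameter values, not the assignment described above. It applies a hand-coded fourth-order Runge-Kutta method to the classic Lotka-Volterra predator-prey system and estimates the period of the closed orbit from successive upward crossings of the prey population through its initial value:

```python
def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def lotka_volterra(t, y):
    """Classic predator-prey system; parameter values merely illustrative."""
    prey, predator = y
    return [1.0 * prey - 0.5 * prey * predator,
            0.2 * prey * predator - 0.8 * predator]

# Integrate and estimate the period of the closed orbit by recording
# successive upward crossings of the prey population through its
# initial value.
h, t, y = 0.001, 0.0, [2.0, 1.0]
prev, crossings = y[0], []
while t < 60.0 and len(crossings) < 2:
    y = rk4_step(lotka_volterra, t, y, h)
    t += h
    if prev < 2.0 <= y[0]:
        crossings.append(t)
    prev = y[0]
print("estimated period:", crossings[1] - crossings[0])
```

The orbit is closed, so the gap between successive upward crossings is the period of the population cycle.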

According to Keen, a sufficiently numerate economist should know the theory behind complex dynamical systems, chaos, bifurcation analysis, and catastrophe theory4. I think such theory requires an analysis able to examine global properties, not just local stability results. And one should be interested in the topological properties of a flow, not just the solution to a (small number of) IVPs. Although this mathematics has been known, for decades, to have applications in economics, most economists do not learn it. Or, at least, this is Keen's claim.

Economists should know something beyond mathematics. For example, they should have some knowledge of the sort of history developed by, say, Fernand Braudel or Eric Hobsbawm. And they should have some understanding of contemporary institutions. How can they learn all of this necessary background and the needed mathematics5, as well? I do not have an answer, although I can think of three suggestions. First, much of what economists currently teach might be drastically streamlined. Second, one might not expect all economists to learn everything; a pluralist approach might recognize the need for a division of labor within economics. Third, perhaps the culture of economics should be such that economists are not expected to do great work until later in their lifetimes. I vaguely understand history is like this, while mathematics is stereotypically the opposite.

Footnotes
  1. As a student, I was somewhat puzzled by why my textbooks were always only Introductions to X or Elements of X. It took me quite some time to learn the prerequisites. How could this only be an introduction? Only later work makes this plain.
  2. Good physics textbooks are clear about linear approximations to the sine function for small angles. Although our textbook motivated perturbation theory in the context of models of solar systems, I have never seen perturbation theory applied here in a formal course. Doubtless, astrophysicists are taught this.
  3. "Stiff differential equations" is a jargon term that I recall being used. I do not think I ever understood what it meant, but I am clear that the techniques, which I have mostly forgotten, were not universally applicable without some care.
  4. Those who have been reading my blog for a while might have noticed I usually present results for the analysis of non-linear (discrete-time) difference equations, not (continuous-time) differential equations.
  5. There are popular science books about complex systems.

Monday, December 23, 2013

Alan Greenspan, Fool or Knave?

Robert Solow quotes Greenspan's new book:

"'In a free competitive market,' [Greenspan] writes, 'incomes earned by all participants in the joint effort of production reflect their marginal contributions to the output of the net national product. Market competition ensures that their incomes equal their "marginal product" share of total output, and are justly theirs.'" -- Robert M. Solow

I am not going to waste my time reading banal balderdash from Greenspan. Solow feels the same way about Ayn Rand:

"I got through maybe half of one of those fat paperbacks when I was young, the one about the architect. Since then I have found it impossible to take Ayn Rand seriously as a novelist or a thinker." -- Robert M. Solow

Anyway, as I have explained repeatedly, marginal productivity is not a theory of distribution, let alone justice. First, in a long-run model, endowments of capital goods are not givens, and marginal productivity conditions fail to pin down the functional distribution of income. A degree of freedom remains, which one might as well take to be the interest rate.

Second, ownership of capital goods does not contribute to the product, even though decisions must be made about how to allocate capital, both as finance and as physically existing commodities. The New Republic is written for a popular audience, and Solow is plainly trying to avoid technicalities. Is his comment about property in the following an echo of my point - actually, Joan Robinson's - albeit mixed in with other stuff, including comments about initial positions?

"Students of economics are taught that ... the actual outcome, including the relative incomes of participants, depends on 'initial endowments,' the resources that participants bring when they enter the market. Some were born to well-off parents in relatively rich parts of the country and grew up well-fed, well-educated, well-cared-for, and well-placed, endowed with property. Others were born to poor parents in relatively poor or benighted parts of the country, and grew up on bad diets, in bad schools, in bad situations, and without social advantages or property. Others grew up somewhere in between. These differences in starting points will be reflected in their marginal products and thus in their market-determined incomes. There is nothing just about it." -- Robert M. Solow

As far as I am concerned, Greenspan's job, for decades, has been ensuring, on behalf of the rulers of the United States, that workers do not get too big for their britches.

Wednesday, December 18, 2013

Period Doubling As A Route To Chaos

Figure 1: An Example of Temporal Dynamics for the Logistic Equation
1.0 Introduction

This post illustrates some common properties of two dynamical systems, chosen out of a large class of such systems. Two quite different functions are drawn, and I demonstrate that, qualitatively, certain behavior arising out of these functions looks quite alike. Furthermore, I point out a mathematical argument that a certain quantitative constant arises for both functions.

I do not claim that the iterative processes here characterize any specific economic model (but see here and here) or physical process. Feigenbaum (1980) mentions "the complex weather patterns of the atmosphere, the myriad whorls of turmoil in a turbulent fluid, [and] the erratic noise in an electronic signal." Such processes have an emergent macro behavior consistent with a wide variety of micro mechanisms. The mathematical metaphors presented in this post suggest that if economic phenomena were described by complex dynamic processes, economists should then reject microfoundations, reductionism, and strong methodological individualism.

2.0 The Logistic Equation

This post is about the analysis of a sequence x0, x1, x2, ... in discrete time. Successive points in this sequence are defined by the repeated iteration of a function fi. (The index i allows one to specify a specific function.) The first few terms of the time series are defined as follows, for a given index i and a given initial value x0:

x1 = fi(x0)
x2 = fi(x1) = fi(fi(x0))
x3 = fi(x2) = fi(fi(fi(x0)))

The logistic function is defined as follows:

f1(x) = a x(1 - x), 0 < a < 4.

Note the parameter a. For a given value of a in the indicated region, the long term behavior of the iterative process defined above is independent of the initial value. This long term behavior varies dramatically, however, with a. In other words, the long term behavior exhibits a kind of dynamic stability, but is structurally unstable.
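This dynamic stability is easy to check numerically. In the following sketch, for a = 2.5, iterates starting from quite different initial values all converge to the fixed point x = 1 - 1/a = 0.6:

```python
def logistic(x, a):
    """One iteration of the logistic map f(x) = a*x*(1 - x)."""
    return a * x * (1.0 - x)

# For a = 2.5, the map has a stable fixed point at x = 1 - 1/a = 0.6;
# iterates from different starting values all settle there.
a = 2.5
for x0 in (0.1, 0.5, 0.9):
    x = x0
    for _ in range(1000):
        x = logistic(x, a)
    print(x0, "->", round(x, 6))
```

Each starting value produces the same long term behavior, as claimed.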

The behavior of such a sequence can be nicely illustrated by certain diagrams. Figure 1, above, displays the temporal dynamics for one sequence for one value of the parameter a and one initial value. The abscissa and the ordinate in this diagram both range from zero to unity. The 45-degree line then slopes upward to the right from the point (0, 0) to the point (1, 1). Any distance measured upward on the axis for the ordinate can be reflected through the 45 degree line to project the same distance horizontally on the axis for the abscissa. That is, draw a line horizontally from the Y-axis rightward to the 45 degree line. Then draw a vertical line downward from that intersection with the 45 line to the X axis. You will have measured the same distance along both the abscissa and ordinate.

Values for the time series are shown in the diagram by vertical lines. When projected downward to the axis for the abscissa, one will have a plot of x0, x1, x2, etc. In the case shown in Figure 1, the initial value, x0, is 1/2. The logistic function is shown as the parabola opening downward. A line is drawn upward from the axis for the abscissa to intercept the logistic function. The value of the ordinate for this point is x1. To find this value, as measured on the abscissa, a line is drawn leftward from the point of interception with the logistic function to the 45 degree line. Next, draw a line downward from this point on the 45 degree line to the logistic function. The value of the ordinate for this new point on the logistic function is then x2. The step function in Figure 1 going down to the labeled point is a visual representation of the entire time series. Can you see that, in the figure, all time series for the given value of the parameter a, no matter the initial value, will converge to the labeled point? In the jargon, the time series for the logistic function for this value of a is said to have a single stable limit point.

As a matter of fact, the long term behavior of every time series for the logistic function is generically independent of the initial value. It makes sense then, not to plot the first, say, 20,000 points of the time series and only plot the next, say, 5,000 points. This would lead to a boring graph for Figure 1; the only point in the non-transient part of the time series would be at the stable limit point. Figure 2 shows a more interesting case, for a larger value of the parameter a. Notice the upside-down parabola now rises to a higher value. (Because of the form of the logistic function, the plotted function remains symmetrical around x = 1/2.) For the parameter value used for Figure 2, no stable limit points exist for the time series. Rather, the time series converges to a limit cycle of period 3. That is, the cycle illustrated by the structure drawn with black lines has three vertical lines and repeats endlessly.

Figure 2: A Cycle with Period 3 for the Logistic Equation

Figures 1 and 2 demonstrate that the limiting behavior of an iterative process for the logistic equation varies with the parameter a. Figure 3 displays this variation from a value of a somewhere under 3.0 to 4.0. In Figure 3, the value of a is plotted along the abscissa. For each value of a, non-transient values of a time series are plotted along the ordinate. To the left of the figure, the time series converges to a single stable limit point. Somewhere to the right, this limit point becomes unstable, and the limiting behavior consists of a cycle of period 2. Moving further to the right - that is, increasing a - limit cycles of period 4, 8, 16, etc. appear. The limit cycle of period 3 shown in Figure 2 corresponds to a parameter value of a somewhere to the center left of the region shown in the blown-up inset.

Figure 3: Structural Dynamics for the Logistic Equation

In some sense, this is recreational mathematics. Computers these days make it fairly easy to draw a more complete representation of Figure 4 in May (1976). The blow-up in Figure 3 demonstrates that the structural dynamics for the logistic function is fractal in nature. We see the same shape repeated on increasingly smaller and smaller scales. Chaos arises for parameter values of a between the period doubling cascade and the period-3 cycle. (Chaotic behavior is shown in Figure 3 by the dark shaded regions.)
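The procedure described above - iterating through a long transient and then recording the distinct values visited - can be sketched in a few lines of Python. (The tolerances and iteration counts here are my own choices for illustration, not necessarily those behind the figures.)

```python
def attractor(a, n_transient=20_000, n_keep=5_000, x0=0.5, tol=1e-6):
    """Approximate the attracting set of the logistic map: iterate through
    a long transient, then record the distinct values visited."""
    x = x0
    for _ in range(n_transient):
        x = a * x * (1.0 - x)
    distinct = []
    for _ in range(n_keep):
        x = a * x * (1.0 - x)
        if not any(abs(x - s) < tol for s in distinct):
            distinct.append(x)
    return sorted(distinct)

print(len(attractor(2.9)))  # 1: a single stable limit point
print(len(attractor(3.2)))  # 2: a limit cycle of period 2
print(len(attractor(3.5)))  # 4: a limit cycle of period 4
```

Plotting the returned values against a, over a grid of parameter values, yields a diagram like Figure 3.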

3.0 An Exponential-Logistic Equation

I repeated the above analysis for what I am calling an exponential-logistic function:

f2(x) = (x/c) e^(a(1 - x)), 0 < a

where:

c = 1, if a - ln a ≤ 1
c = e^(a - 1)/a, if 1 < a - ln a

This exponential-logistic function was suggested to me by a function in May (1976). I introduced the scaling provided by c such that the maximum value of this function never exceeds unity. This function, like the logistic function, is parametrized by a single parameter, which I am also calling a. Figure 4 shows the non-transient behavior for a specific value of the parameter a for the exponential-logistic function. In this case, a stable limit cycle of period 32 arises.
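The scaled map can be sketched in Python. The expression for c follows from requiring that the peak of x e^(a(1 - x)), attained at x = 1/a, be mapped to one; the code and the parameter choice below are mine.

```python
import math

def exp_logistic(a, x):
    """f2(x) = (x/c) * e^(a*(1 - x)), scaled so the maximum never exceeds one."""
    if a - math.log(a) <= 1.0:
        c = 1.0
    else:
        # The unscaled map x*e^(a*(1 - x)) peaks at x = 1/a with value e^(a-1)/a.
        c = math.exp(a - 1.0) / a
    return (x / c) * math.exp(a * (1.0 - x))

# With the scaling in force, the peak value is one (up to rounding).
a = 3.0
print(exp_logistic(a, 1.0 / a))
```

The same transient-discarding iteration used for the logistic map can then be applied to this function to produce its time series and bifurcation data.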

Figure 4: A Cycle with Period 32 for the Exponential-Logistic Equation

Notice the exponential-logistic function is generally not symmetric around any value of x; one tail is heavier than the other. Furthermore, it only has a zero at the origin; nothing corresponds to the zero at x = 1 in the logistic function. So, in some sense, it has a quite different form from the logistic function. Yet, as shown in Figure 5, the structural dynamics for iterative processes for the exponential-logistic function are qualitatively similar to the structural dynamics arising from the logistic function. We see the same shapes in Figures 3 and 5, albeit distorted in some sense.

Figure 5: Structural Dynamics for the Exponential-Logistic Equation

4.0 A Feigenbaum Constant

I now report on some quantitative numerical experiments. Table 1, in the second column, shows the smallest value of the parameter a for which I was able to find a limit cycle of the given period for the logistic equation. Cycles of an infinite number of periods - that is, for all positive integer powers of two (2, 4, 8, 16, ...) - exist in the period-doubling region labeled in Figure 3. As suggested by Table 1, the distance between values of a at which period-doubling occurs gets smaller and smaller. In fact, all these limit cycles arise before a = 3.5700..., the point of accumulation at which chaos sets in. (I do not fully understand the literature on how to calculate the period of limit cycles for a given a. I therefore do not report values of a for periods larger than shown in the table, since I do not fully trust my implementation of certain numerical methods.)

Table 1: Period Doubling in the Logistic Equation
Period    a            Difference    Ratio
2         2.9999078
4         3.4494577    0.449550      4.7510
8         3.5440789    0.094621      4.6556
16        3.5644029    0.020324      4.6669
32        3.5687579    0.004355      4.6665
64        3.5696911    0.000933      4.6666
128       3.5698109    0.000200

Table 1, above, shows in the third column the difference between successive values of a at which period-doubling occurs. The fourth column shows the ratio of successive differences. Theoretically, this ratio converges to δ = 4.669201609... My numerical exploration has confirmed this constant to at least two significant figures.
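The third and fourth columns can be recomputed directly from the second. A sketch, using the Table 1 parameter values through period 64:

```python
# Values of a at which cycles of period 2, 4, 8, 16, 32, and 64 first
# appear, copied from the second column of Table 1.
a_values = [2.9999078, 3.4494577, 3.5440789, 3.5644029, 3.5687579, 3.5696911]

# Differences between successive bifurcation values of a.
diffs = [hi - lo for lo, hi in zip(a_values, a_values[1:])]

# Ratios of successive differences; these tend toward 4.669201609...
ratios = [d0 / d1 for d0, d1 in zip(diffs, diffs[1:])]
print([round(r, 4) for r in ratios])
```

Already by the fourth ratio the first three significant figures of δ are visible.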

The convergence of this ratio, over limit cycles for periods of powers of two, is not limited to the logistic equation. Table 2 reports the result of a numeric experiment with the exponential-logistic equation. Here too, the constant δ has been found to two significant figures. Interestingly, the ratio would theoretically converge to the same constant if the two tables were infinitely extended. In fact, δ is a universal mathematical constant, like π or e.

Table 2: Period Doubling in the Exponential-Logistic Equation
Period    a            Difference    Ratio
2         2.7180077
4         4.6016740    1.883666      2.9506
8         5.2400786    0.638405      4.2473
16        5.3903856    0.150307      4.5651
32        5.4233107    0.032925      4.6456
64        5.4303982    0.007087

5.0 Conclusion

The above analysis can be generalized to many other functions, although I do not fully understand how to characterize the class of such functions. Feigenbaum states that the period-doubling route to chaos is not limited to one-dimensional processes. I believe it also arises in continuous-time systems, as defined by certain non-linear differential equations. Do you find it surprising that a universal constant with wide applicability to physical processes (like year-by-year changes in the populations of certain species) has been discovered within the lifetime of many now alive?

References
  • Keith Briggs (1991). A Precise Calculation of the Feigenbaum Constants, Mathematics of Computation, V. 57, No. 195 (July): pp. 435-439.
  • Mitchell J. Feigenbaum (1980). Universal Behavior in Nonlinear Systems, Los Alamos Science (Summer): pp. 4-27.
  • Tien-Yien Li & James A. Yorke (1975). Period Three Implies Chaos, American Mathematical Monthly, V. 82, No. 10 (Dec.): pp. 985-992.
  • Robert M. May (1976). Simple Mathematical Models with Very Complicated Dynamics, Nature, 261: pp. 459-467.

Tuesday, December 17, 2013

Purge At Amsterdam?

I have noticed that the recent history of economics has been shaped by purges at various prominent economics departments. I think of, for example, Harvard, Rutgers, and Notre Dame. I had not noticed this one while it was going on:

"For most of my time over ten years at the University of Amsterdam my research and that of my colleagues was strongly supported. (I taught three courses every second fall term, and took leave from Marquette.) Unfortunately over the last two years people in leadership positions there at the faculty of economics decided that the history and methodology of economics (HME) was not important, and in conditions of a financial emergency associated with chronic budget shortfalls closed down the HME group. That included sacking my very accomplished and, in our field, well-respected colleagues Marcel Boumans and Harro Maas, who had been associate professors there for many years, and ending the chair position in HME, which I held, which had been at the faculty for decades. We had six courses in the history and methodology of economics; engaged and enthusiastic students; a research group of up to a dozen people; a master degree in HME; PhD students; and a required methodology course for bachelor students. I do not think there was a better program in the world in our field. We also had great interaction with the London School of Economics, the history of economics people at Duke University, history of economics people in Paris, and the Erasmus Institute for Philosophy and Economics. The HME group was internationally recognized, and attracted students from across the world. Our financial footprint, in fact, was quite small compared to other groups, and by a number of measures of output per person we were more productive than many other research groups at Amsterdam.

Since I fully believe the faculty financial emergency could have been addressed without eliminating the group, I can only put what happened down to prejudice against our field, plus the usual on-going territorial aggrandizing that has been a key factor in the elimination of history of economics from most American universities. It is interesting to me also, that with a few exceptions, members of the economics faculty at Amsterdam made no effort on the HME group’s behalf to resist what happened or even personally expressed regret or concern to those who lost their jobs. I find this reprehensible.

The loss of this program was a blow to our field. There are now few places in the world training PhD students in history and/or methodology of economics. So in the final analysis the situation for economics and philosophy is mixed: considerable achievement with an uncertain future. Great weight, in my view, should be placed on restoring PhD training in the field, something that is being done, for instance, through generous grants from the Institute for New Economic Thinking at Duke University under Bruce Caldwell." -- John B. Davis (2012). Identity Problems: An interview with John B. Davis, Erasmus Journal for Philosophy and Economics, V. 5, Iss. 2 (Autumn): pp. 81-103.

Wednesday, December 11, 2013

Reminder: Reductionism Is Bad Science

I want to remind myself to try to download the following in a couple of weeks:

Abstract: Causal interactions within complex systems can be analyzed at multiple spatial and temporal scales. For example, the brain can be analyzed at the level of neurons, neuronal groups, and areas, over tens, hundreds, or thousands of milliseconds. It is widely assumed that, once a micro level is fixed, macro levels are fixed too, a relation called supervenience. It is also assumed that, although macro descriptions may be convenient, only the micro level is causally complete, because it includes every detail, thus leaving no room for causation at the macro level. However, this assumption can only be evaluated under a proper measure of causation. Here, we use a measure [effective information (EI)] that depends on both the effectiveness of a system’s mechanisms and the size of its state space: EI is higher the more the mechanisms constrain the system’s possible past and future states. By measuring EI at micro and macro levels in simple systems whose micro mechanisms are fixed, we show that for certain causal architectures EI can peak at a macro level in space and/or time. This happens when coarse-grained macro mechanisms are more effective (more deterministic and/or less degenerate) than the underlying micro mechanisms, to an extent that overcomes the smaller state space. Thus, although the macro level supervenes upon the micro, it can supersede it causally, leading to genuine causal emergence—the gain in EI when moving from a micro to a macro level of analysis. -- Erik P. Hoel, Larissa Albantakis, and Giulio Tononi (2013). Quantifying Causal Emergence Shows that Macro Can Beat Micro, Proceedings of the National Academy of Sciences, V. 110, no. 49.

As far as I can tell, the above article is not specifically about economics. I do not understand the download policy for the Proceedings of the National Academy of Sciences. I gather that you must be registered to download articles from current issues, but can download back issues with no such restriction.

Hat Tip: Philip Ball

Monday, December 09, 2013

Honest Textbooks

1.0 Introduction

Every teacher, I guess, of an introductory or intermediate course struggles with how to teach material that requires more advanced concepts, outside the scope of the course, to understand fully. I think it would be nice for textbooks not to foreclose the possibility of pointing out this requirement. I here provide a couple of examples from some mathematics textbooks that I happen to have.

2.0 Probability

Hogg and Craig (1978) is a book on probability and statistics. I have found many of their examples and theorems of use in a wide variety of areas. They usually do not explain how many of these ideas can be expanded into an entire applied course.

2.1 Borel Sets and Non-Measurable Sets

An axiomatic definition of probability is a fundamental concept for this book. Hogg and Craig recognize that they do not give a completely rigorous and general definition:

Let 𝒞 denote the set of every possible outcome of a random experiment; that is, 𝒞 is the sample space. It is our purpose to define a set function P(C) such that if C is a subset of 𝒞, then P(C) is the probability that the outcome of the random experiment is an element of C...

Definition 7: If P(C) is defined for a type of subset of the space 𝒞, and if,

  • P(C) ≥ 0,
  • Let C be the union of C1, C2, C3, ... Then P(C) = P(C1) + P(C2) + P(C3) + ..., where the sets Ci, i = 1, 2, 3, ..., are such that no two have a point in common...
  • P(𝒞) = 1,

then P(C) is called the probability set function of the outcome of the random experiment. For each subset C of 𝒞, the number P(C) is called the probability that the outcome of the random experiment is an element of the set C, or the probability of the event C, or the probability measure of the set C.

Remark. In the definition, the phrase 'a type of subset of the space 𝒞' would be explained more fully in a more advanced course. Nevertheless, a few observations can be made about the collection of subsets that are of the type... -- Hogg and Craig (1978): pp. 12-13 (Notation changed from original).
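For a finite sample space, the three conditions of Definition 7 can be checked exhaustively. A toy illustration - the fair die and the code are my own, not Hogg and Craig's:

```python
from fractions import Fraction
from itertools import chain, combinations

# Sample space of a fair die; the set function assigns 1/6 to each outcome.
space = frozenset(range(1, 7))
weight = {outcome: Fraction(1, 6) for outcome in space}

def P(event):
    """Probability set function for subsets of the sample space."""
    return sum((weight[o] for o in event), Fraction(0))

# Enumerate every subset of the space and check the axioms.
subsets = chain.from_iterable(combinations(space, r) for r in range(7))
events = [frozenset(s) for s in subsets]
assert all(P(e) >= 0 for e in events)         # non-negativity
assert P(space) == 1                          # total probability one
evens, odds = frozenset({2, 4, 6}), frozenset({1, 3, 5})
assert P(evens | odds) == P(evens) + P(odds)  # additivity over disjoint sets
print(P(evens))
```

For a finite space every subset is an admissible event, which is exactly the luxury that the 'type of subset' caveat withdraws in the general, uncountable case.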

2.2 Moment Generating and Characteristic Functions

Hogg and Craig work with moment generating functions throughout their book. In the chapter in which they introduce them, they state:

Remark: In a more advanced course, we would not work with the moment-generating function because so many distributions do not have moment-generating functions. Instead, we would let i denote the imaginary unit, t an arbitrary real, and we would define φ(t) = E(e^(itX)). This expectation exists for every distribution and it is called the characteristic function of the distribution...

Every distribution has a unique characteristic function; and to each characteristic function there corresponds a unique distribution of probability... Readers who are familiar with complex-valued functions may write φ(t) = M(i t) and, throughout this book, may prove certain theorems in complete generality.

Those who have studied Laplace and Fourier transforms will note a similarity between these transforms and [the moment generating function] M(t) and φ(t); it is the uniqueness of these transforms that allows us to assert the uniqueness of each of the moment-generating and characteristic functions. -- Hogg and Craig (1978): pp. 54-55.
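The relation φ(t) = M(it) can be illustrated numerically. Here is a sketch for a Poisson distribution, whose moment-generating function exists for all real t; the parameter value and the truncation at 60 terms are my own choices:

```python
import cmath
import math

lam = 2.0  # Poisson parameter; M(t) = e^(lam*(e^t - 1)) exists for all real t

def phi_direct(t, terms=60):
    """E[e^(itX)] computed by summing over the Poisson mass function."""
    total = 0j
    p = math.exp(-lam)          # P(X = 0)
    for k in range(terms):
        total += p * cmath.exp(1j * t * k)
        p *= lam / (k + 1)      # recurrence for P(X = k + 1)
    return total

def phi_via_mgf(t):
    """The characteristic function obtained as phi(t) = M(it)."""
    return cmath.exp(lam * (cmath.exp(1j * t) - 1.0))

t = 0.7
print(abs(phi_direct(t) - phi_via_mgf(t)))  # tiny: the two routes agree
```

The two computations agree to floating-point accuracy, as the uniqueness claim would lead one to expect.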

3.0 Fourier Series

Lin and Segel (1974) provides a case-study approach to applied mathematics. They introduce certain techniques and concepts in the course of specific problems. Fourier analysis is introduced in the context of the heat equation. They then look at more general aspects of Fourier series and transforms. They state:

Suppose that we now pose the following problem, which can be regarded as the converse to Parseval's theorem. Given a set of real numbers a0, am, bm, m = 1, 2, ..., such that the series

(1/2) a0^2 + {[a1^2 + b1^2] + [a2^2 + b2^2] + ...}

is convergent, is there a function f(x) such that the series

(1/2) a0 + {[a1 cos(x) + b1 sin(x)]
+ [a2 cos(2x) + b2 sin(2x)] + ...}

is its Fourier series?

An affirmative answer to this question depends on the introduction of the concepts of Lebesgue measure and Lebesgue integration. With these notions introduced, we have the Riesz-Fischer theorem, which states that (i) the [above] series ... is indeed the Fourier series of a function f, which is square integrable, and that (ii) the partial sums of the series converge in the mean to f.

The problem we posed is a very natural one from a mathematical point of view. It appears that it might have a simple solution, but it is here that new mathematical concepts and theories emerge. On the other hand, for physical applications, such a mathematical question does not arise naturally. -- C. C. Lin & L. A. Segel (1974): p. 147 (Notation changed from original).
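Parseval's theorem itself is easy to check numerically for a concrete function. Taking f(x) = x on (-π, π), with Fourier coefficients am = 0 and bm = 2(-1)^(m+1)/m (my example, not Lin and Segel's):

```python
import math

# Fourier sine coefficients of f(x) = x on (-pi, pi); all a_m vanish.
def b(m):
    return 2.0 * (-1.0) ** (m + 1) / m

# Parseval: (1/pi) * integral of f(x)^2 over (-pi, pi) equals
# (1/2)*a0^2 + the sum of (a_m^2 + b_m^2) over m = 1, 2, ...
lhs = (1.0 / math.pi) * (2.0 * math.pi ** 3 / 3.0)   # exact integral of x^2
rhs = sum(b(m) ** 2 for m in range(1, 200000))        # partial sum of the series
print(lhs, rhs)
```

The partial sums creep up on 2π²/3 from below; this is, of course, a disguised form of the classical result that the sum of 1/m² is π²/6.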

4.0 Discussion

Here is a challenge: point out such candid remarks in textbooks in your field. I suspect such comments can be found in many textbooks. I will not be surprised if some can find some in mainstream intermediate textbooks in economics. Teaching undergraduates in economics, however, presents some challenges and tensions. I think of the acknowledged gap between undergraduate and graduate education. Furthermore, I think some tensions and inconsistencies in microeconomics are never resolved, and perhaps cannot be, in more advanced treatments. Off the top of my head, here are two examples.

  • The theory of the firm requires the absence of transactions costs for perfect competition to prevail. But under the conditions of perfect competition, firms would not exist. Rather workers would be independent contractors, forming temporary collectives when convenient.
  • Under the theory of perfect competition, as taught to undergraduates, firms are not atomistic. Thus, when taking prices as given, the managers are consistently wrong about the response of the market to changes they may each make to the quantity supplied. On the other hand, when firms are atomistic and of measure zero, they do not produce at the strictly positive finite amount required by the theory of a U-shaped average cost curve.

My archives provide many other examples of such tensions, to phrase it nicely.

References
  • Robert V. Hogg & Allen T. Craig (1978). Introduction to Mathematical Statistics, 4th edition, Macmillan.
  • C. C. Lin & L. A. Segel (1974). Mathematics Applied to Deterministic Problems in the Natural Sciences, Macmillan.

Monday, December 02, 2013

A New Order In Economics From Manchester

Some links:

By the way, Ian Steedman, a leading Sraffian economist, was at the University of Manchester not too long ago. And, I believe, he did supervise a number of doctorate theses from students at Manchester. So the closing of the doors to form the current monoculture happened only over the last decade, I guess.

Update: Originally posted on 6 November 2013. Updated on 12 November 2013 to include more links.

Update: Updated on 2 December 2013 to include more links.

Friday, November 29, 2013

Who Wants To Be A Millionaire?

Current Dollars For Millionaires Of Various Eras

How much would you need to have today to live like a millionaire in 1920? Over 10 million dollars, according to the chart.1 On the other hand, a millionaire in 1980 would only need a bit less than 1.2 million dollars today to live in comparable luxury.
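The mechanics of such a comparison amount to rescaling by a price index. In the sketch below, the index levels are approximate CPI annual averages used only as placeholders; the chart's own comparison may rest on a different standard-of-living measure, so its numbers need not match this calculation.

```python
# Price-index levels: approximate CPI annual averages, treated here as
# placeholders to show the mechanics, not the data behind the chart.
index = {1920: 20.0, 2013: 233.0}

def in_2013_dollars(amount, year):
    """Rescale a nominal dollar amount from `year` into 2013 dollars."""
    return amount * index[2013] / index[year]

print(round(in_2013_dollars(1_000_000, 1920)))
```

Any such calculation inherits all the caveats in the footnote below about comparing consumption baskets across decades.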

Footnote
  1. Obviously, any comparison of income over such a long period can be only a rough and ready guide, not an exact function yielding precise results suitable for the application of the differential calculus. The Bureau of Labor Statistics (BLS) provides information on how the Consumer Price Index (CPI) is computed and how the components of a typical consumption basket change over time.

Saturday, November 23, 2013

On "Neoclassical Economics"

I just want to document here some usages of the phrase, "Neoclassical economics". I restrict myself to literature in which the use is in the nature of an aside, including within critiques of mainstream and neoclassical economics. Such documentation can be multiplied indefinitely. I do not quote from literature (e.g., Colander, Holt, and Rosser 2004; Davis 2008; Lawson 2013; Lee and Lavoie 2012; Varoufakis 2012) which focuses on the meaning of the word and on the sociology of economics. I find it quite silly to attempt to refute a critique of mainstream economics by complaining, without any other argument, that the word 'neoclassical' appears in that critique.

"In the last dozen years what before was simply known as economics in the nonsocialist world has come to be called neo-classical economics. Sometimes, in tribute to John Maynard Keynes's design for government intervention to sustain purchasing power and employment, the reference is to Keynesian or neo-Keynesian economics. From being a general and accepted theory of economic behavior, this has become a special and debatable interpretation of such behavior. For a new and notably articulate generation of economists a reference to neoclassical economics has become markedly pejorative. In the world at large the reputation of economists of more mature years is in decline." -- John Kenneth Galbraith (1973).
"One further matter merits consideration before we get down to business. I often refer to neoclassical theory and I had better make clear what I do and do not mean by this designation. For present purposes I shall call a theory neoclassical if (a) an economy is fully described by the preferences and endowments of agents and by the production sets of firms; (b) all agents treat prices parametrically (perfect competition); and (c) all agents are rational and given prices will take that action (or set of actions) from amongst those available to them which is best for them given their preferences. (Firms prefer more profit to less.)" -- Frank Hahn (1984).

"Let us attempt to identify the key characteristics of neoclassical economics; the type of economics that has dominated the twentieth century. One of its exponents, Gary Becker (1967a, p. 5) identified its essence when he described 'the combined assumptions of maximizing behavior, market equilibrium, and stable preferences, used relentlessly and unflinchingly.' Accordingly, neoclassical economics may be conveniently defined as an approach which:

  1. Assumes rational, maximizing behaviour by agents with given and stable preference functions,
  2. Focuses on attained, or movements towards, equilibrium states, and
  3. Is marked by an absence of chronic information problems.

Point (3) requires some brief elaboration. In neoclassical economics, even if information is imperfect, information problems are typically overcome by using the concept of probabilistic risk. Excluded are phenomena such as severe ignorance and divergent perceptions by different individuals of a given reality. It is typically assumed that all individuals will interpret the same information in the same way, ignoring possible variations in the cognitive frameworks that are necessary to make sense of all data. Also excluded is uncertainty, of the radical type explored by Frank Knight (1921) and John Maynard Keynes (1936).

Notably, these three attributes are interconnected. For instance, the attainment of a stable optimum under (1) suggests an equilibrium (2); and rationality under (1) connotes the absence of severe information problems alluded to in (3). It can be freely admitted that some recent developments in modern economic theory - such as in game theory - reach to or even lie outside the boundaries of this definition. Their precise placement will depend on inspection and refinement of the boundary conditions in the above clauses. But that does not undermine the usefulness of this rough and ready definition.

Although neoclassical economics has dominated the twentieth century, it has changed radically in tone and presentation, as well as in content. Until the 1930s, much neoclassical analysis was in Marshallian, partial equilibrium mode. The following years saw the revival of Walrasian general equilibrium analysis, an approach originally developed in the 1870s. Another transformation during this century has been the increasing use of mathematics, as noted in the preceding chapter. Neoclassical assumptions have proved attractive because of their apparent tractability. To the mathematically inclined economist the assumption that agents are maximizing an exogenously given and well defined preference function seems preferable to any alternative or more complex model of human behaviour. In its reductionist assumptions, neoclassical economics has contained within itself from its inception an overly formalistic potential, even if this took some time to become fully realized and dominant. Gradually, less and less reliance has been placed on the empirical or other grounding of basic assumptions, and more on the process of deduction from premises that are there simply because they are assumed.

Nevertheless, characteristics (1) to (3) above have remained prominent in mainstream economics from the 1870s to the 1980s. They define an approach that still remains ubiquitous in the economics textbooks and is taught to economics undergraduates throughout the world." -- Geoffrey M. Hodgson (1988).

"The creators of the neoclassical model, the reigning economic paradigm of the twentieth century, ignored the warnings of nineteenth-century and still earlier masters about how information concerns might alter their analyses - perhaps because they could not see how to embrace them in their seemingly precise models, perhaps because doing so would have led to uncomfortable conclusions about the efficiency of markets. For instance, Smith, in anticipating later discussions of adverse selection, wrote that as firms raise interest rates, the best borrowers drop out of the market. If lenders knew perfectly the risks associated with each borrower, this would matter little; each borrower would be charged an appropriate risk premium. It is because lenders do not know the default probabilities of borrowers perfectly that this process of adverse selection has such important consequences." -- Joseph E. Stiglitz (2002).

References
  • David Colander, Richard P. F. Holt, and J. Barkley Rosser, Jr. (2004). The changing face of mainstream economics, Review of Political Economy, V. 16, No. 4: pp. 485-499.
  • John B. Davis (2008). The turn in recent economics and return of orthodoxy, Cambridge Journal of Economics, V. 32: pp. 349-366.
  • John Kenneth Galbraith (1973). Power and the Useful Economist, American Economic Review, Presidential address at the 85th annual meeting of the American Economic Association in Toronto, Canada in December 1972.
  • Frank Hahn (1982). The neo-Ricardians, Cambridge Journal of Economics, V. 6: pp. 353-374.
  • Geoffrey M. Hodgson (1999). False Antagonisms and Doomed Reconciliations, Chapter 2 in Evolution and Institutions: On Evolutionary Economics and the Evolution of Economics, Edward Elgar.
  • Tony Lawson (2013). What is this 'school' called neoclassical economics?, Cambridge Journal of Economics.
  • Fred Lee and Marc Lavoie (editors) (2012). In Defense of Post-Keynesian and Heterodox Economics: Responses to their Critics, Routledge. [I've not read the book, but have read some chapters published separately.]
  • Joseph E. Stiglitz (2002). Information and the Change in the Paradigm in Economics, American Economic Review, V. 92, N. 3 (June): pp. 460-501.
  • Yanis Varoufakis (2012). A Most Peculiar Failure: On the dynamic mechanism by which the inescapable theoretical failures of neoclassical economics reinforce its dominance.

Tuesday, November 19, 2013

Thoughts On Davis' Individuals and Identity in Economics

I have previously gone on about multiple selves, also known as Faustian agents. I had not considered how an individual manages these selves in making plans and decisions. My point was to apply Arrow's impossibility theorem at the level of the single agent, thereby demonstrating the necessity of some argument for characterizing an individual by a utility function.
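The Arrow-style difficulty can be made concrete with a toy example. Three 'selves' holding the cyclic rankings below (my own illustrative choice) produce an intransitive aggregate preference under pairwise majority voting, so no utility function over the options can represent the individual's overall choices:

```python
# Three "selves" each rank three options, best first.
selves = [("a", "b", "c"),
          ("b", "c", "a"),
          ("c", "a", "b")]

def majority_prefers(x, y):
    """True if most selves rank x above y."""
    votes = sum(1 for ranking in selves if ranking.index(x) < ranking.index(y))
    return votes > len(selves) / 2

# a beats b, b beats c, and yet c beats a: a Condorcet cycle inside one head.
print(majority_prefers("a", "b"), majority_prefers("b", "c"), majority_prefers("c", "a"))
```

Majority voting is only one aggregation rule, of course; Arrow's theorem is precisely the statement that no rule satisfying his axioms escapes this kind of failure.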

Consider many individuals interacting in a market, each being composed of multiple selves. What, in the analysis, groups together sets of these multiple selves to identify individuals? This problem, and similar problems with many other decision-theory analyses, is the theme of John B. Davis' 2011 book, Individuals and Identity in Economics.

By the way, an interesting issue arises with multiple selves interacting through time. One might justify hyperbolic discounting by thinking of an individual as composed of a different self at each moment in time. Why should these selves make consistent plans? Might one self start an action based on a plan for future actions, only to have a future self revise or reject that plan? This is the third or fourth time I have started reading Davis' book. Anyway, on pages 41 and 42, Davis writes:

"...[Herbert] Simon's recommendation to abandon the standard utility function framework was not influential in economics, but Lichtenstein and Slovic's demonstration of preference reversals was. Most economists initially dismissed it on a priori grounds, but David Grether and Charles Plott believed that they could go farther and demonstrate that preference reversals could not possibly exist. They identified thirteen potential errors in psychologists' preference reversal experimental methodology and accordingly set out to show that preference reversals were only an artifact of experimental design. Nonetheless, they ended up confirming their existence as well as Simon's judgement of utility functions:

'Taken at face value the data are simply inconsistent with preference theory and have broad implications about research priorities in economics. The inconsistency is deeper than mere lack of transitivity or even stochastic transitivity. It suggests that no optimization principles of any sort lie behind the simplest of human choices and that uniformities in human choice behavior which lie behind market behavior may result from principles which are of a completely different sort from those generally accepted.'(Grether and Plott 1979, 623; emphasis added)

Published in the American Economic Review, this was a momentous admission for economists. However, for many psychologists the debate was already long over, and research had moved on to which theories best explained preference construction. James Bettman published what is regarded as the first theory of preference construction in the same year Grether and Plott's paper appeared (Bettman 1979), a major review of preference construction theories appeared in 1992 (Payne and Bettman 1992), and Lichtenstein and Slovic's retrospective volume appeared in 2006 (Lichtenstein and Slovic 2006). As Slovic put it in 1995, 'It is now generally recognized among psychologists that utility maximization provides only limited insight into the processes by which decisions are made' (Slovic 1995, 365). Grether and Plott, interestingly, extended their own critique of standard rationality to Kahneman and Tversky's proposed prospect theory replacement, implicitly highlighting the difference between the two currents in Edwards' B[ehavioral] D[ecision] R[esearch] program.

'We need to emphasize that the phenomenon causes problems for preference theory in general, and not for just the expected utility theory. Prospect theory as a special type of preference theory cannot account for the results.' (Grether and Plott 1982; 575)

So, given the data and what economists said years ago about it in the most prominent and most prestigious economics journal in America, one can expect mainstream economists today to have rejected utility theory, revealed preference theory, prospect theory, and the usual old textbook derivation of market demand curves and factor supply curves. Right?

Saturday, November 09, 2013

Mainstream And Non-Mainstream Economics: Research Areas Transgressing The Boundaries

1.0 Introduction

Mainstream and non-mainstream economics can be read as sociological categories, defined by what conferences economists attend, in which journals they publish, and through patterns of referencing. One might expect the intellectual content of the theories put forth by mainstream and non-mainstream economists to cluster, too. In some sense, non-mainstream economists are also automatically heterodox, where heterodoxy refers to the content of theories. For example, heterodox economists tend to prefer theories in which agents are socially embedded and constituted, in some sense, by society (instead of being pre-existing, asocial monads).

The point of this post, though, is to illustrate that the boundary between mainstream and non-mainstream economists is not hard and fast, at least as far as ideas go. I point out two-and-a-half areas where both categories of economists are developing similar ideas.

2.0 Complex Economic Dynamics

Economic models are available which exhibit complex, non-linear dynamics, including chaos. Richard Goodwin, Steve Keen, and Paul Ormerod are some self-consciously non-mainstream heterodox economists who have developed such models. Jess Benhabib and John Geanakoplos are some authoritative mainstream economists on certain models of this type. I also want to mention some researchers who I do not feel comfortable putting in either category. As I understand it, J. Barkley Rosser, Jr. makes an effort to talk to both mainstream and non-mainstream economists. I do not know enough about, for example, Anna Agliari to say what she would say about these categories. And Donald G. Saari is a mathematician interested in social science; so I am not sure how these categories would apply to him, if at all.

3.0 Multiple Selves

I have previously commented on theories of multiple selves, also known as Faustian agents. I particularly like the conclusion, from the Arrow impossibility theorem, that an agent's preferences cannot necessarily be characterized by a utility function, given a theory of modular minds.

I do not think I know enough about these theories to talk authoritatively on this subject. Specifically, I have some dim awareness that a large literature exists here about time (in)consistency of decisions. But I am aware that this is a topic of research among both non-mainstream and mainstream economists. I cite John B. Davis, Ian Steedman, and Ulrich Krause as non-mainstream, heterodox economists with literature in this area. And I cite E. Glen Weyl as a mainstream economist also with literature here.
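The time-(in)consistency issue can be illustrated with a standard toy calculation, not drawn from any of the authors above. With hyperbolic discounting D(t) = 1/(1 + kt), an agent's ranking of a smaller-sooner against a larger-later reward reverses as the rewards draw near; the value of k and the payoffs are purely illustrative.

```python
# Hyperbolic discount factor D(t) = 1/(1 + k*t); k is an illustrative choice.
k = 1.0

def value(amount, delay):
    """Present value of `amount` received after `delay` periods."""
    return amount / (1.0 + k * delay)

# Viewed from t = 0: 110 at day 31 beats 100 at day 30.
early_now = value(100, 30)
late_now = value(110, 31)
# Viewed from day 30: 100 immediately beats 110 tomorrow.
early_then = value(100, 0)
late_then = value(110, 1)
print(late_now > early_now, early_then > late_then)
```

The self at time zero plans to wait for the larger reward; the self at day 30 abandons that plan. Exponential discounting, by contrast, ranks the two rewards the same way from every vantage point.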

4.0 Choice Under Uncertainty

Keynes distinguished between risk and uncertainty. Post Keynesian economists have famously developed this theme. Works seen as part of mainstream economics in their time also distinguish between risk and uncertainty, for example:

"...Let us suppose that a choice must be made between two actions. We shall say that we are in the realm of decision making under:

  • Certainty if each action is known to lead invariably to a specific outcome...
  • Risk if each action leads to one of a set of possible outcomes, each outcome occurring with a known probability. The probabilities are assumed known to the decision maker...
  • Uncertainty if either action or both has as its consequences a set of possible specific outcomes, but where the probabilities of these outcomes are completely unknown or are not even meaningful."

-- R. Duncan Luce and Howard Raiffa, Games and Decisions, Harvard University (1957): p. 13.

I only feel entitled to count this as half an example. I find that other literature on the foundations of decision theory is also clear about the assumptions of known outcomes and probabilities needed to characterize a situation of risk. But I do not know of contemporary mainstream economists researching choice under uncertainty (as opposed to risk). I think elements of Chapter 13 of Luce and Raiffa, on decision making under uncertainty, have entered the teaching of business schools targeted towards, for example, corporate managers.
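
The distinction has bite for decision criteria. Under risk, one can maximize expected payoff against the known probabilities; under uncertainty, one of the classical criteria Luce and Raiffa survey is Wald's maximin rule: choose the action whose worst-case payoff is best. The payoff table below is hypothetical, chosen only to show the two criteria disagreeing.

```python
# A hypothetical payoff table: each action yields a payoff in one
# of three possible states of the world.
payoffs = {
    "safe":  [4, 4, 4],
    "risky": [10, 1, 0],
}

def expected_value_choice(payoffs, probs):
    """Under risk: maximize expected payoff against known probabilities."""
    return max(payoffs,
               key=lambda a: sum(p * x for p, x in zip(probs, payoffs[a])))

def maximin_choice(payoffs):
    """Under uncertainty (Wald's criterion): best worst-case payoff."""
    return max(payoffs, key=lambda a: min(payoffs[a]))

print(expected_value_choice(payoffs, [0.6, 0.3, 0.1]))  # risky: EV 6.3 beats 4
print(maximin_choice(payoffs))                          # safe: worst case 4 beats 0
```

With the probabilities known, the expected-value maximizer takes the gamble; with them unknown, the maximin decision maker does not. Nothing in the apparatus of decision under risk settles which rule is appropriate once probabilities are, in Luce and Raiffa's phrase, "not even meaningful."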

5.0 Reflections

I do not think that this post has demonstrated an openness in mainstream economics. Further work would need to show an awareness among mainstream researchers of parallel work by non-mainstream economists, a willingness to engage that work critically, and a willingness to cite it in the mainstream literature. Furthermore, one would like to show that the implications of such work are making their way into the teaching of economics at all levels. I have seen some economists verbally affirm that economies are complex dynamic systems and then ignore the implications of such a claim. Some economists - for example, Yanis Varoufakis - have expressed skepticism that cutting-edge mainstream research, in which unique deterministic outcomes do not obtain, can successfully make that transition. Nevertheless, I find the parallel research noted above to be intriguing.

Tuesday, October 29, 2013

Immanuel Kant, Crank

One intellectually bankrupt technique for rationalizing non-engagement with an argument is to complain that those putting forth the argument compare themselves to Nicolaus Copernicus or Galileo:

"So the central laws of the movements of the heavenly bodies established the truth of that which Copernicus, first, assumed only as a hypothesis, and, at the same time, brought to light that invisible force (Newtonian attraction) which holds the universe together. The latter would have remained forever undiscovered, if Copernicus had not ventured on the experiment-contrary to the senses but still just-of looking for the observed movements not in the heavenly bodies, but in the spectator. In this Preface I treat the new metaphysical method as a hypothesis with the view of rendering apparent the first attempts at such a change of method, which are always hypothetical...

This attempt to introduce a complete revolution in the procedure of metaphysics, after the example of the geometricians and natural philosophers, constitutes the aim of the Critique of Pure Speculative Reason." -- Immanuel Kant, Critique of Pure Reason, Preface to Second Edition.

As I understand it, Kant's revolution was not to ask what humans must be to understand an external, given, phenomenal reality. Rather, he asked what the phenomena must be such that we could observe them, given the properties of our understanding. But I have never read far in this particular book. Nor, given non-Euclidean geometry and other readings in philosophy, do I expect to agree with it.

Saturday, October 26, 2013

An Alternative Economics

Who are the greatest economists of, say, the last fifty years? I suggest the shortlist for many academic economists would include Kenneth Arrow, Milton Friedman, and Paul A. Samuelson. Imagine1 a world in which the typical academic economist would be inclined to name the following as exemplars:

  • Fernand Braudel
  • John Kenneth Galbraith
  • Albert Hirschman
  • Gunnar Myrdal
  • Karl Polanyi.

These economists did not insist on using closed, formal, mathematical models2 everywhere3. They tended to present their findings in detailed historical and qualitative accounts4.

Footnotes
  1. Maybe the history of political economy is overdetermined, in some sense. So I am not sure what this counterfactual would entail.
  2. They did use or, at least, comment on small mathematical models, where appropriate. For example, both Hirschman and Myrdal had cautions about the Harrod-Domar model.
  3. I have been reading a little of Tony Lawson.
  4. As far as I am concerned, these are accounts of empirical research.

Friday, October 25, 2013

Raj Chetty Needs Your Help

I have argued before that whether or not economics is a science is an uninteresting question. Rather, one should be concerned with the quality of arguments economists put forth, how they engage one another, and how they respond to empirical evidence. As regards the quality of arguments, Raj Chetty's performance contradicts his thesis. (I realize I am talking about an article meant for the general public, not a journal article meant for other members of his profession.) Chetty writes:

"...At first blush, Mr. Shiller's thinking about the role of 'irrational exuberance' in stock markets and housing markets appears to contradict Mr. Fama's work showing that such markets efficiently incorporate news into prices.

What kind of science, people wondered, bestows its most distinguished honor on scholars with opposing ideas? 'They should make these politically balanced awards in physics, chemistry and medicine, too,' the Duke sociologist Kieran Healy wrote sardonically on Twitter.

But the headline-grabbing differences between the findings of these Nobel laureates are less significant than the profound agreement in their scientific approach to economic questions, which is characterized by formulating and testing precise hypotheses. I'm troubled by the sense among skeptics that disagreements about the answers to certain questions suggest that economics is a confused discipline, a fake science whose findings cannot be a useful basis for making policy decisions.

That view is unfair and uninformed. It makes demands on economics that are not made of other empirical disciplines, like medicine, and it ignores an emerging body of work, building on the scientific approach of last week's winners, that is transforming economics into a field firmly grounded in fact.

It is true that the answers to many 'big picture' macroeconomic questions — like the causes of recessions or the determinants of growth — remain elusive. But in this respect, the challenges faced by economists are no different from those encountered in medicine and public health. Health researchers have worked for more than a century to understand the 'big picture' questions of how diet and lifestyle affect health and aging, yet they still do not have a full scientific understanding of these connections. Some studies tell us to consume more coffee, wine and chocolate; others recommend the opposite. But few people would argue that medicine should not be approached as a science or that doctors should not make decisions based on the best available evidence..." -- Raj Chetty, Yes, Economics Is a Science, New York Times, 21 October 2013: p. A21.

Chetty's argument is chalk and cheese, apples and oranges. Chetty brings up the existence of open questions in medicine. (I find it easy to decide which studies about health to believe; I pick the ones that tell me to do what I want to do anyway.) But the complaint about this year's Nobel is not about the existence of open questions in economics. It is rather puzzlement at the simultaneous award of the top prize to two economists, namely Eugene Fama and Robert Shiller, for apparently making opposite statements about how markets work. (I am ignoring Lars Peter Hansen in this post.) As I understand it, you do not usually get a Nobel in Physics, Chemistry, or Medicine without empirical evidence for your claims. Later empirical findings might overturn Nobel work, but, if so, the prize would not be shared in the same year for both establishing and overturning a theory. So Chetty's argument is a blatant non sequitur.

What is needed is an argument why the specific work of the researchers sharing the award is complementary, not contradictory. You can find some such arguments on the Internet, but Chetty skates right by this need at too high a level of abstraction to be useful. (Readers should feel free to offer other links in the comments for such arguments.) I do not think this analogy works in the details, but imagine a prize at the relevant time simultaneously honoring Tycho Brahe and Johannes Kepler. Tycho had a mistaken, halfway geo-heliocentric model of the solar system, and Kepler had a correct heliocentric model with the planets in elliptical orbits. But Tycho would not be being honored for his mistaken model. Rather, as I understand it, he collected more useful and precise observations of the planets than had been done by anybody before him. And Kepler used these observations to develop and verify his theory. If Tycho's work had not been available, Kepler would not have succeeded.

(Note that work by economists with Randomized Control Trials (RCTs), natural experiments, and Instrumental Variables (IVs) is irrelevant to any points I have made above.)

Off topic aside: I want to note this post from Unlearning Economics, the Post-Crash Economics Society at the University of Manchester, an article about this society in The Guardian, and an organization called Rethinking Economics.