Saturday, September 28, 2013

What's Bayesianism Got To Do With It? A Software Reliability Example

Figure 1: The Musa-Okumoto Model Fit To Failure Data
1.0 Introduction

I was partly inspired for this post by a dispute between Matheus Grasselli and Philip Pilkington over the applicability of Bayesian statistics to non-ergodic processes. I happen to know of certain parametric software reliability models that describe software failures as non-stationary stochastic processes. Bev Littlewood is a prominent proponent of Bayesian models in software reliability engineering. He has also recommended a certain approach to assessing the goodness of fit of software reliability models, including ones not originally put forth as Bayesian models.

Here I provide an example of an application of this approach for assessing goodness of fit. I do not see what Bayesianism has to do with this approach. It is not expounded in terms of prior and posterior distributions. I do see that this approach considers the model(s) being assessed as part of a larger system for making predictions, with the parameter estimates on which those predictions are based being updated with each new observation. Is this approach, therefore, Bayesian?

2.0 Failure Data

The data used for illustration here consist of 367 failures observed over 73 weeks; they were provided by the Jet Propulsion Laboratory (JPL). The system under test was:

"A facility for tracking and acquiring data from earth resource satellites in high-inclination orbits... About 103,000 uncommented Source Lines Of Code in a mixture of C, Fortran, EQUEL, and OSL... Failure data were obtained from the development organization's anomaly reporting system during software integration and test." -- Nikora and Lyu (1996).

The blue line in Figure 1 above represents the data. This is a step function, where each step occurs after an interval of one week. The height of each step represents the number of failures observed in the week ending at the start of the step. Figure 1 also shows a point estimate of the mean value function, and 90% confidence intervals, based on fitting the Musa-Okumoto software reliability model to the data.

3.0 The Musa-Okumoto Model

As in other software reliability models, the Musa-Okumoto model relies on a distinction between failures and faults. A failure is behavior, including unexpected termination, observed while the software system under test is running; a failure occurs when the system is observed to behave other than as specified. A fault, also known as a bug, is a static property of the system. A failure occurs when the inputs to the system drive the software into a state in which a fault is exercised.

The Musa-Okumoto model is an example of a software reliability model based on the assumption that the system under test contains an infinite number of faults. No matter how many faults have been found and removed, more faults always exist. Faults found later, however, tend to have an exponentially decreasing impact on the failure rate. More formally, the assumptions of the model are:

  1. The numbers of failures observed in non-overlapping time intervals are stochastically independent.
  2. The probability of one failure occurring in a small enough time interval is roughly proportional to the length of the time interval.
  3. The probability of more than one failure occurring in a small enough time interval is negligible.
  4. Detected faults are immediately corrected.
  5. No new faults are introduced during the fault-removal process.
  6. Aside from such repair of faults, the software is unchanged during test. No new functionality is introduced.
  7. Testing is representative of operational usage.
  8. The failure rate decreases exponentially with the number of removed faults.

The first three assumptions imply that failures occur as a Poisson Process, where a Poisson Process describes events happening randomly in time, in some sense. The sixth and seventh assumptions are unlikely to be met for the data being analyzed here. For best use of these kinds of software reliability models, system test should be designed to randomly select inputs from a specified operational profile. Nevertheless, these sorts of models have often been applied to test data observed naturally. The remaining assumptions imply that failures occur as a Non-Homogeneous Poisson Process (NHPP), where the failure rate is the following function of the mean value function:

λ(t) = λ0 exp(-θ m(t))

where:

  • λ(t) is the failure rate, in units of failures per week. (The distinction between the failure rate and the failure intensity is moot for Poisson Processes.)
  • m(t) is the mean value function, that is, the number of failures expected by week t.
  • λ0 is the initial failure rate.
  • θ is the failure rate decay parameter.
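For concreteness, here is a minimal Python sketch of these two quantities, assuming the standard closed form of the Musa-Okumoto mean value function, m(t) = ln(1 + λ0 θ t)/θ, which is consistent with the failure rate above. The parameter values are made up for illustration and are not the estimates for the JPL data.

    import numpy as np

    def mean_value(t, lam0, theta):
        """Expected cumulative number of failures by week t (Musa-Okumoto)."""
        return np.log(1.0 + lam0 * theta * t) / theta

    def failure_rate(t, lam0, theta):
        """lambda(t) = lam0 * exp(-theta * m(t)), which reduces to lam0 / (1 + lam0 * theta * t)."""
        return lam0 / (1.0 + lam0 * theta * t)

    # Illustrative (made-up) parameter values, not the JPL estimates.
    lam0, theta = 8.0, 0.02
    print(mean_value(73, lam0, theta))    # expected cumulative failures by week 73
    print(failure_rate(73, lam0, theta))  # failure rate at week 73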

Figure 2 plots the above function for the system under test, with parameter estimates based on all of the data. For the data, the abscissa for each week is the cumulative number of failures observed by the end of that week. The corresponding ordinate is the quotient of the number of failures observed in each time interval (namely, a week) and the length of the time interval. The graph shows quite a bit of variation on this scale, clustered around the model estimate. A slight improvement in software reliability is noticeable, but the convexity in the estimate from the model is hard to see.

Figure 2: The Failure Rate As A Function Of The Mean Value Function

4.0 The U-Test: How Am I Doing?

An attempt was made to estimate the model parameters at the end of each week. For this model, maximum likelihood estimates are found by an iterative algorithm. The iterative algorithm did not converge for any week through the end of week 49, leaving the twenty-four parameter estimates (for weeks 50 through 73) shown in the table in Figure 3. Figure 3 also shows the percent variability of the parameter estimates. (Percent variability is the ratio of the absolute difference of the estimates for two successive weeks to the estimate for the earlier of the two weeks.) The model estimates seem to have settled down towards the end of the test period. They would be unlikely to vary much over succeeding weeks if testing were to continue.

Figure 3: Percent Variability In Parameter Estimates
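As a rough illustration of that estimation step, the following sketch maximizes the grouped-data Poisson likelihood for weekly failure counts with a generic optimizer. The weekly counts and starting values below are hypothetical; the actual JPL data and whatever iterative scheme was originally used are not reproduced here.

    import numpy as np
    from scipy.optimize import minimize

    def musa_okumoto_mean(t, lam0, theta):
        """Expected cumulative failures by time t (Musa-Okumoto mean value function)."""
        return np.log(1.0 + lam0 * theta * t) / theta

    def neg_log_likelihood(params, weekly_counts):
        """Negative log-likelihood of weekly (grouped) failure counts under the NHPP."""
        lam0, theta = params
        if lam0 <= 0.0 or theta <= 0.0:
            return np.inf
        weeks = np.arange(len(weekly_counts) + 1)                   # t = 0, 1, ..., T
        expected = np.diff(musa_okumoto_mean(weeks, lam0, theta))   # expected failures per week
        # Poisson log-likelihood for each weekly count (constant terms dropped)
        return -np.sum(weekly_counts * np.log(expected) - expected)

    # Hypothetical weekly failure counts, standing in for the JPL data.
    counts = np.array([12, 9, 11, 7, 8, 6, 5, 7, 4, 3])
    fit = minimize(neg_log_likelihood, x0=[10.0, 0.01], args=(counts,), method="Nelder-Mead")
    lam0_hat, theta_hat = fit.x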

In the Musa-Okumoto model, the number of failures observed in a given week is a realization of a random variable from a Poisson distribution. The mean of this distribution, for given model parameters, varies with the week. The expected number of failures declines in later weeks. In the approach illustrated here, the model parameters are re-estimated in each successive week. Specifically, define ui; i = 1, 2, ..., 23; as the value of the CDF for the number of failures in week 50 + i, where:

  • The CDF is evaluated for the number of failures actually observed in the given week.
  • The CDF is calculated with estimates of the model parameters based on the number of failures observed in all weeks up to, but not including the given week.

For example, u1 is an estimate of the CDF for the number of failures found in week 51, based on parameter estimates found with the number of failures observed in weeks 1 through 50. This procedure yields a sequence of (temporally ordered) numbers:

u1, u2, ..., u23

Each ui, being a probability, is between zero and one. These numbers, in effect, provide a measure of how well the model, with its estimation procedure, is predicting failures of the system under test. Under the null hypothesis that the model describes the test process generating the failure data, the sequence of uis is the realization of a random sample from a probability distribution uniformly distributed on the interval [0, 1].
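A sketch of how such a sequence might be computed, assuming one already has, for each week, the expected number of failures predicted from parameters estimated on all earlier weeks. The arrays below are placeholders, not values from the JPL data.

    import numpy as np
    from scipy.stats import poisson

    def u_sequence(observed_counts, predicted_means):
        """u_i is the CDF of the predicted Poisson distribution, evaluated at the observed count."""
        return poisson.cdf(observed_counts, predicted_means)

    # Placeholder one-step-ahead predictions and corresponding observations.
    observed = np.array([4, 6, 3, 5, 2])
    predicted = np.array([5.1, 4.9, 4.6, 4.4, 4.2])
    u = u_sequence(observed, predicted)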

The Kolmogorov-Smirnov statistic provides a formal test that the uis are realizations of independently and identically distributed random variables from a uniform distribution. Figure 4 compares and contrasts the CDF of a uniform distribution with the empirical distribution found from the data. The theoretical CDF for the uniform distribution is merely a 45 degree line segment sloping upward from the origin to the point (1, 1). That is, the probability that a realization of a random variable uniformly distributed on [0, 1] will be less than some given value u is merely that given value.

Figure 4: The U-Test For The Example

The empirical CDF found from the Musa-Okumoto model applied to the data is a step function. The probability that a random variable from this distribution will be less than a given value u is the proportion of uis in the data less than that given value. The exact location of the step function is found as follows. First, sort the sample values in increasing order:

u(1) ≤ u(2) ≤ ... ≤ u(23)

The sorted values are known as order statistics. (Note the temporal order is lost by this sorting; this test does not reject the model if, for example, overestimations of failures in earlier weeks average out underestimations of failures in later weeks.) For this data, u(1) is approximately 0.007271, and u(23) is approximately 0.939. Plot these order statistics along the abscissa. A step occurs at each plotted order statistic, and each step is, in this case, of size 1/(23 + 1) (approximately 0.042) up.

The Kolmogorov-Smirnov statistic is the maximum vertical distance between the theoretical CDF, specified by the null hypothesis, and the empirical CDF, as calculated from the data. In the figure, this maximum vertical distance is 0.18869, found for the second step up, at u(2) (which is approximately 0.23). The p-value of 34.2% is the probability that the empirical CDF, for a random sample of 23 points drawn from a uniform distribution, will be at least this far above or below the theoretical CDF. Since the p-value exceeds the pre-selected conventional level of statistical significance of 10%, the data do not allow one to conclude that the model does not fit the data. (One should set a fairly high level of statistical significance, such as the 10% used here, so as to increase the power of the test against poorly fitting models.) Thus, one accepts that the Musa-Okumoto model is successful in predicting failures one week ahead.
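A sketch of the corresponding calculation, with made-up u values standing in for the 23 computed from the data; SciPy's kstest compares the empirical CDF of the sample against the uniform CDF.

    import numpy as np
    from scipy.stats import kstest

    # Placeholder u values; in practice these are the 23 one-step-ahead CDF values described above.
    u = np.array([0.05, 0.12, 0.23, 0.31, 0.40, 0.48, 0.55, 0.63, 0.72, 0.81, 0.90])

    order_statistics = np.sort(u)     # u_(1) <= u_(2) <= ... <= u_(n)
    result = kstest(u, "uniform")     # D statistic and p-value against Uniform[0, 1]
    print(order_statistics)
    print(result.statistic, result.pvalue)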

5.0 Conclusions

The above analysis suggests that, despite the violation of model assumptions in the test process, JPL could have used the Musa-Okumoto model as an aid in managing this test phase of the tracking and data acquisition system considered here.

In what sense, if any, is the method illustrated here for analyzing the model's goodness-of-fit Bayesian?

References
  • Brocklehurst, Sarah and Bev Littlewood (1996). Techniques for Prediction Analysis and Recalibration, in Handbook of Software Reliability Engineering (edited by M. R. Lyu), McGraw-Hill.
  • Grasselli, Matheus (2013). Keynes, Bayes, and the law.
  • Nikora, Allen P. and Michael R. Lyu (1996). Software Reliability Measurement Experience, in Handbook of Software Reliability Engineering (edited by M. R. Lyu), McGraw-Hill.
  • Pilkington, Philip (2013). Holey Models: A Final Response to Matheus Grasselli.

Wednesday, September 25, 2013

S. Abu Turab Rizvi, Not Steve Keen

Notice the point about the need in neoclassical theory to assume that preferences satisfy Gorman's assumptions of identical and homothetic (non-varying with income) preferences:

"Extensions of the basic arbitrariness results to configurations of preferences and endowments which are in no way 'pathological', and are in fact more and more restrictive, indicate the robustness of S[onnenschein-]M[antel-]D[ebreu] theory.

For instance, an assumption which is often made to improve the chances of meaningful aggregation is that of homothetic preferences, which yields linear Engel curves and so no complicated income effects at the level of the individual. However, with only a slight strengthening of the requirements of Debreu's theorem, Mantel (1976) showed that the assumption of homotheticity does not remove the arbitrariness of A[ggregate] E[xcess] D[emand functions].

Moreover, the possibility that consumers have to be very different, or that unusual endowment effects need to take place, in order for SMD-type results to hold was refuted by Kirman and Koch (1986): even supposing consumers to have identical preferences and collinear endowments does not remove the arbitrariness of the AEDs. Of course, if preferences are simultaneously identical and homothetic, AED is a proportional magnification of individual excess demand (Gorman, 1953; Nataf, 1953) and the whole economy behaves as if it were an individual (obeying the Weak Axiom of Revealed Preference in the aggregate), but this is an extremely special situation. The Mantel and Kirman-Koch theorems effectively countered the criticism of SMD theory raised by Deaton by showing that the primitives can be arrayed in ways which on the face of it are very congenial towards generating well-behaved results, yet the arbitrariness property of AEDs remains." -- S. Abu Turab Rizvi, "The Microfoundations Project in General Equilibrium Theory", Cambridge Journal of Economics, V. 18 (1994): p. 362.

In short, neoclassical economists have proved (by contradiction, in some sense) that neoclassical microeconomics is not even wrong and that methodological individualism has failed.

Monday, September 23, 2013

On The Ideological Function Of Certain Ideas Of Friedman And Barro

A Simple Keynes-Like Model
1.0 Introduction

I want to comment on an ideology that would lead to an acceptance of:

  • Milton Friedman's Permanent Income Hypothesis (PIH)
  • Robert Barro's so-called theory of Ricardian Equivalence

My claim is that Friedman and Barro were each responding, in their own way, to the (policy implications suggested by) Keynes' General Theory. So first, I outline, very superficially, some ideas related to the General Theory. I then briefly describe how Friedman and Barro each tried to downplay these ideas, before finally concluding.

I have a number of inspirations for this post, including Robert Waldmann's assertion that the denial of the PIH is consistent with the data; Brad DeLong noting Simon Wren-Lewis and Chris Dillow commenting on the incompetence of, say, Robert Lucas; and Josh Mason pointing out the nonsense that is taught in graduate macroeconomics about the government budget constraint and interest rates.

2.0 Governments Can End Depressions

The figure above illustrates some basic elements of Keynes' theory. This specification of a discrete-time, dynamic system includes an accounting identity for a closed economy, namely, that national income in any time period is the sum of consumption spending, investment, and government spending. And it includes a behavioral relation, namely, a dynamic formulation of a consumption function. In this system, consumption is the sum of autonomous consumption and a term proportional to national income in the previous period. One should assume that the parameter b lies between zero and one.

A policy consequence follows: government can lift the economy out of a depression by spending more. Government spending increases national income immediately. Through the consumption function, it has a positive feedback on next period's income, as well.
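A minimal sketch of this kind of dynamic system, with made-up parameter values, illustrates the mechanism: a permanent increase in government spending raises national income immediately and feeds back through consumption in later periods.

    def simulate(periods, a, b, investment, government):
        """Iterate Y(t) = C(t) + I + G, with C(t) = a + b * Y(t - 1)."""
        income = [a + investment + government]          # a rough starting value
        for _ in range(periods - 1):
            consumption = a + b * income[-1]
            income.append(consumption + investment + government)
        return income

    # Made-up parameter values, for illustration only.
    baseline = simulate(20, a=10.0, b=0.6, investment=20.0, government=20.0)
    stimulus = simulate(20, a=10.0, b=0.6, investment=20.0, government=25.0)
    print(baseline[-1], stimulus[-1])    # higher G yields a permanently higher income

In the long run each series converges to (a + I + G)/(1 - b), so the gap between the two runs reflects the usual multiplier, 1/(1 - b).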

3.0 The Permanent Income Hypothesis

Suppose you are hostile to this policy conclusion and, like the current Republican party in the USA, dislike your fellow countrymen. How might you suggest a theoretical revision to the system structure to mitigate the influence of current government spending? One possibility is to suggest more terms enter the consumption function. With the proper manipulation, current government spending will have a smaller impact, since current income will have a smaller impact on consumption.

So, suppose the consumption function does not contain a term multiplying b by income lagged one period. Instead, assume b multiplies an unobserved and (directly) unobservable state variable which, in turn, is an aggregate of income over multiple lagged periods (Yt-1, Yt-2, ..., Yt-n). Call this state variable "permanent income", and assume the aggregation is a matter of forming expectations about this variable based on a number of past values of income.

This accomplishes the goal. Current government spending can directly affect current income. But to have the same size impact as before on future income, it would have to be maintained through many lags. The policy impact of increased government spending is attenuated in this model, as compared to the dynamic system illustrated in the figure.
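The attenuation can be seen in a small variation on the earlier sketch, where consumption responds to a moving average of the last n incomes rather than to last period's income alone. Again, all numbers are made up, and the moving average is only a crude stand-in for expectations of permanent income.

    def simulate_pih(periods, a, b, investment, government_path, n_lags):
        """Consumption responds to 'permanent income', proxied by an average of the last n incomes."""
        steady_state = (a + investment + government_path[0]) / (1.0 - b)
        income = [steady_state] * n_lags               # start the lag window at the steady state
        for t in range(periods):
            permanent = sum(income[-n_lags:]) / n_lags
            consumption = a + b * permanent
            g = government_path[t] if t < len(government_path) else government_path[-1]
            income.append(consumption + investment + g)
        return income[n_lags:]                         # drop the initial lag window

    # A one-period spike in government spending at t = 1; all numbers are made up.
    g_path = [20.0, 25.0] + [20.0] * 18
    short_memory = simulate_pih(20, 10.0, 0.6, 20.0, g_path, n_lags=1)
    long_memory = simulate_pih(20, 10.0, 0.6, 20.0, g_path, n_lags=8)
    # A few periods after the spike, income deviates less from its steady state under the longer memory.
    print(short_memory[3], long_memory[3])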

4.0 "Ricardian" Equivalence

One can go further with unobserved state variables. Suppose that households base their consumption less on recent income and, once again, more on expected values of future income. And suppose that consumers operate under the mainstream economist's mistaken theory of a government budget constraint. So consumers expect increased income today, if it results from increased government spending, to be accompanied by some combination of decreased government spending and increased taxes in the future. So the same current upward shock to the system causes an expectation of a future downward shock.

This is the theory of Ricardian equivalence. And, like the PIH, it suggests that Keynesian effects are not as dependable as otherwise would be the case.

5.0 Conclusion

The above story portrays economics as driven by results favorable to the biases and perceived self-interests of the extremely affluent. One would hope that academic economics is not entirely like this.

Tuesday, September 17, 2013

Seth Ackerman And Mike Beggs On The Intellectual And Moral Bankruptcy Of Mainstream Economics

Seth Ackerman and Mike Beggs have an interesting article in Jacobin. They note:

  • Marginalist economics arose as a reaction against the analysis of class conflict in classical economics.
  • Greg Mankiw's defense of the 1% is deeply muddled.
  • The Arrow-Debreu model's use as a benchmark reflects a "broken" "belief system".
  • Marx and Keynes developed different, but perhaps compatible views of political economy.
  • Post Keynesians developed anti-neoclassical elements in Keynes' theory and revived the surplus-based approach to political economy.

I'll quote the last two paragraphs here:

"But in the long run, radicals need something more from their economics. Class conflict is at the heart of the capitalist economy and the capitalist state, yet neoclassical economics will not acknowledge the fact. How, then, should we think about economics as a discipline and the question of inequality as its subject? At an individual level, there are truly great economists working in the mainstream — some harboring deeply humane instincts, and some even with good politics. As a body of knowledge, economics yields a flood of invaluable empirical data and a trove of sophisticated tools for thinking through discrete analytical questions.

But as a vision of capitalist society, mainstream economics is simply hollow at its core — and the hollow place has been filled up with a distorted bourgeois ideology that does nothing but impede our understanding of the social world." -- Seth Ackerman & Mike Beggs

Monday, September 16, 2013

International Trade References

If I amend my paper, I might want to say something about:

  • Bajona, Claustre and Timothy J. Kehoe (2006). Demographics in Dynamic Heckscher-Ohlin Models: Overlapping Generations Versus Infinitely Lived Consumers, Staff Report 377, Federal Reserve Bank of Minneapolis.
    • My model does not consider transition from autarky to a stationary with-trade equilibrium; this paper does in a model with a somewhat different structure.
    • This paper models dynamics by analyzing intertemporal equilibrium paths. The worth of such analysis is questionable, since in any disequilibrium approach to such paths the initial quantities of capital goods, taken as data in the model, vary; any time needed to approach equilibrium is too long.
    • It is clear on the distinction between international trade in financial capital ("bonds" in the model) and international trade in capital goods (intermediate goods in the model).
    • The model could be improved by considering the ambiguity of a given endowment of capital in models with multiple capital goods. Does the given quantity of capital consist of a vector of intermediate goods, as in intertemporal equilibrium, or a numeraire quantity, as in the traditional and textbook HOS model?
    • The model could also be improved by recognizing the impossibility, in general, of classifying commodities as "labor-intensive" and "capital-intensive". Is this issue orthogonal to the issue of factor-intensity reversals in models of international trade?
  • Bhagwati, J. (1971). The Generalized Theory of Distortions and Welfare, in The Generalized Theory of Distortions and Welfare. Considers how the theory of comparative advantage does not justify laissez faire in international trade, given price distortions. Bhagwati was clearly ignorant of the fact that a positive interest rate and the existence of capital goods destroys the case for laissez faire. So he makes arguably incorrect statements: "...for a perfectly competitive system with no monopoly power in trade, ...the economic system will operate with technical efficiency (i.e., on the 'best' production possibility curve..."
  • Brewer, Anthony (1985). Trade with Fixed Real Wages and Mobile Capital, Journal of International Economics, V. 18, Iss. 1-2: pp. 177-186. Contains some neat (counter) examples, either with labor being used to directly produce consumer goods or with a circular structure of production. In other words, the structure of my example is distinct.
  • Dixit, A. and V. Norman (1980). Theory of International Trade, Cambridge Economic Handbooks. This supposedly demonstrates that, without a fixed interest rate distortion, free trade dominates autarky. Do I care about theorems that have no practical application in economics? On the other hand, if they explicitly mention capital and interest rates, I suppose I should reference this.
  • Lipsey, R. G. and Kelvin Lancaster (1956-1957). The General Theory of Second Best, Review of Economic Studies, V. 24, No. 1: pp. 11-32. They use international trade and tariffs as one of their examples. I do not know that they use the phrase "price distortion", or point out that the mere existence of capital goods with a positive interest rate constitutes a price distortion.
  • Parrinello, Sergio (2000). The "Institutional Factor" in the Theory of International Trade: New vs. Old Trade Theories. Comments on Krugman's new trade theory.
  • Prasch, Robert E. (1996). Reassessing the Theory of Comparative Advantage, Review of Political Economy, V. 8, Iss. 1. Is this article fairly summarized as being a criticism of the realism of assumptions?

Do I want to say something about surveys of economists showing their overwhelming support for "free trade"? Do I want to say something about how comparative advantage is an empirical failure in providing a straightforward explanation of patterns of trade? That is, do I want to say something about the Leontief paradox? Do I need to reference additional textbooks on trade, e.g., Krugman, Obstfeld, and Melitz?

Do I need to even close my numeric example with utility maximization? I have found some economists struggle with the very concept of an open model. Certainly closing the model in a neoclassical manner makes my case quite strongly. I want to ensure that I am claiming merely to produce a simple numeric example, to simplify accepted ideas in the research literature that contradict textbook teaching.

Wednesday, September 11, 2013

Wynne Godley On Front Business Page Of New York Times

The New York Times, even outside of their editorial pages, seems to think their readership should know about the non-mainstream economists I generally like:

I predict that this profile of Godley will get a more positive response from Post Keynesians and advocates of endogenous money than their profile of Warren Mosler did. One caveat: I think Godley was more interested in using his stock-flow consistent modeling to identify unsustainable trends than in quantitatively predicting the course of, say, Gross Domestic Product (GDP) over the next n quarters. (He also accepted the conclusions of the Cambridge Capital Controversy.)

Update: I should have noticed that Jonathan Schlefer is the author of the article on Godley. L. Randall Wray comments.

Tuesday, September 10, 2013

Utility Maximization For A Numeric Example Of International Trade

Overlapping Generations
1.0 Introduction

The theory of comparative advantage does not justify free trade in consumer goods. The mainstream textbook presentation is just logically mistaken. I have proven these claims in a paper building on work starting from a third of a century ago. My paper provides a numeric example. I have previously presented the production side of another numeric example here. My paper concludes with a utility-maximizing closure, but I have decided that this part of my paper could be improved. Accordingly, this post provides a simple overlapping generations example to combine with the numeric example in that previous blog post.

2.0 Overlapping Generations

Accordingly, consider an Overlapping Generations (OLG) model in which each agent lives for two years. Each agent works in the first year of their life and is retired in the second year. They are paid their wages at the end of the year in which they work. They can choose to save some of their wages for consumption at the end of the second year of their life.

Suppose each agent has the following utility function:

U(x20, x21, x40, x41) = (x20 x40)^γ (x21 x41)^(1/2 - γ), 0 < γ < 1/2

where:

  • x20 is the quantity of wine consumed at the end of the first year of the agent's life.
  • x21 is the quantity of wine consumed at the end of the second year of the agent's life.
  • x40 is the quantity of silk consumed at the end of the first year of the agent's life.
  • x41 is the quantity of silk consumed at the end of the second year of the agent's life.

In the numeric example, 4,158 agents are born each year in country A, and 3,969 agents are born each year in country B. Since wine and silk enter the utility function symmetrically, equal amounts of wine and silk are consumed each year in each country in a stationary state, given an (international) price of silk of unity. Although all agents are assumed identical in a given country, agents may vary across countries. In particular, difference in the parameter γ in the utility function between countries can rationalize the difference in income distribution between the two countries in the example.

It remains to outline in more detail a demonstration of these claims. The agent is faced with the following mathematical programming problem:

Given p, w, and r
Choose x20, x21, x40, x41
To maximize U(x20, x21, x40, x41)
Subject to:
(x20 + px40)(1 + r) + (x21 + px41) = w(1 + r)
x20 ≥ 0, x21 ≥ 0, x40 ≥ 0, x41 ≥ 0

Three independent marginal conditions arise in solving this optimization problem:

(∂U/∂x20)/(∂U/∂x21) = 1 + r
(∂U/∂x20)/(∂U/∂x40) = 1/p
(∂U/∂x21)/(∂U/∂x41) = 1/p

These three marginal conditions, along with the budget constraint, constitute a system of four equations in four variables. Its solution is:

x20 = γ w
x21 = (1 - 2 γ) w (1 + r)/2
x40 = γ w/p
x41 = (1 - 2 γ) (w/p) (1 + r)/2
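As a check on this algebra, the following sketch maximizes the utility function numerically, subject to the budget constraint, and compares the result with the closed-form solution. The parameter values are arbitrary illustrative numbers, not those from the paper's example.

    import numpy as np
    from scipy.optimize import minimize

    # Arbitrary illustrative parameter values.
    gamma, w, r, p = 0.3, 1.0, 0.05, 1.2

    def negative_utility(x):
        x20, x21, x40, x41 = x
        return -((x20 * x40) ** gamma * (x21 * x41) ** (0.5 - gamma))

    # Budget constraint: (x20 + p*x40)*(1 + r) + (x21 + p*x41) = w*(1 + r).
    budget = {"type": "eq",
              "fun": lambda x: w * (1 + r) - ((x[0] + p * x[2]) * (1 + r) + (x[1] + p * x[3]))}

    numeric = minimize(negative_utility, x0=[0.2, 0.2, 0.2, 0.2], method="SLSQP",
                       constraints=[budget], bounds=[(1e-9, None)] * 4)

    closed_form = [gamma * w,
                   (1 - 2 * gamma) * w * (1 + r) / 2,
                   gamma * w / p,
                   (1 - 2 * gamma) * (w / p) * (1 + r) / 2]
    print(numeric.x)      # should be close to the closed-form values
    print(closed_form)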

The total demand for, say, wine to consume at the end of each year is summed over workers and retirees in that year:

X2 = lTotal(x20 + x21)
where:
  • X2 is the quantity of wine demanded in a given country each year.
  • lTotal is the annual endowment of labor in the given country.

A similar equation arises for the demand, X4, for silk:

X4 = lTotal(x40 + x41)

One can use the above equations to close the with-trade case in my numeric example, at least in cases where the interest rate is not too big. For cases where it is too big, I might want to consider models in which agents either work or retire for more than one year. At any rate, agents in this extension will live for more than two years, and more than two generations will be alive in any given year.

3.0 Autarky

Under autarky, my model of production is closed with this model of utility maximization; no degree of freedom remains. The condition that both wine and silk be produced determines the wage and the price of silk as functions of the interest rate.

The equality of savings and investment is an equilibrium condition. In the above model, savings, S, is:

S = lTotal(w - x20 - p x40)

Using the aforementioned price equations, one can express savings as:

S = lTotal(1 - 2 γ)/(l1R + l2),

where:

R = 1 + r

Investment, I, is a numeraire quantity of capital, found from an indirect demand from consumer goods:

I = (l1 X2 + l3 X4)w

Once again, using the price equations, one can express investment as a function of model parameters and the interest rate:

I = lTotal[2 γ + (1 - 2 γ)R](2l1l3R + d)/[2 (l1R + l2)^2 (l3R + l4)]

where:

d = l1 l4 + l2 l3

The equilibrium interest rate and, hence, the (domestic) price of silk and wage are found by equating savings and investment. I am hoping that this solution is sufficient to guarantee the quantities demanded of wine and silk lie on the Production Possibilities Frontier (PPF).
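A sketch of that last step, using the savings and investment expressions above. The labor coefficients (l1 through l4), labor endowment, and γ below are made up for illustration, since the example's actual coefficients are not reproduced in this post; with these particular numbers the root happens to imply a negative interest rate, which echoes the difficulty reported below for the autarky case.

    from scipy.optimize import brentq

    # Hypothetical coefficients, for illustration only.
    l1, l2, l3, l4 = 1.0, 0.05, 1.0, 1.0
    l_total, gamma = 1000.0, 0.1
    d = l1 * l4 + l2 * l3

    def savings(R):
        return l_total * (1.0 - 2.0 * gamma) / (l1 * R + l2)

    def investment(R):
        return (l_total * (2.0 * gamma + (1.0 - 2.0 * gamma) * R) * (2.0 * l1 * l3 * R + d)
                / (2.0 * (l1 * R + l2) ** 2 * (l3 * R + l4)))

    # Solve S(R) = I(R) for R = 1 + r, on a bracket assumed to contain the root.
    R_star = brentq(lambda R: savings(R) - investment(R), 0.05, 2.0)
    print("equilibrium interest rate:", R_star - 1.0)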

4.0 Numeric Values

In the numeric example, prices are specified. Wine is taken as the numeraire. The price of silk on the international market is unity. The wage is (1/200) units wine per person-year in country A and (1/194) units wine per person-year in country B. The interest rate is 20% in country A and 5% in country B. Let the parameter of the utility function be as follows in the two countries:

γA = 47/99
γB = 89/378

Then the quantities of wine and silk demanded for consumption are as in Table 1. But the entries in Table 1 are taken from my numeric example. So this utility-maximization model does, in fact, close the model of production and international trade used in the numeric example, at least in the with-trade case. When I worked out the autarky case, though, I ended up with a negative interest rate in the two countries.

Table 1: Selected Results for the Numeric Example (With-Trade Specialization)

  • Endowments: Country A: lTotal,A = 4,158 person-years; Country B: lTotal,B = 3,969 person-years
  • International Price of Silk: p = 1 Unit wine per Unit silk
  • Wine Consumption: Country A: 10 1/2 Units wine; Country B: 10 1/2 Units wine; Total: 21 Units wine
  • Silk Consumption: Country A: 10 1/2 Units silk; Country B: 10 1/2 Units silk; Total: 21 Units silk

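These per-country consumption figures can be checked directly from the demand equations above, using the wages, interest rates, labor endowments, and γ values just given; a minimal sketch using exact rational arithmetic:

    from fractions import Fraction

    def wine_demand(l_total, w, r, gamma):
        """X2 = lTotal * (x20 + x21), with x20 = gamma*w and x21 = (1 - 2*gamma)*w*(1 + r)/2."""
        x20 = gamma * w
        x21 = (1 - 2 * gamma) * w * (1 + r) / 2
        return l_total * (x20 + x21)

    # Country A: w = 1/200, r = 20%, gamma = 47/99.
    print(wine_demand(4158, Fraction(1, 200), Fraction(1, 5), Fraction(47, 99)))    # 21/2
    # Country B: w = 1/194, r = 5%, gamma = 89/378.
    print(wine_demand(3969, Fraction(1, 194), Fraction(1, 20), Fraction(89, 378)))  # 21/2

Since p = 1 and silk enters the utility function symmetrically, the silk quantities are the same.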
5.0 Conclusion

I have constructed a numeric example in which trade in consumer goods unambiguously leaves the Production Possibilities Frontier (PPF) rotated inward, as compared with autarky, for country A. And I have rationalized, in a way consistent with neoclassical theory, why a positive interest rate exists and varies between countries in the with-trade equilibrium. But I have not found an example in which the corresponding autarkic equilibrium is consistent with positive interest rates in the two countries in the example.

Appendix: Definition of Parameters and Variables
  • γ: A parameter of the agent's utility function.
  • γA: A parameter of the agent's utility function for country A.
  • γB: A parameter of the agent's utility function for country B.
  • I: National investment.
  • R: 1 + r.
  • S: National savings.
  • U(x20, x21, x40, x41): The agent's utility function.
  • X2: The quantity of wine demanded yearly in a given country, summed across all agents.
  • X4: The quantity of silk demanded yearly in a given country, summed across all agents.
  • d: A parameter relating to the relative labor intensity of wine and silk production.
  • lTotal: The total endowment of labor in a given country; that is, the number of agents born each year.
  • x20: The quantity of wine the agent consumes at the end of the first year of his life.
  • x21: The quantity of wine the agent consumes at the end of the second year of his life.
  • x40: The quantity of silk the agent consumes at the end of the first year of his life.
  • x41: The quantity of silk the agent consumes at the end of the second year of his life.
  • p: The price of silk (in units of wine per unit silk).
  • r: The interest rate.
  • w: The wage (in units of wine per person-year).

Friday, September 06, 2013

What To Think Of This Alex Rosenberg Piece?

Alex Rosenberg writes on Free Markets and the Myth of Earned Inequalities.

What should we think of this essay? Philosophers of science, as I understand it, tend, these days, to take the consensus viewpoint in the sciences they examine as a given. They do not, in their professional role, advocate some overarching, context-free, scientific rationality and attempt to dictate to each specific science. Rather, they are engaged in trying to understand how scholars reason in specific disciplines. Furthermore, if one wants to be effective in practical policy advice, one might want, as a rhetorical strategy, to show how your policy conclusions follow from consensus views, no matter how mucked up that consensus may be. So I can understand how, sometimes, Rosenberg might be inclined to take neoclassical economics as given.

Furthermore, I accept some of the points of this essay. I can see how one might describe increased inequality in income distribution as part of a process of cumulative causation. Winners in competitive markets will tend to use their gains as a source of political power. And with that power, they will try to rewrite the rules of the game to gain even more. So competitive markets will lead, endogenously, to non-competitive markets. Is Rosenberg influenced by Dean Baker or Chris Hayes here?

In what sense do people born with better endowments deserve more because they earn more with those endowments? I think Rosenberg is correct to raise this question. (In agreement with Adam Smith, I question whether inborn talents have much to do with the distribution of income.)

I also agree with the general conclusion that government is violating no ethical norm when it institutes redistributive taxation. I would argue for the current need for such policy in the United States on the basis of the lack of sustainability of trends for the last third of a century.

But I think the following observations undermine much of the economics that Rosenberg draws on in his essay:

  • Arrow and Debreu's proof of the Pareto efficiency of a static "competitive" General Equilibrium does not have much to do with the magnificent dynamics that Adam Smith and the classical economists were arguing about.
  • Price-taking in General Equilibrium Theory is a model of central planning (by the so-called auctioneer), not of competition. In actually existing capitalist economies, prices are formed in a range of institutions. Even when price-taking occurs, that occurrence depends on the existence of certain algorithms for matching bids and offers, say, on the Chicago Mercantile Exchange.
  • Marginal productivity, correctly understood, is not a theory of distribution; it is a theory of the choice of technique. Thus, marginal productivity cannot be correctly cited in an argument that, under competitive capitalism, agents earn what they get. Does Rosenberg know about reswitching examples, in which the same relative quantity flows in production are compatible with vastly different (functional) distributions of income?
  • Besides, as Joan Robinson asked, in what sense is the ownership of capital productive?

Wednesday, September 04, 2013

Ronald Coase, 1910 - 2013

Elsewhere, on Ronald Coase:

  • An obituary in the New York Times.
  • John Cassidy offers an appreciation.
  • Mike Konczal explains that Coase's work unintentionally undermines propertarianism (sometimes called "libertarianism").
  • Discussion of Coase at Crooked Timber.
  • An older piece, from Deirdre McCloskey, arguing that the "Coase theorem" is misleadingly named.

Past posts from me:

  • The Coase Theorem does not describe market transactions.
  • Elodie Bertrand shows Coase was mistaken about lighthouses.
  • Michael Albert argues that building a law and economics approach on the Coase theorem encourages bullying and nasty behavior.

Related past posts from me:

  • Transactions costs make a nonsense out of the textbook theory of the firm under perfect competition.
  • American institutionalists had combined law and economics before Coase's work was picked up.