Friday, June 30, 2017

Bifurcations And Switchpoints

I have organized a series of my posts together into a working paper, titled Bifurcations and Switch Points. Here is the abstract:

This article analyzes structural instabilities, in a model of prices of production, associated with variations in coefficients of production, in industrial organization, and in the steady-state rate of growth. Numerical examples are provided, with illustrations, demonstrating that technological improvements or the creation of differential rates of profits can create a reswitching example. Variations in the rate of growth can change a "perverse" switch point into a normal one or vice versa. These results seem to have implications for the stability of short-period dynamics and suggest an approach to sensitivity analysis for certain empirical results regarding the presence of Sraffa effects.

Here are links to previous expositions of parts of this analysis:

In comments, Sturai suggests additional research with the model of oligopoly. One could take the standard commodity to be such that it bears no markup. What I am calling the scale factor for the rate of profits would be the rate of profits made in the production of the standard commodity. Markups for individual industries would be based on this. I have identified a problem, much like the transformation problem, in comparing and contrasting free competition and oligopoly. I would have to think about this.

Saturday, June 24, 2017

Bifurcation Analysis in a Model of Oligopoly

Figure 1: Bifurcation Diagram

I have presented a model of prices of production in which the rate of profits differs among industries. Such persistent differential rates of profits may be maintained because of perceptions by investors of different levels of risk among industries. Or they may reflect the ability of firms to maintain barriers to entry in different industries. In the latter case, the model is one of oligopoly.

This post is based on a specific numeric example for technology, namely, this one, in which labor and two commodities are used in the production of the same commodities. I am not going to set out the model again here. But I want to be able to refer to some notation. Managers know of two processes for producing iron and one process for producing corn. Each process is specified by three coefficients of production. Hence, nine parameters specify the technology, and there is a choice between two techniques. In the model:

  • The rate of profits in the iron industry is r s1.
  • The rate of profits in the corn industry is r s2.

I call r the scale factor for the rates of profits. s1 is the markup for the rate of profits in the iron industry. And s2 is the markup for the rate of profits in the corn industry. So, with the two markups for the rates of profits, 11 parameters specify the model.

I suppose one could look at work by Edith Penrose, Michal Kalecki, Josef Steindl, Paolo Sylos Labini, Alfred Eichner, or Robin Marris for a more concrete understanding of markups.

Anyways, a wage curve is associated with each technique. And that wage curve specifies the wage, in the system of equations for prices of production, given an exogenous value of the scale factor for the rates of profits. Alternatively, the scale factor can be found, given the wage. Points in common (intersections) on the wage curves for the two techniques are switch points.

Depending on parameter values for the markups on the rates of profits, the example can have no, one, or two switch points. In the last case, the model is one of the reswitching of techniques.
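For concreteness, here is a minimal Octave sketch of how the wage curves and switch points can be computed for given markups. The coefficients of production below are placeholders for illustration, not the values in the linked example, and I take a bushel of corn as numeraire; only the structure of the calculation is meant to match the model sketched above.

```octave
% Hypothetical coefficients of production (NOT the values from the linked example).
% Commodity 1 is iron, commodity 2 is corn; A(i, j) is the input of commodity i
% per unit output of commodity j. Each technique pairs one iron process with
% the single corn process.
A_alpha = [1/3, 1/5; 1/30, 2/5];   % technique Alpha (first iron process)
A_beta  = [1/4, 1/5; 1/20, 2/5];   % technique Beta (second iron process)
l_alpha = [1, 1/2];                % labor coefficients for Alpha
l_beta  = [3/2, 1/2];              % labor coefficients for Beta
s = [0.8, 1.2];                    % markups s1 (iron) and s2 (corn)

% Prices of production with markups, taking a bushel of corn as numeraire (p2 = 1):
%   p1 = (1 + r*s1)*(p1*A(1,1) + p2*A(2,1)) + w*l1
%   p2 = (1 + r*s2)*(p1*A(1,2) + p2*A(2,2)) + w*l2
% For a given scale factor r, this is linear in p1 and w; the wage is the
% second component of the solution.
wage = @(r, A, l) [0, 1] * ...
  ([1 - (1 + r*s(1))*A(1,1), -l(1); -(1 + r*s(2))*A(1,2), -l(2)] \ ...
   [(1 + r*s(1))*A(2,1); (1 + r*s(2))*A(2,2) - 1]);

% Switch points are where the two wage curves cross: look for sign changes of
% their difference over a grid of scale factors. (A fuller treatment would
% restrict attention to scale factors at which both wages are positive.)
rr = linspace(0, 1, 1001);
d = arrayfun(@(r) wage(r, A_alpha, l_alpha) - wage(r, A_beta, l_beta), rr);
switches = rr(d(1:end-1) .* d(2:end) < 0);
printf("Approximate switch points (scale factor r): %s\n", mat2str(switches, 4));
```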

A bifurcation diagram partitions the parameter space into regions where the model solutions, throughout a region, are topologically equivalent, in some sense. Theoretically, a bifurcation diagram for the example should be drawn in an eleven-dimensional space. I, however, take the technology as given and only vary the markups. Figure 1 is the resulting bifurcation diagram.

The model exhibits a certain invariance, manifested in the bifurcation diagram by the straight lines through the origin. Suppose each markup for the rates of profits were, say, doubled. Then, if the scale factor for the rates of profits were halved, the rates of profits in each industry would be unchanged. The wage and prices of production would also be unchanged.

So only the ratio between the markups matters for the model solution. In some sense, the two parameters for the markups can be reduced to one, the ratio between the rates of profits in the two industries. And this ratio is constant for each straight line in the bifurcation diagram. The reciprocals of the slopes of the lines labeled 2 and 4 in Figure 1 are approximately 0.392 and 0.938, respectively. These values are marked along the abscissa in the figure at the top of this post.

In the bifurcation diagram in Figure 1, I have numbered the regions and the loci constituting the boundaries between them. In a bifurcation diagram, one would like to know what a typical solution looks like in each region and how bifurcations occur. The point in this example is to understand changes in the relationships between the wage curves for the two techniques. And the wage curves for the techniques for the numbered regions and lines in Figure 1 look like (are topologically equivalent to) the corresponding numbered graphs in Figure 2 in this post.

The model of oligopoly being analyzed here is open, insofar as the determinants of the functional distribution of income, of stable relative rates of profits among industries, and of the long run rate of growth have not been specified. Only comparisons of long run positions are referred to in talking about variations, in the solution to a model of prices of production, with variations in model parameters. That is, no claims are being made about transitions to long period equilibria. Nevertheless, the implications of the results in this paper for short period models, whether ones of classical gravitational processes, cross dual dynamics, intertemporal equilibria, or temporary equilibria, are well worth thinking about.

Mainstream economists frequently produce more complicated models, with conjectural variations, or game theory, or whatever, of firms operating in non-competitive markets. And they seem to think that models of competitive markets are more intuitive, with simple supply and demand behavior and certain desirable properties. I think the Cambridge Capital Controversy raised fatal objections to this view long ago. Reswitching and capital reversing show that equilibrium prices are not scarcity indices, and the logic of comparisons of equilibrium positions, under competitive conditions, does not conform to the principle of substitution. In the model of prices of production discussed here, there is a certain continuity between imperfections in competition and the case of free competition. The kind of dichotomy that I understand to exist in mainstream microeconomics just doesn't exist here.

Tuesday, June 20, 2017

Continued Bifurcation Analysis of a Reswitching Example

Figure 1: Bifurcation Diagram

This post is a continuation of the analysis in this reswitching example. That post presents an example of reswitching in a model of the production of commodities by means of commodities. The example is one of an economy in which two commodities, iron and corn, are produced. Managers of firms know of two processes for producing iron and one process for producing corn. The definition of technology results in a choice between two techniques of production.

The two-commodity model analyzed here is specified by nine parameters. Theoretically, a bifurcation diagram should be drawn in nine dimensions. But, being limited by the dimensions of the screen, I select two parameters. I take the inputs per unit output in the two processes for producing iron as given constants. I also take as given the amount of (seed) corn needed to produce a unit output of corn, in the one process known for producing corn. So the dimensions of my bifurcation diagram are the amount of labor required to produce a bushel of corn and the amount of iron input required to produce a bushel of corn. Both of these parameters must be non-negative.

I am interested in wage curves and, in particular, how many intersections they have. Figure 1, above, partitions the parameter space on this basis. I had to think for some time about what this diagram implies for wage curves. In generating the points to interpolate, my Matlab/Octave code generated many graphs analogous to those in the linked post. I also generated Figure 2, which illustrates configurations of wage curves and switch points, for the numbered regions and loci in Figure 1. So I had some visualization help, from my code, in thinking about these implications. Anyways, I hope you can see that, from perturbations of one example, one can generate an infinite number of reswitching examples.

Figure 2: Some Wage Curves
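For those who want to see the mechanics, the following Octave fragment sketches how such a diagram can be generated: sweep a grid over the two free parameters and count switch points at each grid point. The iron-process coefficients and the seed-corn coefficient below are placeholders, not the values in the linked example; only the structure of the calculation matters here.

```octave
% Placeholder coefficients (NOT those of the linked reswitching example).
% Commodity 1 is iron, commodity 2 is corn; the seed-corn coefficient and both
% iron processes are held fixed, while the labor and iron inputs of the corn
% process are swept over a grid.
a_alpha = [1/3; 1/30];  l1_alpha = 1;     % first iron process (iron input, corn input, labor)
a_beta  = [1/4; 1/20];  l1_beta  = 3/2;   % second iron process
a22 = 2/5;                                % seed corn per bushel corn (held fixed)

% Wage, with a bushel of corn as numeraire, for a technique built from an iron
% process (a1, l1) and a corn process using a12 iron and l2 labor per bushel.
wage = @(r, a1, l1, a12, l2) [0, 1] * ...
  ([1 - (1 + r)*a1(1), -l1; -(1 + r)*a12, -l2] \ ...
   [(1 + r)*a1(2); (1 + r)*a22 - 1]);

l2_grid  = linspace(0.1, 2.0, 40);    % labor per bushel corn
a12_grid = linspace(0.01, 0.5, 40);   % iron per bushel corn
rr = linspace(0, 2, 401);             % grid of rates of profits
counts = zeros(numel(l2_grid), numel(a12_grid));
for i = 1:numel(l2_grid)
  for j = 1:numel(a12_grid)
    d = arrayfun(@(r) wage(r, a_alpha, l1_alpha, a12_grid(j), l2_grid(i)) ...
                    - wage(r, a_beta,  l1_beta,  a12_grid(j), l2_grid(i)), rr);
    counts(i, j) = sum(d(1:end-1) .* d(2:end) < 0);   % number of switch points
  end
end
% Regions of the (labor, iron input) plane where counts is 0, 1, or 2 correspond
% to regions of the bifurcation diagram; a count of 2 indicates reswitching.
% (A fuller treatment would restrict each column to rates of profits at which
% both wages are positive.)
```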

One can think of prices of production as (not necessarily stable) fixed points of short period dynamic processes. Economists have developed a number of dynamic processes with such fixed points. But I leave my analysis open to a choice of whatever dynamic process you like. In some sense, I am applying bifurcation analysis to the solution(s) of a system of algebraic equations. The closest analogue I know of in the literature is Rosser (1983), which is, more or less, a chapter in his well-known book.

Update (22 Jun 2017): Added Figure 2, associated changes to Figure 1, and text.

References
  • J. Barkley Rosser (1983). Reswitching as a Cusp Catastrophe. Journal of Economic Theory V. 31: pp. 182-193.

Thursday, June 15, 2017

Perfect Competition With An Uncountable Infinity Of Firms

1.0 Introduction

Consider a partial equilibrium model in which:

  • Consumers demand to buy a certain quantity of a commodity, given its price.
  • Firms produce (supply) a certain quantity of that commodity, given its price.

This is a model of perfect competition, since the consumers and producers take the price as given. In this post, I try to present a model of the supply curve in which the managers of firms do not make systematic mistakes.

This post is almost purely exposition. The exposition is concrete, in the sense that it is specialized for the economic model. I expect that many will read this as still plenty abstract. (I wish I had a better understanding of mathematical notation in HTML.) Maybe I will update this post with illustrations of approximations to integrals.

2.0 Firms Indexed on the Unit Interval

Suppose each firm is named (indexed) by a real number on the (open) unit interval. That is, the set of firms, X, producing the given commodity is:

X = (0, 1) = {x | x is real and 0 < x < 1}

Each firm produces a certain quantity, q, of the given commodity. I let the function, f, specify the quantity of the commodity that each firm produces. Formally, f is a function that maps the unit interval to the set of non-negative real numbers. So q is the quantity produced by the firm x, where:

q = f(x)
2.1 The Number of Firms

How many firms are there? An infinite number of decimal numbers exist between zero and unity. So, obviously, an infinite number of firms exist in this model.

But this is not sufficient to specify the number of firms. Mathematicians have defined an infinite number of different sizes of infinity. The smallest infinity is called countable infinity. The set of natural numbers, {0, 1, 2, ...}; the set of integers, {..., -2, -1, 0, 1, 2, ...}; and the set of rational numbers can all be put into one-to-one correspondence with one another. Each of these sets contains a countable infinity of elements.

But the number of firms in the above model is more than that. The firms can be put into a one-to-one correspondence with the set of real numbers. So there exists, in the model, an uncountable infinity of firms.

2.2 To Know

Cantor's diagonalization argument, power sets, cardinal numbers.

3.0 The Quantity Supplied

Consider a set of firms, E, producing the specified commodity, not necessarily all of the firms. Given the amount produced by each firm, one would like to be able to say what is the total quantity supplied by these firms. So I introduce a notation to designate this quantity. Suppose m(E, f) is the quantity supplied by the firms in E, given that each firm in (0, 1) produces the quantity defined by the function f.

So, given the quantity supplied by each firm (as specified by the function f) and a set of firms E, the aggregate quantity supplied by those firms is given by the function m. And, if that set of firms is all firms, as indexed by the interval (0, 1), the function m yields the total quantity supplied on the market.

Below, I consider for which sets of firms m is defined, conditions that might be reasonable to impose on m, a condition that is necessary for perfect competition, and two realizations of m, only one of which is correct.

You might think that m should obviously be:

m(E, f) = ∫Ef(x) dx

and that the total quantity supplied by all firms is:

Q = m((0,1), f) = ∫(0, 1) f(x) dx

Whether or not this answer is correct depends on what you mean by an integral. Most introductory calculus classes, I gather, teach the Riemann integral. And, with that definition, the answer is wrong. But it takes quite a while to explain why.

3.1 A Sigma Algebra

One would like the function m to be defined for all subsets of (0, 1) and for all functions mapping the unit interval to the set of non-negative real numbers. Consider a "nice" function f, in some hand-waving sense. Let m be defined for a set of subsets of (0, 1) in which the following conditions are met:

  • The empty set is among the subsets of (0, 1) for which m is defined.
  • m is defined for the interval (0, 1).
  • Suppose m is defined for E, where E is a subset of (0, 1). Let Ec be those elements of (0, 1) which are not in E. Then m is defined for Ec.
  • Suppose m is defined for E1 and E2, both being subsets of (0, 1). Then m is defined for the union of E1 and E2.
  • Suppose m is defined for E1 and E2, both being subsets of (0, 1). Then m is defined for the intersection of E1 and E2.

One might extend the last two conditions to a countable infinity of subsets of (0, 1). As I understand it, any set of subsets of (0, 1) that satisfies these conditions (with that extension) is a σ-algebra. A mathematical question arises: can one define the function m for the set of all subsets of (0, 1)? At any rate, one would like to define m for a maximal set of subsets of (0, 1), in some sense. I think this idea has something to do with Borel sets.

3.2 A Measure

I now present some conditions on this function, m, that specifies the quantity supplied to the market by aggregating over sets of firms:

  • No output is produced by the empty set of firms: m(∅, f) = 0.
  • For any set of firms in the sigma algebra, market output is non-negative: m(E, f) ≥ 0.
  • For disjoint sets of firms in the sigma algebra, the market output of the union of the firms is the sum of the market outputs: if E1 ∩ E2 = ∅, then m(E1 ∪ E2, f) = m(E1, f) + m(E2, f).

The last condition can be extended to a countable set of disjoint sets in the sigma algebra. With this extension, the function m is a measure. In other words, given firms indexed by the unit interval and a function specifying the quantity supplied by each firm, a function mapping from (certain) sets of firms to the total quantity supplied to a market by a set of firms is a measure, in this mathematical model.

One can specify a couple of other conditions that seem reasonable to impose on this model of market supply. A set of firms indexed by an interval is a particularly simple set. And the aggregate quantity supplied to the market, when each of these firms produces the same amount, is specified by the following condition:

Let I = (a, b) be an interval in (0, 1). Suppose for all x in I:

f(x) = c

Then the quantity supplied to the market by the firms in this interval, m(I, f), is (b - a)c.

3.3 Perfect Competition

Consider the following condition:

Let G be a set of firms in the sigma algebra. Define the function fG(x) to be f(x) when x is not an element of G and to be 1 + f(x) when x is in G. Suppose G has either a finite number of elements or a countably infinite number of elements. Then:

m((0,1), f) = m((0,1), fG)

One case of this condition would be when G is a singleton. The above condition implies that when the single firm increases its output by a single unit, the total market supply is unchanged.

Another case would be when G is the set of firms indexed by the rational numbers in the interval (0, 1). If all these firms increased their individual supplies, the total market supply would still be unchanged.

Suppose the demand price for a commodity depends on the total quantity supplied to the market. Then the demand price would be unaffected by both one firm changing its output and up to a countably infinite number of firms changing their output. In other words, the above condition is a formalization of perfect competition in this model.

4.0 The Riemann Integral: An Incorrect Answer

I now try to describe why the usual introductory presentation of an integral cannot be used for this model of perfect competition.

Consider a special case of the model above. Suppose f(x) is zero for all x. And suppose that G is the set of rational numbers in (0, 1). So fG is unity for all rational numbers in (0, 1) and zero otherwise. How could one define ∫(0, 1)fG(x) dx from a definition of the integral?

Define a partition, P, of (0, 1) to be a set {x0, x1, x2, ..., xn}, where:

0 = x0 < x1 < x2 < ... < xn = 1

The rational numbers are dense in the reals. This implies that, for any partition, each subinterval, [xi - 1, xi] contains a rational number. Likewise, each subinterval contains an irrational real number.

Define, for i = 1, 2, ..., n the two following quantities:

ui = supremum over [xi - 1, xi] of fG(x)

li = infimum over [xi - 1, xi] of fG(x)

For the function fG defined above, ui is always one, for all partitions and all subintervals. For this function, li is always zero.

A partition can be pictured as defining the bases of successive rectangles along the X axis. Each ui specifies the height of a rectangle that just includes the function whose integral is being sought. For a smooth function (not our example), a nice picture could be drawn. The sum of the areas of these rectangles is an upper bound on the desired integral. Each partition yields a possibly different upper bound. The Riemann upper sum is the sum of the rectangles, for a given partition:

U(fG, P) = (x1 - x0) u1 + ... + (xn - xn - 1) un

For the example, with a function that takes on unity for rational numbers, the Riemann upper sum is one for all partitions. The Riemann lower sum is the sum of another set of rectangles.

L(fG, P) = (x1 - x0) l1 + ... + (xn - xn - 1) ln

For the example, the Riemann lower sum is zero, whatever partition is taken.

The Riemann integral is defined in terms of the least upper bound and greatest lower bound on the integral, where the upper and lower bounds are given by Riemann upper and lower sums:

Definition: Suppose the infimum, over all partitions of (0, 1), of the set of Riemann upper sums is equal to the supremum, also over all partitions, of the set of Riemann lower sums. Let Q designate this common value. Then Q is the value of the Riemann integral:

Q = ∫(0, 1)fG(x) dx

If the infimum of Riemann upper sums is not equal to (exceeds) the supremum of the Riemann lower sums, then the Riemann integral of fG is not defined.

In the case of the example, the Riemann integral is not defined. One cannot use the Riemann integral to calculate the market supply after a countable infinity of firms each increase their output by one unit.
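The arithmetic for an arbitrary partition can be written out explicitly (here in LaTeX notation), which makes the failure plain:

```latex
% For any partition P = {x_0, x_1, ..., x_n} of (0, 1), every subinterval
% contains both rational and irrational numbers, so u_i = 1 and l_i = 0. Hence
\[
U(f_G, P) = \sum_{i=1}^{n} (x_i - x_{i-1}) \cdot 1 = x_n - x_0 = 1,
\qquad
L(f_G, P) = \sum_{i=1}^{n} (x_i - x_{i-1}) \cdot 0 = 0.
\]
% The infimum of the upper sums is 1, the supremum of the lower sums is 0, and
% since these differ, the Riemann integral of f_G does not exist.
```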

5.0 Lebesgue Integration

The Riemann integral is based on partitioning the X axis. The Lebesgue integral, on the other hand, is based on partitioning the Y axis, in some sense. Suppose one has some measure of the size of the set in the domain of a function where the function takes on some designated value. Then the contribution to the integral for that designated value can be seen as the product of that value and that size. The integral of a function can then be defined as the sum, over all possible values of the function, of such products.

5.1 Lebesgue Outer Measure

Consider an interval, I = (a, b), in the real numbers. The (Lebesgue) measure of that set is simply the length of the interval:

m*(I) = b - a

Let E be a set of real numbers. Let {In} be an at most countably infinite set of open intervals such that

E is a subset of ∪ In

In other words, {In} is an open cover of E. The (Lebesgue) measure of E is defined to be:

m*(E) = inf [m*(I1) + m*(I2) + ...]

where the infimum is taken over all such at most countable sets of open intervals that cover E.

The Lebesgue measure of any set that is at most countably infinite is zero: the elements of such a set can be covered by open intervals whose lengths sum to less than any given positive number (for example, intervals of length ε/2, ε/4, ε/8, and so on). So the set of rational numbers has Lebesgue measure zero. So does a singleton.

A measurable set E can be used to decompose any other set A into those elements of A that are also in E and those elements that are not. And the measure of A is the sum of the measures of those two sets.

If a set is not measurable, there exists some set A for which that sum does not hold. Given the axiom of choice, non-measurable sets exist. As I understand it, the set of all measurable subsets of the real numbers is a sigma algebra.

5.2 Lebesgue Integral for Simple Functions

Let E be a measurable subset of the real numbers. Define the characteristic function, χE(x), for E, to be one, if x is an element of E, and zero, if x is not an element of E.

Suppose the function g takes on a finite number of values {a1, a2, ..., an}. Such a function is called a simple function. Let Ai be the set of real numbers where g(x) = ai. The function g can be represented as:

g(x) = a1 χA1(x) + ... + an χAn(x)

The integral of such a simple function is:

∫ g(x) dx = a1 m*(A1) + ... + an m*(An)

This definition can be extended to non-simple functions by another limiting process.
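As a worked instance of this definition (written in LaTeX notation), consider the function fG from Section 4, which is the characteristic function of the rationals in (0, 1). It is a simple function, and its integral follows directly:

```latex
% f_G (from Section 4) takes the value 1 on G, the rationals in (0, 1), and 0
% elsewhere, so it is a simple function:
\[
f_G(x) = 1 \cdot \chi_{G}(x) + 0 \cdot \chi_{(0, 1) \setminus G}(x)
\]
% Since G is countable, m*(G) = 0, while m*((0, 1) \setminus G) = 1. Hence
\[
\int f_G(x) \, dx = 1 \cdot m^*(G) + 0 \cdot m^*\bigl( (0, 1) \setminus G \bigr) = 0
\]
```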

5.3 Lebesgue Upper and Lower Sums and the Integral

The Lebesgue upper sum of a function f is:

UL(E, f) = inf over simple functions g ≥ f of ∫Eg(x) dx

One function is greater than or equal to another function if the value of the first function is greater than or equal to the value of the second function for all points in the common domain of the functions. The Lebesgue lower sum is:

LL(E, f) = sup over simple functions g ≤ f of ∫Eg(x) dx

Suppose the Lebesgue upper and lower sums are equal for a function. Denote that common quantity by Q. Then this is the value of the Lebesgue integral of the function.

Q = ∫Ef(x) dx

When the Riemann integral exists for a function, the Lebesgue integral takes on the same value. The Lebesgue integral exists for more functions, however. The statement of the fundamental theorem of calculus is more complicated for the Lebesgue integral than it is for the Riemann integral. Royden (1968) introduces the concept of a function of bounded variation in this context.

5.4 The Quantity Supplied to the Market

So the quantity supplied to the market by the firms indexed by the set E, when each firm produces the quantity specified by the function f is:

m(E, f) = ∫Ef(x) dx

where the integral is the Lebesgue integral. In the special case, where the firms indexed by the rational numbers in the interval (0, 1) each supply one more unit of the commodity, the total quantity supplied to the market is unchanged:

Q = ∫(0, 1)fG(x) dx = ∫(0, 1)f(x) dx

Here is a model of perfect competition, in which a countable infinity of firms can vary the quantity they produce and, yet, the total market supply is unchanged.

6.0 Conclusion

I am never sure about these sorts of expositions. I suspect that most of those who have the patience to read through this have already seen this sort of thing. I learn something, probably, by setting them out.

I leave many questions above. In particular, I have not specified any process in which the above model of perfect competition is a limit of models with n firms. The above model certainly does not result from taking the limit at infinity of the number of firms in the Cournot model of systematically mistaken firms. That limit contains a countably infinite number of firms, each producing an infinitesimal quantity - a different model entirely.

I gather that economists have gone on from this sort of model. I think there are some models in which firms are indexed by the hyperreals. I do not know what theoretical problem inspired such models and have never studied non-standard analysis.

Another set of questions I have ignored arises in the philosophy of mathematics. I do not know how intuitionists would treat the multiplication of entities required to make sense of the above. Do considerations of computability apply, and, if so, how?

Some may be inclined to say that the above model has no empirical applicability to any possible actually existing market. But the above mathematics is not specific to the economics model. It is very useful in understanding probability. For example, the probability density function for any continuous random variable is only defined up to a set of Lebesgue measure zero. And probability theory is very useful empirically.

Appendix: Supremum and Infimum

I talk about the supremum and the infimum of a set above. These are sort of like the maximum and minimum of the set.

Let S be a subset of the real numbers. The supremum of S, written as sup S, is the least upper bound of S, if an upper bound exists. The infimum of S is written as inf S. It is the greatest lower bound of S, if a lower bound exists.

References
  • Robert Aumann (1964). Markets with a continuum of traders. Econometrica, V. 32, No. 1-2: pp. 39-50.
  • H. L. Royden (1968). Real Analysis, second edition.

Sunday, June 11, 2017

Another Three-Commodity Example Of Price Wicksell Effects

Figure 1: Price Wicksell Effects in Example
1.0 Introduction

This post presents another example from my on-going simulation experiments. I am still focusing on simple models without the choice of technique. The example illustrates an economy in which price Wicksell effects are positive, for some ranges of the rate of profits, and negative for another range.

2.0 Technology

I used my implementation of the Monte-Carlo method to generate 20,000 viable, random economies in which three commodities are produced. For the 316 of these 20,000 economies in which price Wicksell effects are both negative and positive, the maximum vertical distance between the wage curve and an affine function is approximately 15% of the maximum wage. The example presented in this post is the one attaining that maximum.

The economy is specified by a process to produce each commodity and a commodity basket specifying the net output of the economy. Since the level of output is specified for each industry, no assumption is needed on returns to scale, I gather. But no harm will come from assuming Constant Returns to Scale (CRS). All capital is circulating capital; no fixed capital exists. All capital goods used as inputs in production are totally used up in producing the gross outputs. The capital goods must be replaced out of the harvest each year to allow production to continue on the same scale. The remaining commodities in the annual harvest constitute the given net national income. I assume the economy is in a stationary state. Workers advance their labor. They are paid wages out of the harvest at the end of the year. Net national income is taken as the numeraire.

Table 1 summarizes the technique in use in this example. The 3x3 matrix formed by the first three rows and columns is the Leontief input-output matrix. Each entry shows the physical quantity of the row commodity needed to produce one unit output of the column commodity. For example, 0.5955541 pigs are used each year to produce one bushel of corn. The last row shows labor coefficients, that is, the physical units of labor needed to produce one unit output of each commodity. The last column is net national income, in physical units of each commodity.

Table 1: The Technology for a Three-Industry Model
Input | Corn Industry | Pigs Industry | Ale Industry | Net Output
Corn  | 0.0905726 | 0.0021651 | 0.0022885 | 0.274545
Pigs  | 0.5955541 | 0.2231379 | 0.0054569 | 0.097880
Ale   | 0.1202180 | 0.6362278 | 0.0232452 | 0.804348
Labor | 0.26273   | 0.18555   | 0.31306   |

3.0 The Wage Curve

I now consider stationary prices such that the same rate of profits is made in each industry. The system of equations allows one to solve for the wage, as a function of a given rate of profits. The blue curve in Figure 2 is this wage curve. The maximum rate of profits, achieved when the wage is zero, is approximately 276.5%. The maximum wage, for a rate of profits of zero, is approximately 2.0278 numeraire units per labor unit. As a contrast to the wage curve, I also draw a straight line, in Figure 2, connecting these maxima.

Figure 2: Wage Curve in Example

I do not think it is easy to see in the figure, but the wage curve is not of one convexity. The convexity changes at a rate of profits of approximately 25.35%, and I plot the point at which the convexity changes.
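Here is a minimal Octave sketch of the calculation behind the wage curve, using the data in Table 1. It assumes, as stated above, that wages are paid out of the harvest at the end of the year and that net national income is the numeraire; the printed values should come out close to the approximate maxima quoted above.

```octave
% Technology from Table 1: A(i, j) is the input of commodity i per unit output of j.
A = [0.0905726, 0.0021651, 0.0022885;
     0.5955541, 0.2231379, 0.0054569;
     0.1202180, 0.6362278, 0.0232452];
a0 = [0.26273, 0.18555, 0.31306];        % labor coefficients (row vector)
d  = [0.274545; 0.097880; 0.804348];     % net output, also the numeraire

% Prices of production: p = (1 + r)*p*A + w*a0, with the numeraire condition p*d = 1.
% Eliminating p gives the wage curve  w(r) = 1 / (a0 * inv(I - (1 + r)*A) * d).
wage = @(r) 1 / (a0 * ((eye(3) - (1 + r)*A) \ d));

R = 1 / max(abs(eig(A))) - 1;   % maximum rate of profits, from the Perron-Frobenius root of A
printf("Maximum rate of profits: %.1f percent\n", 100*R);                 % approximately 276.5
printf("Maximum wage: %.4f numeraire units per labor unit\n", wage(0));   % approximately 2.0278
```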

4.0 The Numeraire Value of Capital Goods

Since I have specified the net national product, the gross national product can be found from the Leontief input-output matrix. The gross national product is the sum of the capital goods, in a stationary state, and the net national product. The employed labor force can be found from labor coefficients and gross national product.

Given the rate of profits, one can find prices, as well as the wage. And one can use these prices to calculate the numeraire value of capital goods. Figure 1, at the top of this post, graphs the ratio of the value of capital goods to the employed labor force, as a function of the rate of profits.
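And here is a sketch, in the same spirit, of the calculation behind Figure 1: the numeraire value of capital goods per worker at a given rate of profits. The data from Table 1 are repeated so that the fragment stands on its own; the 50 percent rate of profits is just an illustrative value.

```octave
% Data from Table 1, repeated so this fragment is self-contained.
A = [0.0905726, 0.0021651, 0.0022885;
     0.5955541, 0.2231379, 0.0054569;
     0.1202180, 0.6362278, 0.0232452];
a0 = [0.26273, 0.18555, 0.31306];
d  = [0.274545; 0.097880; 0.804348];

q = (eye(3) - A) \ d;     % gross outputs in the stationary state
L = a0 * q;               % employed labor force

r = 0.50;                 % an illustrative rate of profits (50 percent)
M = eye(3) - (1 + r)*A;
pm = a0 / M;              % a0 * inv(M)
w = 1 / (pm * d);         % wage from the numeraire condition p*d = 1
p = w * pm;               % prices of production (row vector)

k = (p * (A * q)) / L;    % numeraire value of capital goods per worker
printf("Capital per worker at r = %.0f percent: %.4f\n", 100*r, k);
```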

A traditional, incorrect neoclassical idea is that a lower rate of profits incentivizes firms to increase the ratio of capital to labor. And a higher wage also incentivizes firms to increase the ratio of capital to labor. The region, for a low rate of profits, in which price Wicksell effects are positive already poses a problem for this vague neoclassical idea.

5.0 Conclusion

This example makes me feel better about my simulation approach. From some previous results, I was worried that I would have to rethink how I generate random coefficients. But, maybe if I generate enough economies, even with all coefficients, etc. confined to the unit interval, I will be able to find examples that approach visually interesting counter-examples to neoclassical economics.

Thursday, June 08, 2017

Elsewhere

  • Ian Wright has had a blog for about six months.
  • Scott Carter announces that Sraffa's notes are now available online. (The announcement is good for linking to Carter's paper explaining the arrangement of the notes.)
  • David Glasner has been thinking about intertemporal equilibrium.
  • Brian Romanchuk questions the use of models of infinitesimal agents in economics. (Some at ejmr say he is totally wrong, but others cannot make any sense of such models, either. I am not sure if my use of a continuum of techniques here can be justified as a limit.)
  • Miles Kimball argues that there is no such thing as decreasing returns to scale.

Don't the last two bullets imply that the intermediate neoclassical microeconomic textbook treatment of perfect competition is balderdash, as Steve Keen says?

Tuesday, June 06, 2017

Price Wicksell Effects in Random Economies

Figure 1: Blowup of Distribution of Maximum Distance of Frontier from Straight Line
1.0 Introduction

This post is the third in a series. Here is the first, and here is the second.

In this post, I am concerned with the probability that price Wicksell effects for a given technique are negative, positive, or both (for different rates of profits). A price Wicksell effect shows the change in the value of capital goods, for different rates of profits, for a technique. If a (non-zero) price Wicksell effect exists, for some range(s) of the rate of profits in which the technique is cost-minimizing, the rate of profits is unequal to the marginal product of capital, in the most straightforward sense. (This is the general case.) Furthermore, a positive price Wicksell effect shows that firms, in a comparison of stationary states, will want to employ more capital per person-hour at a higher rate of profits. The rate of profits is not a scarcity index, for some commodity called "capital", limited in supply.

My analysis is successful, in that I am able to calculate probabilities for the specific model of random economies. And I see that an appreciable probability exists that price Wicksell effects are positive. However, I wanted to find a visually appealing example of a wage frontier that exhibits both negative and positive Wicksell effects. The curve I end up with is close enough to an affine function that I doubt you can readily see the change in curvature.

Bertram Schefold has an explanation of this, based on the theory of random matrices. If the Leontief input-output matrix is random, in his sense (which matches my approach), the standard commodity will tend to contain all commodities in the same proportion, that is, proportional to a vector containing unity for all elements. And I randomly generate a numeraire (and net output vector) that will tend to be the same. So my random economies tend to deviate only slightly from standard proportions. And this deviation is smaller, the larger the number of commodities produced. So this post is, in some sense, an empirical validation of Schefold's findings.

2.0 Simulating Random Economies

The analysis in this post is based, for economies producing a specified number of commodities, on samples of random economies of a specified size (Table 1).

Table 1: Number of Simulated Economies
Seed for Random Generator | Number of Commodities | Number of Economies
66,965  | 2 | 2,020
775,545 | 3 | 20,458
586,658 | 4 | 2,747,934

Each random economy is characterized by a Constant-Returns-to-Scale (CRS) technique, a numeraire basket, and net output. The technique is specified by:

  • A row vector of labor coefficients, where each element is the person-years of labor needed to produce a unit output of the corresponding commodity.
  • A square Leontief input-output matrix, where each element is the units of input of the row commodity needed as input to produce a unit of the column commodity.

The numeraire and net output are column vectors. Net output is set to be the numeraire. The elements of the vector of labor coefficients, the Leontief matrix, and the numeraire are each realizations of independent and identically distributed random variables, uniformly distributed on the unit interval (from zero to one). Non-viable economies are discarded. So, as shown in the table above, more economies are randomly generated than the specified sample size (1,000).
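The following Octave sketch shows one way such random economies can be generated and screened. I take viability to mean that the Perron-Frobenius root of the Leontief input-output matrix is less than one, so that a positive net output can be produced; this reading of viability, and the details below, are illustrative assumptions, not a transcription of the code used to produce the tables in this post.

```octave
% A sketch of the economy-generating loop (illustrative assumptions, not the code behind Table 1).
n = 3;                       % number of produced commodities
sample_size = 1000;          % number of viable economies wanted
rand("seed", 775545);        % seed, as in Table 1 (the generator itself may differ)

economies = {};
tries = 0;
while numel(economies) < sample_size
  tries = tries + 1;
  A  = rand(n, n);           % Leontief input-output matrix, uniform on (0, 1)
  a0 = rand(1, n);           % labor coefficients
  d  = rand(n, 1);           % numeraire and net output basket
  if max(abs(eig(A))) < 1    % viability: Perron-Frobenius root of A below one
    economies{end + 1} = struct("A", A, "a0", a0, "d", d);
  end
end
printf("Generated %d random economies to obtain %d viable ones\n", tries, sample_size);
```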

I am treating both viability and the net output differently from Stefano Zambelli's approach. He bases net output on a given numeraire value of net output. Many vectors can result in the same value of net output in a Sraffa model. He chooses the vector for which the value of capital goods is minimized. This approach fits with Zambelli's concentration on the aggregate production function.

3.0 Price Wicksell Effects

Table 2 shows my results. As I understand it, the probability that a wage curve for a random economy, in which more than one commodity is produced, will be a straight line is zero. And I find no cases of an affine function for the wage curve, in which the maximum wage (for a rate of profits of zero) and the maximum rate of profits (for a wage of zero) are connected by a straight line in the rate of profits-wage space.

Table 2: Price Wicksell Effects
Number of Industries | Number w/ Negative Price Wicksell Effects | Number w/ Positive Price Wicksell Effects | Number w/ Both Price Wicksell Effects
2 | 548 | 452 | 0
3 | 603 | 416 | 19
4 | 679 | 334 | 13

The wage curve in a two-commodity economy must be of a single curvature. So for a random economy in which two commodities are produced, price Wicksell effects are always negative or always positive, but never both. And that is what I find. I also find a small number of random economies, in which three or four commodities are produced, in which the wage curve has varying curvature through the economically-relevant range in the first quadrant.

4.0 Distribution of Displacement from Affine Frontier

I also measured how far, in some sense, these wage curves for random economies are from a straight line. I took as a baseline the affine function, described above, connecting the intercepts of the wage curve with the rate of profits and wage axes. And I measured the absolute vertical distance between the wage curve and this affine function. (My code actually measures this distance at 600 points.) I scale the maximum of this absolute distance by the maximum wage. Figure 1, above, graphs histograms of this scaled absolute vertical distance, expressed as a percentage. Tables 3 and 4 provide descriptive statistics for the empirical probability distribution.

Table 3: Parametric Statistics
Statistic     | Two Commodities | Three Commodities | Four Commodities
Sample Size   | 1,000  | 1,000  | 1,000
Mean          | 1.962  | 1.025  | 0.498
Std. Dev.     | 3.428  | 1.773  | 0.772
Skewness      | 5.111  | 4.837  | 3.150
Kurtosis      | 48.467 | 38.230 | 12.492
Coef. of Var. | 0.5724 | 0.578  | 0.645

Table 4: Nonparametric Statistics
Statistic    | Two Commodities | Three Commodities | Four Commodities
Minimum      | 0.00018 | 0.000251 | 0.0000404
1st Quartile | 0.179   | 0.114    | 0.0608
Median       | 0.653   | 0.402    | 0.203
3rd Quartile | 2.211   | 1.120    | 0.583
Maximum      | 50.613  | 23.374   | 5.910
IQR/Median   | 3.110   | 2.504    | 2.574

We see that the wage curves for these random economies tend not to deviate much from an affine function. And, as more commodities are produced, this deviation is less.

5.0 An Example

For three commodity economies, the maximum scaled displacement of the wage curve from a straight line I find is 23.4 percent. But, of those three-commodity economies with both negative and positive price Wicksell effects, the maximum displacement is only 0.736 percent. Table 5 provides the randomly generated parameters for this example.

Table 5: The Technology for a Three-Industry Model
Input | Corn Industry | Pigs Industry | Ale Industry | Net Output
Corn  | 0.5525152 | 0.0024860 | 0.2652761 | 0.26077
Pigs  | 0.5164675 | 0.7469286 | 0.1128406 | 0.42705
Ale   | 0.5636308 | 0.0368399 | 0.2110545 | 0.98691
Labor | 0.799364  | 0.028111  | 0.012866  |

Figure 2 shows the wage curve for the example. This curve is not a straight line, no matter how close to one it may appear to the eye. Figure 3 shows the distance between the wage curve and a straight line. Notice that the convexity towards the left of the curve in Figure 3 differs slightly from the convexity for the rest of the graph. This is a manifestation of price Wicksell effects in both directions. (I need to perform some more checks on my program.)

Figure 2: A Wage Frontier with Both Negative and Positive Price Wicksell Effects

Figure 3: Vertical Distance of Frontier from Straight Line
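For those who want to check the numbers, here is an Octave sketch of the distance calculation, applied to the technology in Table 5, under the same conventions as in my other recent posts (wages paid out of the harvest, net output as numeraire). The measurement details differ from my actual code, so the printed figure need not match the 0.736 percent reported above exactly.

```octave
% Technology from Table 5.
A = [0.5525152, 0.0024860, 0.2652761;
     0.5164675, 0.7469286, 0.1128406;
     0.5636308, 0.0368399, 0.2110545];
a0 = [0.799364, 0.028111, 0.012866];
d  = [0.26077; 0.42705; 0.98691];

wage = @(r) 1 / (a0 * ((eye(3) - (1 + r)*A) \ d));   % wage curve, net output as numeraire

R     = 1 / max(abs(eig(A))) - 1;      % maximum rate of profits
w_max = wage(0);                       % maximum wage

rr     = linspace(0, 0.999*R, 600);    % stop just short of R to avoid a singular matrix
wages  = arrayfun(wage, rr);
affine = w_max * (1 - rr / R);         % straight line connecting the two intercepts
deviation = 100 * max(abs(wages - affine)) / w_max;
printf("Maximum scaled deviation from the affine function: %.3f percent\n", deviation);
```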

6.0 Conclusion

I hope Bertram Schefold and Stefano Zambelli are aware of each other's work.

Postscript: I had almost finished this post before Stefano Zambelli left this comment. I'd like to hear from him at rvien@dreamscape.com.

References
  • Bertram Schefold (2013). Approximate Surrogate Production Functions. Cambridge Journal of Economics.
  • Stefano Zambelli (2004). The 40% neoclassical aggregate theory of production. Cambridge Journal of Economics 28(1): pp. 99-120.