Symposium on Rational Econometric Man

Methodological institutionalism as a transformation of structural econometrics

Pages 417-425 | Received 19 Sep 2014, Accepted 03 Oct 2014, Published online: 12 Jul 2016

ABSTRACT

I agree with Nell and Errouaki that an econometrics based on neoclassical economic theory fails to develop any insight into deep structures. But my methodology differs slightly from theirs because of my view that an econometrics based on any economic theory would fail in this sense. Economics is an inexact science, incapable of providing a complete set of causal factors to explain any economic phenomenon. I arrive at a different call for more fieldwork in econometrics due to a somewhat different reading of the criticisms of Trygve Haavelmo, Wassily Leontief, the young Tjalling Koopmans and Oskar Morgenstern. These economists not only shared the idea that economic structure is different in nature from natural laws and that statistical analysis alone is not enough to arrive at knowledge of this structure, but also that economic theory as an additional source of knowledge would not be sufficient either. Another additional source of knowledge is needed—that is, field expertise.

How Edward J. Nell and Karim Errouaki wish to transform structural econometrics is clearly presented on the cover of their book. The cover shows a diagram picturing how the unification of theory, measurement and applicability could be realized by a combined methodology of conceptual analysis and fieldwork. Nell and Errouaki call this proposed framework ‘methodological institutionalism', and it is meant to replace ‘methodological individualism.’

There are many criticisms and views in the book that I share, particularly its adoption of Trygve Haavelmo's (Citation1944) Probability Approach as a blueprint for reconstructing econometric methodology. However, my reflections on Haavelmo's approach brought me to a somewhat different kind of methodology. I share the view of Nell and Errouaki that neoclassical-economic-theory-based econometrics ‘fails to develop any insight into deep structures’ (p. xxi), but our methodologies differ slightly because of my view that an econometrics based on any economic theory would fail in this sense. Economics is an inexact science, which means that it is not capable of providing a complete set of causal factors to explain any economic phenomenon. I arrive at a different call for more fieldwork in econometrics due to a somewhat different reading of the criticisms of Haavelmo, Wassily Leontief, the young Tjalling Koopmans and Oskar Morgenstern.

Although the ideas of all four economists are discussed in the book, only Haavelmo's work receives detailed attention. That is a pity because it is a missed opportunity to enrich a methodology that acknowledges on the one hand that the main components of an economy are institutions and not individuals, and on the other that knowledge about institutions is not easily captured in our models. These economists not only shared the idea that economic ‘structure’ is different in nature from ‘natural laws’ and that statistical analysis alone is not enough to arrive at knowledge of this structure (economics is not astronomy), but also that economic theory as an additional source of knowledge would not be sufficient either. Another additional source of knowledge is needed. But before I discuss these views, I must first examine the methodological triangle-circle modelling framework that Nell and Errouaki put forward.

Econometric models are representations of economic systems, but a system can be represented in various ways, so one has to define criteria that restrict the number of possible candidates. Based on the work of Alain Bonnafous (Citation1972), Nell and Errouaki arrive at three requirements: coherence, relevance and quantification. Coherence is defined in compliance with the principle of non-contradiction in two different ways: ‘first, absence of internal conflicts, that is, conflicts inherent in the model's design; and second, the absence of internal/external contradictions with regard to the model's objective and the technical capacity of its content, methodology and structure to meet it.’ Relevance is defined as the ‘nexus’ between the model's logical and mathematical structure and the ‘true nature of its object', and quantification is defined as ‘the potential of all magnitudes present in a model to be estimated’ (pp. 166–167).

In one sense these requirements are too vague, in another sense too strict. Relevance is too vague; it presupposes prior knowledge of the ‘true nature’ of the model's object. But how would we be able to assess relevance before having an adequate representation of that object? Quantification is too strict. There are all kinds of magnitudes in the world that are essential but impossible to estimate or to measure. The requirement of quantification should also allow for other ways of determining the values of the magnitudes.

To discuss the requirements of a model as a representation, I suggest adopting the requirements formulated by the physicist Heinrich Hertz (Citation1899). Although Hertz formulated these requirements more than a century ago, they remain very useful for our modern understanding of models in econometrics.Footnote1 His first and most fundamental requirement, which he called ‘correctness', is that the consequences of (that is, inferences from) a representation of the laws of a system must be the representation of the consequences of those laws. The second requirement, called ‘logical permissibility', is that a representation should not contradict the rules of logic. And the third requirement, called ‘appropriateness', is that a representation should contain as many of the relevant essential relations as possible and should at the same time be as simple as possible. Appropriateness can only be tested in relation to the purpose of the representation, since one representation may be more suitable for one purpose than for another; only by testing many representations can we obtain the most appropriate one. Hence, the model's purpose determines which relations are ‘essential'.

‘Coherence’ seems to cover both ‘permissibility’ and ‘appropriateness', and ‘relevance’ seems to be similar to ‘correctness', though it does not require prior knowledge. Hertz made it very clear that models, in the epistemological sense, are pictures (Bilder)—our conceptions of the system under investigation: ‘we do not know, nor have we any means of knowing, whether our conceptions of things are in conformity with them in any other than [the first fundamental requirement of correctness]’ (Hertz Citation1899, p. 2). Hertz also did not require ‘quantification’: he allowed for models containing magnitudes that are not measurable but nevertheless may represent important causal influences.

Unfortunately, Leontief appears in the book only because of his critical view of econometrics, which he expressed in striking terms, particularly in his 1970 presidential address to the American Economic Association (Leontief Citation1971). But Leontief is much more relevant for the project of this book. His criticism is based on a sophisticated view on what ‘statistical econometrics’ can and cannot achieve. Moreover, he developed his own alternative methodology that could be useful for reconsidering econometric methodology.

In econometrics one finds various empirical approaches to laying bare the structure of an economic system, of which the most dominant is the Cowles Commission approach. In that approach, structure is generally not a surface phenomenon: it is thought not to be directly observable. To uncover it, one has to dig deeper, to look under the surface. Leontief shared with the Cowles Commission the aim of revealing the structural relationships that govern an economic system, but he had a different view about the nature of these relationships. In his 1970 address, Leontief stated this most explicitly:

In contrast to most physical sciences, we study a system that [is] in a state of constant flux. I have in mind  …  the basic structural relationships described by the form and the parameters of these equations. In order to know what the shape of these structural relationships actually are at any given time, we have to keep them under continuous surveillance. By sinking the foundations of our analytical system deeper and deeper, by reducing, for example, cost functions to production functions and the production functions to some still more basic relationships eventually capable of explaining the technological change itself, we should be able to reduce this drift. It would, nevertheless, be quite unrealistic to expect to reach, in this way, the bedrock of invariant structural relationships (measurable parameters) which, once having been observed and described, could be used year after year, decade after decade, without revisions based on repeated observation. (Leontief Citation1971, pp. 3–4)

Besides emphasizing the need for a ‘steady flow of new data', Leontief also emphasized the necessity of looking beyond the traditional domains of economic phenomena:

The pursuit of a more fundamental understanding of the process of production inevitably leads into the area of engineering sciences. To penetrate below the skin-thin surface of conventional consumption to develop a systematic study of the structural characteristics and of the functioning of households, an area in which description and analysis of social, anthropological and demographic factors must obviously occupy the center of the stage. (Leontief Citation1971, p. 4)

To obtain this new kind of information, Leontief considered direct observation to be more appropriate than the method used by the Cowles Commission, which he called ‘indirect statistical inference'. Indirect statistical inference is merely ‘circular’; it does not widen or deepen the empirical foundations of economic analysis. Cowles Commission econometrics is circular because its models are constructed in such a way that prices, outputs and the rates of saving and investment are explained in terms of production functions, consumption functions and other structural relationships; but the parameters of these relationships are themselves estimated by using statistical data comprising these same prices, outputs, savings rates and investment levels. According to Leontief (Citation1949), it would be much better to use direct, non-statistical information:

An alternative and more direct way of determining, for example, the amount of coke required to produce a ton of pig iron or the amount of corn feed required per hundredweight of live hogs is that of asking the ironmaster in the first and a specialist in animal husbandry in the second case. As a matter of fact one can easily visualize the possibility of assembling a complete set of input coefficients describing the structural characteristics of all branches of the national economy entirely on the basis of such direct information without recourse to any actual statistical input or output figures. (p. 213)
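
Leontief's alternative can be made concrete. Given a matrix A of input coefficients obtained from such direct information (A[i, j] being the amount of good i needed to produce one unit of good j), the gross outputs x required to meet a final demand d satisfy x = Ax + d, hence x = (I − A)⁻¹d. A minimal sketch, with purely illustrative two-sector numbers rather than any actual coefficients:

```python
import numpy as np

# Hypothetical two-sector economy; the coefficients are invented for
# illustration, the kind of numbers one would get by 'asking the
# ironmaster' rather than by statistical curve-fitting.
# A[i, j] = units of good i needed to produce one unit of good j.
A = np.array([[0.10, 0.30],
              [0.20, 0.05]])

d = np.array([100.0, 50.0])  # final demand for each good

# Gross outputs must satisfy x = A x + d, i.e. x = (I - A)^(-1) d.
x = np.linalg.solve(np.eye(2) - A, d)
print(x)
```

The point of the construction is that A is assembled entirely from direct technological information; the statistical apparatus enters only in solving the accounting identity.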

In Leontief's view, Cowles Commission econometrics, which he called ‘statistical econometrics’, provides only ‘indirect inference'. The approach he took in input-output analysis was based on ‘direct observation'. From the 1930s onward, Leontief and his assistants at Harvard wrote letters to and telephoned engineers, firms, statistical bureaus and unions in order to collect the data necessary to construct the input-output tables (see Foley Citation1998, p. 121). It should be noted that not just anyone was asked for observations: only engineers, technicians and other experts on a relevant sector or component of the economic system.Footnote2 But it makes no sense to ask these experts for information that is too abstract or too highly aggregated. For example, a theoretical aggregate production function, intended to describe the relationship between, say, the amount of steel produced, y1, and the quantities of two different inputs, y2 and y3, needed to produce it, is typically specified as a CES function of the form y1 = A[δy2^(−ρ) + (1 − δ)y3^(−ρ)]^(−1/ρ). Leontief (Citation1982, p. 104) observes that ‘while the labels attached to symbolic variables and parameters of the theoretical equations tend to suggest that they could be identified with those directly observable in the real world, any attempt to do so is bound to fail.’

Leontief's efforts to ground empirical work in direct observation were underpinned by two pervasive concerns: his disapproval of aggregate variables and his commitment to bringing engineering and technical data into economic analysis (see Carter and Petri Citation1989, p. 17).Footnote3 Leontief not only emphasized the importance of fieldwork, but actually was a fieldworker himself. But as Nell and Errouaki note, ‘the quality of results obtained from fieldwork depends on the data gathered in the field’ (p. 374). Leontief thought it sufficient to obtain the data from the experts, but this is a rather naïve view about their reliability. For a less naïve assessment of the quality of field data, we should turn to Oskar Morgenstern, who wrote extensively on the quality of observations.

Like Leontief, Morgenstern deserves more attention than he receives from Nell and Errouaki—he is mentioned only briefly and indirectly—because of his writings about the quality and epistemological reach of economic statistics (see Boumans [Citation2012] for a more detailed discussion of Morgenstern). Morgenstern had a lifelong interest in the methodological aspects of economic statistics. He reported his early experiences with statistics in his Wirtschaftsprognose (Citation1928). In a review essay, Marget (Citation1929) summarized Morgenstern's view of economic forecasting in the following way: ‘Forecast in economics by the methods of economic theory and statistics is “in principle” impossible’ (p. 313). This proposition was substantiated by three ‘subpropositions’:

A. The data with which the economic forecaster must deal are of such a nature as to make it certain that the prerequisites for adequate induction, that is, the application of the technique of probability analysis, must always be lacking.

B. Economic processes, and therefore the data in which their action is registered, are not characterized by a degree of regularity sufficient to make their future course amenable to forecast, such ‘laws’ as are discoverable being by nature ‘inexact’ and loose, and therefore unreliable.

C. Forecasting in economics differs from forecasting in all other sciences in the characteristic that, in economics, the very fact of forecast leads to ‘anticipations’ which are bound to make the original forecast false. (pp. 313–314)

Marget's review summarizes not only a very early work by Morgenstern but also the two themes Morgenstern continued to work on in his later writings on economic statistics, namely, the inadequacy of the statistical approach and the idea that economic laws are inexact and loose.

The problem of inaccuracies of economic observations, an issue that had always occupied a central role in Morgenstern's work, is discussed most extensively in his 1963 book, On the Accuracy of Economic Observations. The book's main message is that in comparison with data used in the natural sciences, economic statistics have additional peculiarities. Morgenstern (Citation1963) argued that the accuracy of economic observations cannot be ‘formulated according to a strict statistical theory for the simple reason that no such exhaustive theory is available for many social phenomena’ (p. 7). Statistical theory cannot be applied because the nature of economic data prevents ‘normal distribution of the observations, creating circumstances which cannot be readily treated according to classical notions of probable error’ (ibid., p. 13). Among the sources of error in economic statistics, Morgenstern included: lack of controlled experiments; lying and the concealment of information by respondents; inadequately trained observers; conceptual ambiguity in the formulation of questions; instrument errors; and the interdependence and stability of errors. Economic statistics are not the result of designed experiments, and they are often based on legal rather than economic definitions.

The main source of errors in economics, Morgenstern believed, is that:

Economic statistics are—in the overwhelming majority of cases—not scientific observations. This is a point of primary significance. They are at best historical accounts; mostly they are byproducts of business operations or of administrative acts. They are, as a rule, badly collected by scientifically untrained minor officials at the customhouses, warehouses, on street markets, etc. In other words they are not the results of carefully set experiments, or of strictly controlled measurements as are astronomical observations. (Morgenstern Citation1959, p. 9, original emphasis)

The most significant difference between natural science data and social science data, however, is that the latter are ‘frequently based on evasive answers and deliberate lies of various types. These lies arise, principally, from misunderstandings, from fear of tax authorities, from uncertainty about or dislike of government interference and plans, or from the desire to mislead competitors’ (Morgenstern Citation1963, p. 17). Morgenstern emphasized that to get good statistics, it is important ‘to understand that there is a fundamental difference (in the field of economics) between mere data and observations’ (ibid., p. 88, original emphasis). Observations are planned, designed and guided by theory, such as, for example (and not necessarily), obtained in a controlled experiment. Data are merely obtained, gathered and collected statistics, even though this involves administrative planning.

For planned observations to be scientific, however, Morgenstern (Citation1954) emphasized that theory has an essential role: ‘If there were no theory behind the experiment but still the intent to discover general properties of a system, there would be no experiment at all, only a meaningless muddling’ (p. 502).Footnote4 In the first place, theory reduces the number of required measurements: ‘The better the underlying theory the fewer will the direct experimental trials (measurements) have to be’ (ibid., p. 509). Secondly, theory is needed to attach any meaning to data. Without theory, one would be ‘merely looking'.

So, to Morgenstern, ‘scientific observation’ meant not merely looking at the world but theory-guided planned observation. Unfortunately, according to Morgenstern, in economics, theory is not exact. His pessimistic judgment was that because economic theory is incomplete and inexact, scientific observation is extremely difficult to achieve in the discipline. Because the alternative of ‘just looking’ is too likely to produce misleading results, Morgenstern recommended the involvement of scientific observers with pertinent, though perhaps incomplete, knowledge of economics. In a field that does not have explicit and exact theories, a scientific observation is an observation made by a scientific expert having intuitive knowledge of the relevant phenomena.

These experts are necessary for another reason, beyond the inexactness of economic laws. In his discussions of the accuracy of economic information, Morgenstern is well aware of the distinction between observations of nature and observations of business, institutions and governments. Observations of natural phenomena can be inaccurate, but the fault for this lies entirely with the observer: nature does not lie. In contrast, a human being who provides information can also be a source of inaccuracies, sometimes deliberate. A scientific observer who has some knowledge of economics may, however, be able to see whether a picture based on economic data is diverging from reality.

Morgenstern would have agreed with Nell and Errouaki's methodological requirements that fieldwork should be combined with conceptual analysis, but to him it was not only neoclassical economics that was unsatisfactory, but economic theory in general.

One could argue that Morgenstern and Leontief were peripheral figures in the development of structural econometrics, but two econometricians who were at the heart of this development, Koopmans and Haavelmo, were also highly critical about what could be achieved with ‘statistical econometrics’ and, like Leontief and Morgenstern, they too argued that additional expert intuitions were needed (see Boumans [Citation2014] for a more detailed discussion).

To deal with the problem that statistical techniques alone could never yield a complete list of causal factors, Koopmans (Citation1937) suggested the following division of labor between ‘the economist’ and ‘the statistician': the economist ‘should by economic reasoning and general economic experience—or by his knowledge of the special branch of science concerned—devise a set of determining variables which he expects to be a complete set’ (p. 57). The task of the statistician was modest, in the sense that it was restricted to an evaluation of the ‘residual'. A large residual indicates that the set of causal factors is incomplete, but not what the omitted factor was. A small residual may be interpreted to indicate that, as a practical matter, the set is not incomplete, although that result could also occur by chance.
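
Koopmans' division of labor can be illustrated numerically: the economist proposes a set of determining variables, and the statistician evaluates the residual. In this sketch the data and variable names are invented; omitting one of two causal factors leaves a large residual, though the residual itself does not reveal which factor is missing:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: y is driven by two causal factors, x1 and x2.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 * x1 + 3.0 * x2 + rng.normal(scale=0.1, size=n)

def residual_std(X, y):
    """OLS fit; return the standard deviation of the residual."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (y - X @ beta).std()

incomplete = residual_std(np.column_stack([x1]), y)    # economist omitted x2
complete = residual_std(np.column_stack([x1, x2]), y)  # complete set

# A large residual signals an incomplete set of causal factors; a small
# one suggests, as a practical matter, that the set is not incomplete.
print(incomplete, complete)
```

The statistician's verdict is purely diagnostic: finding the omitted factor remains the economist's task.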

In a later paper, written to clarify the logic of econometrics, Koopmans (Citation1941) made the two different and separate tasks of the economist and the statistician more explicit. The economist's task is to come up with ‘additional information'. This economist is ‘not supposed to be of the too academic type versed only in abstract deduction from the ‘economic motive’. He is considered to have in addition an intimate knowledge of economic life and of the results of statistical investigations relating to similar countries and periods’ (Koopmans Citation1941, p. 166). Koopmans did not specify what he meant by ‘intimate knowledge of economic life', but he did say more explicitly what he meant by ‘additional information’: it is a set of statements that include ‘observations not expressible as statistical time series, experiences from other countries or periods showing a similar economic structure, deductions from economic theory, or even mere working hypotheses having a certain degree of plausibility’ (ibid., p. 163).

It should be noted that we are discussing the works of the young Koopmans. In the Cowles Commission approach to econometrics that he later helped to develop, theory came to have the prominent and sole task of providing a complete list of causal factors: ‘little attention was given to how to choose the variables and the form of the equations; it was thought that economic theory would provide this information in each case’ (Christ Citation1994, p. 33). In Koopmans’ work the ‘economist’ came to be replaced by ‘theory’.Footnote5 And a decade after the publication of his dissertation he explicitly emphasized the crucial role of theory in measurement, in the famous ‘measurement without theory’ debate (Koopmans Citation1947).

Haavelmo's (Citation1944) Probability Approach became the blueprint of Cowles Commission econometrics, particularly as developed under the direction of Koopmans. But Haavelmo made clear that statistics and theory were not sufficient for econometric modelling:

It is a creative process, an art, operating with rationalized notions of some real phenomena and of the mechanism by which they are produced. The whole idea of such models rests upon a belief, already backed by a vast amount of experience in many fields, in the existence of certain elements of invariance in a relation between real phenomena, provided we succeed in bringing together the right ones. (Haavelmo Citation1944, p. 10)

The problem of finding a complete list of causal factors is ‘a problem of actually knowing something about real phenomena, and of making realistic assumptions about them’ (ibid., p. 29).

Nell and Errouaki interpret their methodological triangle-circle diagram in analogy with a triangle diagram of Volmer (Citation1984), which shows an interaction between ontology, epistemology and methodology. They see a correspondence between theory-coherence and ontology, between applicability-relevance and epistemology, and between measurement-quantification and methodology. From this analogy they infer the following methodology: ‘the theory tells us what there is (for the purpose of the model); it provides the list of variables and relationships to be studied.’ Then, these theoretical variables are related to ‘real-world counterparts'. ‘Finally, we gather data, measure and test; we estimate and work with the model’ (pp. 171–172). Actually, this is not so different from an account that one will find in any standard econometrics textbook, which usually presents the Cowles Commission kind of econometrics.

But if Nell and Errouaki are serious about this analogy, what tasks would be left over for fieldwork? Would it be only that ‘field work counts and measures, and gathers data of all kinds’ (p. 394)? Of course not; fieldwork ‘also develops understanding of the concepts, ideas, values and norms guiding and regulating the activities being modeled’ (ibid.). It is what Haavelmo meant when he talked about the creative process and art of model building. As Nell and Errouaki write, ‘Field work, in turn, delivers these concepts and norms to conceptual analysis, which then develops them into theory. And that in turn will suggest new questions and new directions for field work’ (ibid.). In other words, the exciting practice of econometrics is to be found inside the triangle, in the interaction between fieldwork and conceptual analysis, and not along the edges of the triangle, the interactions between theory, applicability and measurement, as most textbooks would have us believe econometrics is all about.

The edges of the triangle do not tell what model building is but only what the result of model building is. This can be seen particularly with respect to measurement-quantification. To paraphrase Donald Rumsfeld, the former US Secretary of Defense, there are ‘unmeasurable knowns,’ that is, things we know to exist but that we cannot measure, and there are also ‘known unknowns,’ that is, things we know we do not know. But there are also ‘unknown unknowns,’ that is, things we don't know we don't know. When we miss these ‘knowns’ and ‘unknowns’, our economic models score badly on relevance and coherence. The imagination and intuition of fieldworkers—the active ‘minds of economic agents', as Nell and Errouaki (p. 358) put it—are of particular importance for discovering the ‘unknowns'.

But fieldwork is not only essential in model building. It is required also in the application of the model. For example, in model-based forecasting, it is acknowledged more explicitly than in other fields that besides a model one also needs expert judgments. Franses (Citation2008), noting positive evidence on the successful interaction between models and experts, has called for ‘more interaction between researchers in model-based forecasting and those who are engaged in judgemental forecasting research’ (p. 31). Arguing that ‘a model will miss out on something, and that the expert will know what it is’, Franses concludes that ‘a model's forecast, when subsequently adjusted by an expert, is often better in terms of forecasting than either a model alone or an expert alone’ (pp. 32–33).Footnote6
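
Franses' point can be illustrated with a stylized simulation, in which the forecast target is decomposed into a part the model captures, a part only the expert knows about (say, an announced tax change), and irreducible noise; all numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical decomposition of the target variable.
model_part = rng.normal(size=n)               # captured by the model
expert_part = rng.normal(scale=0.5, size=n)   # known only to the expert
actual = model_part + expert_part + rng.normal(scale=0.2, size=n)

model_forecast = model_part                   # model alone
expert_forecast = expert_part                 # expert alone
adjusted_forecast = model_part + expert_part  # model adjusted by the expert

def rmse(forecast):
    """Root mean squared forecast error against the actual outcomes."""
    return np.sqrt(np.mean((actual - forecast) ** 2))

print(rmse(model_forecast), rmse(expert_forecast), rmse(adjusted_forecast))
```

Under this decomposition the expert-adjusted forecast has the smallest error, mirroring Franses' conclusion that model plus expert tends to beat either alone.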

The three vertexes of the Nell and Errouaki methodology triangle—theory, applicability and measurement—should be viewed, I believe, as the outcomes of econometric research, that is, as outcomes of the interaction between conceptual analysis and fieldwork. If this view is accepted, we could also replace measurement by forecasting to reflect how model-based forecasting is routinely practiced.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes

1. In Boumans (Citation2004) I argue that econometric modelling originates from a tradition in physics that can be traced back to Hertz.

2. ‘In analyzing the changing structure of the steel industry, we must get our information from the technical literature, from ironmasters and from rolling mill managers. To study the changing pattern of consumer behavior, we have to develop practical co-operation with psychologists and sociologists' (Leontief Citation1949, p. 225).

3. As Leontief remarked in an interview with Foley (Citation1998): ‘I was somewhat skeptical of the whole curve-fitting notion. I thought of technological information. The people who know the structure of the economy are not statisticians but technologists, but of course to model technological information is very difficult. My idea was not to infer the structure indirectly from econometric or statistical techniques, but to go directly to technological and engineering sources’ (p. 123).

4. Here Morgenstern uses the word ‘experiment’ interchangeably with ‘measurement’.

5. We can see this, for example, in Koopmans, Rubin and Leipnik (Citation1950): ‘The construction of such a system [of equations] is the task in which economic theory and statistical method combine. Broadly speaking, considerations both of economic theory and of statistical availability determined the choice of the variables’ (p. 54).

6. For example, ‘Shortcomings can occur when actual time series do not fit well with the estimated behavioural equation, for example because of revisions of the national accounts. Outside economic effects can involve specific knowledge for the near future about contracts or plans or the creation of temporally higher or lower effects of economic behaviour of households or firms because of sudden shocks in confidence or announced changes of tax rates' (Franses, Kranendonk, and Lanser Citation2007, p. 7).

References

  • Bonnafous, A. 1972. La Logique de l'Investigation Econométrique. Paris: Dunod.
  • Boumans, M. 2004. ‘Models in Economics.’ In The Elgar Companion to Economics and Philosophy, edited by J. B. Davis, A. Marciano and J. Runde. Cheltenham: Edward Elgar.
  • Boumans, M. 2012. ‘Observations in A Hostile Environment: Morgenstern on the Accuracy of Economic Observations.’ History of Political Economy 44 (annual supplement): 110–131.
  • Boumans, M. 2014. ‘Haavelmo's Epistemology for an Inexact Science.’ History of Political Economy 46: 211–229. doi: 10.1215/00182702-2647477
  • Carter, A. P. and P. A. Petri. 1989. ‘Leontief's Contributions to Economics.’ Journal of Policy Modeling 11: 7–30. doi: 10.1016/0161-8938(89)90022-7
  • Christ, C. F. 1994. ‘The Cowles Commission's Contribution to Econometrics at Chicago, 1939–1955.’ Journal of Economic Literature 32: 30–59.
  • Foley, D. K. 1998. ‘An Interview with Wassily Leontief.’ Macroeconomic Dynamics 2: 116–140.
  • Franses, P. H. 2008. ‘Merging Models and Experts.’ International Journal of Forecasting 24: 31–33. doi: 10.1016/j.ijforecast.2007.12.002
  • Franses, P. H., H. C. Kranendonk, and D. Lanser. 2007. ‘On the Optimality of Expert-Adjusted Forecasts.’ CPB Discussion Paper No. 92, The Hague, CPB.
  • Haavelmo, T. 1944. The Probability Approach in Econometrics. Supplement to Econometrica 12.
  • Hertz, H. 1899. The Principles of Mechanics Presented in A New Form. New York: Dover, 1956.
  • Koopmans, T. C. 1937. Linear Regression Analysis of Economic Time Series. Haarlem: Bohn.
  • Koopmans, T. C. 1941. ‘The Logic of Econometric Business-Cycle Research.’ Journal of Political Economy 49: 157–181. doi: 10.1086/255695
  • Koopmans, T. C. 1947. ‘Measurement Without Theory.’ Review of Economics and Statistics 29: 161–172. doi: 10.2307/1928627
  • Koopmans, T. C., H. Rubin and R. B. Leipnik. 1950. ‘Measuring the Equation Systems of Dynamic Economics.’ In Statistical Inference in Dynamic Economic Models [Cowles Commission Monograph 10], edited by T. C. Koopmans. New York: Wiley.
  • Leontief, W. 1949. ‘Recent Developments in the Study of Interindustrial Relationships.’ American Economic Review 39: 211–225.
  • Leontief, W. 1971. ‘Theoretical Assumptions and Nonobserved Facts.’ American Economic Review 61: 1–7.
  • Leontief, W. 1982. ‘Academic Economics.’ Science 217 (9 July): 104, 107.
  • Marget, A. W. 1929. ‘Morgenstern on the Methodology of Economic Forecasting.’ Journal of Political Economy 37: 312–339. doi: 10.1086/254020
  • Morgenstern, O. 1928. Wirtschaftsprognose: Eine Untersuchung ihrer Voraussetzungen und Möglichkeiten. Vienna: Springer.
  • Morgenstern, O. 1954. ‘Experiment and Large Scale Computation in Economics.’ In Economic Activity Analysis, edited by O. Morgenstern. New York: Wiley.
  • Morgenstern, O. 1959. International Financial Transactions and Business Cycles. Princeton: Princeton University Press.
  • Morgenstern, O. 1963. On the Accuracy of Economic Observations. 2nd ed. Princeton: Princeton University Press.
  • Volmer, G. 1984. ‘The Unity of Science in an Evolutionary Perspective.’ Proceedings of the Twelfth International Conference on the Unity of the Sciences, Chicago, November 1983. New York: International Cultural Foundation.