Introduction to symposium on teaching undergraduate econometrics

Over the past decades, econometrics and formal empirical methods have become increasingly important to economics and hence to the teaching of economics. This is a natural development, reflecting the enormous computational and analytic advances in data collection and analysis. The economics major today includes many more courses in econometrics and statistics than it did in the past; in addition, most upper-level field courses include an almost mandatory econometrics component. Because the introductory and intermediate macro and micro courses concentrate on theory and do not include an introduction to econometrics, undergraduate economics students need a core metrics course, or, better yet, a set of core metrics courses, alongside their introductory and intermediate theory courses to be properly prepared for their field courses. Today, the two core pillars of economics teaching are theory courses and metrics courses.

The increasing importance of econometrics can be seen in the way economists think of themselves in relation to other social scientists. Even as late as the early 1980s, when I interviewed graduate students for my “Making of an Economist” article (Colander and Klamer 1987), students told me that it was economists’ use of rational choice theory that defined an economist.1 By the 2000s that had changed, and in interviews I did for my “Making of an Economist, Redux” article (Colander 2005), students told me with pride that, from their point of view, what differentiated economists from other social scientists was their high-powered econometric methods that allowed them to pull information from data, not their use of rational choice theory. Modern economists consider themselves best among social scientists at processing and analyzing data.2

In reflecting on these developments, it is useful to remember that good economic policy analysis, and hence good economic training, has always been empirical and evidence-based, at least in the eyes of the economists doing it. It is just that, before the recent formal empirical turn in economics, the empirical methods used to apply theory to the real world were informal and involved interpretation and judgments that went far beyond formal scientific analysis. Applied economics of the past relied on reflective reasoning skills, or, as Deirdre McCloskey classifies them, rhetorical skills.

Good old-style economists could process poorly specified empirical data and theory jointly, creatively capturing the essence of theories and relating them to the real world in a way that left others with an “aha” sense: yes, that is the way it works. Those economists who were superb at it, such as J. M. Keynes, Jacob Viner, Joan Robinson, Fritz Machlup, or Armen Alchian, were seen as role models for how good applied economics was done. Their methods were discursive; they integrated normative sensibilities, empirical evidence, and formal theory into fables and stories that students could relate to different areas.

These reflective reasoning skills, used in informally applying theory to the real world, constituted an important pillar of economic learning; before the rise of econometrics, mastery of them was the mark of a top economist. Up until the 1960s, this alternative pillar was part of the core of the economics major. It was taught both in core theory courses and in history of economic thought and economic history courses, which, at the time, were also often considered core courses. The ability to relate theory to the real world discursively was what was meant by applied economics.

Since the 1960s, the teaching of this alternative reflective reasoning pillar has been whittled away from the economics major to make way for additional courses teaching the formal empirical pillar. Economic history and history of economic thought have been removed not only from the core of most programs but also, in many schools, from the program itself. As that happened, the reflective reasoning skills that these courses were meant to convey were also taught less often in the core theory courses, to make way for more theory. The teaching of reflective reasoning skills was likewise reduced in upper-level field courses, as more emphasis went into getting students comfortable with the technical aspects of applied econometrics.

Essentially, these changes in the curriculum have eliminated the reflective reasoning pillar as a central skill in economics. Students no longer learn the art of determining whether a theory fits their intuitive perception of the real world.

This reflective reasoning pillar was not scientific, nor did it claim to be. Applied policy was considered an art, not a science, and was seen as requiring normative judgments to be integrated into scientific work. To integrate those normative judgments into the analysis, it followed philosophical and humanistic, not scientific, methodological rules. It was based on the belief that applied policy truths were not something that could be deduced from science alone. Rather, applied policy truths were seen as evolving from vigorous, (ideally) friendly debate with other trained economists who held different normative views. The focus of the discussion was open-ended: Did the argument make sense? Did it fit with the way the student understood the world? Were there other ways of looking at the issue? Were the conclusions of empirical work reasonable? What assumptions were they based on? Good applied economists were highly skilled in such discursive methods of learning, and their training involved learning those methods.

Formal statistical analysis does not eliminate the need for these informal reflective skills. Ideally, it supplements them, strengthening the empirical pillar. However, if formal data analysis becomes an alternative, rather than an aid, to informal data analysis, it can be a step backward; it makes one think one is being scientific and objective when that is impossible to achieve with the available tools. Currently, all too often students come away from an undergraduate econometrics course with insufficient appreciation of the need to use reflective judgment in applying theory and statistical analysis to real-world policy issues. Hence this symposium, which considers some issues that might improve the teaching of econometrics. The goal of the symposium is to make students aware of the limitations of the theoretical and statistical tools they are using and to build that awareness into their understanding of economic issues.

The articles in this symposium

This symposium is best understood as part of this broader debate about empirical evidence and how data analysis should be taught within economics programs. The four articles in the symposium do not directly discuss these broader issues, but all relate to them. The first article, “Is Precise Econometrics an Illusion?,” by Peter Swann, addresses a small area within the overall issues, but one that is important for establishing the appropriate framework for students drawing inferences from data. The argument he presents is a good jumping-off point for a broader consideration of the econometrics portion of the undergraduate economics curriculum.

Swann argues that students tend to regard econometric results as more precise than they would if the imprecisely met assumptions of the formal empirical analysis were taken into account. The culprit he focuses on is the assumption that the error term is independent of the regressors, which is conventionally made in econometrics without much reflection on the limits it might impose on interpreting the results. He suggests that, while good econometricians know that a regressor and the error term may not be independent, students generally do not come away from the course with an understanding of what that dependence might mean for the conclusions they draw from the evidence. Swann argues that, when there is a lot of noise in the data, as there generally is, dependence can markedly reduce the degree of precision provided by the empirical analysis. Without sensitivity analysis of the implications of that dependence, the standard presentation of statistical inference is misleading to beginning students and to other consumers of economists’ research.
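
The point is easy to demonstrate in the classroom with a short simulation. The sketch below is a minimal illustration, not Swann’s own method; the data-generating process and all parameter values are invented for exposition. It draws a regressor that is mildly correlated with the error term and shows that the ordinary least squares estimates cluster tightly, but around the wrong value.

```python
# A minimal sketch (hypothetical parameters) of how a modest correlation
# between the regressor and the error term undermines the precision that
# conventional OLS output seems to promise.
import numpy as np

rng = np.random.default_rng(0)
n, true_beta, rho = 200, 1.0, 0.3  # sample size, true slope, x-error correlation

estimates = []
for _ in range(5000):
    u = rng.normal(size=n)                                  # structural error
    x = rho * u + np.sqrt(1 - rho**2) * rng.normal(size=n)  # x correlated with u
    y = true_beta * x + u
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)           # OLS slope
    estimates.append(b)

estimates = np.array(estimates)
print(f"true beta:           {true_beta:.2f}")
print(f"mean OLS estimate:   {estimates.mean():.2f}")  # about 1.3, not 1.0
print(f"spread of estimates: {estimates.std():.3f}")   # tight, around the wrong value
```

A single run of this regression would report a reassuringly small standard error; the simulation shows the estimates are precisely wrong, which is one way to convey Swann’s point about illusory precision.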

His article is based on his book, Economics as Anatomy (Swann 2019). In it, he argues that in drawing inferences from data analysis, economists should give more weight to nonquantifiable knowledge that can be learned from observation, discussion, and case studies of the area being studied. Formal data analysis can sharpen one’s nonquantifiable knowledge, but it does not replace it. His conclusion: as a core part of their econometrics training, students need to learn the importance of continually testing assumptions and to recognize the limitations of formal analysis. In other words, students need to learn the importance of the reflective reasoning pillar as a necessary tool in applying economic theory and in interpreting formal statistical results.

Swann’s argument about the interpretation of precision parallels one that has been forcefully and often made by Deirdre McCloskey and Stephen Ziliak, the authors of the second article, “What Quantitative Methods Should We Teach to Graduate Students?” Their argument is that the meaning of significance, as provided by standard t tests, is often misunderstood by students, by consumers of economic research, and, in McCloskey and Ziliak’s view, even by econometricians at the highest level. While they agree that the answer to the question posed by Swann in his article is “Yes, precise econometrics is an illusion,” they believe Swann’s answer needs “a big, big amendment.” The problem of misinterpreted significance is of much greater importance (is, one might say, more significant) than the problem of misinterpreted precision. Too often students interpret statistical significance as economic significance, even though the two are quite different.

They argue that to do applied econometrics right, one needs a substantive loss function built into econometric analysis and into the teaching of econometrics, in a way that it currently is not. Students need to know (in a deep sense) that statistical significance is quite different from economic significance. What matters is “oomph,” and “oomph” depends on economic significance. Such understanding was an important part of the reflective reasoning pillar. Only when there is “oomph” is there “aha.” They conclude their article with a list of quantitative methods in which students should be instructed, and it includes many of the skills that were previously part of the reflective reasoning pillar.
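
The distinction can also be made concrete with a toy example; the numbers below are hypothetical and chosen only for exposition. With a large enough sample, an economically trivial effect becomes statistically significant at any conventional level.

```python
# A toy illustration (hypothetical numbers): statistical significance
# without economic significance ("oomph").
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
true_effect = 0.005  # economically trivial: half a percent of a standard deviation

x = rng.normal(size=n)
y = true_effect * x + rng.normal(size=n)

beta_hat = np.cov(x, y, bias=True)[0, 1] / np.var(x)
resid = y - beta_hat * x
se = np.sqrt(np.var(resid) / (n * np.var(x)))  # conventional OLS standard error
print(f"estimate: {beta_hat:.4f}, t statistic: {beta_hat / se:.1f}")
# The t statistic comes out around 5, "significant" at any conventional level,
# yet no plausible loss function would treat an effect this small as mattering.
```

Whether a coefficient of 0.005 matters is a question about the loss function, not about the t statistic, which is exactly McCloskey and Ziliak’s point.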

The third article, “Teaching the Art of Pulling Truths from Economic Data,” by Edward Leamer, another well-known critic of current econometrics, nicely places the debate in the broader context described at the beginning of this introduction. He argues that students learn by doing, not by sitting in lectures, and that when they “learn by doing” they learn to blend reflective reasoning into their formal analysis.

While Leamer does not find Swann’s specific approach to dealing with noisy data appealing, he agrees with Swann’s general sentiment and argues that if the analysis is presented in a multivariate setting with highly correlated data, it is obvious that “it doesn’t take much measurement error to render the data meaningless.” He concludes with a list of 10 topics that he believes would provide the best training for economics students. It focuses much more on what I called discursive skills above than our current approach does.
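
Leamer’s remark about correlated data is easy to demonstrate numerically. The sketch below uses invented data, not Leamer’s own example: two regressors correlated at roughly 0.99, with a modest dose of measurement error added to one of them.

```python
# A numerical illustration (hypothetical data): with highly correlated
# regressors, a little measurement error wrecks the coefficient estimates.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
z = rng.normal(size=n)
x1 = z + 0.1 * rng.normal(size=n)  # x1 and x2 are correlated at about 0.99
x2 = z + 0.1 * rng.normal(size=n)
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)  # both true coefficients equal 1

def ols_slopes(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]  # drop the intercept

print("clean data:", ols_slopes(np.column_stack([x1, x2]), y))
x1_noisy = x1 + 0.2 * rng.normal(size=n)  # modest measurement error in x1
print("noisy x1:  ", ols_slopes(np.column_stack([x1_noisy, x2]), y))
# The coefficient on the mismeasured regressor collapses toward zero and the
# other regressor absorbs its effect: the data no longer separate the two.
```

With uncorrelated regressors the same measurement error would only mildly attenuate one coefficient; with nearly collinear regressors it destroys the ability of the data to distinguish the two effects.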

In the article, he argues that change in the teaching of econometrics will come. The reason is that technological change will reduce the need for econometricians to do the type of rote analysis currently taught in many econometrics classes; that rote analysis will be replaced with computerized data analysis and artificial intelligence. What we will see is an increase in the need for econometricians who have the reflective reasoning skills to go beyond the rote application of tests, skills that are necessary to pull meaningful inferences from real data, not from the hypothesized data that formal empirical tests are designed for. This work, he argues, is infinitely far from what the “monks of Asymptopia” currently teach. Why has it not happened already? For Leamer, the answer is clear: it would destroy too many people’s human capital.

The final article, “Theory vs. Practice: Teaching Undergraduate Econometrics,” by Alice Louise Kassens, provides a different approach to the problem. Nonetheless, she comes to the same conclusion as the others. She concurs with Swann’s suggestion of a multi-methodological approach to applied economic research and agrees that econometric pedagogy is due for a radical makeover. The article evaluates the current undergraduate econometrics curriculum and argues that there is a large disconnect between the classroom teaching of econometrics and the doing of econometrics. She argues that what most students do after graduating does not involve issues such as adjusting for heteroskedasticity, but rather involves dealing with messy company data and learning how to make sound judgments based on highly limited data.

While the articles approach the teaching of econometrics from quite different viewpoints, they come to similar conclusions. We need more reflective reasoning taught in our metrics courses and in the economics major. We need courses that teach students not only the technical tests but also how to deal with situations where the assumptions of the tests do not fit the data.

Will the teaching of econometrics change?

The calls for reform in the teaching of econometrics made in this symposium have been made before, and they have had little effect. So why will this time be different? It probably will not be; inertia is a strong force, and the teaching of economics is in large part guided by inertia. But, as suggested by Edward Leamer’s article, I suspect that there is more of a chance for change than there was in the past. The reason is not to be found within the economics profession, but in developments in the broader field of data analysis. The teaching of data analysis is in the process of radical change that is as significant (in McCloskey’s sense of “oomph”) and disruptive as the change that accompanied the development of formal statistical testing, or more so.

These disruptive changes are creating new transdisciplinary fields of study (data science, deep learning, and artificial intelligence) that have one foot in statistics departments and one foot in computer science departments, just as econometrics initially had one foot in statistics and one foot in economics departments. These new fields are moving even further away from the current econometrics approach.

What is done in these new fields is not new; they are based on statistical data analysis. But the presentation and interpretation are being repackaged to integrate new technological and computational advances, just as econometrics was repackaged to fit economists’ focus on theory. Repackaging makes a difference. As the programs develop on their own, they change the emphasis in what is taught and eventually change what is taught. Modern econometricians already learn a quite different approach to statistics than modern statisticians do. So my expectation is that, just as econometrics evolved quite differently than applied statistics did, so too are these new fields evolving away from applied statistics. As they do, my suspicion is that they will move further and further away from econometrics as currently practiced, putting pressure on econometrics to change to better integrate these new approaches.

Let me explain my reasoning. While econometrics is simply applied statistics, the teaching of applied statistics is quite different from the teaching of econometrics. Econometrics is much more theoretical and concerned with being appropriately scientific. So, while econometrics and statistics training in principle cover the same material, in practice they differ significantly. Econometrics is more focused on causality and theory, whereas statistics focuses more on prediction. In practical terms, this means that in economics more constraints are imposed by the underlying theoretical model than in applied statistics. The new developments in data analytics are pulling data analysis further and further away from econometrics’ emphasis on the importance of theory.3

I suspect one important reason for econometrics’ focus on theory has to do with its history. Econometrics evolved from statistics being applied to economic issues at a time when rational choice theory was central to economics. To integrate the two, econometrics blended theory and data analysis in a way that emphasized the priority of theory over data. It followed what might be called a “theory first” approach. Students were taught to conceive of the theory as the true data-generating function. Knowing the formal theory was important because it was the only way in which one could interpret the data appropriately. The conventions that followed from this “theory first” approach still guide the teaching of econometrics today.

Statistics training followed a different path. It followed a more “data first” approach. Graduate statistics programs became a training ground for actuaries, who were less concerned with theory and causality and more concerned with prediction in the real world. Essentially, statistics programs developed for statisticians whose primary job it was to come up with answers to real-world questions; econometrics training developed to train academics who could relate data to formal theory.

The applied statistics approach did not see theory as providing the actual data-generating function; that function was far too complex to capture formally. The goal of applied statistics programs was to train students to answer more limited questions for which the data-generating function could not be captured analytically. At best, one could specify an imprecise, and possibly changing, pattern.4

The new developments in data analysis are much more “data first” than applied statistics, which, in turn, is already more “data first” than econometrics. The reason is that data availability, and low-cost computational techniques to analyze the data, lessen the value of theory. Machines are thousands of times faster (and, as Leamer notes, far less demanding of higher wages) than humans. These new techniques will likely pose significant challenges to the way econometrics is currently done; they will eliminate much of the rote work. What they will not eliminate is the need for reflective reasoning and for precise specification of the questions being asked. The need for such skills will increase, which is why I am hopeful that, this time, it will be different, and econometrics training will listen more carefully to its critics.

Notes

1 When I was in graduate school in the 1970s, a popular joke among students was that an economics article involved developing a theory and then adding a bit of econometrics at the end to get it published. Today the joke is reversed: economics articles analyze a data set and add a little theory at the end to get published.

2 You can see the importance of econometrics in seminars, which now come alive when fine points of econometric specification are discussed; in the eighties, they came alive over fine points of theory.

3 While there is some movement in econometrics toward the data science approach, such as the increased focus on nonparametric methods and work in Bayesian econometrics, that work is still the exception.

4 I see this approach as much more conducive to a Bayesian approach to statistical analysis. The Bayesian approach has a presence in economics, but it has been strongly advocated by only a small group, including Chris Sims, David Hendry, Arnold Zellner, and Ed Leamer.

References

  • Colander, D. 2005. The making of an economist, redux. Journal of Economic Perspectives 19 (1): 175–98. doi: 10.1257/0895330053147976.
  • Colander, D., and A. Klamer. 1987. The making of an economist. Journal of Economic Perspectives 1 (2): 95–111. doi: 10.1257/jep.1.2.95.
  • Swann, P. 2019. Economics as anatomy. Cheltenham, UK and Northampton, MA: Edward Elgar.
