Research in Economic Education

A meta-analysis of technology: Interventions in collegiate economics classes


Abstract

Technological interventions have been sold as improving student understanding of economics for decades. Yet despite the panoply of ways to incorporate technology, it is not clear which types of interventions consistently result in statistically significant improvements in learning outcomes. Of 145 papers devoted to technology in collegiate economics courses, fewer than one-third quantitatively assess the impact of technology on student learning outcomes. Of the regressions reported, 60 percent find a positive relationship between a technology intervention and a student learning outcome; in only 42 percent is the relationship statistically significant. Meta-analysis indicates that (a) no technology intervention routinely produces estimates of improved learning outcomes across studies, despite (b) evidence of publication bias that favors papers with statistically significant results.


Acknowledgment

The authors thank Ben Artz, Bill Goffe, the participants of the Committee for Economic Education poster sessions at the American Economic Association conference in January 2020, and two reviewers for helpful comments and suggestions.

Notes

1 The survey was given for the first time in 1995 and has since been updated quinquennially. The survey is mailed (now emailed) to 3,000 to 4,000 academic economists identified from various professional mailing lists. The response rate has varied between 7.9 percent and 19.1 percent. Watts and Becker (2008) note that faculty more interested in teaching are likely more willing to fill out the survey, and thus responses should be considered to represent the “front line” of teaching changes.

2 The JEL codes are a system developed by the Journal of Economic Literature to standardize classification of academic journal articles in economics by field/subject. EconLit returned an initial sample of 927 articles for the search “computer or internet or phone or laptop or tablet or podcast or excel or spreadsheet or whiteboard” AND “education or university or college or higher education” for the period of January 1, 1990, through December 31, 2018. The initial search was conducted during the summer of 2018. A review search to capture newly added papers was completed in August 2019. In setting the parameters of the search, we follow the reporting guidelines of the Meta-Analysis of Economics Research Network outlined in Rosenberger and Rost (2013).

3 As an example of the tension between the desire for quantitative assessments and the traditional evolution of this literature, see the American Economic Association Committee for Economic Education Call for Proposals Poster Session for the 2020 annual meetings. They specifically requested papers “devoted to active learning strategies…[and] although we encourage presenters to include evidence that their strategy enhances learning, we do not require quantifiable evidence.” https://www.aeaweb.org/about-aea/committees/economic-education/call-for-papers.

4 As our focus is on the relationship between technological interventions and academic performance in economics courses, we do not consider studies that examine outcomes such as the impact of technology on student attendance, instructor-student communication, or student satisfaction. By design, articles that examine learning outcomes in economics but do not incorporate a technological intervention are also excluded.

5 Most commonly, the comparison is done using regression analysis. However, we also include a small number of papers that employed t-tests or ANOVA to make the comparison.

6 Zanca’s (2017) meta-analysis of lecture versus less-lecture learning in economics illustrates the extent to which data limitations surrounding reporting of effect sizes and the appropriate standard errors reduce meta-analysis sample sizes.

7 These studies were published in 21 different journals, the most common being the Journal of Economic Education, the International Review of Economics Education, the American Economic Review, and the Southern Economic Journal. Because our studies cover nearly 30 years, we did not attempt to recover from authors the information missing in the 15 papers we exclude because of the third criterion. We were concerned that doing so would bias our study in favor of the most recent publications, for which authors still have convenient access to the data set.

8 I² is one of several noncombinability statistics used to evaluate meta-analyses. It measures the percentage of variation across studies that is due to heterogeneity rather than to chance. A low I² (close to zero) indicates that a standard fixed-effects model can be used to underpin the meta-analysis; a high I² suggests the studies may not be measuring the same effect.
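For reference, the conventional Higgins-Thompson definition (not reproduced in the original note) is

I^2 = \max\!\left(0,\; \frac{Q - (k - 1)}{Q}\right) \times 100\%, \qquad Q = \sum_{i=1}^{k} w_i \,(T_i - \bar{T})^2,

where k is the number of estimates, T_i is estimate i with inverse-variance weight w_i, and \bar{T} is the weighted mean effect.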

9 From equation 2, ρ=0 would not weight by the number of observations from the same study at all, overrepresenting the effects from studies with more estimated effects. ρ=1 is conservative in the sense that it fully weights by the number of observations in each study, so that no study is overrepresented.
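The paper's equation 2 is not reproduced in these notes; a weighting scheme consistent with the description above, offered here only as an illustrative reconstruction, is

w_{ij} = \frac{1}{v_{ij}\,\bigl[1 + (k_j - 1)\rho\bigr]},

where v_{ij} is the variance of estimate i from study j and k_j is the number of estimates study j contributes. Setting \rho = 0 leaves each estimate at weight 1/v_{ij} regardless of k_j, while \rho = 1 divides each study's weights fully by k_j.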

10 Standard error produced using robust variance estimation, which is robust to nonindependence of effect sizes. See Hedges, Tipton, and Johnson (2010).
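A minimal sketch of the intercept-only robust variance estimation (RVE) calculation, in the spirit of Hedges, Tipton, and Johnson (2010); the data, function name, and simple inverse-variance weights are hypothetical, and the published analysis may differ in its weighting and small-sample adjustments.

import numpy as np

def rve_mean(effects, variances, study_ids):
    """Weighted mean effect with a standard error that is robust to
    dependence among effect sizes drawn from the same study."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    beta = np.sum(w * effects) / np.sum(w)         # pooled mean effect
    resid = effects - beta
    # Sum weighted residuals within each study before squaring:
    # clustering by study is what makes the variance estimate robust
    # to correlated effect sizes from the same paper.
    studies = np.unique(study_ids)
    cluster_sums = np.array(
        [np.sum(w[study_ids == s] * resid[study_ids == s]) for s in studies]
    )
    var_beta = np.sum(cluster_sums ** 2) / np.sum(w) ** 2
    return beta, np.sqrt(var_beta)

# Hypothetical example: five estimates drawn from three studies
effects = [0.12, 0.08, -0.02, 0.30, 0.05]
variances = [0.01, 0.02, 0.015, 0.04, 0.01]
study_ids = np.array([1, 1, 2, 3, 3])
beta, se = rve_mean(effects, variances, study_ids)
print(f"mean effect = {beta:.3f}, robust SE = {se:.3f}")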

11 We code the studies such that any study reporting a positive gain in learning (of any size and significance) = 1. Studies with zero or negative gains are coded = 0. For details on the basic econometric assumptions and strategies for meta-regression analysis, see Stanley and Jarrell (2005).
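A one-line version of this coding rule, with a hypothetical column name:

import pandas as pd

# Any positive reported gain is coded 1; zero or negative gains are coded 0.
df = pd.DataFrame({"reported_gain": [0.15, -0.02, 0.0, 0.31]})
df["positive_gain"] = (df["reported_gain"] > 0).astype(int)
print(df)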

12 Under the assumption that all effect estimates are estimates of the same effect and in the absence of publication bias, estimates should be approximately normally distributed around the true effect (which we estimate to be near 0). This suggests that the use of a probit model may be appropriate. While we provide evidence that neither of those assumptions holds, logit and linear probability models produce quantitatively similar estimates of marginal effects, and these estimates are available from the authors upon request.
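A sketch of the comparison the note describes, using simulated data; the outcome, regressor, and sample are hypothetical stand-ins for the paper's coded studies, not its actual specification. Probit, logit, and linear probability models fit to the same binary outcome typically yield similar average marginal effects.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
se_effect = rng.uniform(0.05, 0.5, n)       # hypothetical standard errors
y = (rng.random(n) < 0.6).astype(int)       # hypothetical binary outcome
X = sm.add_constant(se_effect)

probit_fit = sm.Probit(y, X).fit(disp=False)
logit_fit = sm.Logit(y, X).fit(disp=False)
lpm_fit = sm.OLS(y, X).fit()

print(probit_fit.get_margeff().summary())   # average marginal effects
print(logit_fit.get_margeff().summary())
print(lpm_fit.params)                       # LPM slopes are marginal effects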

13 This is the t- or z-statistic that arises from a hypothesis test that specifies the null hypothesis that the technological intervention had no effect on the associated learning outcome versus the alternative that it had some effect (either positive or negative).
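In the usual notation, with \hat{\beta} the estimated intervention effect and \operatorname{SE}(\hat{\beta}) its standard error, this statistic is

t = \frac{\hat{\beta}}{\operatorname{SE}(\hat{\beta})}, \qquad H_0: \beta = 0 \quad \text{vs.} \quad H_1: \beta \neq 0.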

14 For the detailed correction process and how the bias estimation formula transforms, readers are referred to Doucouliagos and Stanley (2009, 410). In light of space constraints and reader interest, we do not go through the entire derivation here.
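For orientation only, the standard transformation in the Stanley tradition starts from a meta-regression of effects on their standard errors and divides through by SE_i to correct for heteroskedasticity:

\text{effect}_i = \beta_0 + \beta_1 \, SE_i + \varepsilon_i
\quad\Longrightarrow\quad
t_i = \frac{\text{effect}_i}{SE_i} = \beta_1 + \beta_0 \left(\frac{1}{SE_i}\right) + v_i,

where a test of \beta_1 = 0 is the funnel-asymmetry test for publication bias and \beta_0 estimates the bias-corrected effect. See the cited source for the full derivation and the specific variant used here.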

15 What we cannot disentangle, however, is whether there is truly no gain from technological intervention or if the near-zero effect of technological interventions is the result of students using improvements in teaching to increase their leisure consumption instead of their learning outcomes (Allgood, Walstad, and Siegfried 2015).
