RESEARCH IN ECONOMIC EDUCATION

In-Class vs. Online Experiments: Is There a Difference?

Pages 4–18 | Published online: 18 Jan 2012

Abstract

Classroom experiments in economics continue to increase in popularity. While early experiments were often hand-run in class, computerized online experiments are now also widely available. Using a quasi-experimental approach, the authors investigated whether any difference in student achievement (as measured by course scores and the Test of Understanding in College Economics (TUCE) (Saunders 1991)) or other outcomes exists between students exposed to experiments in class and students exposed to them online. In this investigation, class sections differed only in the manner in which the experiments were administered: manually in class or computerized online. The authors found no significant difference in student achievement or overall views of the course or instructor between the two treatments. The authors did, however, find that students exposed to hand-run experiments report more favorable views of the experimental pedagogy and report higher levels of interaction with their classmates.

Acknowledgments

The authors gratefully acknowledge financial support from Baylor University. The authors are very grateful for the patience and hard work of Susan Armstrong, Rebecca Jordan, and Kellie Konsor during the data collection process. Finally, the article has benefited greatly from comments by Steve Conroy, Myra Moore, Jennifer Imazeki, and the Research on Economic Education session participants at the 2009 and 2011 ASSA meetings. Any errors are the authors’ sole responsibility. This article is based on a paper that was presented at the National Conference on Teaching Economics at Stanford University on June 1–3, 2011.

Notes

1. See Becker and Watts (1995) for an overview of innovative teaching methods aimed at promoting active learning in economics courses.

2. Emerson and Taylor (2004, 2007), Dickie (2006), Ball, Eckel, and Rojas (2006), and Durham, McKinnon, and Schulman (2007) investigate achievement in principles courses. Frank (1997) studies the efficacy of experiments in public and environmental economics courses.

3. For further discussion of the literature on experiments, see Emerson and Taylor (2004), Dickie (2006), Durham, McKinnon, and Schulman (2007), and Emerson and Hazlett (forthcoming, 2012).

4. Vazquez-Cognet (2008), however, has adapted some hand-run experiments for use in a large-enrollment (500+) class. Hazlett et al. (2011) describe variations of three experiments designed to increase student engagement and participation, including an adaptation of the basic double oral auction for use in large-enrollment classes.

5. More information about these computerized experiments can be found at http://aplia.com/experiments/, http://veconlab.econ.virginia.edu/, and http://www.econport.org.

6. This is not to suggest that no feedback or communication is possible with computerized experiments. For example, VeconLab has a feature that allows instructors to send messages to experiment participants. Aplia has a “chat room” feature that also facilitates communication between instructors and participants as well as among participants themselves. With both of these programs, however, debriefing is still delayed. Further, it is less likely that online discussions will involve the entire class to the same extent as an in-class discussion that may employ additional active-learning techniques.

7. Anecdotally, students in the study appeared to prepare more for the online experiments. While instructions and warm-up exercises were provided to students in both the treatment and control groups, those in the control group (informally) reported spending less time preparing for experiments. With the computerized experiments utilized in the study, students were required to read through on-screen instructions and perform practice exercises prior to each experiment's commencement. This required attention and effort may contribute positively to student understanding.

8. At Baylor University, completion of microeconomic principles is a prerequisite for macroeconomic principles. As a result, most students in the sample had not yet taken a course in college-level macroeconomics.

9. Ideally, students would have been randomly assigned across sections. Such assignment, however, was not possible. At the time of enrollment, students did not know whether they had selected a section with in-class or online experiments. In fact, students did not know whether they had selected into a class employing experimental pedagogy at all.

10. Experts in the pedagogy of classroom experiments consider it best practice to start a new topic with an experiment—with no instruction on that topic prior to the experiment. See Emerson and Hazlett (forthcoming, 2012) for additional discussion of best practices.

11. In other words, the gap-closing measure is defined as (postcourse TUCE − precourse TUCE)/(33 − precourse TUCE).
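For concreteness, a minimal sketch of this calculation (the function name and example scores below are ours for illustration, not the article's):

```python
# Minimal sketch of the gap-closing measure from note 11; 33 is the
# maximum possible TUCE score. Function name and example values are
# illustrative, not from the article.
def gap_closing(pre_tuce: float, post_tuce: float, max_score: float = 33) -> float:
    """Fraction of the possible TUCE improvement actually achieved."""
    return (post_tuce - pre_tuce) / (max_score - pre_tuce)

# Example: a student scoring 15 before and 24 after closes half the gap.
print(gap_closing(15, 24))  # 0.5
```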

12. While analysis of whether TUCE questions related topically to the experiments may also be of interest, we limited the current study to focus on overall achievement. That is, here we attempted to answer whether the medium of administration of the experimental pedagogy affects overall achievement (rather than whether specific experiments produce increased understanding of the topics illustrated therein).

13. The professors in the study differed somewhat in the number of homework assignments and exams they gave: one professor assigned 11 problem sets and two midterms, while the other assigned 15 homework sets and three midterms.

14. Some researchers have questioned the validity and reliability of the TUCE for measuring student achievement. Issues include the test format (fixed response) and context (real-world problems). For additional discussion of TUCE limitations, see Emerson and Taylor (2004). Even so, we employed the TUCE as it is the only nationally recognized and normed instrument available at the collegiate level.

15. The third edition of the TUCE has a total of 33 questions for each version. Instructors are given the option of having students complete either the first 30 questions or all 33 questions. Students in this study were instructed to answer all 33 questions to the best of their ability.

16. The precourse TUCE was designed to be a surprise exam (i.e., students were to have had no knowledge of the exam before coming to class because such knowledge could have affected attendance and participation in the study).

17. During the precourse TUCE assessment, students were not made aware of this grading method to prevent strategic behavior that could have led to a downward bias of the precourse TUCE scores. The change in TUCE scores entered into grade calculation in the same fashion across all sections.

18. Becker and Powers (2001) argue against using student-provided data for aptitude measures due to their unreliability. Further, Maxwell and Lopus (1994) demonstrate that student self-reporting of scores may suffer from systematic reporting bias. Such nonrandom reporting would produce biased estimates of the relationship between student achievement and educational inputs.

19. Twenty-one students in the study only took the ACT and not the SAT. In the interest of preserving as large a sample as possible, we used a standard ACT-SAT conversion chart (http://www.act.org/aap/concordance/) to translate the ACT scores into SAT scores.
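A hedged sketch of how such a conversion might be applied in practice; the mapping below is a placeholder for illustration only and does not reproduce the concordance chart referenced above:

```python
# Illustrative only: placeholder ACT-to-SAT mapping, NOT the published
# concordance at http://www.act.org/aap/concordance/.
ACT_TO_SAT = {26: 1180, 28: 1260, 30: 1340, 32: 1420, 34: 1520, 36: 1600}

def harmonized_sat(sat_score, act_score):
    """Prefer a reported SAT score; otherwise translate the ACT composite."""
    if sat_score is not None:
        return sat_score
    return ACT_TO_SAT.get(act_score)

print(harmonized_sat(None, 30))  # 1340 under the placeholder mapping
```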

20. Instructors took attendance. For the treatment section, participation in online experiments was used to record attendance (as opposed to presence at the debriefing) on the six experiment class days. There was, however, little to no difference in the participation in the online experiments and the subsequent debriefing.

21. Because much of the literature studying the efficacy of experiments has employed in-class experiments, and to facilitate the discussion, we refer to the group with in-class manually run experiments as the “control group” and refer to the group with computerized online experiments as the “treatment group.”

22. All students who were still enrolled at the end of the course were assigned a course score and final course grade. However, not all of these students took the postcourse TUCE, and thus some are missing the gap-closing measure. The total number of observations was 217 for the final course score and 203 for the TUCE measure. Of these students, several failed to provide information on basic demographic characteristics (e.g., age) or SAT scores. So, we are left with a sample of 186 students for whom demographic information, SAT scores, and course scores are available and 176 students for whom demographic information, SAT scores, and TUCE gap-closing measures are available.

23. Extra credit was offered in each of the seven sections. As a result, the maximum percentage possible exceeded 100.

24. Specifically, we used precourse TUCE score and transfer status to identify the selection equation in the course score analysis and used student GPA and transfer status to identify the selection equation in our analysis of the TUCE gap-closing measure. In correcting for selection, we do note that in the final course score estimation, in specification (3), age is no longer a statistically significant predictor of achievement.
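As a rough illustration of the kind of two-step selection correction described in this note, the sketch below uses synthetic data and variable names of our own choosing; it is not the article's estimation code:

```python
# Hedged sketch of a two-step (Heckman-style) selection correction in the
# spirit of note 24. Data, coefficients, and variable names are synthetic.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200

# Stand-ins: precourse TUCE and transfer status identify the selection
# equation; SAT enters the outcome (final course score) equation.
pre_tuce = rng.normal(15, 4, n)
transfer = rng.binomial(1, 0.2, n)
sat = rng.normal(1150, 120, n)

# Selection: whether the student remains in the estimation sample.
z = sm.add_constant(np.column_stack([pre_tuce, transfer]))
latent = 0.5 + 0.05 * pre_tuce - 0.4 * transfer + rng.normal(size=n)
in_sample = (latent > 0).astype(int)

# Step 1: probit of sample inclusion on the identifying variables.
probit = sm.Probit(in_sample, z).fit(disp=False)
xb = z @ probit.params
inverse_mills = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome regression on the selected observations, adding the
# inverse Mills ratio to control for nonrandom selection.
course_score = 40 + 0.03 * sat + rng.normal(0, 5, n)
sel = in_sample == 1
x = sm.add_constant(np.column_stack([sat[sel], inverse_mills[sel]]))
ols = sm.OLS(course_score[sel], x).fit()
print(ols.params)  # constant, SAT coefficient, selection-correction term
```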

25. Students were also asked to report the number of acquaintances and friends in their class in both the pre- and postcourse surveys. No significant difference between the changes in the number of acquaintances or friends was found between the groups.

26. When asked to evaluate the statement, “Experiments contributed to my overall satisfaction with this course,” the difference was only significant at the 10-percent level.
