
Classroom Versus Online Assessment

Pages 450-456 | Published online: 04 Nov 2014
 

Abstract

The authors examined students’ effort and performance under online versus traditional classroom testing procedures. The instructor and instructional methodology were the same across sections of an introductory finance class; only the procedure by which students were tested, online versus in the classroom, differed. The authors measured student effort by tracking the number of times students accessed study resources posted on the university Blackboard course management system, and performance as grades on tests given online for some students and in the classroom for others. The results indicate that neither study effort nor course performance was influenced by the testing procedure. However, the authors did find a strong positive relationship between students’ effort and their performance in the course.

Notes

1. Blackboard has evolved into the predominant online course management system for higher education. It is quite versatile and can be employed for a wide range of instructional, communication, and assessment functions. Textbook publishers also provide flexible, dynamic online interactive teaching, learning, and assessment tools. Examples include McGraw Hill's Connect, Wiley's Wiley Plus, Cengage's Course Mate, and Pearson's MyLab.

2. Students who were tested online versus in the classroom knew the testing method when they enrolled in their sections. In addition, only one section offered online testing, so the online-testing sample was smaller than the traditional-testing sample, which was drawn from multiple sections. This lack of randomness may introduce sample selection bias. Self-selection is most often a problem when samples are truncated (or censored), but it can also arise from nonrandom sample selection. We used the technique suggested by Heckman (1979) to test for sample selection bias. Neither the coefficient of the Probit estimate (substituted for the online dummy) from a Probit model of the online dummy on the independent variables nor the inverse Mills ratio constructed from that estimate was significant in either the effort regression or the course performance regression. This suggests no self-selection bias was present.
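A minimal sketch of the Heckman (1979) two-step check described in this note, assuming a Python/statsmodels workflow; the data file, covariates (gpa, credit_hours, age), and the online and course_grade columns are illustrative placeholders, not the authors' actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

df = pd.read_csv("course_data.csv")  # hypothetical layout: one row per student

# First stage: probit of the online-testing dummy on the observed covariates.
X_sel = sm.add_constant(df[["gpa", "credit_hours", "age"]])
probit = sm.Probit(df["online"], X_sel).fit(disp=False)

# Inverse Mills ratio from the first-stage linear index X'gamma.
xb = np.asarray(X_sel) @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Second stage: the outcome regression (course grade here) augmented with the IMR.
X_out = sm.add_constant(df[["gpa", "credit_hours", "age", "online"]])
X_out["imr"] = imr
second_stage = sm.OLS(df["course_grade"], X_out).fit()

# An insignificant IMR coefficient is consistent with no selection bias.
print(second_stage.summary())
```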

3. The low correlations among the control variables suggest that the explanatory variables are not sufficiently correlated to create a multicollinearity problem.
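A brief sketch of the kind of collinearity diagnostic this note refers to, assuming a pandas/statsmodels setup; the column names are placeholders. It reports the pairwise correlation matrix alongside variance inflation factors.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("course_data.csv")  # hypothetical data file
controls = df[["gpa", "credit_hours", "age", "online"]]

# Pairwise correlations among the explanatory variables.
print(controls.corr().round(2))

# Variance inflation factors (a constant is added so the VIFs are centered).
X = sm.add_constant(controls)
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)  # values well below 10 suggest multicollinearity is not a concern
```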

4. We also ran the analysis using panel data, comparing the views in each of the four test periods with the respective test grades. The findings are consistent with the reported analyses using the total views for the course and the overall course grade.
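A hedged sketch of one way to run the panel comparison this note describes: test-period grades regressed on views in the same period with student and test-period fixed effects. The long-format file and column names (student, test, views, grade) are assumptions for illustration, not the authors' actual specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long format: one row per student per test (four tests).
panel = pd.read_csv("views_by_test.csv")

# Views in each test period against the corresponding test grade,
# with student and test-period fixed effects and student-clustered errors.
fe = smf.ols("grade ~ views + C(test) + C(student)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["student"]})
print(fe.params["views"], fe.pvalues["views"])
```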

5. We also used the number of views in the 10 days prior to each test as our measure of student effort. The overall findings were unchanged.

6. We also estimated our regressions with a log transformation of our continuous measure of course grade as the dependent variable. As a third specification, we utilized a multinomial ordered Probit model. The dependent variable was categorized as those making an A (90–100), B (80–89), C (70–79), D (60–69), and F (<60). The key findings of the study were unchanged.
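A rough sketch of the two alternative specifications mentioned in this note, assuming a Python/statsmodels workflow: OLS on the log of the continuous course grade, and an ordered probit on the letter-grade categories. The variable names and data file are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("course_data.csv")  # hypothetical data file
exog = df[["views", "gpa", "online"]]

# (1) Log transformation of the continuous course grade.
log_ols = sm.OLS(np.log(df["course_grade"]), sm.add_constant(exog)).fit()

# (2) Ordered probit on letter grades: F (<60) < D < C < B < A (90-100).
letters = pd.cut(df["course_grade"], bins=[-np.inf, 60, 70, 80, 90, np.inf],
                 right=False, labels=["F", "D", "C", "B", "A"])
oprobit = OrderedModel(letters, exog, distr="probit").fit(method="bfgs", disp=False)

print(log_ols.params)
print(oprobit.summary())
```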
