
What’s the Payoff?: Assessing the Efficacy of Student Response Systems

Pages 249-263 | Published online: 04 Aug 2015
 

Abstract

Student response systems, or “clickers,” have been presented as a way of solving student engagement problems, particularly in large-enrollment classes. These devices provide real-time feedback to instructors, allowing them to understand what students are thinking and how well they comprehend material. As clickers become more common, it is important to assess their impact on student learning and engagement. Utilizing individual-level data from introductory courses in political science, we demonstrate both that students approve of clicker usage and that clickers are positively associated with class performance. Using clickers to test students’ understanding of class concepts has positive effects on exam and essay scores, even after controlling for previous levels of academic achievement, students’ evaluation of the technology, and other socioeconomic traits.

Notes

More information about the vendor and devices can be found on the i>clicker Web site: http://www1.iclicker.com/.

To provide context, “Do you believe it is good to have an unelected judiciary in a democratic nation?” is an example of an opinion question. An application question might take the following form: “Which of the following is an example of fiscal federalism?” Finally, “What alternatives to the electoral college might you propose?” is an example of a discussion question.

Alternatively, these two grading schemes can be combined, awarding a portion of the credit based on participation and the balance on correctness.

In survey responses, about 20% of students agreed with the statement that participation was worth too much of their grade while approximately 36% agreed that it was worth too little. Seventy-four percent felt the policy for grading participation using clickers was fair while about 14% disagreed.

Our response rate is 40% during the Fall 2011 semester and 43% during Fall 2012. In each semester, the same survey is administered about two weeks prior to the end of the term. To account for any semester-specific variation, we include in our statistical models a control variable for the term in which the survey is collected.

We do not use the raw final score earned in the class because this also includes the frequency with which students responded to clicker questions. Isolating grades on exams and essays allows us to examine any effect more frequent clicking might have on exogenous assessments of understanding.

In creating this measure, we only include questions that contain an objectively correct answer (i.e., application questions). This measure is calculated as the percentage of application questions correctly answered by the respondent.
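Stated as a formula (our notation; the note itself gives only the verbal definition), if student i answers n_i application questions, of which c_i are correct, then:

    \[
    \text{ClickerAccuracy}_i = 100 \times \frac{c_i}{n_i}
    \]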

Utilizing a 5-point Likert scale ranging from strongly dislike to strongly like, the average score is 3.26 for concept questions, 3.18 for opinion questions, and 3.09 for discussion questions.

Students are allowed to earn their grade in this course by pursuing one of two tracks, one in which they take a series of tests and another where they write essays. Both grading schemes assess the same concepts and theories, but students have a choice in how they wish to demonstrate their mastery of the material. Someone on the essay track does not take exams and, likewise, those on the exam track do not write essays. The dependent variable is the final score earned by the individual on either the essay or exam track and the same essay and exam questions are used across the two semesters.

Specifically, items missing student responses include the following: questions about political interest, whether they had taken college-level political science courses or a high school government class, and their year of school (nine missing responses for each); questions about student gender, college GPA, and whether or not the student was born in another country (10 missing responses for each); parents’ levels of education (13 missing for father, 15 missing for mother), and students’ levels of engagement in on- and off-campus organizations (13 missing).

We used the slate of mi commands in Stata 13 to impute data iteratively into five separate datasets. This process utilizes complete information from observations with nonmissing data as well as partial information from observations with partially missing data to estimate missing values. Using the mi estimate command in Stata (version 13), OLS regressions were run on each dataset separately and then pooled into a single table of results, with coefficient estimates representing the average effects of each variable across the analyses. It is important to note that imputing values for observations with missing data does not, by itself, push an estimate for that variable toward significance or nonsignificance. In other words, the technique does not skew the results; it provides a standardized estimate of each missing value so that the nonmissing information a respondent did provide can still be included in the model. More information about the necessity of multiple imputation and its use can be found in King et al. (2001).
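As a minimal sketch of the workflow this note describes (variable names, the imputation models, and the seed are our own placeholders, not those from the authors’ files; the note also does not specify chained equations, which we assume here), the Stata commands would follow this general pattern:

    * Declare the multiple-imputation data structure
    mi set mlong

    * Register variables with missing values (hypothetical names)
    mi register imputed gpa female interest parent_educ

    * Create five imputed datasets via chained equations
    mi impute chained (regress) gpa (logit) female ///
        (ologit) interest parent_educ, add(5) rseed(12345)

    * Fit OLS on each imputed dataset and pool the results
    mi estimate: regress exam_score clicker_pct gpa female interest

The mi estimate prefix applies Rubin’s combination rules: coefficients are averaged across the five datasets and standard errors are adjusted for between-imputation variance, which matches the single pooled table of results described above.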
