Abstract
Using data that identify the respondents to student evaluation of teaching (SET), the author finds that respondents and nonrespondents differ along several characteristics. Students respond more if they are first-term freshmen or if the course is a major requirement. Men, students with light course loads, and students with a low cumulative grade point average or low course grade are less likely to evaluate the course and the instructor. A matched-pairs test that effectively eliminates class- and instructor-invariant student characteristics confirms that students who do better in a course are more likely to participate in SET. In addition, students who are more likely to have strong opinions, identified by early participation, hold, on average, positive views toward the course. These results do not support the idea that SET attracts disproportionately more unhappy students. Given the widely documented positive correlation between grades and ratings, these findings suggest that SET ratings can be biased upward.
Keywords:
The author thanks Peter Kennedy, William Becker, John Chilton, and the two anonymous referees for their valuable comments and thanks American University of Sharjah staff Nabeel Amireh, Ron Ray, and Ahmed Aboubaker for their help in retrieving the data. An earlier version of this article benefited from feedback received at a session sponsored by the National Association of Economic Educators and the National Council on Economic Education (Allied Social Sciences Annual Meeting, New Orleans, January 2008).
Notes
1. Layne, DeCristoforo, and McGinty's (1999) approach is different. They predicted which students would complete the electronic survey by using a two-group discriminant analysis.
2. Avery et al. (2006) pointed out that their students could have felt that paper evaluation was more secure, because of the potential ability to track and identify Internet users. As a result of such confidentiality concerns, the response rate in classes subject to electronic SET might have been affected by the presence of the traditional method of evaluation.
3. Student identification number was replaced with a randomly chosen alternative, to maintain anonymity.
4. Whether the course grade measures learning or other factors is not important here. If the course grade reflects lenient grading, for example, high-quality students are still expected to outperform other students.
5. In SET studies linking ratings to course grade, the estimation process is complicated by possible two-way causation between these two factors. However, endogeneity is not a problem in this study, because responding to SET does not affect the course grade. The actual grade and the expected grade at the time of the evaluation are expected to be highly correlated, because the evaluations ended one week before the final exams. The results do not change when using a modified CGPA that excludes the course grade associated with a particular observation.
6. I tried one additional variable. AUS is a regional campus with a very diverse student population. I classified the students into five nationality groups (group shares, in percentages, follow each group): United Arab Emirates (16.9), other Gulf Cooperation Council (GCC) countries (11.8), non-GCC Arab nationality (41.5), South Asia (19.7), and other (10). None of these groups was significantly different from any other.
7. Thirteen percent of classes had a response rate between 40 and 60 percent, 53 percent had a response rate between 60 and 80 percent, and 33 percent had a response rate above 80 percent.