Scholarship of Teaching and Learning

Keeping Students from Going AWOL: The Link between Pedagogy and Student Retention

Pages 318-345 | Received 28 Jan 2017, Accepted 09 Mar 2018, Published online: 15 Oct 2018
 

Abstract

Teaching and learning research typically focuses on learning outcomes relating to the acquisition of knowledge. In this article, we shift focus to a different outcome: student commitment to, and thus successful completion of, a course. By examining the relationship between instructor pedagogical choices and rates of student retention—as measured primarily by withdrawal rates—we hope to help instructors think about ways to limit the incidence of their students “going AWOL.” How strong is the evidence that an instructor’s teaching behavior influences student retention or attrition? If an instructor’s decisions do seem to matter, exactly which patterns are evident? To begin answering these questions, we report results from a major data collection effort, which involved undergraduate research assistants attending a wide variety of courses and recording details related to the instructor’s pedagogical approach—such as the extent to which the instruction was student-centered or instructor-centered, the extent to which the classroom environment was regulated, elements of entertainment incorporated into class sessions, and efforts to facilitate rapport with students. Comparing this information to end-of-semester grade and enrollment statistics, we identify the relationship between how a course is taught and the retention of its students. We find a pattern of lower retention in courses where class sessions are heavily structured, in the sense that students are “cold-called” to participate or discouraged from such behaviors as phone use. We also find a relationship, weaker but still significant, between higher retention rates and both the showing of videos and the use of active learning.


Acknowledgments

The authors would like to thank Joan Middendorf, Alex Greer, Samuel Brazys, and three anonymous reviewers of the Journal of Political Science Education for feedback that helped make this article stronger. They are also grateful for the work of 20 undergraduate research assistants at Oklahoma State University. Without their capable data collection efforts, this research would not have been possible.

Notes

1 Blair (Citation2015) shows that 84% of such studies focus on a single course.

2 Note that our approach does rely in part on student perceptions, because our observers are students and are asked in a few select instances to provide their subjective perceptions of certain features of the sessions they observe.

3 The disciplines, and the frequencies with which they appear in our sample, are as follows: Political Science (11), History (6), Geography (3), Statistics (3), Philosophy (2), Geology (2), Art (2), Sociology (1), Psychology (1).

4 No instructor appears more than once in our sample.

5 These were not always 100-/1000-level courses. Rather, our sample includes courses at any level that focus on introductory subject matter—including major introductory courses within various disciplines (e.g., Introduction to Sociology), as well as introductions to subdisciplines (e.g., Introduction to Comparative Politics).

6 Limiting the sample to only political science, while keeping the other filtering criteria, would have made it impossible to include more than about 15 courses in our study.

7 Our typical threshold was 35 students, although we included political science courses with as few as 25 students in the interest of over-sampling courses from our discipline.

8 While it might reduce our external validity to some extent, the lack of random selection is not problematic in terms of internal validity. We are not attempting to catalogue what instructors are doing in their courses; if we were, the lack of a representative sample would be problematic. Instead, given what instructors in our sample are doing, we are showing the relationship between those particular teaching styles and student retention.

9 For instance, during the Spring 2015 round of research, more than two-thirds of the instructors responded to the attempts to solicit their participation, and more than 90% of the responding instructors—25 in total—agreed to participate. Some of the consenting instructors did not end up in the study because observing their courses proved impossible for scheduling reasons.

10 More detail on observation procedures and a copy of the observation sheet used during observations can be found in the Appendix.

11 Consequently, we have no reason to suspect that teaching behavior differed between days when observers were present and days when they were not.

12 On the low end, a course was observed as few as five times over the course of the semester; on the high end, almost 20. The mean and median observation frequencies were, respectively, 9.77 and 9. The information recorded by the observers is averaged across class sessions, so differences in the exact frequency of observations across courses do not affect the data analysis.

13 In one sense, it is not accurate to say that a student earning a D has failed to successfully complete a course: the student still passes the course and earns credit hours. However, in many instances, a student needs a higher grade in order for a course to fulfill relevant requirements; and, presumably, a grade of D falls below what both students and instructors would typically regard as success.

14 The only such instance that we do not count is when the instructor merely asks if students have any questions.

15 As our study focuses on in-class instruction, we leave out work completed outside of the classroom.

16 "Overall levels of classroom regulation" includes cold-calling of students and discouraging their sleeping, distraction, or use of mobile devices.

17 This is not to say that doing otherwise indicates not valuing students or the class; we merely offer these as explicit, observable, attempts to create such an impression.

18 The point is carefully specified in our coding instructions for observers: it is the attempt to address students by name—not getting names correct—that matters.

19 This assumption is informed by discussions with students. Instructors who dress well seem, according to various student accounts, to “take the class seriously”—in keeping with what we mean when referring to rapport.

20 Observers were instructed to measure laughter per se (not, for example, how humorous the instructor was). Thus, this indicator captures anything involving humor during the class session—such as comments by the instructor or students, as well as videos or other resources used during class.

21 A table listing all of the variables in our study, with more details about each one, is included in the Appendix.

22 There are a great many control variables that one would need to include if hoping to isolate the effect of specific pedagogical characteristics on retention. Adding all of these variables into a regression would increase the required sample size far beyond what is feasible, barring a monumental data collection effort. To illustrate, consider a sample size of 300, which would provide much better statistical leverage. Based on the data collection specifications that we use for this project, it is possible to observe courses at a rate of approximately two per research assistant. Keeping the observations at one university, and only in disciplines related to political science, we estimate that one could recruit about 30 instructors per semester, on average. This means that a sample size of 300 would require roughly 10 semesters of observations and up to 150 research assistants over that time period. Alternative means of measuring pedagogy would provide greater potential for this large sample size, but they would come with the drawbacks noted earlier in this section.
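
To make the arithmetic behind these figures explicit, here is a minimal sketch in Python. The rates are those stated in this note; the variable names are ours.

    # Back-of-the-envelope feasibility check for a 300-course sample,
    # using the observation and recruitment rates reported in this note.
    target_courses = 300           # desired sample size
    courses_per_ra = 2             # courses each research assistant can observe
    instructors_per_semester = 30  # estimated recruitment at one university

    semesters_needed = target_courses / instructors_per_semester  # 10.0
    total_ras_needed = target_courses / courses_per_ra            # 150.0

    print(f"Semesters required: {semesters_needed:.0f}")
    print(f"Research assistants required: {total_ras_needed:.0f}")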

23 A series of summary statistics describing the data we collected can be found in the Appendix.

24 Interestingly, while political science courses appear pedagogically quite similar to courses from other disciplines, we did find that political science courses used active learning techniques more frequently. See the Appendix.

25 Withdrawal rates and D/F rates are positively but not significantly correlated (r = .2356) among the courses in our sample.
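
As a point of reference, a reported coefficient and its significance can be checked with a standard Pearson correlation test. A minimal sketch, using hypothetical rates rather than the authors' actual data:

    from scipy.stats import pearsonr

    # Hypothetical per-course withdrawal and D/F rates, for illustration only.
    withdrawal_rates = [0.05, 0.10, 0.02, 0.08, 0.12, 0.04]
    df_rates         = [0.15, 0.22, 0.10, 0.12, 0.30, 0.18]

    r, p = pearsonr(withdrawal_rates, df_rates)
    print(f"r = {r:.4f}, p = {p:.4f}")  # "not significant" means p > .05 here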

26 For an overview of content analysis research methodology, see Krippendorff (Citation2013) or Riffe, Lacy, and Fico (Citation1996).

28 In most other applications of content analysis, the content being analyzed consists of newspaper articles or similar textual material. This ostensible difference is methodologically inconsequential.

29 A copy of the class observation coding sheet, which (along with a document containing general instructions for coders) constitutes the coding protocol, is provided in Section C of this Appendix.

30 Gaining insights into the student experience is our core objective when measuring an instructor’s pedagogical approach. We care less about the “true” extent to which an instructor does something and more about the extent to which students perceive the instructor as doing something—hence our placement of student assistants in the classroom who record what they perceive. For instance, when gathering information about instructor attempts to build rapport with students, we have selected indicators that we view as signaling to students that the instructor cares. We cannot directly observe student experiences, so we make inferences about what these experiences are likely to be based on what our observers record.

31 The extent to which different coders agree with one another when evaluating the same material is often used to assess the internal validity of the research: if the categories being used cannot reliably be applied by the coders—in other words, if different coders often provide different responses to the same coding item (for the same material)—then it suggests that the measurement procedures may not be trustworthy (Krippendorff Citation2013, 268). See Riffe, Lacy, and Fico (Citation1996, 127–133) for a discussion of different measures of intercoder reliability.

32 Commonly, most texts are analyzed by just one coder, but a portion of them (perhaps 15% of the texts in the sample) are coded by all coders in order to assess intercoder reliability.

33 There are two coders per class session. It is often, but not always, the same two coders who observe a given course. In many cases, three coders observe on a rotating basis. The overall average number of coders contributing to the data for a particular course (including anomalous one-time substitutions) is 2.74.

34 In our approach, the possibility remains that any given combination of coders could be anomalous compared to other combinations of coders. Our inability to definitively rule out this possibility may be viewed as the primary limitation of our data collection procedure.

35 A common standard is that agreement must be at least 80%, after controlling for chance—see Riffe, Lacy, and Fico (Citation1996, 128).
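
The note does not name a specific statistic; Cohen's kappa is one common way to measure agreement “after controlling for chance.” A minimal illustrative sketch for two coders (the data here are hypothetical):

    from collections import Counter

    def cohens_kappa(codes_a, codes_b):
        """Chance-corrected agreement between two coders."""
        n = len(codes_a)
        observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
        # Expected agreement by chance, from each coder's marginal frequencies.
        freq_a, freq_b = Counter(codes_a), Counter(codes_b)
        expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                       for c in set(codes_a) | set(codes_b))
        return (observed - expected) / (1 - expected)

    # Two coders answering the same yes/no item for ten class sessions.
    coder_1 = ["y", "y", "n", "y", "n", "y", "y", "n", "y", "y"]
    coder_2 = ["y", "y", "n", "y", "y", "y", "y", "n", "y", "n"]
    print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.52, below the .80 standard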

36 The standard training videos were Open Yale Courses, available at http://oyc.yale.edu.

37 Our approach regards variability in responses as acceptable, if not desirable. However, responses that are significantly at variance with what others would have given are problematic, because they would likely reflect a systematic bias on the part of a coder. Thus, the goal of the training was to ensure that the deliberations left only a reasonable range of disagreement.

38 The responses of both observers are combined in this standardization of the variables, so the previous example would require that observers always agreed about the instructor never/always addressing students by name. Imagine a hypothetical scenario in which, for every class session, one coder reported that the instructor always used names and the other coder reported that the instructor never did so. In this case, the overall value for that course would be .5.
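
Our reading of this aggregation, sketched under the assumption of binary per-session reports (1 = addressed students by name, 0 = did not); the exact averaging procedure is inferred from the note:

    # Each list holds one observer's per-session reports for a single course.
    observer_1 = [1, 1, 1, 1, 1]  # reports "always" in every session
    observer_2 = [0, 0, 0, 0, 0]  # reports "never" in every session

    # Pool both observers' responses and average to a single 0-to-1 course value.
    pooled = observer_1 + observer_2
    course_value = sum(pooled) / len(pooled)
    print(course_value)  # 0.5, matching the hypothetical scenario in the note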

39 The correlation coefficients would be exactly the same with or without this conversion, because rescaling to a 0 to 1 range is a linear transformation and correlation coefficients are invariant to linear transformations.

40 For a better sense of what the values for each variable indicate, refer to the table of variables in the Appendix, and recall that all values described therein are converted to a 0 to 1 range.

41 The mean and median of the “overall level of student involvement” variable are considerably higher than those of the active learning frequency variable.

42 The only exception is calling students by name, a practice that is only slightly more common in political science courses.

Additional information

Notes on contributors

Eric Michael French

Eric Michael French is an Assistant Professor of Political Science at Oklahoma State University. He received his PhD from Indiana University in 2013. His research interests fall into two different areas: (1) teaching and learning—especially student retention and faculty perceptions of student comprehension; and (2) political behavior—especially political disagreement and persuasion.

Brendon Westler

Brendon Westler is a Postdoctoral Fellow with the Center for the Study of Liberal Democracy at the University of Wisconsin-Madison. He conducts research in the area of political theory, with a particular focus on the history of liberalism and Hispanic political thought. His other published work has appeared in the Journal of the History of Ideas, the Review of Politics, and Perspectives on Politics.
