Pedagogical and Curricular Innovations

Engaging Students in American Politics: Effort and Accomplishment

Pages 565-579 | Received 18 Jan 2021, Accepted 13 Jul 2022, Published online: 15 Feb 2023
Abstract

How much should student effort matter in their course grades? How much does student effort actually matter? What is the link between student effort and student performance, especially when the effort is not focused on a specific performance metric? This paper examines these questions normatively as well as by analyzing student data generated from an introductory course in US politics. We argue as follows. First, student effort should comprise a substantial share of student grades: otherwise, the grade is likely to reward student resources (“human capital”), which are in large part morally arbitrary. Second, in a course in which effort was appropriately incentivized, weighted, and measured, effort was the principal determinant of final grades through its direct and indirect effects. Finally, although there was intriguing evidence that certain kinds of effort mattered more than other kinds in affecting student performance, we were not able to draw firm conclusions on this point.

Notes

1 The syllabi were collected from a non-random sample of 130 political science syllabi at the APSA website and from participants in the APSA Teaching and Learning Annual Research Conference in 2020.

2 In the syllabi sample, the mean weight for participation was 15% (standard deviation = 12.0%). Unpublished data available from the authors by request.

3 When referring to these course components (or the variables linked to them), we capitalize them as proper nouns; otherwise, we treat them as ordinary words.

4 The 25-question quizzes needed to be completed in 18 minutes; the 15-question quizzes, in 12 minutes.

5 Engagement as used in this class is distinct from civic engagement (see, e.g., McCartney, Bennion, and Simpson 2013; Matto et al. 2017).

6 If the content questions were multiple choice, they were machine graded, with the student receiving points based on the number of correct answers. If the content questions called for text responses, the student received points if the responses demonstrated effort in the judgment of the teaching assistant.

7 Strategic students might have tried to determine the most efficient way to earn Engagement points, but we have no data to examine this possibility.

8 Of those who completed these tasks, the mean score was 92.6% (85.2 out of 93.0). The knowledge questions on these activities were machine-graded multiple choice questions. The interpretive questions were given credit so long as they indicated effort (though students had a rubric against which they could compare their answers).

9 One section had an average score of 87, statistically distinct from the other sections. As the TA for this section had another section that met the 85-point mean goal, we believe this section might have had a higher proportion of higher-performing students.

10 Because Total is a perfect linear combination of Knowledge, Engagement, and Research, we omitted Research from the model to better compare the effort and resource components.

11 In examining the relationship between Engagement and Total, we used each student’s scores as a percentage of that student’s points relative to the highest scoring student in that semester. As a result, the maximum score was 100. But because the two semesters differed substantially in the total points possible, in this section Engagement is measured as the total number of points earned. To put this on a more easily interpretable scale, we divided the total number of Engagement points earned by 5, as most assignments were five points. As a result, the highest scoring student in the first semester had an Engagement score of 119.6 (which implies that, over the course of the semester, this student completed over eight activities each week).

12 Unfortunately, we lacked SAT scores for nearly 15% of our sample. Consequently, we used Knowledge scores as another, less perfect, measure of resources. We assumed that ‘wealthier’ students (those with better test training, a larger stock of knowledge, an aptitude for multiple choice exams, etc.) would score higher on the Knowledge component than their ‘poorer’ counterparts. Students with and without SAT scores are generally comparable on most dimensions. Those lacking scores were more likely to be first-year students, with a modest overrepresentation of male international students. Models that omitted SAT scores and substituted Knowledge scores produced results similar to those of a model including SAT but not Knowledge. In a model including both SAT and Knowledge, SAT was statistically significant and Knowledge was not. Full results available from the authors by request.

13 The data collected did not include an option for non-binary or other categories of sex. When ACT scores were submitted, we converted them to SAT equivalents.

14 Neither Sex nor Nationality was statistically significant. Sophomores and Seniors performed substantially better than did first-year students.

Additional information

Notes on contributors

Mark Carl Rom

Mark Carl Rom is devoted to improving teaching and learning in higher education. His recent research has focused on assessing student participation, improving grading accuracy, reducing grading bias, and improving data visualizations. Mark also explores and critiques the field of political science through symposia on academic conferences, ideology in the classroom, and ideology within the discipline. He is an Associate Professor of Government and Public Policy at Georgetown University.

Jorge Abeledo

Jorge Abeledo is the Director of the Youth Ambassadors program at Georgetown University’s Center for Intercultural Education and Development, a federally funded cultural exchange program designed to strengthen the leadership skills of young people from the Caribbean, Latin America, and the United States. Previously, Jorge was an Assistant Professor in the Institute of Human Development at the Universidad Nacional General Sarmiento in Argentina where he taught media and communication and conducted research on the use of informational technologies in education. Jorge holds a Master’s in Learning, Design, and Technology from Georgetown University.

Randal Ellsworth

Randal Ellsworth is a Senior Instructional Designer at AllenComm. Previously, he was a graduate associate at CNDLS, where he supported learning design, technology-enhanced learning, and the Blended Learning Colloquium. Randal holds a Master’s in Learning, Design, and Technology from Georgetown University.

Noah Martin

Noah Martin is a Senior Designer for Learning Ecosystems at Georgetown University’s Red House, responsible for designing new educational transformation initiatives at the university as well as monitoring and growing existing projects. Noah is the current project lead for the Core Pathway Initiative at Georgetown University. Noah holds a Master’s in Learning, Design, and Technology from Georgetown University.

Lina Zuluaga

Lina Zuluaga is an advisor to the European Commission initiative Digital Learning for All (Dell4All). She was a Presidential Advisor in Colombia and was later invited to join the Inter-American Development Bank as an advisor to the Chief of the Education Division in Washington, DC. She is a frequent speaker and author. Lina holds a Master’s in Learning, Design, and Technology from Georgetown University.
