Political Science Instruction

Specifications Grading in Political Science


ABSTRACT

This article explores the efficacy of specifications grading in undergraduate political science classes. Specifications grading organizes instruction around a set of learning objectives and evaluates student success based on the achievement of carefully articulated specifications for each assessment. Assessments are considered satisfactory or unsatisfactory, and final grades are determined based on satisfactory completion of groups (bundles) of assignments, with a bundle linked to each letter grade. There are several advantages to this system: (1) it allows students to be strategic in the work they choose to complete; (2) it allows students to know where they stand without having to calculate weighted averages; and (3) it facilitates more efficient and reliable grading. In this essay, we provide an overview of this technique, reflections on how it works in practice, and what we consider to be best practices for implementation in political science.
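The bundle mechanism described above can be made concrete with a brief sketch. The following is purely illustrative and not drawn from the article; the assignment names, bundle contents, and grade thresholds are hypothetical. It shows the core logic: each assessment is marked Satisfactory or Unsatisfactory, and a student earns the highest letter grade whose entire bundle of assignments has been completed satisfactorily.

```python
# Illustrative sketch of bundle-based grade determination under
# specifications grading. Assignment names and bundle requirements
# are hypothetical, not taken from the article.

# Each assessment is marked Satisfactory (True) or Unsatisfactory (False);
# there is no partial credit.
completed = {"essay1": True, "essay2": True, "quiz1": True, "quiz2": False}

# Each letter grade is tied to a bundle of assignments, all of which must
# be satisfactorily completed to earn that grade. Higher grades require
# larger (or harder) bundles.
bundles = {
    "A": ["essay1", "essay2", "quiz1", "quiz2"],
    "B": ["essay1", "essay2", "quiz1"],
    "C": ["essay1", "quiz1"],
}

def final_grade(completed, bundles):
    """Return the highest grade whose bundle is fully satisfied."""
    for grade in sorted(bundles):  # "A" sorts first, so best grade is checked first
        if all(completed.get(a, False) for a in bundles[grade]):
            return grade
    return "F"

print(final_grade(completed, bundles))  # -> B
```

Note that a student who misses one assignment in the "A" bundle drops to the grade of the next bundle they fully satisfy; there is no weighted averaging, which is precisely what lets students see where they stand at a glance.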

Notes

Nilson notes that specifications grading and contract grading are distinct. Contract grading is an approach based on a written agreement between the instructor and student that specifies the work required for particular grades (Hassencahl 1979). It is related to specs grading in that "students can choose what grade they desire by the amount of work they perform" (Bonner 2016, 30). Nilson (2014, 73–74) argues that specs grading's requirement that all assignments be assessed as Satisfactory/Unsatisfactory, the high bar and clearly articulated specifications for earning Satisfactory marks, and the connection of assignments and grade bundles to learning objectives distinguish specifications grading from contract grading.

Proponents of specs grading often advocate a backwards-design approach to course design. Backwards design encourages instructors to begin the task of course design by identifying targeted end-of-semester learning outcomes—that is, by asking what students should be able to do after taking a class. Having articulated learning outcomes, instructors identify necessary evidence of those outcomes and plan learning experiences that give students an opportunity to provide that evidence (Bonner 2016). The primary alternative to backwards design is content-focused design, where instructors begin with a text or a set of general topics and structure instructional plans around those topics. Clearly articulated learning objectives and enduring understandings may not play a significant role in content-focused course design (Hansen 2011). While specifications grading can be incorporated by instructors adopting backwards-design principles, we believe the tools are separable. We have found the process of integrating specifications grading into existing courses built on content-focused principles to be relatively straightforward. In fact, for instructors new to backwards design, adopting specifications grading can provide a starting point for thinking about the relationship between learning objectives and existing assignments. In short, it is not necessary to completely redesign a course to conform to backwards-design principles in order to adopt specifications grading.

Instructors should be aware that there is a sizable community of scholars working with specs grading whose members provide one another assistance and advice. See, for example, Robert Talbert's Google group (https://plus.google.com/communities/117099673102877564377).

We refer here to a composite example that reflects how we commonly teach the course. In practice we both adapt this general format at the margins.

Detailed (or analytical) rubrics can be used apart from specifications grading and can be paired with other approaches, such as draft grading. Indeed, a literature has developed around their development and efficacy (e.g., Mertler 2001; Rezaei and Lovorn 2010; Rom 2011). Irrespective of whether one uses specifications grading, rubrics provide a transparent and reliable mechanism for assigning grades. The difference here is that students must satisfactorily execute each rubric component to receive any credit for the assignment. Nilson (2016) argues that this ensures rigor: "[n]o skipping the directions and no sliding by on partial credit for sloppy, last-minute work." Some of the purported benefits of specifications grading could be achieved by adopting analytical rubrics within a traditional grading scheme. We contend that the additional requirements of specifications grading further enhance the utility of analytical rubrics by creating incentives for students to give thoughtful consideration to the feedback provided in rubric form.

Credit for the syllabus quiz was awarded if students answered every question correctly. Students were able to retake the assessment as many times as needed but had to complete the quiz within the first two weeks of class to earn credit. In Blackstone’s Spring 2017 course, 74% of students earned credit for the syllabus quiz.

Our University's SET provides an overall summative rating for each course based on student responses to global summative items (e.g., "The course as a whole was" and "The instructor's effectiveness in teaching the subject matter was") as well as questions that speak directly to student perceptions of grading and clarity of expectations. The grading SET items ask students to choose from six options—Excellent (5), Very Good (4), Good (3), Fair (2), Poor (1), and Very Poor (0)—to complete the following statements: "Evaluative and grading techniques (tests, papers, projects, etc.) were" and "Clarity of student responsibilities and requirements were." For both courses, the instructor's overall summative ratings and the mean responses to the grading-specific items were identical across semesters. The overall summative rating was 4.3 out of 5 for both offerings of the introductory course and 4.8 out of 5 for both offerings of the upper-division course. In the introductory course, scores for the evaluative and grading techniques item were 4.5, while scores for the clarity of responsibilities and requirements item were 4.6; the average rating for both items in the constitutional law course was 4.8 in both semesters.

Analyses available from the authors.

Additional information

Notes on contributors

Bethany Blackstone

Bethany Blackstone is an Associate Professor of Political Science at the University of North Texas.

Elizabeth Oldmixon

Elizabeth Oldmixon is a Professor of Political Science at the University of North Texas.
