ABSTRACT
Rater fit analyses provide insight into the degree to which rater judgments correspond to expected properties, as defined within a measurement framework. Parametric models such as the Rasch model provide a useful framework for evaluating rating quality; however, these models are not appropriate for all assessment contexts. The purpose of this study is to explore numeric and graphical rater monotonicity analyses as a nonparametric approach to evaluating rater fit, and to consider whether these indices lead to decisions similar to those based on Rasch fit indices in contexts with complete and incomplete data. Analyses of simulated and real data indicate that nonparametric rater monotonicity indices can identify raters whose ratings exhibit acceptable and problematic psychometric properties.