Original Articles

Trifactor Models for Multiple-Ratings Data

Pages 360-381 | Published online: 28 Mar 2019
 

Abstract

In this study we extend and assess the trifactor model for multiple-ratings data, in which two different raters give independent scores for the same responses (e.g., GRE essays or a subset of the PISA constructed-response items). The trifactor model was extended to incorporate a cross-classified data structure (e.g., items and raters) instead of a strictly hierarchical structure. We present a set of simulations designed to reflect the incompleteness and imbalance of real-world assessments. The effects of the rate of missingness in the data and of ignoring differences among raters are investigated in two sets of simulations. The use of the trifactor model is also illustrated with an empirical analysis of data from a well-known international large-scale assessment.
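The cross-classified structure described in the abstract can be sketched in a Rasch-based form (the notation below is ours, not the authors', and is only a minimal illustration): a response by person p to item i, scored by rater r, loads on a general proficiency dimension plus item-specific and rater-specific person dimensions, with items and raters crossed rather than nested.

```latex
% Hedged sketch of a Rasch-based trifactor model for double-scored responses.
%   \theta_p : general proficiency of person p
%   u_{pi}   : person-specific factor associated with item i
%   v_{pr}   : person-specific factor associated with rater r
%   b_i, d_r : item difficulty and rater severity
\operatorname{logit} P(X_{pir} = 1)
  = \theta_p + u_{pi} + v_{pr} - b_i - d_r,
\qquad
\theta_p,\; u_{pi},\; v_{pr} \ \text{mutually uncorrelated.}
```

Because each person carries factors indexed by both items and raters, the person-side random effects are cross-classified rather than strictly hierarchical, which is the extension the abstract refers to.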

Article information

Conflict of interest disclosures: Each author signed a form for disclosure of potential conflicts of interest. No authors reported any financial or other conflicts of interest in relation to the work described.

Ethical principles: The authors affirm having followed professional ethical guidelines in preparing this work. These guidelines include obtaining informed consent from human participants, maintaining ethical treatment and respect for the rights of human or animal participants, and ensuring the privacy of participants and their data, such as ensuring that individual participants cannot be identified in reported results or from publicly available original or archival data.

Funding: This work received no external funding.

Role of the funders/sponsors: None of the funders or sponsors of this research had any role in the design and conduct of the study; collection, management, analysis, and interpretation of data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.

Acknowledgments: The majority of the work was done when the first author, Hyo Jeong Shin, was at the University of California at Berkeley. The authors would like to thank Kentaro Yamamoto, James Carlson, Jodi Casabianca, and Ikkyu Choi for their comments on prior versions of this manuscript and Emily Lubaway and Larry Hanover for their editing help. The ideas and opinions expressed herein are those of the authors alone, and endorsement by the authors’ institutions is not intended and should not be inferred.

Notes

1 Generalizability theory (G-theory) can be viewed as a similar approach (to that of IRT) by providing a way of partitioning the total variance into separate and uncorrelated parts (Shavelson, Baxter, & Gao, Citation1993) and by considering the estimated variance components as latent factors. G-theory separates the total variability in the ratings into variance components from different sources, such as (a) systematic variability between individual test takers, (b) variability between raters (interrater inconsistencies), (c) variability within raters across rating occasions (intrarater inconsistencies), and (d) variability between the writing tasks (e.g., Sudweeks, Reeve, & Bradshaw, Citation2004). In this paper, we focus on the measurement models based on the IRT framework and connect the results to the factor analysis approach or generalizability theory approach.
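The variance partitioning that this note attributes to G-theory can be written out explicitly for a fully crossed persons × raters × tasks design (a standard G-theory layout; the specific design is our assumption, not stated in the note):

```latex
% G-theory decomposition of observed-score variance into separate,
% uncorrelated components for a crossed p x r x t design:
%   p = persons, r = raters, t = tasks;
%   the three-way interaction is confounded with residual error e.
\sigma^2\!\left(X_{prt}\right)
  = \underbrace{\sigma^2_p}_{\text{persons}}
  + \underbrace{\sigma^2_r}_{\text{raters}}
  + \underbrace{\sigma^2_t}_{\text{tasks}}
  + \sigma^2_{pr} + \sigma^2_{pt} + \sigma^2_{rt}
  + \sigma^2_{prt,\,e}
```

Treating each estimated component as a latent source of variability is what makes the G-theory view comparable to the IRT-based factor models discussed in the paper.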

2 To our knowledge, nothing has been published yet regarding application of the HRM to dichotomous data.

3 The Rasch-based model was chosen to estimate factor variances for all raters and items.

4 Results of fitting the correct model, HEHE, are the same as those in the “full linkage design” columns in Table 2.

