Abstract
In psychology and education, tests (e.g., reading tests) and self-reports (e.g., clinical questionnaires) generate count data, but the corresponding Item Response Theory (IRT) methods are underdeveloped compared with those for binary data. Recent advances include the Two-Parameter Conway-Maxwell-Poisson Model (2PCMPM), which generalizes Rasch's Poisson Counts Model with item-specific difficulty, discrimination, and dispersion parameters. Explaining differences in model parameters informs item construction and selection, but this task has received little attention. We introduce two 2PCMPM-based explanatory count IRT models: the Distributional Regression Test Model for item covariates, and the Count Latent Regression Model for (categorical) person covariates. Estimation methods are provided, and satisfactory statistical properties are observed in simulations. Two examples illustrate how the models help us understand tests and the underlying constructs.
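The 2PCMPM builds on the Conway-Maxwell-Poisson (CMP) distribution, whose extra dispersion parameter ν allows both over- and underdispersion relative to the Poisson. As a rough illustration (not taken from the countirt package; the function name and truncation bound are our own), the CMP probability mass function can be evaluated by truncating its infinite normalizing series:

```python
import math

def cmp_pmf(x, lam, nu, max_terms=200):
    """Illustrative CMP pmf: P(X = x) ∝ lam^x / (x!)^nu.
    nu = 1 recovers the Poisson; nu > 1 gives underdispersion,
    nu < 1 overdispersion. The normalizing constant Z(lam, nu)
    is approximated by truncating its series at max_terms."""
    # Work on the log scale for numerical stability.
    log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1)
                 for j in range(max_terms)]
    m = max(log_terms)
    log_z = m + math.log(sum(math.exp(t - m) for t in log_terms))
    return math.exp(x * math.log(lam) - nu * math.lgamma(x + 1) - log_z)
```

With ν = 1 this reduces to the ordinary Poisson pmf, which gives a quick sanity check of the truncation.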
Data Availability
Additional materials accompanying this work, including all code and simulation results, are available on the OSF repository for this work: https://osf.io/dzcyt/. The algorithms described in this work are implemented in the R package countirt (https://github.com/mbsmn/countirt). For the examples, two data sets were re-analyzed. They were collected as part of other research projects and have previously been published. For the data in the first example, BF was involved in the data collection as part of a previous project and made the data available on our OSF repository: https://osf.io/dzcyt/. Data for the second example were not collected by any of the authors of this paper and were only reused as made publicly available at https://osf.io/gbtd3/.
Notes
1 The data collection was carried out at a German university that adheres to the guidelines of the German Psychological Society (DGPs). More information on those specific guidelines can be found here (Point 6): https://www.dgps.de/fileadmin/documents/ethikrl2004.pdf (in German).