ABSTRACT
Kindergarten entry assessments (KEAs) are frequently used to understand students’ early literacy skills. Amid COVID-19, such assessments will be vital in understanding how the pandemic has affected early literacy, including how it has contributed to inequities in the educational system. However, the pandemic has also created challenges for comparing scores from KEAs across years and modes of administration. In this study, we examine these issues using a KEA administered to most kindergarten students in Virginia. This screener was rapidly converted to an online platform to ensure students could continue taking it during the pandemic. Results indicate that the sample of students taking the test shifted substantially pre- and post-pandemic, complicating comparisons of performance. While we do not find evidence of noninvariance by mode at the test level, we do see signs that more subtle forms of item-level bias may be at play. Implications for equity, fairness, and inclusion are discussed.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Supplementary material
Supplemental data for this article can be accessed on the publisher’s website.
Notes
1 All analyses were conducted in Mplus Version 8.4 (Muthén & Muthén, 2018). To allow use of a greater range of fit statistics, we preferred weighted least squares with mean and variance adjustment (WLSMV) estimation with a probit link over an estimation strategy using maximum likelihood (ML) and a logit link, though results were not sensitive to this choice.
2 To make scores more comparable to those from a traditional IRT model, these analyses were conducted using maximum likelihood estimation (MLE) and a logit link (in Mplus, as before).
3 Note: we do not include an interaction between poverty and mode in the variance model due to technical complications described by Bauer (2017).