Abstract
Online literacy applications are proliferating in elementary classrooms, and the data these applications generate are assumed to help teachers identify struggling readers. Unfortunately, many teachers are unsure how to use and interpret the plethora of data from these apps. In this longitudinal study, we followed a cohort of students (n = 54) from kindergarten through first grade. We used quasi-simplex models to estimate the relations between five performance measures taken from an online literacy application and five reading-related progress monitoring outcomes at four sequential time points, controlling for previous achievement. Results suggest that the performance measures had more predictive power during kindergarten, and that the amount of time students were logged in to the program was the most consistent predictor across outcomes and assessment periods. The number of interactions with the program was significantly related to students’ decoding skills. We discuss how these results might be used to increase teachers’ use of performance measures to adapt instruction.
Acknowledgments
The authors would like to thank our data team: Talia Campese, Alexis Freeman, Nicolette Grasley-Boy, Alisa Hanson, D’Annette Mullen, Janice Nieves, and Christine Woods.
Data availability
The data used in this study are openly available on LDbase.org at http://doi.org/10.33009/ldbase.1617977242.ea21.
Disclosure statement
The authors have no conflicts of interest to disclose.