ABSTRACT
Background and Context
Despite the increasing implementation of coding in early childhood curricula, there are few valid and reliable assessments of young children’s coding abilities. This impedes the study of learning outcomes as well as the development and evaluation of curricula.
Objective
To develop and validate a new instrument for assessing young children’s proficiency in the programming language ScratchJr, based on the Coding Stages framework.
Method
We used an iterative, design-based research approach to develop the Coding Stages Assessment (CSA), a one-on-one assessment capturing children’s technical skills and expressivity. We tested 118 five-to-eight-year-olds and used Classical Test Theory and Item Response Theory to evaluate the assessment’s psychometric properties.
Findings
The CSA has good to very good reliability. CSA scores correlated with computational thinking ability, demonstrating construct validity. The items show good discrimination and span a range of difficulty levels, capturing different levels of proficiency. Younger children tended to score lower, but even first graders can reach the highest coding stage. There was no evidence of gender or age bias.
Implications
The CSA enables the testing of learning theories and the evaluation of curricula, supporting the implementation of Computer Science as a school subject. Its successful remote administration demonstrates that it can be used without geographical restrictions.
Acknowledgments
We would like to thank Riva Dhamala and Jessica Blake-West for their support in developing the CSA, Amanda Strawhacker and Madhumita Govindarajan for their work and feedback on previous iterations of the instrument, and Jessica Blake-West for coordinating the data collection. Thanks also to all the children, teachers, and parents who made it possible to conduct this research in the middle of the COVID-19 pandemic.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1. Our goal had been to have at least half of all participants complete TechCheck. However, due to logistical challenges caused by the COVID-19 pandemic in 2020, only 23 children were able to take the test.
2. We thank an anonymous reviewer for pointing this out.
Additional information
Funding
Notes on contributors
Laura E. de Ruiter
Laura de Ruiter is a research assistant professor at the DevTech Research Group at the Eliot-Pearson Department of Child Study and Human Development at Tufts University. She studies language acquisition and cognitive development in young children. Her current research in developmental computer science focuses on the design and evaluation of interventions.
Marina U. Bers
Marina Umaschi Bers is professor and chair at the Eliot-Pearson Department of Child Study and Human Development with a secondary appointment in the Department of Computer Science at Tufts University. She heads the interdisciplinary DevTech Research Group. Her research involves the design and study of innovative learning technologies to promote children’s positive development.