Introduction to the Special Issue on Kindergarten Entry Assessments: Policies, Practices, Potential Pitfalls, and Psychometrics

As the title of this special issue suggests, our aim in issuing its call was to expand the literature base on kindergarten entry assessment (KEA) policies, practices, and on-the-ground implementation issues, as well as the extent to which these measures are valid and reliable for their stated purposes. In turn, we hoped to provide policy makers, school and program administrators, assessment developers, and teachers with an expanded understanding of the issues to consider when selecting or developing a KEA, when scaling up the administration of a KEA across a state, and when using KEA data for an array of purposes. These interrelated topics are salient given the widespread use of KEAs across the U.S. over the past eight years (Daily & Maxwell, 2018). We therefore are delighted to present a Special Issue comprising an array of KEA-relevant articles. Given Early Education and Development’s focus on research, practice, and policy, we are particularly pleased that the eight articles presented here provide insight into specific KEAs, as well as practical implications for U.S. stakeholders to consider when using these measures.

To provide a policy context for the overall focus of this Special Issue, we begin with an article by Weisenfeld et al. (2020). Their study analyzed federal and state efforts in the implementation of KEAs between 2011 and 2018. Analyses of these data suggest the important role that federal funding and guidelines played during this time frame in supporting the development of, and reliance on, comprehensive measures that address all five domains of child development.

The issue’s next two articles look across measures that can be used at kindergarten entry. The first of these articles, by Houri and Miller (2020), provides an overview of 11 rating scales that can be used as screeners of students’ social-emotional and behavioral skills at kindergarten entry. Their study also evaluated bias identification methods to determine the extent to which these scales may be used with diverse populations. This article is followed by Ackerman’s (2020) comparative case study of six KEAs, all of which were based on Teaching Strategies GOLD. Of specific interest in this study were the domains and items contained in these “GOLD-based” KEAs, as well as the items for which kindergartners who are English language learners were permitted to use their home language to demonstrate their knowledge and skills.

The Special Issue’s next three articles provide an in-depth look at the careful work that assessment developers can engage in when aiming to create KEAs that are psychometrically sound and intended for large-scale use. Specifically, the article by Montroy et al. (2020) details the five phases undertaken to develop, calibrate, and launch the Texas Kindergarten Entry Assessment, which serves as a screening tool for a broad range of school readiness domains. The authors also illuminate the role that teacher input and state education agency priorities and policies played in developing the test blueprint. In the next article, Kriener-Althen et al. (2020) use data from California to illustrate how thresholds of social-emotional readiness were established for the Desired Results Developmental Profile. Their article also highlights the research steps involved in determining these thresholds. The third article within this theme, by Joseph et al. (2020), focuses on the inter-rater reliability of Washington State’s kindergarten entry assessment. In this study, the researchers were particularly interested in the extent to which inaccurate ratings might lead to misidentification of school readiness and whether teacher characteristics were correlated with the accuracy of their ratings. Their findings highlight that accuracy not only varies by domain, but also is weaker for English language learners and children with developmental differences.

The Special Issue’s final two articles turn to KEA stakeholders’ perspectives on the utility of a KEA. In the first of these articles, Schachter et al. (2020) used data collected during the second year of a large-scale KEA’s implementation to examine teachers’ perspectives on the administration process, the KEA’s perceived benefits, and the degree to which teachers used KEA data to inform their instruction. Analyses of mixed-methods data found that teachers generally did not view the measure as beneficial for their day-to-day instruction; however, these perceptions were correlated with both teacher training and experience. The final article, by Little et al. (2020), used three years of qualitative data from North Carolina to examine the perspectives of a variety of KEA stakeholders. Their analyses found that, while stakeholders appreciated the KEA’s broad domain focus, teachers struggled to interpret the KEA’s data and translate it into meaningful instruction.

Taken as a whole, we believe the content of this Special Issue supports the premise that creating a validity argument for formative assessments that rely on observer ratings of young children – including KEAs – is considerably more complex than it is for direct summative assessments. We are not positing that the information yielded by these measures is merely subjective. Rather, because they rely heavily on teachers to accurately determine young children’s knowledge and skills, observer-dependent measures can provide information about both the raters and the students being assessed.

This Special Issue illustrates that KEAs are designed and validated to be instructional resources for teachers and the children and families they serve. The studies contained herein illustrate how the validity of the information that KEAs provide is ultimately context-specific and is strongly linked to the extent to which teachers and administrators use these measures as their developers intended. As researchers, one of our roles is to critically evaluate the content and implementation of KEAs, as well as the appropriateness of inferences based on the scores they provide. We are also charged with evaluating essential features of KEAs that may be more important than whether scores are closely aligned with children’s true abilities. Specifically, we can investigate whether KEAs provide teachers with accessible and useful information, as well as whether the policies that govern these measures contribute to their utility. We hope you will find the studies within this Special Issue to be valuable sources of evidence that help you reflect on these important issues.

Acknowledgments

We gratefully acknowledge the anonymous individuals who served as reviewers for the articles in this Special Issue. The views expressed in this Introduction are ours alone.

Disclosure Statement

No potential conflict of interest was reported by the authors.
