ABSTRACT
To meet accreditation expectations, teacher preparation programs must demonstrate that their candidates are evaluated using summative assessment tools that yield reliable and valid data. These tools are primarily used by the clinical experience team – university supervisors and mentor teachers. Institutional beliefs regarding best practices and external stakeholder demands often influence the design, execution, and evaluation of these tools, and the scholarly literature offers robust studies of tool design and execution. This study extends prior research to investigate, in a novel way, evaluation of the tool and discrepancies among raters in the data yielded by the instrument. How important is interrater agreement on summative assessment tools in the student teaching experience? Should we simply dismiss these differences, forge ahead to minimize disagreement, or seek to better understand how and why limited agreement might occur? In this paper, we explore methods to evaluate interrater agreement between mentors and university supervisors and the ways in which these data can inform purposeful continuous programmatic improvement. Furthermore, we contend that “less is more,” as our findings reveal similar outcomes from both simple and more complex statistical methods of examining agreement.
Disclosure Statement
No potential conflict of interest was reported by the authors.
Notes on contributors
Elayne P. Colón
Elayne P. Colón is the Associate Dean for Academic and Student Affairs in the College of Education at the University of Florida. With a background in school psychology, her scholarly interests include pathways to teaching, assessing quality educator preparation, and issues related to accountability and accreditation in higher education.
Lori M. Dassa
Lori M. Dassa is the Director of Clinical Experiences and Partnerships in the College of Education at the University of Florida. Her scholarly interests include the recruitment and retention of beginning teachers and varied pathways to support this process.
Thomas M. Dana
Thomas M. Dana is a Professor of Education and Director of the Institute for Advanced Learning Technologies in the College of Education at the University of Florida. Dana’s academic interests are in science education, particularly the study of quality educator professional development programs and the development of reform-oriented, subject-specific pedagogical knowledge and practice.
Nathan P. Hanson
Nathan P. Hanson, MBA, is the data manager and analyst in the College of Education at the University of Florida. His focus is the collection, analysis, and reporting of information, including the coordination of data projects, the development of strategies, and the implementation of data systems and processes.