ABSTRACT
Comparative Judgement (CJ) aims to improve the quality of performance-based assessments by having multiple assessors judge pairs of performances. CJ is generally associated with high levels of reliability, but reliability also varies widely between assessments. This study investigates which assessment characteristics influence the level of reliability. A meta-analysis was performed on the results of 49 CJ assessments. Results show an effect of the number of comparisons on the level of reliability. In addition, the probability of reaching an asymptote in reliability, i.e., the point where considerable effort is needed to increase the reliability only slightly, was larger for experts and peers than for novices. For a reliability level of .70, between 10 and 14 comparisons per performance are needed; this rises to 26 to 37 comparisons for a reliability of .90.
Acknowledgments
The authors want to thank the two reviewers and the editor for their critical and helpful remarks. They helped increase the clarity and quality of this paper.
The assessments used in the data were conducted within and outside the University of Antwerp and with the cooperation of the following persons: Prof. dr. Wilfried Admiraal (Leiden University), Prof. dr. Kris Aerts (KULeuven), Prof. dr. Michael Becker-Mrotzek (University of Cologne), Nathalie Boonen (CLiPS University of Antwerp), Pia Claes (University of Cologne), Ilke De Clerck (CLiPS University of Antwerp), Liesje Coertjens (Université Catholique de Louvain), Cynthia De Bruycker (Hasselt University), Tinne De Kinder, Fien de Smedt (Ghent University), Prof. dr. Benedicte de Winter (University of Antwerp), Prof. dr. Steven Gillis (CLiPS University of Antwerp), Evghenia Goltsev (University of Cologne), Maarten Goossens (University of Antwerp), Ann-Kathrin Hennes (University of Cologne), Prof. dr. Hanne Kloots (CLiPS University of Antwerp), dr. Marion Krause-Wolters (University of Cologne), Valerie Lemke (University of Cologne), Marije Lesterhuys (University of Antwerp), Stefan Martens (University of Antwerp), Prof. dr. Nele Michels (University of Antwerp), Filip Moens (AHOVOKS), Michèle Pettinato (CLiPS University of Antwerp), Prof. dr. Gert Rijlaarsdam (University of Amsterdam), Iris Roose (Potential Project), dr. Pierpaolo Settembri (College of Europe), Prof. dr. Jean-Michel Rigo (Hasselt University), dr. Joke Spildooren (Hasselt University), dr. Sabine Stephany (University of Cologne), dr. Olia E. Tsivitanidou (University of Cyprus), Andries Valcke (Headmaster Training Flemish Public Schools), Danielle Van Ast (Flemish Public Schools Antwerp), Tine van Daal (University of Antwerp), Marie-Thérèse van de Kamp (University of Amsterdam), Kirsten Vandermeulen (Thomas More University of Applied Sciences), Roos Van Gasse (University of Antwerp), Prof. dr. Hilde Van Keer (Ghent University), Kristof Vermeiren, Ellen Volckaert (Hudson), Prof. dr. Jo Verhoeven (University of Antwerp) and Ivan Waumans (Karel de Grote University College).
Data availability statement
The data and R script that support the findings of this study are openly available in the Zenodo repository at http://doi.org/10.5281/zenodo.1493425
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes
1. The maximum information that can be obtained if each pair is compared once. It might be possible to increase the information by judging each pair multiple times; however, this would need to be verified.
Additional information
Funding
Notes on contributors
San Verhavert
San Verhavert is working on the Digital Platform for the Assessment of Competencies project (D-PAC) at the University of Antwerp (Belgium). His PhD focuses on the method of comparative judgement.
Renske Bouwer
Renske Bouwer is now assistant professor in Pedagogical and Education Sciences at the Vrije Universiteit Amsterdam. At the time of this research she was research coordinator for the D-PAC project at the University of Antwerp. Her own research focuses on the quality of comparative judgement for the assessment of writing quality and the effects of comparative judgement on students' learning.
Vincent Donche
Vincent Donche is an associate professor in Training and Education Sciences at the University of Antwerp, Belgium. His research interests are situated in the domains of student learning, higher education, assessment and related educational measurement issues.
Sven De Maeyer
Sven De Maeyer is a full professor in Training and Education Sciences at the University of Antwerp. He has expertise in statistical modelling. His research mainly focuses on assessment in both educational and vocational contexts, with a strong focus on judgement and rater effects and the merits and pitfalls of comparative judgement.