Original Articles

Problematizing the concept of the “borderline” group in performance assessments

Matt Homer, Godfrey Pell & Richard Fuller
 

Abstract

Introduction: Many standard setting procedures focus on the performance of the “borderline” group, defined through expert judgments by assessors. In performance assessments such as Objective Structured Clinical Examinations (OSCEs), these judgments usually apply at the station level.
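
To make the station-level judgment concrete, the sketch below (a minimal illustration, not taken from the paper) shows how a cut score is commonly derived with the borderline group method: each examiner awards a checklist score and a global grade, and the station's passing score is the mean checklist score of the candidates graded “borderline”. The grade labels and data are hypothetical.

    # Minimal sketch: station-level cut score via the borderline group method.
    # Assumes each candidate has a checklist score and an examiner global grade;
    # the grade labels and data below are hypothetical.
    from statistics import mean

    def borderline_group_cut_score(station_results):
        """station_results: list of (checklist_score, global_grade) tuples."""
        borderline_scores = [score for score, grade in station_results
                             if grade == "borderline"]
        if not borderline_scores:
            raise ValueError("No candidates graded borderline at this station")
        # Cut score = mean checklist score of the borderline group.
        return mean(borderline_scores)

    station = [(14.0, "pass"), (9.5, "borderline"), (11.0, "borderline"),
               (6.0, "fail"), (16.5, "pass"), (10.5, "borderline")]
    print(borderline_group_cut_score(station))  # ~10.33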

Methods and results: Using largely descriptive approaches, we analyze the assessment profiles of OSCE candidates at the end of a five-year undergraduate medical degree program to investigate the consistency of the borderline group across stations. We look specifically at those candidates who are borderline in individual stations and in the overall assessment. While the borderline group can be clearly defined at the individual station level, our key finding is that the membership of this group varies considerably across stations.
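
As a hypothetical illustration of the kind of descriptive check this involves (not the authors' code), the sketch below flags each candidate as “borderline” per station, here defined as falling within an assumed band around the station cut score, and then counts in how many stations each candidate belongs to that group; low overlap across stations is the pattern reported above.

    # Hypothetical sketch: how consistently the same candidates fall into the
    # borderline group across stations. The +/- 1 mark band around each
    # station's cut score is an assumption for illustration only.
    import pandas as pd

    def borderline_membership(scores, cut_scores, band=1.0):
        """scores: candidates x stations DataFrame of checklist scores.
        Returns a boolean DataFrame: True where a candidate is borderline."""
        return pd.DataFrame({station: (scores[station] - cut).abs() <= band
                             for station, cut in cut_scores.items()})

    # Toy data: 4 candidates, 3 stations
    scores = pd.DataFrame({"st1": [10.0, 15.0, 9.5, 12.0],
                           "st2": [14.0, 10.5, 16.0, 11.0],
                           "st3": [11.0, 12.0, 10.0, 18.0]},
                          index=["A", "B", "C", "D"])
    cuts = {"st1": 10.0, "st2": 11.0, "st3": 11.0}

    flags = borderline_membership(scores, cuts)
    print(flags)              # who is borderline at which station
    print(flags.sum(axis=1))  # stations in which each candidate is borderline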

Discussion and conclusions: These findings pose challenges for some standard setting methods, particularly the borderline group and objective borderline methods. They also suggest that institutions should apply appropriate conjunctive rules that limit compensation in performance between stations, thereby maximizing “diagnostic accuracy”. In addition, this work highlights a key benefit of sequential testing formats in OSCEs: in comparison with a traditional, single-test format, sequential models allow “borderline” candidates to be assessed across a wider range of content areas, with concomitant improvements in pass/fail decision-making.
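
For illustration only (this is not the institutional rule described in the paper), a conjunctive standard of the kind referred to above might require both the aggregate cut score and a minimum number of stations passed, so that strong performance elsewhere cannot fully compensate for failed stations:

    # Hypothetical conjunctive pass/fail rule: the candidate must meet the
    # overall cut AND pass at least `min_stations` individual stations.
    def conjunctive_pass(station_scores, station_cuts, overall_cut, min_stations):
        passed_stations = sum(s >= c for s, c in zip(station_scores, station_cuts))
        overall_ok = sum(station_scores) >= overall_cut
        return overall_ok and passed_stations >= min_stations

    # Toy example: high total score, but only 3 of 5 stations passed -> fail
    scores = [12.0, 15.0, 8.0, 14.0, 9.0]
    cuts   = [10.0, 11.0, 10.0, 11.0, 10.0]
    print(conjunctive_pass(scores, cuts, overall_cut=52.0, min_stations=4))  # False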

Notes on contributors

Matt Homer, BSc, MSc, PhD, CStat, is an Associate Professor working in both the Schools of Medicine and Education. His medical education research focuses on psychometrics and assessment quality, particularly related to OSCEs and knowledge tests.

Godfrey Pell, BEng, MSc, CStat, CSci, is Principal Research Fellow Emeritus at the Leeds Institute of Medical Education and has a strong background in management. His research focuses on quality within the OSCE, including theoretical and practical applications. He acts as an assessment consultant to a number of medical schools.

Richard Fuller, MA, MBChB, FRCP, FAcadMed, is a consultant physician, Professor of Medical Education, and Director of the undergraduate degree program at the Leeds Institute of Medical Education. His research interests focus on the “personalization” of assessment to support individual learner journeys, through intelligent assessment design in campus- and workplace-based assessment formats, assessor behaviors, mobile technology-delivered assessment, and the impact of sequential testing methodologies.

Disclosure statement

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of this article.
