References
- Bartman I, Smee S, Roy M. 2013. A method for identifying extreme OSCE examiners. Clin Teach. 10:27–31.
- Brooks JG, Brooks MG. 1993. In search of understanding: the case for constructivist classrooms. Alexandria (VA): Association for Supervision and Curriculum Development.
- Chesser A, Cameron H, Evans P, Cleland J, Boursicot K, Mires G. 2009. Sources of variation in performance on a shared OSCE station across four UK medical schools. Med Educ. 43:526–532.
- Crossley J, Davies H, Humphris G, Jolly B. 2002. Generalisability: a key to unlock professional assessment. Med Educ. 36:972–978.
- Driver R, Oldham V. 1986. A constructivist approach to curriculum development in science. Stud Sci Educ. 13:105–122.
- Eva K, Hodges B. 2012. Scylla or Charybdis? Can we navigate between objectification and judgement in assessment? Med Educ. 46:914–919.
- Farmer E, Page G. 2005. A practical guide to assessing clinical decision-making skills using the key features approach. Med Educ. 39:1188–1194.
- Fuller R, Homer M, Pell G. 2013. Longitudinal interrelationships of OSCE station level analyses, quality improvement and overall reliability. Med Teach. 35:515–517.
- Gingerich A, Kogan J, Yeates P, Govaerts M, Holmboe E. 2014. Seeing the 'black box' differently: assessor cognition from three research perspectives. Med Educ. 48:1055–1068.
- Gormley G, Johnston J, Thomson C, McGlade K. 2012. Awarding global grades in OSCEs: evaluation of a novel eLearning resource for OSCE examiners. Med Teach. 34:587–589.
- Govaerts M. 2016. Competence in assessment: beyond cognition. Med Educ. 50:502–504.
- Govaerts M, van de Wiel M, Schuwirth L, van der Vleuten C, Muijtjens A. 2013. Workplace-based assessment: raters' performance theories and constructs. Adv Health Sci Educ Theory Pract. 18:375–396.
- Govaerts M, van der Vleuten C. 2007. Broadening perspectives on clinical performance assessment: rethinking the nature of in-training assessment. Adv Health Sci Educ Theory Pract. 12:239–260.
- Harasym P, Woloschuk W, Cunning L. 2008. Undesired variance due to examiner stringency/leniency effect in communication skill scores assessed in OSCEs. Adv Health Sci Educ. 13:617–632.
- Harden R, Lilley P, Patricio M. 2015. The definitive guide to the OSCE: the objective structured clinical examination as a performance assessment. 1st ed. Edinburgh; New York: Churchill Livingstone.
- Harden RM, Crosby J. 2000. AMEE Guide No. 20: the good teacher is more than a lecturer – the twelve roles of the teacher. Med Teach. 22:334–347.
- Harden RM, Stevenson M, Downie WW, Wilson GM. 1975. Assessment of clinical competence using objective structured examination. Br Med J. 1:447–451.
- Holmboe ES, Hawkins RE, Huot SJ. 2004. Effects of training in direct observation of medical residents' clinical competence: a randomized trial. Ann Intern Med. 140:874–881.
- Homer M, Pell G, Fuller R, Patterson J. 2015. Quantifying error in OSCE standard setting for varying cohort sizes: a resampling approach to measuring assessment quality. Med Teach. 24:1–8.
- Kogan JR, Conforti L, Bernabeo E, Iobst W, Holmboe E. 2011. Opening the black box of clinical skills assessment via observation: a conceptual model. Med Educ. 45:1048–1060.
- Kogan JR, Conforti LN, Iobst WF, Holmboe ES. 2014. Reconceptualizing variable rater assessments as both an educational and clinical care problem. Acad Med. 89:721–727.
- McManus IC, Ludka K. 2012. Resitting a high-stakes postgraduate medical examination on multiple occasions: nonlinear multilevel modelling of performance in the MRCP(UK) examinations. BMC Med. 10:60. doi: 10.1186/1741-7015-10-60.
- McManus IC, Thompson M, Mollon J. 2006. Assessment of examiner leniency and stringency ('hawk-dove effect') in the MRCP(UK) clinical examination (PACES) using multi-facet Rasch modelling. BMC Med Educ. 6:42. doi: 10.1186/1472-6920-6-42.
- Patricio M, Juliao M, Fareleira F, Young M, Norman G, Vaz Carneiro A. 2009. A comprehensive checklist for reporting the use of OSCEs. Med Teach. 31:112–124.
- Pell G, Fuller R, Homer M, Roberts T. 2010. How to measure the quality of the OSCE: a review of metrics – AMEE guide no. 49. Med Teach. 32:802–811.
- Pell G, Fuller R, Homer M, Roberts T. 2012. Is short-term remediation after OSCE failure sustained? A retrospective analysis of the longitudinal attainment of underperforming students in OSCE assessments. Med Teach. 34:146–150.
- Pell G, Fuller R, Homer M, Roberts T. 2013. Advancing the objective structured clinical examination: sequential testing in theory and practice. Med Educ. 47:569–577.
- Pell G, Fuller R, Roberts T, Homer M. 2009. Comments on within-station between-sites variation. Med Educ. 43:1021–1022.
- Pell G, Homer MS, Roberts TE. 2008. Assessor training: its effects on criterion‐based assessment in a medical context. Int J Res Method Educ. 31:143–154.
- Pell G, Roberts TE. 2006. Setting standards for student assessment. Int J Res Method Educ. 29:91–103.
- Rushton A. 2005. Formative assessment: a key to deep learning? Med Teach. 27:509–513.
- Schuwirth LWT, van der Vleuten CPM. 2011. Programmatic assessment: from assessment of learning to assessment for learning. Med Teach. 33:478–485.
- Streiner D, Norman G. 2008. Health measurement scales: a practical guide to their development and use. 4th ed. Oxford (UK): Oxford University Press.
- Tavakol M, Dennick R. 2012. Post-examination interpretation of objective test data: monitoring and improving the quality of high-stakes examinations: AMEE Guide No. 66. Med Teach. 34:e161–e175.
- Yeates P, Cardell J, Byrne G, Eva KW. 2015. Relatively speaking: contrast effects influence assessors' scores and narrative feedback. Med Educ. 49:909–919.