Setting defensible standards in small cohort OSCEs: Understanding better when borderline regression can ‘work’


