Feedback to support examiners’ understanding of the standard-setting process and the performance of students: AMEE Guide No. 145


