Twelve Tips

Twelve tips for developing effective marking schemes for constructed-response examination questions

Received 01 Jun 2023, Accepted 21 Feb 2024, Published online: 14 Mar 2024

