Research Article

Mark distribution is affected by the type of assignment but not by features of the marking scheme in a biomedical sciences department of a UK university