Grading rubrics: hoopla or help?
