Abstract
Peer assessment of long written tasks poses particular problems because such tasks typically involve complex learning and the solving of ill‐structured problems that require divergent responses. Reliable marking of this kind of writing task is difficult to achieve. The author illustrates this through an evaluation of two implementations of peer assessment, involving 81 students, in a UK university. In these implementations, all peer assessor grades were returned to students (not just mean grades); in this way students were exposed to subjectivity in marking. The implementations were evaluated through questionnaires, focus groups, observations of lectures and a tutor interview. While students reported a better understanding of quality in student writing as a result of their experience, many complained that peer assessors’ marks were not ‘fair’. The article draws on recent research on the reliability of tutor marking to argue that marking judgements are subjective and that peer assessment offers the opportunity to explore subjectivity in marking, creating an opportunity for dialogue between tutors and students.
Acknowledgements
I would like to thank my colleague Julia Shelton, whose enthusiasm and energy were essential to the success of this project. I would also like to thank Sally Mitchell, whose support and insightful comments have been invaluable. I gratefully acknowledge the constructive feedback of the referees, which helped to improve this paper.
Notes
1. The Thinking Writing initiative works with academic tutors to explore ways of improving the teaching and assessment of writing. Further details of the peer assessment (PA) project can be found on our website: http://www.thinkingwriting.qmul.ac.uk/assessment7.htm
2. This project received funding from the Higher Education Academy Engineering Subject Centre.