
Self- and Peer-Assessment: Evidence from the Accounting and Finance Discipline

Pages 225-243 | Received 01 Oct 2011, Accepted 17 Feb 2014, Published online: 17 Jun 2014
 

Abstract

Self- and peer-assessment of student work is an area that is under-researched in the accounting education literature, although the subject area of study seems to influence the results obtained in prior studies. The current study contributes to the literature by examining the accuracy and construct validity of self- and peer-assessment by accounting students. It also investigates students’ views about these exercises. The findings show that whilst the self- and peer-assessment of students appear to be neither accurate nor valid, the students are positive about the impact of these procedures on their learning experience. These findings indicate that, although instructors might not rely on self- and peer-assessment as measures of students’ performance for the purpose of summative assessment, the exercise may prove useful for formative assessment because it can promote a wide range of transferable skills.

Acknowledgements

The authors thank all students who took part in the exercise. We also thank Professor Richard M. S. Wilson, the anonymous associate editor and reviewers for their constructive feedback on the previous versions of this paper. The authors are also grateful to Professor Chris Hudson, Dr Kate Dunton, Dr Gianluigi Giorgioni, Professor David Power, Dr Jan Fidrmuc, Professor Frank Skinner and the participants of the Learning & Teaching Symposium at Brunel University 2011 for their helpful comments.

Notes

1. The main objective of formative assessment is to provide learners with feedback on how they are performing during a programme of study, thereby helping them to learn more effectively. It does not normally count towards a final grade, nor is it normally used to determine whether the learner will be allowed to progress to a later stage of a course. It is, however, sometimes used to permit entry to an examination: the class certificate or duly performed approach (Ellington, 1996).

2. Summative assessment is normally conducted at the end of a period of study to establish what the learner has achieved. It differs from formative assessment in that it usually counts towards a final grade or is used to determine whether the learner is allowed to progress through the course (Ellington, 1996).

3. Johnson and Smith's (1997) study is one of the few to examine peer assessment in the accounting and finance domain. However, they study the concurrent validity of peer evaluations, which is not the subject of the current study. In their research, peer assessment is the score awarded to each student by his or her team members using a proposed instrument. To examine the concurrent validity of peer evaluation, the Spearman rank correlation coefficient is calculated between peer evaluations and individual objective scores based on a quiz. A significant correlation coefficient of 0.46 is obtained between peer scores using the proposed evaluation instrument and the individual score, which led Johnson and Smith to conclude that their proposed instrument produced reasonably valid peer scores.
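The Spearman rank correlation used in Johnson and Smith's concurrent-validity test can be sketched as follows. This is a minimal illustration with invented scores (the variable names and the data are hypothetical, not drawn from their study); it assumes no tied values, which keeps the rank computation simple.

```python
# Sketch of a Spearman rank correlation between peer-evaluation scores
# and individual quiz scores. All numbers below are invented for
# illustration only.

def spearman_rho(x, y):
    """Spearman rank correlation for two equal-length lists without ties."""
    n = len(x)

    def ranks(values):
        # Rank 1 = smallest value (no tie handling needed for this sketch).
        order = sorted(range(n), key=lambda i: values[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    d_sq = sum((a - b) ** 2 for a, b in zip(rx, ry))
    # Standard formula for untied data: rho = 1 - 6*sum(d^2) / (n(n^2 - 1))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

# Hypothetical data: one peer score and one quiz score per student.
peer_scores = [62, 55, 71, 58, 80]
quiz_scores = [60, 52, 85, 66, 75]
rho = spearman_rho(peer_scores, quiz_scores)  # rho = 0.8 for these data
```

A significant positive rho, as in Johnson and Smith's reported 0.46, is then read as evidence that the peer instrument tracks an independent measure of performance.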

4. Orpen (1994) provides an example of tests for predictive validity in the context of peer-assessment. Individuals are asked to predict a future achievement for their peers, in this case the future examination performance of their peers. Then, the correlation coefficients between these predictions and actual outcomes are calculated to determine the predictive validity of the peer evaluations. However, predictive validity is not one of the tests of interest to the current study.

5. This is also true of reliability, which refers to the extent to which an experiment or test yields similar results on repeated trials (Carmines and Zeller, 1991). Indeed, the Chartered Institute of Educational Assessors (2011) suggests that an assessment in which a candidate scores the same mark, irrespective of who the assessor is, is reliable. In the context of SA and PA, most of the existing literature uses inter-marker reliability as a measure of reliability (Evans, Leeson and Petrie, 2007; Papinczak et al., 2007; Stefani, 1994). Topping (1998) suggests that, to provide a true measure of reliability, markers must be at the same level of education, training and professionalism; otherwise a measure of validity is obtained.

6. This is because, with a correlation coefficient, one does not know how different the student assessment and the teacher assessment are, or which of the two is higher; one knows only whether they move together in the same direction, in opposite directions, or not at all (Blanch-Hartigan, 2011).
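The limitation described in Note 6 can be shown with a toy numerical example (the marks below are invented, not taken from the study): self-assessments that sit a constant ten marks above the teacher's marks still correlate perfectly, so the coefficient alone cannot reveal systematic over-marking.

```python
# Toy illustration of Note 6: uniformly inflated self-assessments
# correlate perfectly (r = 1) with the teacher's marks, so the
# correlation coefficient says nothing about the size or direction
# of the gap. All marks are invented.

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

teacher_marks = [50, 60, 70, 80]
student_marks = [m + 10 for m in teacher_marks]  # systematic over-marking

r = pearson_r(teacher_marks, student_marks)  # r = 1.0 despite the inflation
mean_gap = sum(s - t for s, t in zip(student_marks, teacher_marks)) / len(teacher_marks)
```

This is why studies of assessment accuracy typically report level differences (e.g. mean over- or under-marking) alongside any correlation-based measure.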

7. Because the teaching plan for this course was set up to analyse one case study every week, one student ended up handling a case study by herself. Her workload was therefore reduced by requiring her to present only a case study for which the full answer was available.

8. Submissions for peer assessment were invited the week after the first group's presentation, which resulted in very few responses. Hence it was decided to drop the peer review observations for the students who formed the first group. Afterwards, evaluation forms were collected at the end of each class.

9. Another potential issue is the possibility of students being unfamiliar with case studies. Hence an illustration of how to handle a case study was provided by the lecturer at the first seminar. Students were encouraged to discuss with the lecturer prior to presentation any aspects of the case study that they were required to analyse and present. During discussion of each case study, the lecturer also demonstrated to the class the required steps for analysis. Furthermore, when necessary, the correct answer to the case study was made available to all students after presentation. However, future research might replicate the analysis, controlling for the effect of the assessment task prior to reaching firm conclusions.

10. Each week the students were required to upload a written piece of work (similar to an exam-type question) onto the university virtual learning environment. Computer software was used to anonymise the work and then to distribute two pieces of their peers' work to each student, which they were required to mark in accordance with marking criteria that had been developed by the module tutors. Once the work was marked and written feedback supplied, the software then returned the feedback to the author. Thus, each student marked two pieces of their peers' work and received two pieces of peer feedback. This exercise was undertaken in the university's computer labs.

