
A questionnaire to assess students’ beliefs about peer-feedback


ABSTRACT

Research into students’ peer-feedback beliefs varies both thematically and in approaches and outcomes. This study aimed to develop a questionnaire to measure students’ beliefs about peer-feedback. Based on the themes in the literature, four scales were conceptualised. In separate exploratory (N = 219) and confirmatory (N = 121) studies, the structure of the questionnaire was explored and tested. These analyses confirmed the a priori conceptualised four scales: (1) students’ valuation of peer-feedback as an instructional method, (2) students’ confidence in the quality and helpfulness of the feedback they provide to a peer, (3) students’ confidence in the quality and helpfulness of the feedback they receive from their peers and (4) the extent to which students regard peer-feedback as an important skill. The value of this Beliefs about Peer-Feedback Questionnaire (BPFQ) is discussed both in terms of future research and the practical insights it may offer higher education teaching staff.

Introduction

Belief systems help a person to define and understand the world and one’s place within that world, functioning as a lens through which new information is interpreted. Not surprisingly, therefore, most definitions of ‘beliefs’ emphasise how these guide attitudes, perceptions and behaviour (Pajares, 1992). Considering beliefs as a precursor to attitudes and behaviour (Ajzen, 1991; Ajzen & Fishbein, 2005), we describe the need for, and development of, a questionnaire to assess higher education students’ beliefs about peer-feedback. Peer-feedback is defined as all task-related information that a learner communicates to a peer of similar status which can be used to modify his or her thinking or behaviour for the purpose of learning (cf. Huisman, Saab, van Den Broek, & van Driel, 2018). By including all task-related information that is communicated between peers (i.e. both scores and comments) for the purpose of learning, this definition encompasses both formative ‘peer-feedback’ and formative ‘peer-assessment’, insofar as these reflect different practices in the literature (cf. Huisman, 2018). In this study, we use the term ‘peer-feedback’. When discussing the literature, however, the term ‘peer-assessment’ is sometimes adopted to reflect the terminology used by the referenced authors.

In line with this interpretation of beliefs, students’ educational beliefs are likely to influence their perceptions and behaviour during learning processes. For example, students’ beliefs regarding the utility of a task may relate to their effort and performance (see Hulleman, Durik, Schweigert, & Harackiewicz, 2008). In the context of peer-feedback, this could mean that students’ active engagement in the peer-feedback process is contingent upon the degree to which they believe that peer-feedback contributes to their learning and/or is an important skill to acquire. At the same time, students’ peer-feedback beliefs can be regarded as an outcome of the peer-feedback process (van Gennip, Segers, & Tillema, 2009). A relevant overview is provided by van Zundert, Sluijsmans, and van Merriënboer (2010). One focus of their review relates to how training and experience in peer-feedback influence students’ attitudes towards peer-feedback. Although attitudes and beliefs are not identical constructs, we consider them sufficiently similar in the context of this study. van Zundert et al. (2010) found that 12 out of the 15 studies reported positive attitudes towards peer-feedback. However, they also concluded that ‘It is notable that, whereas the procedures varied tremendously, there was also an enormous variety in the instruments used to measure student attitudes’ (p. 277). Hence, a single comprehensive measure of either students’ attitudes or beliefs about peer-feedback is missing. Such a measure seems imperative, as peer-feedback is frequently applied within higher education. From an academic perspective, it could facilitate the alignment of research findings, for example with respect to how peer-feedback beliefs are defined and measured. The resulting comparability of research findings across different contexts could allow for more generalisable conclusions with regard to students’ beliefs about peer-feedback and the factors that influence those beliefs. From a practical perspective, such a measure could assist higher education teaching staff in understanding how the design of their peer-feedback practices (e.g. Gielen, Dochy, & Onghena, 2011) affects students’ experience of, and support for, peer-feedback as an instructional method.

Within the instrument that is developed and tested in the current study, four themes are conceptualised and integrated as separate constructs. The following sections describe how these themes are derived from the existing empirical research literature.

Themes of student beliefs in the existing research literature

Prior studies investigating students’ beliefs concerning peer-feedback have adopted different approaches to address a variety of themes. Nevertheless, three broader themes can be distinguished in the literature.

Peer-feedback as an instructional method

Regarding students’ valuation of peer-feedback as an instructional method within their educational context, prior research has asked students questions such as how they value the peer-feedback activity, whether they believe that students should be involved in assessing their peers and whether they believe that peer-feedback contributes to their learning.

With respect to the involvement of students in formal feedback procedures through the use of peer-feedback and the valuation of these peer-feedback activities, students generally appear positive. For example, McGarr and Clifford (2013) explicitly asked both undergraduate and postgraduate students how they valued peer-assessment within their educational program. They found that both groups of students regarded peer-assessment as valuable, although the postgraduate students valued it to a larger extent. Cheng and Warren (1997) found that 63.5% of the students believed that students should take part in assessing their peers. Additionally, Li and Steckelberg (2004) asked students whether they believed peer-assessment to be a worthwhile activity. On a 5-point scale, the 22 students scored 4.18 on average, with all students scoring a 3 or higher. Nicol, Thomson, and Breslin (2014) also found students to hold positive beliefs with respect to peer-feedback. After engaging in a peer-feedback activity, which was the first such experience for most students, 86% reported a positive experience and 79% reported that they would definitely choose to participate again on future occasions. McCarthy (2017) also found that a majority of students were willing to receive peer-feedback on future occasions, although students were more positive towards future peer-feedback in an online context (92% in favour) than an in-class context (67% in favour). Other studies differentiated between students’ beliefs regarding the provision and reception of peer-feedback. For example, Palmer and Major (2008) found that students valued both aspects of the peer-feedback process. In contrast to these generally positive findings, Liu and Carless’s (2006) findings were more ambiguous. These authors reported on a survey asking 1740 students for their views on the purpose of assessment. Only 35% agreed with the notion that the development of ‘students’ ability to assess their classmates’ should be a purpose of assessment, whereas 40% were neutral and 25% disagreed. Also, the study by Mulder, Pearce, and Baik (2014) shows that, although students were relatively positive before peer-feedback started, the experience of the peer-feedback process did lead to a small downward shift in their appreciation of peer-feedback.

With respect to the impact of peer-feedback, students generally believe that it can contribute to their own learning. For example, Saito and Fujita (2004) asked 45 students how helpful they considered the comments and marks to be that they both received from and provided to peers. Their results suggested that students regard both aspects of the peer-feedback process as contributing to their own learning. Similarly, 55% of the surveyed students in the study by Nicol et al. (2014) reported that they learned from both the provision and reception of peer-feedback. In the focus group data of the same study, however, students’ beliefs with respect to the benefits of providing peer-feedback appeared to be more salient, a finding that is corroborated by the in-depth case study by McConlogue (2015). Wen and Tsai (2006) also found that students were moderately positive with respect to the contribution of peer-feedback to their learning, although there was notable variation in responses. Taken together, students appear to hold at least moderately positive beliefs about the value of peer-feedback as an instructional method.

Confidence

Across existing studies, questions revolving around students’ confidence addressed the extent to which students considered themselves or their peers eligible assessors of quality, and to what extent they believed their own or their peers’ comments or ratings to be reliable and helpful.

Students’ confidence in their own competence as an assessor could be considered a context-specific self-efficacy belief (cf. Pajares, 1992). Sluijsmans, Brand-Gruwel, van Merriënboer, and Martens (2004) investigated such beliefs, addressing students’ self-perceived assessment skills through items such as ‘I am able to analyse a product of a peer’. They found that students were fairly confident in their own competence. McGarr and Clifford (2013) also asked students whether they regarded themselves as having the knowledge and skills to assess their peers. Both undergraduate and postgraduate students indicated that they were relatively confident in this respect. In contrast, students in the study by Cheng and Warren (1997) were less confident in their own competence as an assessor. Possibly, the findings in these studies differ as a result of differences in participant samples. In the Sluijsmans et al. (2004) study, participants were student-teachers, who are likely to have encountered peer-feedback tasks to a larger extent than the first-year undergraduate students in the study by Cheng and Warren (1997).

With respect to students’ confidence in the reliability and helpfulness of their peers’ feedback and the eligibility of their peers as assessors of quality, Wen and Tsai and colleagues (e.g. Wen & Tsai, 2006; Wen, Tsai, & Chang, 2006) asked students to respond to statements such as ‘I think students are eligible to assess their classmate’s performance’. Their results indicate a more or less even split with respect to students’ general belief about the role and responsibility of students in formal feedback. Focusing more on the notion of reliability, Saito and Fujita (2004) directly asked students to what extent they considered their peers to be reliable raters. Here, students held moderately positive beliefs about the reliability of their peers’ ratings.

Peer-feedback skills as an important learning goal

In addition to these first three themes, we argue there is a fourth important aspect of students’ peer-feedback beliefs: the extent to which they regard peer-feedback skills as an important learning goal in themselves. Although we did not encounter empirical research that explicitly addressed this aspect of students’ peer-feedback beliefs, we believe that its theoretical relevance warrants inclusion. After all, students’ engagement in the peer-feedback process may be contingent on the extent to which they regard peer-feedback skills as important to acquire or develop. According to expectancy-value theory, for example, subjective task value influences the achievement-related choices students make (e.g. Wigfield & Eccles, 2000). In particular, the valued utility of a task appears to positively relate to students’ effort, time-on-task and performance (e.g. Hulleman et al., 2008). In addition, higher education students are the future members of academic or other professional organisations. Being able to provide, receive and utilise feedback from peers could – or indeed should – therefore in themselves be considered important learning goals in higher education curricula (see also Liu & Carless, 2006; Sluijsmans et al., 2004; Topping, 2009). Hence, a total of four themes of students’ beliefs about peer-feedback were conceptualised (see Table 1).

Table 1. Scales and items for the Beliefs about Peer-Feedback Questionnaire.

Research aims

The current study describes the first steps in the development and testing of the Beliefs about Peer-feedback Questionnaire (BPFQ). The BPFQ covered three themes derived from the existing empirical research literature:

  • (1) students’ valuation of peer-feedback as an instructional method within their educational context

  • (2) students’ confidence in the quality and helpfulness of the feedback they provide to (a) peer(s)

  • (3) students’ confidence in the quality and helpfulness of the feedback they receive from their peer(s)

In addition, a fourth theme was conceptualised based on prior calls by multiple authors (e.g. Liu & Carless, 2006; Sluijsmans et al., 2004) and our own experience and informal conversations with students, namely:

  • (4) the extent to which students regard peer-feedback skills in themselves as an important learning goal.

Method

The BPFQ was constructed in three steps. In step one, a questionnaire was developed to address the four above-mentioned themes, which were conceptualised in four scales: ‘Valuation of peer-feedback as an instructional method’ (VIM; four items), ‘Confidence in own peer-feedback quality’ (CO; two items), ‘Confidence in quality of received peer-feedback’ (CR; two items) and ‘Valuation of peer-feedback as an important skill’ (VPS; three items). Items of the VIM scale related to, for example, the questionnaires discussed by Cheng and Warren (1997), Li and Steckelberg (2004) and Palmer and Major (2008). Items of the CO scale related to the questionnaires discussed by Sluijsmans et al. (2004) and Cheng and Warren (1997), whereas items of the CR scale were based on the findings by Wen and Tsai and colleagues (e.g. Wen & Tsai, 2006; Wen et al., 2006) and Saito and Fujita (2004). Finally, the VPS scale was designed to assess how important students regarded three different skills within the peer-feedback process: providing peer-feedback, dealing with critical peer-feedback and utilising it to improve one’s work. These three skills were conceived as applicable and generalisable to future contexts, either within students’ studies or during their subsequent careers. All BPFQ items were rated on a 5-point Likert scale. For the VIM and VPS scales, these ranged from 1 (‘completely disagree’) to 5 (‘completely agree’), whereas for the CO and CR scales these ranged from 1 (‘completely not applicable to me’) to 5 (‘completely applicable to me’). All questionnaires were administered in paper-and-pencil format during the starting lecture of a course.
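The initial 11-item scale structure described above can be summarised in a short sketch. The scale codes, item counts and response anchors come from this section; the dictionary layout and the `scale_mean` helper are purely illustrative, and the validated item wordings themselves are in Table 1:

```python
# Sketch of the initial BPFQ scale structure described above. The scale codes,
# item counts and response anchors are taken from this section; the item
# wordings are in Table 1 and are not reproduced here.
BPFQ_SCALES = {
    "VIM": {"n_items": 4, "anchors": ("completely disagree", "completely agree")},
    "CO":  {"n_items": 2, "anchors": ("completely not applicable to me",
                                      "completely applicable to me")},
    "CR":  {"n_items": 2, "anchors": ("completely not applicable to me",
                                      "completely applicable to me")},
    "VPS": {"n_items": 3, "anchors": ("completely disagree", "completely agree")},
}

def scale_mean(responses):
    """Mean of one student's 1-5 Likert responses on a single scale."""
    assert all(1 <= r <= 5 for r in responses), "responses must be on the 1-5 scale"
    return sum(responses) / len(responses)

print(scale_mean([4, 5, 3]))  # → 4.0
```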

In step two, an exploratory study was conducted. Using the data from this study, principal component analyses were performed to assess how the separate items clustered into different components, reflecting the initial bottom-up structure of the BPFQ. Based on the first principal component analysis, one item of the initial VIM scale (‘Involving students in feedback through the use of peer-feedback is instructive’) did not uniformly load on one single component and was therefore omitted from all subsequent analyses. A second and third principal component analysis were performed on the remaining 10 items to compare the proposed model with four scales to a model without a predefined number of components.
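As an illustration of this exploratory step, the sketch below extracts unrotated principal components from a simulated item-response matrix and retains those with eigenvalue greater than 1. The actual analyses were run in SPSS with oblimin rotation, which is omitted here, and the random data merely stand in for the real responses (available via the OSF link in the Statement on Open Data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data: 219 respondents x 10 items on a 1-5 Likert scale.
X = rng.integers(1, 6, size=(219, 10)).astype(float)

# Principal components are eigenvectors of the item correlation matrix.
R = np.corrcoef(X, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(R)   # eigh returns ascending order
order = np.argsort(eigenvalues)[::-1]           # largest eigenvalue first
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Kaiser criterion: retain components with eigenvalue > 1.
n_components = int(np.sum(eigenvalues > 1))

# Unrotated loadings; an item's communality is its summed squared loadings
# on the retained components (the Results section reports their averages).
loadings = eigenvectors[:, :n_components] * np.sqrt(eigenvalues[:n_components])
communalities = (loadings ** 2).sum(axis=1)
print(n_components, communalities.mean().round(3))
```

With uncorrelated random data the retained components are of course meaningless; the point is only the mechanics of extraction and the communality computation.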

In the third and final step, two confirmatory factor analyses were performed to compare the proposed and non-fixed models in terms of their fit to the data.

Participants, procedure and analyses

In the exploratory study, the questionnaire was completed by 220 second-year Biopharmaceutical Science students from a large research-intensive university in The Netherlands. The questionnaire was administered in students’ native language (Dutch). The data for one student were dropped, as only cases without missing data were retained (list-wise deletion). The mean age of the 219 included students was 19.51 years (SD = 1.39), with 140 students (63.9%) being female. During their undergraduate program, these students were introduced to peer-feedback as an instructional method through explanation, instruction, exercises, and formative peer-feedback activities. Over the course of the first three semesters, the role of peer-feedback gradually expanded, with the ultimate aim of the teaching staff being that students would perceive peer-feedback as a normal and integral part of formal feedback. Principal component analyses were performed using SPSS (v23) with oblique (oblimin) rotation.

In the confirmatory study, the questionnaire was administered to a group of first-year students in Education & Child Studies (N = 121) attending the same large research-intensive university in The Netherlands. Here, too, the questionnaire was administered in students’ native language (Dutch). Their mean age was 19.48 years (SD = 1.62), with 114 students (94.2%) being female. These students had at least one prior experience with anonymous online peer-feedback in the context of an academic writing assignment. In particular, they had participated in a similar writing assignment in the directly preceding semester, which included reciprocal peer-feedback on each other’s essay within an online learning environment. Confirmatory factor analyses were conducted using the ‘lavaan’ package (v0.5–23.1097; Rosseel, 2012) in R. For the final scales emerging from the confirmatory analyses, internal reliability was computed as Cronbach’s alpha.
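Cronbach’s alpha, the internal-reliability index used here, can be computed directly from an item-score matrix. The sketch below applies the standard formula to hypothetical responses for a three-item scale; the numbers are invented purely to illustrate the computation:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of five students to a three-item scale (1-5 Likert).
scores = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
], dtype=float)
print(round(cronbach_alpha(scores), 3))  # → 0.918
```

When all items are perfectly consistent (identical columns), the formula yields alpha = 1, which is a quick sanity check on any implementation.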

Results

In the exploratory study, two principal component analyses were conducted to compare the a priori proposed model with four fixed components to a ‘bottom-up’ model without a pre-fixed number of scales. In comparison, the total common variance was higher for the items in the proposed model with four fixed components (average of communalities being 0.718) than for the items in the non-fixed model with three components (average of communalities being 0.624).

Confirmatory factor analyses were conducted on the sample of Education & Child Studies students to compare the a priori proposed four-component structure with the bottom-up three-component structure. The proposed four-factor model (χ2(29) = 56.78, p = .002, TLI = .91, CFI = .94, RMSEA = .089 [.05, .12], SRMR = .06) fitted the data better than the bottom-up three-factor model that emerged in the exploratory phase (χ2(32) = 117.69, p < .001, TLI = .75, CFI = .82, RMSEA = .15 [.12, .18], SRMR = .11). Therefore, the final BPFQ was considered to be best described in terms of the four scales that were conceptualised beforehand. The respective scale reliabilities were acceptable (see Table 2), especially given the concise nature of the individual scales (cf. Cohen, 1988; Cortina, 1993; see Note 1).
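The reported RMSEA point estimates can be reproduced from the chi-square statistics with the standard formula, RMSEA = sqrt(max((χ2 − df) / (df(N − 1)), 0)). This short check plugs in the fit statistics reported above for the confirmatory sample (N = 121):

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA point estimate from a model chi-square, its df and sample size."""
    return math.sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))

# Fit statistics reported above (N = 121):
print(round(rmsea(56.78, 29, 121), 3))   # proposed four-factor model → 0.089
print(round(rmsea(117.69, 32, 121), 3))  # bottom-up three-factor model → 0.149
```

Both values match the figures reported in the text (.089 and .15, to rounding).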

Table 2. BPFQ descriptive statistics, reliability indices and scale correlations.

Conclusion and discussion

The current study aimed to develop and test a questionnaire to assess students’ peer-feedback beliefs. An exploratory and a confirmatory study supported the four scales: students’ valuation of peer-feedback as an instructional method (VIM; three items), students’ confidence in the quality and helpfulness of the peer-feedback they provide to their peers (CO; two items), students’ confidence in the quality and helpfulness of the peer-feedback they receive from their peers (CR; two items) and students’ valuation of peer-feedback as an important skill (VPS; three items).

We believe the BPFQ is valuable both to academic researchers and higher education teaching staff. With respect to research into students’ peer-feedback beliefs, the availability of a comprehensive questionnaire could facilitate the comparability of research findings across contexts and disciplines, contributing to more coherent knowledge building in this area. The consistent use of one instrument in multiple educational contexts may shed light on how varying aspects of the design of peer-feedback tasks (see Gielen et al., 2011 for an overview) influence students’ peer-feedback beliefs. This could, for example, help to assess how varying the peer-feedback format or guidelines, or variations in how students interact, affect their peer-feedback beliefs. In addition, students’ peer-feedback beliefs are likely to be influenced by cumulative experiences over time, and measuring such changes requires longitudinal approaches with multiple measurements. The relatively concise nature of the BPFQ may facilitate such longitudinal research by minimising the burden on teachers’ and students’ time. It may also assist higher education teaching staff in understanding how their peer-feedback practice affects students’ experience of, and support for, peer-feedback. In the higher education literature, effective peer-feedback is increasingly recognised as an important learning goal in itself (e.g. Liu & Carless, 2006; Sluijsmans et al., 2004; Topping, 2009). Because students’ support for the peer-feedback process is pivotal to their engagement in it, it seems particularly worthwhile to cultivate a classroom culture in which peer-feedback is the norm (Huisman, 2018). The BPFQ could function as an evaluative measure that informs higher education staff on how to improve peer-feedback during a course or curriculum.

In terms of students’ support for peer-feedback, the BPFQ could, for example, be administered at the start of a course or semester. Having a priori information about students’ peer-feedback beliefs could provide teaching staff with the opportunity to address issues around students’ confidence or their awareness of the importance of peer-feedback skills. Especially in the case of student beliefs, it may be critical to act upon such information in a timely fashion, given that students’ early experiences can strongly influence judgments, which in turn become beliefs that may be relatively resistant to change (Pajares, 1992).

Limitations and future research

Some limitations need to be addressed. For one, additional sampling is required to confirm the external validity of the BPFQ. Although we purposefully sampled different groups of students for the exploratory and the confirmatory analyses, all participants in the current study were undergraduate students within the same university. As a result, their beliefs about peer-feedback may be influenced by some common denominator, such as the general likelihood of being involved in peer-feedback or the (digital) tools used to organise peer-feedback. Hence, future applications within other higher education institutes and disciplines are needed to assess the extent to which the BPFQ continues to function consistently across contexts. Second, the BPFQ may not be exhaustive with respect to the potential variety of peer-feedback beliefs that students may hold, for example, because some may currently be underrepresented in the literature. One way to address this could be through systematic, in-depth interviews with both graduate and undergraduate students from varying institutes and disciplines. Despite these inherent limitations, we are confident that this study provides a practical (i.e. concise) and comprehensive questionnaire to address students’ beliefs about peer-feedback. In particular, we demonstrated that the construct validity of the BPFQ is acceptable and that the individual scale reliabilities are sufficient. We therefore believe that this questionnaire can contribute to higher education research by facilitating the comparability of research findings. Additionally, we believe that the BPFQ can help higher education teaching staff in understanding how their peer-feedback practice affects students’ experience of, and support for, peer-feedback. The relatively concise nature of this questionnaire may make it practical to administer both within a single course and in a more longitudinal manner, for example, when the development of students’ peer-feedback beliefs or assessment literacy is investigated over the course of a curriculum (e.g. Price, Rust, O’Donovan, & Handley, 2012).

Statement on Open Data

The anonymised data and syntaxes are accessible via the following link: https://osf.io/ja27g

Acknowledgments

We thank Kim Stroet for facilitating data collection and Marjo de Graauw for data collection in her class and for fruitful brainstorm sessions on students’ peer-feedback beliefs. Also, we would like to thank Kirsten Ran for her help with the questionnaires. Finally, thanks go out to Benjamin Telkamp for his assistance with data analyses and for inspiring confidence to use the R language.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Notes on contributors

Bart Huisman

Bart Huisman obtained his MSc in social psychology in 2013 at Leiden University and his PhD in 2018 at Leiden University Graduate School of Teaching (ICLON), The Netherlands. His primary research interest is peer feedback in higher education.

Nadira Saab

Nadira Saab is Assistant Professor at Leiden University Graduate School of Teaching (ICLON), The Netherlands. Her research interests involve the impact of powerful and innovative learning methods and approaches on learning processes and learning results, such as collaborative learning, technology enhanced learning, (formative) assessment and motivation.

Jan Van Driel

Jan Van Driel is professor of Science Education and Associate Dean-Research at Melbourne Graduate School of Education, The University of Melbourne, Australia. He obtained his PhD from Utrecht University (1990), The Netherlands and was professor of Science Education at Leiden University, the Netherlands (2006-2016) before he moved to Australia. His research has a focus on teacher knowledge and beliefs and teacher learning in the context of pre-service teacher education and educational innovations. He has published his research across the domains of science education, teacher education and higher education.

Paul Van Den Broek

Paul Van Den Broek is professor in Cognitive and Neurobiological Foundations of Learning and Teaching at Leiden University, the Netherlands, and director of the Brain and Education Lab. He obtained his PhD from the University of Chicago (1985), and was professor at the University of Kentucky (1985-1987) and the University of Minnesota (1987-2008) before moving to the Netherlands. His research interests center around reading, learning from texts, and mathematics. With his colleagues, he investigates the cognitive and neurological processes that underlie these activities, both when they succeed and when they fail, and the development of these processes in children and adults. They also develop and test methods for improving reading comprehension and reading skills in struggling readers. http://www.brainandeducationlab.nl

Notes

1. For more details with respect to the exploratory and confirmatory analyses, please see Huisman (2018).

References

  • Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179–211.
  • Ajzen, I., & Fishbein, M. (2005). The influence of attitudes on behavior. In D. Albarracin, B. T. Johnson, & M. P. Zanna (Eds.), The handbook of attitudes (pp. 173–221). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Cheng, W. N., & Warren, M. (1997). Having second thoughts: Student perceptions before and after a peer-assessment exercise. Studies in Higher Education, 22, 233–239.
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78, 98–104.
  • Gielen, S., Dochy, F., & Onghena, P. (2011). An inventory of peer-assessment diversity. Assessment & Evaluation in Higher Education, 36, 137–155.
  • Huisman, B. A. (2018). Peer-feedback on academic writing: Effects on performance and the role of task-design (Doctoral dissertation). Retrieved from http://hdl.handle.net/1887/65378
  • Huisman, B. A., Saab, N., van Den Broek, P. W., & van Driel, J. H. (2018). The impact of formative peer-feedback on higher education students’ academic writing: A meta-analysis. Assessment & Evaluation in Higher Education, 44, 863–880. doi:10.1080/02602938.2018.1545896
  • Hulleman, C. S., Durik, A. M., Schweigert, S. B., & Harackiewicz, J. M. (2008). Task values, achievement goals, and interest: An integrative analysis. Journal of Educational Psychology, 100, 398–416.
  • Li, L., & Steckelberg, A. L. (2004). Using peer-feedback to enhance student meaningful learning. Association for Educational Communications and Technology (ERIC Document Reproduction Service No. ED485111). Retrieved from https://eric.ed.gov/?id=ED485111.
  • Liu, N., & Carless, D. (2006). Peer-feedback: The learning element of peer-assessment. Teaching in Higher Education, 11, 279–290.
  • McCarthy, J. (2017). Enhancing feedback in higher education: Students’ attitudes towards online and in-class formative assessment feedback models. Active Learning in Higher Education, 18, 127–141.
  • McConlogue, T. (2015). Making judgements: Investigating the process of composing and receiving peer-feedback. Studies in Higher Education, 40, 1495–1506.
  • McGarr, O., & Clifford, A. M. (2013). ‘Just enough to make you take it seriously’: Exploring students’ attitudes towards peer-assessment. Higher Education, 65, 677–693.
  • Mulder, R. A., Pearce, J. M., & Baik, C. (2014). Peer review in higher education: Student perceptions before and after participation. Active Learning in Higher Education, 15, 157–171.
  • Nicol, D. J., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: A peer review perspective. Assessment & Evaluation in Higher Education, 39, 102–122.
  • Pajares, M. F. (1992). Teachers’ beliefs and educational research: Cleaning up a messy construct. Review of Educational Research, 62, 307–332.
  • Palmer, B., & Major, C. H. (2008). Using reciprocal peer review to help graduate students develop scholarly writing skills. Journal of Faculty Development, 22, 163–169. Retrieved from http://www.ingentaconnect.com/content/nfp/jfd/2008/00000022/00000003/art00001
  • Price, M., Rust, R., O’Donovan, B., & Handley, K. (2012). Assessment literacy: The foundation for improving student learning. Oxford, UK: The Oxford Centre for Staff and Learning Development. ISBN: 978-1-87-357683-0.
  • Rosseel, Y. (2012). Lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48. doi:10.18637/jss.v048.i02
  • Saito, H., & Fujita, T. (2004). Characteristics and user acceptance of peer rating in EFL writing classrooms. Language Teaching Research, 8, 31–54.
  • Sluijsmans, D. M. A., Brand-Gruwel, S., van Merriënboer, J. J. G., & Martens, R. L. (2004). Training teachers in peer-assessment skills: Effects on performance and perceptions. Innovations in Education and Teaching International, 41, 59–78.
  • Topping, K. J. (2009). Peer-assessment. Theory Into Practice, 48, 20–27.
  • van Gennip, N. A. E., Segers, M. S. R., & Tillema, H. H. (2009). Peer-assessment for learning from a social perspective: The influence of interpersonal variables and structural features. Educational Research Review, 4, 41–54.
  • van Zundert, M., Sluijsmans, D., & van Merriënboer, J. (2010). Effective peer-assessment processes: Research findings and future directions. Learning and Instruction, 20, 270–279.
  • Wen, M. L., & Tsai, C. C. (2006). University students’ perceptions of and attitudes toward (online) peer-assessment. Higher Education, 51, 27–44.
  • Wen, M. L., Tsai, C. C., & Chang, C. Y. (2006). Attitudes towards peer-assessment: A comparison of the perspectives of pre‐service and in‐service teachers. Innovations in Education & Teaching International, 43, 83–92.
  • Wigfield, A., & Eccles, J. S. (2000). Expectancy-value theory of achievement motivation. Contemporary Educational Psychology, 25, 68–81.