Educational Assessment & Evaluation

Enhancing trust, safety and quality: exploring the role of dialogue in peer feedback on professional competencies

Article: 2349355 | Received 24 Nov 2023, Accepted 25 Apr 2024, Published online: 09 May 2024

Abstract

Peer feedback can enhance learning but may introduce issues like peer pressure and distrust, particularly with professional competencies like teamwork. This jeopardizes the feedback process and skill development crucial for undergraduate students’ career preparation. To address this, two approaches are generally used: anonymizing feedback or incorporating feedback dialogue. However, the impact of anonymity on trust and safety is unclear due to a loss of dialogue. Additionally, the effect of feedback dialogue in the context of competencies remains largely unexplored. Employing a mixed-methods approach, we divided sixty-three participants into an experimental group receiving identifiable online peer feedback with dialogue and a control group receiving anonymous feedback only. We measured students’ psychological safety and trust in giving feedback on teamwork competencies, feedback quality and perceptions of the feedback process. Quantitative results showed no significant differences in safety and trust perceptions between groups, indicating that anonymity and feedback dialogue contribute to a comparably safe environment. However, the qualitative results indicated that the experimental group held more positive attitudes toward the feedback process and their feedback seemed more nuanced. This suggests that dialogue-enhanced peer feedback is preferred for fostering a safe and effective peer feedback exchange that supports professional competency development.

Introduction

Employers increasingly seek professional competencies beyond traditional skills such as academic writing (Chartered Management Institute, Citation2018; Kenworthy & Kielstra, Citation2015). These include communication, planning, leadership and other teamwork competencies, essential for effective collaboration in diverse teams. Universities can better prepare students for their careers by developing such skills (Dolce et al., Citation2020). This necessitates deliberate practice and feedback loops (Braadbaart et al., Citation2023; Cottrell, Citation2001; McLaughlin et al., Citation2019; Nicol & Macfarlane-Dick, Citation2006), which have historically received limited attention in higher education. This issue is exacerbated by a growing burden on teachers, as increasing cohort sizes make it more challenging for them to provide timely feedback.

Peer feedback is increasingly used as an instructional strategy in higher education to replace or supplement teacher feedback (Topping, Citation2017), employed in both online and face-to-face settings (Jongsma et al., Citation2022). Research indicates that the impact of peer feedback on learning outcomes is often comparable to, if not better than, teacher feedback (Double et al., Citation2020; Huisman et al., Citation2019; Topping, Citation1998). Peer feedback can also be an effective method to evaluate student teamwork skills or other professional competencies, including project planning, communication skills and leadership (Cruz et al., Citation2020; Dannefer et al., Citation2005).

However, receiving peer feedback on professional competencies can be difficult because students might perceive it as more personal than feedback on an artifact, like a scientific report. Students must trust each other and feel safe enough to provide and receive peer feedback without holding back critical comments or feeling hurt when they receive critique (Van Gennip et al., Citation2009). To this end, anonymity is often used to create a safe environment for peer feedback as it is assumed to facilitate the provision of honest and high-quality feedback without the risk of retaliation. We define the quality of feedback in this context as feedback which is clear and balanced, highlighting both strengths and points for improvement as opposed to uncritical feedback only providing compliments. However, it remains unclear whether anonymity itself leads to a safe environment or whether feedback aids, such as rubrics and feedback training, are what improve students’ perceptions of peer feedback (Panadero & Alqassab, Citation2019). Another common way to foster feelings of trust and safety is by using feedback dialogue, allowing students to respond to their peers’ feedback and resolve any ambiguities (Carless, Citation2012).

To our knowledge, no prior research conclusively points toward a preference for one of these two common modes to create a safe environment facilitating the exchange of quality peer feedback on professional competencies. As such, in this study, we examined the effect of a non-anonymous peer feedback approach supplemented with a peer feedback dialogue versus an approach using anonymity on students’ perceptions of trust, psychological safety and feedback quality. To do this, we sent out a survey questioning students about their perceived trust and safety, and interviewed students about their experience with a feedback dialogue.

Literature review

Peer feedback on professional competencies

Ample research has been conducted on peer feedback within academic writing tasks or tasks with artifacts as performance outcomes (Huisman et al., Citation2019). However, peer feedback can also be used to review and scaffold professional competencies, such as planning, communication and meeting deadlines. A quasi-experimental study conducted by Nelwati et al. (Citation2020) suggested that peer learning, where students engage in role-play and peer feedback, could be an efficient strategy for developing professional competence in the medical field. Dannefer et al. (Citation2005) found that medical students were capable of effectively reviewing each other’s work and interpersonal habits, such as problem-solving and trustworthiness. In other higher education fields, peer feedback is frequently used to evaluate the group work process and ascertain the equality of contribution among members (Meijer et al., Citation2020). As such, peer feedback can scaffold the development of critical professional skills integral to efficient teamwork (Dooley & Bamford, Citation2018), even though in many instances peer feedback is solely used for grading individual contributions accurately.

However, notwithstanding these potential positive learning outcomes, students can harbor negative sentiments toward peer feedback activities given their inherently social nature. In the case of peer feedback on professional competencies (eg communication skills), this risk increases because the feedback itself is personal (Falchikov, Citation2003). Furthermore, the validity of intra-group peer feedback can be jeopardized when friendships or peer pressure influence students’ evaluation of their peers (Meijer et al., Citation2020). Trust in one another and feeling sufficiently safe to provide critical feedback are prerequisites for high-quality and effective peer feedback.

Trust and safety in peer feedback

Trust and psychological safety are two important interpersonal variables in peer feedback, as this is inherently a social process that can be stressful for students (Falchikov, Citation2003; van Gennip et al., Citation2010). In this context, trust is two-fold: (1) trust in one’s competence to assess the work of others and (2) trust in the ability of others as assessors (van Gennip et al., Citation2009). Both aspects can be enhanced by clear instructions on delivering high-quality feedback, for instance, by providing rubrics (Er et al., Citation2020).

Psychological safety, the second variable, is defined by Edmondson (Citation1999) as the comfort of taking interpersonal risks within a group of people without fear of retaliation. In the educational context, this would mean that a student feels comfortable stating, for example, what they do or do not want to do within group work and feels safe enough to give critical feedback to their peers. In the present work, we use ‘psychological safety’ and the more general ‘safety’ interchangeably. A quasi-experimental study by Van Gennip et al. (Citation2010) concluded that psychological safety positively impacts students’ conceptions of peer feedback, leading to an enhanced perception of learning. Similar results were found by Cheng and Tsai (Citation2012) for both psychological safety and trust. Conversely, peer pressure or the fear of disapproval can negatively affect feelings of trust and safety. Since trust and safety are essential for high-quality feedback, such negative influences can undermine the overall effectiveness of peer feedback (Falchikov, Citation2003). Consequently, it is paramount to mitigate undesirable social effects such as peer pressure while enhancing feelings of trust and safety. A common intervention to achieve this is ensuring anonymity in peer feedback activities (Panadero & Alqassab, Citation2019).

Anonymity in peer feedback

Anonymous peer feedback can incite more critical feedback and foster better learning than identifiable peer feedback (Howard et al., Citation2010; Lu & Bol, Citation2007) primarily because students typically feel more comfortable providing peer feedback anonymously (Raes et al., Citation2015). Nevertheless, anonymity is not always the ideal or even feasible option. In a literature review, Panadero and Alqassab (Citation2019) reported that while a slight majority of studies show that anonymous peer feedback is superior in its impact on academic performance compared to identifiable feedback, peer feedback aids such as rubrics and feedback training can moderate this effect of anonymity on academic performance. Consequently, Panadero and Alqassab (Citation2019) suggest that anonymity may not be the panacea for ensuring safety and trust in peer feedback activities.

In line with this perspective, Rotsaert et al. (Citation2018) showed that students who experienced both anonymous and identifiable peer feedback on each other’s work reported no significant differences in perceptions toward interpersonal variables such as safety and trust. Moreover, Van Heerden and Bharuthram (Citation2021) reported that receiving identifiable peer feedback does not always intimidate students. Their study revealed that students who were familiar with each other had more trust in their peers as assessors. These students participated in a feedback dialogue and their mutual familiarity facilitated easy access to such dialogues. In contrast, anonymous peer feedback may result in limited interaction between students, making the clarification of feedback comments challenging (Guardado & Shi, Citation2007).

Altogether, it remains inconclusive whether anonymity is universally the best approach to promote trust and safety across all peer feedback contexts, or if feedback dialogue could serve as a viable alternative.

Feedback dialogue

A feedback dialogue is a direct interaction between two individuals in which they reflect on the work or behavior of one of them (Nicol, Citation2010; Yang & Carless, Citation2013; Zhu & Carless, Citation2018). Carless (Citation2012) proposes that feedback is most effective when embedded within dialogic processes, emphasizing that interaction is key to building student trust. Yang and Carless (Citation2013) view feedback as a social practice influenced by (negative) emotional reactions, arguing that verbal feedback in a dialogue can foster student connections, enable the negotiation of meaning and assist in clarifying any feedback confusion. Given that verbal feedback is often more personal and detailed than written feedback, a feedback dialogue could enhance trust and safety among students (Glazzard & Stones, Citation2019). This notion is reinforced by research from Wood (Citation2022) and Schillings et al. (Citation2021), who concluded that a peer feedback dialogue improved feedback uptake due to the students’ ability to seek clarification on the feedback received. Moreover, students felt more comfortable as the dialogue helped strengthen relationships and made them realize that peer feedback was intended to support rather than criticize them (Wood, Citation2022).

Van den Berg et al. (Citation2006) argued that written and oral feedback serve different functions when reviewing a written text. Written feedback tends to be more focused on assessing the text, whereas verbal feedback is more revision-oriented. Consequently, these authors propose that the optimal approach would be to combine written and oral feedback (van den Berg et al., Citation2006). Whether this is also true for peer feedback on professional competencies remains yet to be determined.

Current study

Existing literature provides little insight into how anonymity and feedback dialogue affect psychological safety and trust in the context of professional competencies. Therefore, in this study, we aimed to explore which approach is preferred in fostering a safe and effective peer feedback environment in this context. To this end, using a mixed-methods approach, we assessed the impact of both approaches on perceived psychological safety, trust and feedback quality, as well as on the nature of feedback. Our research aimed to answer the following questions:

  1. How does identifiable online peer feedback with face-to-face dialogue differ from anonymous asynchronous peer feedback in terms of trust and psychological safety for undergraduate students in higher education working on teamwork competencies?

  2. Are there differences in the nature of the written peer feedback, ie feedback balance and feedback length, as well as in the perceived feedback quality, between the anonymous and identifiable conditions?

Method

Participants and context

The ‘Drug Innovation Project’ course provides the context for this study. This course is taught in Dutch at a research university in the Netherlands. It constitutes a four-week full-time interdisciplinary course, mandatory for first-year undergraduate students majoring in Pharmaceutical Sciences (Phar) or Science, Business & Innovation (SBI). Due to the Covid-19 restrictions in January 2021, the course was transitioned to an online format. The primary objective of the course is to familiarize the students with the process of the discovery and development of new drugs. Students, organized in interdisciplinary groups of five or six, were tasked with conceiving a hypothetical new drug for a specified disease, integrating both scientific and business considerations. To this end, the course consisted of weekly lectures to provide an overview of the drug discovery and development process, knowledge of which was assessed at the end using an individual exam. Weekly group assignments were provided to work toward the final project report step-by-step. All students were allocated to their respective interdisciplinary groups and were collectively graded by one of the teachers on their case study report. Students did a Belbin Team Role® test (Belbin, Citation2010) and were encouraged to reach a consensus on team roles accordingly. The groups convened with their tutors (course teachers) weekly to discuss group progress and teamwork dynamics.

Adhering to the principles of the Helsinki Declaration (World Medical Association, Citation2013), students were informed about the research’s purpose and were requested to give consent for data collection. This resulted in a convenience sample of the available students in this course. Of the 178 students participating in group work, 63 agreed to participate in this study (32 Phar and 31 SBI students). The students participating in this study were randomly assigned to 11 groups of five or six with equal distribution of students majoring in Phar and SBI per group. This approach was consistent with that for students who did not participate in the study. Six of the 11 groups were randomly assigned to the control condition and five to the experimental condition. Three experienced teachers, each with at least three years of experience in supervising group projects, provided group guidance as tutors. All teachers were aware of the purpose of the current research.

Procedure

In the final week of the project, upon completing the case study, each student provided feedback on several professional competencies to every group member. A competence framework, used for this peer feedback exercise, was custom-developed by one of the co-authors, drawing inspiration from the broader European Entrepreneurship Competence Framework (European Commission, Citation2018). This framework, tailored for the course, included five main competencies related to teamwork, including associated sub-competencies, as listed below:

  1. Individual competence: motivation, professional functioning, decisiveness and perseverance.

  2. Communication: listening, writing and presenting.

  3. Group competence: respecting, dealing with emotions and behavior and giving and receiving feedback.

  4. Group process: roles, leadership and switching, decision making and negotiation.

  5. Project planning: goal setting, prioritization and adaptability, insight into progress, dealing with uncertainty and change.

The full competence framework can be found in the supplementary information Table S1. For each main competency, students were required to provide at least one compliment or suggestion for improvement. To enhance the validity of the peer feedback (Falchikov & Goldfinch, Citation2000), students were presented with a pre-recorded instructional video detailing the provision of high-quality feedback. This peer feedback activity was not graded; however, students were assigned either a pass or a fail based on their active participation.

Figure 1 outlines the overall research procedure. In the control condition, students within each group offered online anonymous feedback to each other using the tool FeedbackFruits (FeedbackFruits [Computer Software], Citation2021) consistent with the treatment for students not participating in the research. The setup of the online peer feedback activity in the experimental condition was similar, with the critical distinction being the visibility of the feedback provider’s identity (i.e., identifiable feedback). In addition, the students’ responses to and questions about peer feedback were discussed during a subsequent online meeting moderated by the group tutor, primarily aimed at resolving misunderstandings and providing clear directions for future improvement.

Figure 1. A visual representation of the course and outline of the research procedure. Note: For the experimental condition, video recordings of the focus groups were analyzed, whereas online peer feedback comments and the research questionnaire were collected and analyzed for both conditions. ‘Async’ refers to the online asynchronous nature of the peer feedback activity.


After completing the peer feedback activity, each student wrote an individual reflection report, incorporating the insights obtained from the peer feedback and formulating a personal learning goal for future teamwork opportunities. These reports were not used in this study.

Instruments

Questionnaire

Upon completion of the course, all participants were requested to complete a questionnaire regarding their perceptions of the peer feedback and their feelings of safety and trust. The questionnaire consisted of scales from validated instruments (Edmondson, Citation1999; Huisman et al., Citation2020) concerning confidence in one’s own and received peer feedback quality and psychological safety, supplemented with a new set of questions. Table 1 provides an overview of the questionnaire scales. Students had to answer 27 questions on a five-point Likert scale (1 = totally disagree to 5 = totally agree), apart from the three open-ended questions and the demographic questions. These open-ended questions aimed to capture what students appreciated and disliked about the way they exchanged peer feedback. Since all of the students spoke Dutch, the questionnaire items from validated scales were translated from English to Dutch by the first author and back-translated to English by an independent individual to ensure translation accuracy (Brislin, Citation1970). The list of all included items can be found in Table S2 of the Supplementary Information.

Table 1. Overview of the scales used in the questionnaire.

Collection of peer feedback comments

Each student received peer feedback on their professional competencies from every group member in Dutch. All peer feedback comments from students participating in this research were collected for subsequent analysis.

Video recordings of feedback discussion sessions

The online group feedback discussions in the experimental condition were documented via video recording using Zoom. At the end of each meeting, the teacher inquired about the students’ overall experiences with the peer feedback activity. These conversations were short, semi-structured focus groups in Dutch, guided by the topic list outlined in Table S3.

Data analysis

Responses to the items of the predetermined scales from Huisman et al. (Citation2020) and Edmondson (Citation1999) were summed to obtain the total score of each scale. These scores, along with the remaining multiple-choice items (see Table S2), served as input for a principal components analysis (PCA) to condense the data into fewer variables. The PCA was followed by oblique rotation (direct oblimin) using IBM SPSS Statistics® to allow for correlation between factors. Factors with an Eigenvalue larger than 1 were retained for subsequent analysis. For each factor, we summed the responses on all items that had a factor loading larger than 0.45. These sums were used as dependent variables representing the factors in a linear mixed model (LMM), with students and student groups as levels, since students were clustered within student groups, introducing dependencies in their data.
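As an illustration of the factor-retention step only (the actual analysis was run in IBM SPSS Statistics®), the Kaiser criterion of retaining components with an Eigenvalue larger than 1 can be sketched in Python. The four-item correlation matrix below is hypothetical, chosen so that two pairs of items form two clear components:

```python
import numpy as np

# Hypothetical correlation matrix for four questionnaire items:
# items 1-2 and items 3-4 each form a strongly correlated pair.
corr = np.array([
    [1.0, 0.8, 0.0, 0.0],
    [0.8, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.8],
    [0.0, 0.0, 0.8, 1.0],
])

# Eigenvalues of the correlation matrix give the variance explained
# by each principal component (in units of one standardized item).
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted descending

# Kaiser criterion: retain components with an Eigenvalue larger than 1.
retained = int(np.sum(eigenvalues > 1))
explained = eigenvalues[:retained].sum() / eigenvalues.sum() * 100
print(retained, round(explained, 1))  # 2 components, explaining 90.0%
```

For this toy matrix the eigenvalues are 1.8, 1.8, 0.2 and 0.2, so two components pass the criterion; in the study, the same rule applied to the real item correlations yielded the five (later four) factors reported in the Results.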

Answers to open-ended questions in the questionnaire were coded by the first author, in collaboration with the second author, adopting an inductive approach, employing open, axial and selective coding (Vollstedt & Rezat, Citation2019). The derived code groups were discussed and adjusted with the second author until a consensus was reached.

The video-recorded focus groups were transcribed verbatim. The transcripts were analyzed and coded using thematic analysis (Williams & Moser, Citation2019) with open, axial and selective coding. During selective coding, themes were formulated that represented student sentiments about the peer feedback activity. Select student quotes representative of the different themes were chosen and translated from Dutch to English by the first author and back-translated into Dutch by an independent individual to verify the translations.

To classify the type of written student feedback, a coding scheme was developed based on the instructions students received to provide both compliments and suggestions for improvements. Although there are other coding schemes available for peer feedback on written assignments, these were not suitable for our study as they do not focus on peer feedback related to teamwork. In our coding scheme, depending on the feedback’s scope, a single comment could contain a maximum of three codes, corresponding to the three sub-competencies within each of the five main competencies within the competence framework (Table S1). Individual feedback comments were coded based on four categories as either compliments, suggestions (constructive or otherwise), a combination of both compliments and suggestions on one sub-competence, or undefined, where the feedback could not be conclusively classified as a compliment or suggestion.

The complete coding scheme can be found in Table S4 of the supplementary information. Two researchers independently coded all feedback comments. The results were then compared to determine the intraclass correlation coefficient (ICC) (Hove et al., Citation2022). A third researcher coded feedback comments on which the two researchers disagreed. In the event of continued discrepancies, this subset of coded feedback comments was discussed among all three assessors. Independent t-tests for the first three code groups (compliment, suggestion, or combination) were performed to test for differences in their occurrence between the control and experimental conditions.

Finally, to determine the quantity of feedback for all conditions, the word count for each peer feedback comment was also computed. A t-test was conducted to examine the difference in the average word count between comments made by students in the experimental and the control conditions.
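The word-count comparison described above can be sketched as follows; the counts here are invented for illustration, and in the study each comment's word count came from the collected FeedbackFruits comments:

```python
import numpy as np
from scipy import stats

# Hypothetical word counts per peer feedback comment in each condition.
control = np.array([18, 22, 15, 30, 25, 19, 21, 17])
experimental = np.array([28, 35, 22, 40, 31, 27, 33, 29])

# Independent-samples t-test on the average word count per comment.
t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The same `stats.ttest_ind` call also covers the independent t-tests on the occurrence of the compliment, suggestion and combination code groups between conditions.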

Results

Participants

A total of 49 out of 63 participants filled out the questionnaire (78%). T- and chi-square tests were performed to test whether the participants’ age and gender distribution differed from nonparticipants for the two programs (Phar and SBI). There was no difference in age between the questionnaire respondents and nonparticipants from both Phar (t(88) = 0.01, p = .99) and SBI (t(109) = 0.67, p = .50). Chi-square tests showed there was also no difference in gender distribution between questionnaire respondents and nonparticipants for both Phar (χ2 (1, N = 90) = 1.41, p = .24) and SBI (χ2 (1, N = 111) = 0.78, p = .38).

Analysis of Likert-scale questionnaire items

The PCA conducted on the questionnaire resulted in five factors, which in combination explained 75.96% of the variance. The factors captured trust in peer feedback on competencies given by others (factor 1), psychological safety within the group (factor 2), trust in peer feedback on the written assignment given by others (factor 3), trust in own peer feedback (factor 4) and perception of cooperation within the group (factor 5). However, factor 5 consisted of only one item and was discarded from the analysis. The remaining four factors explained 68.04% of the variance. The table with all the factor loadings after rotation can be found in the supplementary information (Table S5).

A linear mixed model was conducted with unstructured covariance to test for differences between the experimental and control conditions for each factor. The four factors extracted in the PCA were used as repeated measures, and the analysis results are summarized in Table 2. There was no effect of condition on any of the factors (p > .05). This means the students in the experimental condition did not differ in feelings of trust and safety from the students in the control condition.

Table 2. Estimated effects for the linear mixed model.
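A simplified version of this model can be sketched with `statsmodels`. The sketch below uses synthetic scores for a single factor with a random intercept per student group; the reported analysis was more elaborate (all four factors as repeated measures with unstructured covariance, run in SPSS), so this only illustrates how the clustering of students within groups enters the model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic factor scores: 11 groups of 6 students, 5 groups experimental.
rows = []
for group in range(11):
    condition = "experimental" if group < 5 else "control"
    group_effect = rng.normal(0.0, 1.0)  # shared group-level deviation
    for student in range(6):
        rows.append({
            "group": group,
            "condition": condition,
            "score": 20.0 + group_effect + rng.normal(0.0, 2.0),
        })
df = pd.DataFrame(rows)

# Linear mixed model: condition as fixed effect, random intercept per
# student group to account for the dependency of students within groups.
model = smf.mixedlm("score ~ condition", df, groups=df["group"])
result = model.fit()
p_condition = result.pvalues["condition[T.experimental]"]
```

A non-significant `p_condition` in this setup would correspond to the paper's finding of no condition effect on trust and safety, while the random intercept absorbs between-group differences that a plain t-test would ignore.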

To further investigate how students perceived the mode of peer feedback (either anonymous or identifiable with feedback dialogue), we evaluated the answers to survey items related to this mode. The statements and descriptive statistics are shown in Table S6 for the control group and Table S7 for the experimental group in the supplementary information. In general, both groups found their mode of giving and receiving peer feedback pleasant. Students in the control group were mostly positive about giving and receiving peer feedback anonymously (M = 4.15, SD = 0.88). However, they did endorse the statement that they would have liked an explanation of the peer feedback they had received (M = 3.69, SD = 1.01). Interestingly, students from the experimental group, on average, did not mind that the process was not anonymous, as they responded with a mean of 2.74 (SD = 1.18) to the reverse statement, ‘I was uncomfortable with not giving the peer feedback anonymously’. In general, they found it helpful to discuss the peer feedback with others in conversation (M = 4.13, SD = 0.87).

Analysis of open-ended questions

Two out of the three open-ended questions in the questionnaire asked students what they liked and disliked about the peer feedback process. The third question was only shown when students had stated in an earlier question that they found the peer review activity somewhat or entirely ineffective. Only five students from the control group answered this question.

A total of 43 unique answers to these questions in the experimental group and 52 in the control group were coded into 115 codes. After selective coding, 13 code groups remained. These could be divided into six code groups reflecting positive comments about the peer review activity, six reflecting negative remarks, and one labeled as ‘other’. Table 3 shows each code group’s absolute and relative frequencies for the control and experimental conditions.

Table 3. Frequencies of code groups from the open-ended questions.

Students in the experimental group were overall more positive about the peer feedback procedure than students in the control group (sum of codes 1–6, 70.21% versus 50.01%, see Table 3). They also reported more often that the feedback they received was clear and precise (12.77% versus 4.55%). Remarkably, students from the experimental group more often commented that the professional competencies framework was difficult to use (12.77% versus 4.55%). On the other hand, students in the control group more often indicated they received little or no useful feedback (15.2% versus 2.13%) or did not like the peer feedback activity in general. Interestingly, five comments from three different students in the control group were labeled as ‘giving peer feedback did not feel safe or honest’ because students might fear the reactions of their peers, despite the peer feedback in this control group being anonymous. None of the students from the experimental group made such remarks.

Focus groups on the feedback process and feedback dialogue

To further explore the student perceptions of the peer feedback process and dialogue, the five experimental student groups, with a total of 28 students, were interviewed by a teacher. The analysis of these interviews confirmed the results from the open-ended questions in the questionnaire. The students were very positive about their experience with a supervised meeting where peer feedback was discussed. A few themes emerged from the data, which are discussed further in the next section.

Theme 1: giving and receiving peer feedback on competencies is hard

The students found giving and receiving feedback on competencies difficult. One student mentioned that it was hard to provide peer feedback because the competence framework was quite elaborate and specific:

I did find it quite hard because the competencies were just very specifically described and then – to then give feedback to someone on precisely such a point, I always find that somewhat difficult. – student 19

Students also mentioned that they experienced difficulty in navigating feedback on their competencies, especially when the feedback lacked consensus or presented conflicting viewpoints:

If one person had given that suggestion, I would think like yeah, it may be true, it may not be true. But if you really get it from multiple people, multiple times, then you know that it is a negative point that you can improve. – student 4

Moreover, because the competence framework was so elaborate, some students mostly looked at the peer feedback they received overall rather than the specific sub-competencies and the expected behavior:

But I don’t always take the feedback too seriously. It is more just that I have an idea of how the group views me and if they still have a somewhat uh normal image of me. – Student 2

The finding that the students struggled with giving and receiving feedback on competencies may explain why some feedback was vague. A feedback dialogue could mitigate this vagueness.

Theme 2: an extra discussion of the peer feedback has added value

Most students found that having a tutor to steer the discussion and prompt students to explain their feedback enhanced the peer feedback dialogue. Moreover, they emphasized that face-to-face meetings enabled them to expand upon the feedback they provided and to directly inquire about any ambiguities in the feedback they received:

Yes, I find it [the feedback discussion session] quite handy, actually. Because I uhm, yes I didn’t understand some of the feedback, of what I can take with me as a suggestion. – student 3

Another advantage of discussing peer feedback face-to-face, according to the students, is that the feedback is more nuanced. Given the additional time and opportunity to expand on peer feedback, students could offer constructive criticism without appearing overly critical. Students highlighted the significance of the feedback’s phrasing, especially since it pertains to competencies:

I thought it’s kind of nice because then you just see how someone says it, or something. Whereas if you read it, then of course you can read it differently [than how it is meant]. Not that we’ve been mean or anything, but haha. Then you do just know more how it really comes across or something. – Student 16

It is sometimes scary to hurt someone with my own opinion. That’s why I try to come across as gentle as possible. – student 4

Theme 3: receiving feedback from your peers is useful and important

Despite the feeling that it is not easy to give peer feedback on competencies and that it might be challenging to translate the feedback into adequate actions, most students agreed that it is still valuable and important to have a peer feedback activity to evaluate professional competencies. It gives them insight into what they do right or what could be improved, and it is also a good way to end the project:

But I do believe it [the feedback discussion session] is something that is important and good. Also, just to wrap up the project. So, you can just tell each other one last time, uhm, like well, I thought this went well and this went less well. Just that you end it well or something. – student 2

I think that maybe if you hear the learning goals of other people as well, you can then think for yourself like okay, I do this well or maybe I don’t do this good either, or I could work on that. – student 14

Analysis of student feedback comments

To answer the second research question, we investigated whether there is a difference between the type and quantity of peer feedback across conditions. To this end, a total of 699 online feedback comments in the experimental group and 740 comments in the control group were coded into categories using our coding scheme (compliments, suggestions, combinations and undefined; Table S4) by the first author and an independent rater; see Table 4 for representative examples of each code. To determine interrater reliability, the ICC was computed for the first three categories (compliments, suggestions and combinations) using an absolute agreement, two-way mixed model. The fourth category, 'undefined,' was omitted due to a very low occurrence (N = 20). The analysis resulted in a high ICC = 0.83 (95% CI 0.80–0.85) for suggestions, an ICC = 0.89 (95% CI 0.88–0.90) for compliments and a moderately high ICC = 0.69 (95% CI 0.63–0.73) for combinations. For this reason, an independent third rater also coded the subset of feedback comments the two raters disagreed on. The agreement of this rater with either one of the other two raters was 68.6%. After a discussion between the three raters, a consensus was reached on how to code the remaining comments.
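The absolute-agreement, two-way ICC described above can be sketched as follows. This is an illustrative implementation only, not the authors' analysis code, and the rating data below are invented for demonstration:

```python
import numpy as np

def icc_2way_absolute(ratings: np.ndarray) -> float:
    """ICC(A,1): two-way model, absolute agreement, single rater.

    ratings: (n_subjects, k_raters) matrix of numeric ratings.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)  # per feedback comment
    col_means = ratings.mean(axis=0)  # per rater

    # Mean squares from the two-way ANOVA decomposition
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    ss_err = np.sum(
        (ratings - row_means[:, None] - col_means[None, :] + grand) ** 2
    )
    ms_err = ss_err / ((n - 1) * (k - 1))

    # McGraw & Wong's ICC(A,1) formula
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Two hypothetical raters coding ten comments
# (0 = compliment, 1 = suggestion, 2 = combination)
rater_a = np.array([0, 0, 1, 1, 2, 0, 1, 2, 0, 1], dtype=float)
rater_b = np.array([0, 0, 1, 1, 2, 0, 1, 1, 0, 1], dtype=float)
print(icc_2way_absolute(np.column_stack([rater_a, rater_b])))
```

Perfect agreement between raters yields an ICC of 1.0; each disagreement pulls the coefficient down toward 0.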

Table 4. Representative examples of the coding of peer feedback comments.

The means and standard deviations of each of the categories are shown in Table 5. As seen in Table 5, the students in the control group gave more compliments than the experimental group (t(1437) = −3.86, p = .00). In contrast, the students in the experimental group gave more comments combining compliments and suggestions than the control group (t(1437) = 2.85, p = .00). For the average number of words, a difference was found, with students in the experimental condition using more words than those in the control condition (t(1437) = 3.18, p = .00). It is noteworthy that students in both conditions on average gave more compliments than suggestions or combination comments.
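The comment-level comparisons above (degrees of freedom 699 + 740 − 2 = 1437) can be reproduced in outline with an independent-samples t-test. The word counts below are simulated for illustration; they are not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated word counts per feedback comment (illustrative only;
# the study compared 699 experimental vs. 740 control comments).
experimental = rng.poisson(lam=22, size=699)  # dialogue condition
control = rng.poisson(lam=19, size=740)       # anonymous condition

# Independent-samples t-test at the comment level
t, p = stats.ttest_ind(experimental, control)
df = len(experimental) + len(control) - 2
print(f"t({df}) = {t:.2f}, p = {p:.3f}")
```

Treating individual comments (rather than students) as the unit of analysis is what produces the large degrees of freedom reported in the paper.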

Table 5. Descriptive statistics of the student feedback comments.

Discussion

This study addressed two research questions concerning the effects of anonymity and a peer feedback dialogue on students’ feelings of trust and safety, their perceptions of feedback quality, and the nature of the peer feedback given.

Trust and psychological safety

The quantitative results indicate no significant differences in perceived feelings of trust and safety between students in the anonymous setting and those in the non-anonymous setting who also participated in a feedback dialogue. In general, students appear to be satisfied with either of the two feedback modes when not exposed to the alternative condition. However, the qualitative findings highlight differences between the two conditions. Students participating in peer feedback dialogues reported that face-to-face discussions facilitated a more nuanced expression and explanation of their feedback than written responses alone. Students in the control group reported the opposite, as they often complained about ambiguous peer feedback that was difficult to translate into actionable steps. This would suggest that a feedback dialogue can positively affect the clarity of the feedback comments and, therefore, the trust in feedback quality between peers. As underscored by Glazzard and Stones (Citation2019), students perceive verbal feedback as more personal, which could account for the elevated sense of safety and trust experienced by the group involved in the feedback dialogue. However, this was not reflected in the quantitative self-report questions, as we did not see a significant difference.

These conflicting results align with Panadero and Alqassab (Citation2019), who showed that the limited research on anonymity in peer assessment and its social effects has yielded mixed results. In fact, Li (Citation2017) concluded that students in a non-anonymous setting who received training in peer feedback perceived less peer pressure than students in an anonymous setting who received no training. This suggests that feedback aids such as training or a rubric may have more profound effects on how students feel about the feedback activity.

Moreover, a few students in the control group wrote that the peer feedback was of lower quality because they thought people were afraid to give critical feedback. Despite students being able to review each other anonymously, feelings of safety were thus still compromised for a select few, again reinforcing the notion that anonymity is not in itself a solution for establishing safety and trust. These findings suggest a strong correlation between feelings of safety and trust in the quality of peer feedback. Perceived psychological safety may significantly influence how students articulate their peer feedback, affecting the level of trust students have in their peers’ capacity to deliver adequate feedback.

Perceived feedback quality

Students across the different experimental conditions reported varying perceptions of the quality of the peer feedback. Based on the qualitative findings, we can conclude that students in the identifiable condition supplemented with a feedback dialogue typically perceived the feedback as clear and valuable. Conversely, the students in the anonymous peer feedback condition expressed contrasting sentiments, as they often described the feedback as vague. These findings align with Yang and Carless (Citation2013), who assert that a feedback dialogue enables students to ask for clarification. They are also consistent with the proposition of Pitt and Winstone (Citation2018) that anonymous peer feedback disrupts feedback loops and depersonalizes feedback comments. Notably, students in the identifiable condition wrote, on average, lengthier comments and demonstrated a greater tendency to incorporate both suggestions and compliments for each competence. Interestingly, this pattern implies that students provide more nuanced feedback by integrating both positive and negative remarks rather than exclusively providing one or the other. The qualitative data also revealed that students found giving and receiving feedback on competencies challenging, primarily due to the complexity of the framework used, with its many specific (sub)competencies. This observation aligns with Lejk and Wyvill’s (Citation2002) argument advocating for a more holistic approach to peer review of competencies.

Collectively, these findings suggest that a non-anonymous setting coupled with a feedback dialogue may ensure clearer, more nuanced and more valued peer feedback on professional competencies. This finding may appear to conflict with previous research, where students engaged in anonymous peer feedback exhibited improved performance on writing tasks and delivered more critical feedback than those in identifiable peer feedback settings (Howard et al., Citation2010; Lu & Bol, Citation2007). However, this discrepancy could potentially be attributed to our study’s emphasis on feedback dialogue or its focus on professional competencies as opposed to writing tasks.

Based on the findings of the current study, the course coordinator has decided to implement the experimental condition with peer feedback dialogue for all course participants in future instances of the course.

Strengths and limitations

A notable strength of the present study lies in its experimental design, enabling causal inferences. However, this approach also introduces a limitation due to convenience sampling. Only 35% of enrolled students consented to participate in our study, resulting in a relatively small sample size. Specifically, null findings should be interpreted with caution, as these can originate from low statistical power. Additionally, the study was conducted at only one Dutch university, limiting the generalizability of the findings.

Owing to the low participation rate, we only used two of the three conditions initially planned. The omitted third condition would have involved students providing identifiable online peer feedback without a peer feedback dialogue. This scenario could have provided further insights into student perceptions of safety and trust in a context without anonymity, where opportunities for additional feedback clarification through feedback dialogue were absent. Thus, in the current study, it remains unclear whether anonymity or the presence of a feedback dialogue primarily contributes to the observed results.

Moreover, in this study, we did not investigate the effect of the peer feedback mode on learning gains. Consequently, it remains uncertain whether higher-quality feedback resulted in enhanced learning. Additionally, while quality measures were derived from students’ own assessments, word count and the integration of compliments and suggestions, we did not include expert ratings of feedback quality. These are notoriously difficult to establish (Price et al., Citation2010). Future studies should focus on the effects of a feedback dialogue versus anonymity on learning outcomes, including expert ratings of feedback quality.

The recording of the feedback dialogue sessions may have pressured some students to be more positive toward their fellow students and toward the feedback process itself, especially since the student interviews took place directly after the feedback sessions. However, the qualitative results of the questionnaire still show that students in the experimental group perceived the added feedback dialogue as positive.

Another limitation is that we were unable to take the social network of students into account. We, therefore, did not know whether there were any friendships in the student groups which could have influenced the evaluation of their peers.

Finally, it is noteworthy that this study was conducted during a total lockdown due to the Covid-19 pandemic. Students never had in-person interactions, and the course was delivered entirely online. These factors may have affected group dynamics, and the inherently virtual context could also have influenced the perception of the peer feedback dialogue within our study. This limitation is simultaneously a strength, since more and more education is being delivered in virtual environments. This study can therefore be of use to educational practitioners who develop and teach fully online courses.

Implications

Although both peer feedback modes are suitable for educators to incorporate into their courses, anonymity may not always provide a safe environment and should be implemented with care to ensure an optimal feedback process. Incorporating a feedback dialogue in the peer feedback process allows students to ask for clarification or seek guidance on how to improve based on the feedback. The use of a feedback dialogue, along with feedback aids such as rubrics and training, can greatly enhance the uptake and effectiveness of peer feedback by students, especially when assessing their peers’ professional competencies. The results of this study can be of use to practitioners who teach online courses and want to optimize the peer feedback process.

Although we conclude that feedback dialogue is preferred over anonymity, more research is required to understand how feedback dialogues can contribute to a sense of safety and trust. Future research could explore different ways to incorporate a feedback dialogue and investigate their impact on perceived safety and trust, as well as the quality of feedback. For example, could written asynchronous dialogue provide a safe, practical and effective alternative to face-to-face synchronous dialogue? This can aid practitioners in deciding which feedback dialogue mode is most appropriate for their course context. Additionally, researchers can examine the effects of a peer feedback dialogue on expert ratings of the peer feedback and student performance.

Conclusion

This study investigated whether anonymity or feedback dialogue is preferred for enhancing students’ feelings of safety and trust when providing peer feedback on professional competencies. By comparing the two approaches, the results show that both anonymous peer feedback and non-anonymous dialogue-enhanced peer feedback are viable models for ensuring a safe environment for students to give each other peer feedback. Therefore, practitioners can use either of these approaches when implementing peer feedback. However, despite the absence of quantitative differences in self-reported feelings of trust and psychological safety between the two conditions, the feedback-dialogue approach seemed to enhance students’ positive experiences with peer feedback. The peer feedback dialogue led to clearer, more nuanced and more in-depth feedback on professional competencies. As such, it can help students make more effective use of the feedback. Hence, in implementing peer feedback activities, we recommend that educators explore identifiable peer feedback activities enhanced by feedback dialogues as viable alternatives to using anonymity for fostering safety and trust. Particularly when students are tasked with peer-reviewing professional competencies or other highly personal and complex aspects of learning, a face-to-face (online) feedback dialogue could facilitate a safe and more effective peer feedback process.

This study is a step toward addressing the gap in the literature on interpersonal factors that can hinder or promote effective peer feedback on professional competencies. Further research is required to determine the optimal conditions for effective peer feedback in a feedback dialogue.

Supplemental material

Supplementary information Manuscript Enhancing Trust_.docx


Acknowledgments

We thank Isabel Braadbaart for translating quotes from the student interviews. Thanks goes to Sonja van Scheijen and Salihanur Darici for their help in coding the peer feedback comments. We also want to thank Robin Straaijer for translating items from the questionnaire.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research received no specific grant from funding agencies in the public, commercial, or not-for-profit sectors. This work was supported by the Faculty of Science of the Vrije Universiteit Amsterdam, The Netherlands.

Notes on contributors

Mirella V. Jongsma

Mirella V. Jongsma is a PhD student of Innovations in Human Health and Life Sciences, Vrije Universiteit Amsterdam, The Netherlands. Her research interests include active learning, diversity and inclusion in education.

Danny J. Scholten

Danny J. Scholten is an assistant professor of Innovations in Human Health and Life Sciences at Vrije Universiteit Amsterdam, the Netherlands. His research interests include active learning approaches, such as peer feedback and reflection strategies, and teacher professionalization in STEM education.

Jorick Houtkamp

Jorick Houtkamp was a junior lecturer at Science, Business and Innovation, Vrije Universiteit Amsterdam, The Netherlands, and is currently employed at the Netherlands Enterprise Agency. His research interests include knowledge exchange, value-based entrepreneurial education, and experiential learning.

Martijn Meeter

Martijn Meeter is professor of Education Sciences, Vrije Universiteit Amsterdam, The Netherlands. His research interests include learning analytics, student motivation, and research into reading.

Jacqueline E. van Muijlwijk-Koezen

Jacqueline E. van Muijlwijk-Koezen is professor and group leader of Innovations in Human Health and Life Sciences, and Chief Education Officer at Vrije Universiteit Amsterdam, The Netherlands. Her teaching and learning research focuses on active learning in STEM education, Nature of Science (NoS) and teacher agency.

References

  • Belbin, R. M. (2010). Management teams: Why they succeed or fail (3rd ed.). Routledge.
  • Braadbaart, I., Vuuregge, A., Beekman, F., Scheijen, S., Muijlwijk-Koezen, J., & Scholten, D. (2023). Leveraging the ALACT reflection model to improve academic skills development in bachelor students: A case study. The Asian Conference on Education 2022: Official Conference Proceedings. https://doi.org/10.22492/issn.2186-5892.2023.76
  • Brislin, R. W. (1970). Back-translation for cross-cultural research. Journal of Cross-Cultural Psychology, 1(3), 185–216. https://doi.org/10.1177/135910457000100301
  • Carless, D. (2012). Trust and its role in facilitating dialogic feedback. In D. Boud & E. Molloy (Eds.), Feedback in higher and professional education: Understanding it and doing it well (pp. 90–103). Routledge. https://doi.org/10.4324/9780203074336
  • Chartered Management Institute. (2018). 21st century leaders. Building employability through higher education. https://www.managers.org.uk/knowledge-and-insights/research/building-employability-through-higher-education/
  • Cheng, K. H., & Tsai, C. C. (2012). Students’ interpersonal perspectives on, conceptions of and approaches to learning in online peer assessment. Australasian Journal of Educational Technology, 28(4), 599–618. https://doi.org/10.14742/ajet.830
  • Cottrell, S. (2001). Teaching study skills and supporting learning. Palgrave Macmillan.
  • Cronbach, L. J. (1951). Coefficient Alpha and the internal structure of tests. Psychometrika, 16(3), 297–334. https://doi.org/10.1007/BF02310555
  • Cruz, M. L., Saunders-Smits, G. N., & Groen, P. (2020). Evaluation of competency methods in engineering education: A systematic review. European Journal of Engineering Education, 45(5), 729–757. https://doi.org/10.1080/03043797.2019.1671810
  • Dannefer, E. F., Henson, L. C., Bierer, S. B., Grady-Weliky, T. A., Meldrum, S., Nofziger, A. C., Barclay, C., & Epstein, R. M. (2005). Peer assessment of professional competence. Medical Education, 39(7), 713–722. https://doi.org/10.1111/j.1365-2929.2005.02193.x
  • Dolce, V., Emanuel, F., Cisi, M., & Ghislieri, C. (2020). The soft skills of accounting graduates: Perceptions versus expectations. Accounting Education, 29(1), 57–76. https://doi.org/10.1080/09639284.2019.1697937
  • Dooley, L. M., & Bamford, N. J. (2018). Peer feedback on collaborative learning activities in veterinary education. Veterinary Sciences, 5(4), 90. https://doi.org/10.3390/vetsci5040090
  • Double, K. S., McGrane, J. A., & Hopfenbeck, T. N. (2020). The impact of peer assessment on academic performance: A meta-analysis of control group studies. Educational Psychology Review, 32(2), 481–509. https://doi.org/10.1007/s10648-019-09510-3
  • Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383. https://doi.org/10.2307/2666999
  • Er, E., Dimitriadis, Y., & Gašević, D. (2020). A collaborative learning approach to dialogic peer feedback: A theoretical framework. Assessment & Evaluation in Higher Education, 46(4), 586–600. https://doi.org/10.1080/02602938.2020.1786497
  • European Commission. (2018). EntreComp: The European entrepreneurship competence framework. Publications Office of the European Union. https://doi.org/10.2767/88978
  • Falchikov, N. (2003). Involving students in assessment. Psychology Learning & Teaching, 3(2), 102–108. https://doi.org/10.2304/plat.2003.3.2.102
  • Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287–322. https://doi.org/10.2307/1170785
  • FeedbackFruits (2021). FeedbackFruits Tool Suite (version 2.59) [Computer Software]. https://feedbackfruits.com/
  • Glazzard, J., & Stones, S. (2019). Student perceptions of feedback in higher education. International Journal of Learning, Teaching and Educational Research, 18(11), 38–52. https://doi.org/10.26803/ijlter.18.11.3
  • Guardado, M., & Shi, L. (2007). ESL students’ experiences of online peer feedback. Computers and Composition, 24(4), 443–461. https://doi.org/10.1016/j.compcom.2007.03.002
  • Hove, D. t., Jorgensen, T. D., & van der Ark, L. A. (2022). Updated guidelines on selecting an intraclass correlation coefficient for interrater reliability, with applications to incomplete observational designs. Psychological Methods. Advance online publication. https://doi.org/10.1037/met0000516
  • Howard, C., Barrett, A., & Frick, T. (2010). Anonymity to promote peer feedback: Pre-service teachers’ comments in asynchronous computer-mediated communication. Journal of Educational Computing Research, 43(1), 89–112. https://doi.org/10.2190/EC.43.1.f
  • Huisman, B., Saab, N., van den Broek, P., & van Driel, J. (2019). The impact of formative peer feedback on higher education students’ academic writing: A meta-analysis. Assessment & Evaluation in Higher Education, 44(6), 863–880. https://doi.org/10.1080/02602938.2018.1545896
  • Huisman, B., Saab, N., Van Driel, J., & Van Den Broek, P. (2020). A questionnaire to assess students’ beliefs about peer-feedback. Innovations in Education and Teaching International, 57(3), 328–338. https://doi.org/10.1080/14703297.2019.1630294
  • Jongsma, M. V., Scholten, D. J., van Muijlwijk-Koezen, J. E., & Meeter, M. (2022). Online versus offline peer feedback in higher education: A meta-analysis. Journal of Educational Computing Research, 61(2), 329–354. https://doi.org/10.1177/07356331221114181
  • Kenworthy, L., & Kielstra, P. (2015). Driving the skills agenda: Preparing students for the future. Economist Intelligence Unit. https://www.eiuperspectives.economist.com/sites/default/files/Drivingtheskillsagenda.pdf
  • Lejk, M., & Wyvill, M. (2002). Peer assessment of contributions to a group project: Student attitudes to holistic and category-based approaches. Assessment & Evaluation in Higher Education, 27(6), 569–577. https://doi.org/10.1080/0260293022000020327
  • Li, L. (2017). The role of anonymity in peer assessment. Assessment & Evaluation in Higher Education, 42(4), 645–656. https://doi.org/10.1080/02602938.2016.1174766
  • Lu, R., & Bol, L. (2007). A comparison of anonymous versus identifiable e-peer review on college student writing performance and the extent of critical feedback. Journal of Interactive Online Learning, 6(2), 100–115.
  • McLaughlin, J. E., Minshew, L. M., Gonzalez, D., Lamb, K., Klus, N. J., Aubé, J., Cox, W., & Brouwer, K. L. R. (2019). Can they imagine the future? A qualitative study exploring the skills employers seek in pharmaceutical sciences doctoral graduates. PLoS One, 14(9), e0222422. https://doi.org/10.1371/journal.pone.0222422
  • Meijer, H., Hoekstra, R., Brouwer, J., & Strijbos, J. W. (2020). Unfolding collaborative learning assessment literacy: A reflection on current assessment methods in higher education. Assessment & Evaluation in Higher Education, 45(8), 1222–1240. https://doi.org/10.1080/02602938.2020.1729696
  • Nelwati, Abdullah, K. L., Chong, M. C., & McKenna, L. (2020). The effect of peer learning on professional competence development among Indonesian undergraduate nursing students: A quasi-experimental study. Journal of Professional Nursing: Official Journal of the American Association of Colleges of Nursing, 36(6), 477–483. https://doi.org/10.1016/j.profnurs.2020.03.008
  • Nicol, D. (2010). From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501–517. https://doi.org/10.1080/02602931003786559
  • Nicol, D. J., & Macfarlane Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090
  • Panadero, E., & Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assessment & Evaluation in Higher Education, 44(8), 1253–1278. https://doi.org/10.1080/02602938.2019.1600186
  • Pitt, E., & Winstone, N. (2018). The impact of anonymous marking on students’ perceptions of fairness, feedback and relationships with lecturers. Assessment & Evaluation in Higher Education, 43(7), 1183–1193. https://doi.org/10.1080/02602938.2018.1437594
  • Price, M., Handley, K., Millar, J., & O'Donovan, B. (2010). Feedback: All that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35(3), 277–289. https://doi.org/10.1080/02602930903541007
  • Raes, A., Vanderhoven, E., & Schellens, T. (2015). Increasing anonymity in peer assessment by using classroom response technology within face-to-face higher education. Studies in Higher Education, 40(1), 178–193. https://doi.org/10.1080/03075079.2013.823930
  • Rotsaert, T., Panadero, E., & Schellens, T. (2018). Anonymity as an instructional scaffold in peer assessment: Its effects on peer feedback quality and evolution in students’ perceptions about peer assessment skills. European Journal of Psychology of Education, 33(1), 75–99. https://doi.org/10.1007/s10212-017-0339-8
  • Schillings, M., Roebertsen, H., Savelberg, H., van Dijk, A., & Dolmans, D. (2021). Improving the understanding of written peer feedback through face-to-face peer dialogue: Students’ perspective. Higher Education Research & Development, 40(5), 1100–1116. https://doi.org/10.1080/07294360.2020.1798889
  • Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249–276. https://doi.org/10.3102/00346543068003249
  • Topping, K. (2017). Peer assessment: Learning by judging and discussing the work of other learners. Interdisciplinary Education and Psychology, 1(1), 1–17. https://doi.org/10.31532/interdiscipeducpsychol.1.1.007
  • van den Berg, I., Admiraal, W., & Pilot, A. (2006). Peer assessment in university teaching: Evaluating seven course designs. Assessment & Evaluation in Higher Education, 31(1), 19–36. https://doi.org/10.1080/02602930500262346
  • van Gennip, N. A. E., Segers, M. S. R., & Tillema, H. H. (2009). Peer assessment for learning from a social perspective: The influence of interpersonal variables and structural features. Educational Research Review, 4(1), 41–54. https://doi.org/10.1016/j.edurev.2008.11.002
  • van Gennip, N. A. E., Segers, M. S. R., & Tillema, H. H. (2010). Peer assessment as a collaborative learning activity: The role of interpersonal variables and conceptions. Learning and Instruction, 20(4), 280–290. https://doi.org/10.1016/j.learninstruc.2009.08.010
  • van Heerden, M., & Bharuthram, S. (2021). Knowing me, knowing you: The effects of peer familiarity on receiving peer feedback for undergraduate student writers. Assessment & Evaluation in Higher Education, 46(8), 1191–1201. https://doi.org/10.1080/02602938.2020.1863910
  • Vollstedt, M., & Rezat, S. (2019). An introduction to grounded theory with a special focus on axial coding and the coding paradigm. Springer International Publishing. https://doi.org/10.1007/978-3-030-15636-7_4
  • Williams, M., & Moser, T. (2019). The art of coding and thematic exploration in qualitative research. International Management Review, 15(1), 45–55.
  • Wood, J. (2022). Making peer feedback work: The contribution of technology-mediated dialogic peer feedback to feedback uptake and literacy. Assessment & Evaluation in Higher Education, 47(3), 327–346. https://doi.org/10.1080/02602938.2021.1914544
  • World Medical Association. (2013). World Medical Association Declaration of Helsinki: Ethical principles for medical research involving human subjects. JAMA, 310(20), 2191–2194. https://doi.org/10.1001/jama.2013.281053
  • Yang, M., & Carless, D. (2013). The feedback triangle and the enhancement of dialogic feedback processes. Teaching in Higher Education, 18(3), 285–297. https://doi.org/10.1080/13562517.2012.719154
  • Zhu, Q., & Carless, D. (2018). Dialogue within peer feedback processes: Clarification and negotiation of meaning. Higher Education Research & Development, 37(4), 883–897. https://doi.org/10.1080/07294360.2018.1446417