Feature Article

When students’ words hurt: 12 tips for helping faculty receive and respond constructively to student evaluations of teaching

Article: 2154768 | Received 09 Aug 2022, Accepted 30 Nov 2022, Published online: 06 Dec 2022

ABSTRACT

Student evaluations of curricular experiences and instructors are employed by institutions to obtain feedback and guide improvement. However, to be effective, evaluations must prompt faculty action. Unfortunately, evaluative comments that engender strong reactions may undermine the process by hindering innovation and improvement. The literature suggests that faculty interpret evaluation feedback as a judgment not just of their teaching ability but of their personal and professional identity. In this context, critical evaluations, even when constructively worded, can result in disappointment, hurt, and shame. The COVID-19 pandemic has challenged institutions and faculty to repeatedly adapt curricula and educational practices, heightening concerns about faculty burnout. Under these conditions, the risk of ‘words that hurt’ is higher than ever. This article offers guidance for faculty and institutions to support effective responses to critical feedback and ameliorate the counterproductive effects of learner evaluations.

Introduction

Student evaluations of teaching (SETs) are commonly used as part of institutional, course, or clerkship evaluations to guide improvement [Citation1,Citation2]. For SETs to be effective, they must prompt faculty action that involves reflection, professional learning and change [Citation3]. Unfortunately, these actions by faculty may be undermined when SETs engender strong reactions that hinder improvement steps [Citation4,Citation5].

SETs can lead to positive or negative emotions depending on both their content and the faculty member’s experience. SETs can be categorized as critical versus reinforcing according to whether they find fault with or compliment teaching, and as constructive or non-constructive according to whether they facilitate improvement through a non-judgmental, behavior-based framing. Non-constructive, critical SETs fail to elucidate actionable improvements and in rare instances include abusive (0.04%) or unprofessional (0.15%) narrative comments [Citation6]. Critical, constructive SETs are instrumental for improvement but may still lead to negative faculty emotions, including feelings of disappointment, hurt, and shame [Citation4].

Recent events have added stress to faculty’s roles, which may further undermine their productive response to SETs. The COVID-19 pandemic has added anxiety for faculty with clinical roles [Citation7] at the same time that faculty and institutions have been challenged to adapt and readapt their curricula and educational practices to suit a range of remote and hybrid formats [Citation8,Citation9]. Simultaneously, the urgent need to develop anti-racist curricula that combat healthcare disparities has required innovation around emotionally charged content. Amid increasing concern about faculty burnout, the risk of ‘words that hurt’ is higher than ever.

There is little guidance in medical education for faculty to navigate student feedback that hurts. Such guidance has the potential to support faculty efficacy, promote a greater appreciation for the faculty experience, and enable school leadership to create frameworks that enhance quality improvement. To build feedback literacy [the meta-feedback skills that enable getting the most out of feedback; Citation10] and to encourage ongoing productive responses to critical feedback, we outline steps to 1) facilitate a shared understanding of the role of learner feedback in the educational endeavor (Tips 1–3), 2) foster among faculty a growth mindset and productive response to words that hurt (Tips 4–8), and 3) provide institutions a framework for support and oversight that enables long-term faculty educator improvement and retention (Tips 9–12).

Tips for both faculty and institutions

Tip 1: appreciate both the complexity of feedback from SETs and its purpose of guiding improvement

Feedback is not just an exchange of information between teacher and learner but an inherently complex educational phenomenon [Citation11] which can be affected by many factors (Figure 1). Both the giving and receiving of feedback are part of a social process influenced by cultural values, personal beliefs, and expectations. At times, SETs may reflect learner dissatisfaction with an unmet expectation that is not related to the specific educational activity. Faculty reconciliation and assimilation of critical feedback with their own individual views are influenced by their professional culture, emotions, reflection, and personal self-efficacy [Citation3,Citation12]. Faculty and institutions should examine these factors as they apply within their working environment and reflect on their interconnectedness before developing a plan for learning and practice change [Citation13].

Figure 1. A concept map showing the impact of students’ evaluations of teaching (SETs) on institutions and faculty as well as factors affecting SETs.

Despite the complexity of feedback generally, all stakeholders (learners, faculty, and institutions) should appreciate that feedback in the form of SETs is a critical input to continuous quality improvement. Given learners’ proximity to and involvement in teaching events, they are ideal partners for faculty in identifying points of correction, missing or redundant topics, or issues that may hinder learning. In the vein of participatory evaluation approaches, which involve a program’s stakeholders in the evaluation process, evaluation instruments should be developed with the intent to partner with learners and with the goal of enabling improvements rooted in the educational mission [see Tip 9; Citation2, Citation14, Citation15].

Feedback should not be a judgment of the faculty member’s professional identity, personal qualities, or worth [Citation16]. Faculty should engage in perspective-taking [Citation17] and appreciate the distinction between their intent and the impact on learners. For example, if a teacher implements a flipped classroom approach with the intent to promote self-directed learning, but learners perceive it as a lack of guidance and direction, the result is a misalignment between the teacher’s intent and the actual impact. Imagining the learner’s perspective helps faculty and institutions model the growth mindset we ask of our learners and capitalize on the opportunity to promote a feedback culture across the organization, one in which people improve through persistent hard work with input and mentoring from others [Citation18,Citation19].

Tip 2: understand the limitations of SETs

Faculty and institutions should be familiar with the limitations and appropriate uses of SETs. Limitations include threats to validity, sources of bias, conflation with learner satisfaction, and other factors related to learner context and challenges [Citation1,Citation20–23]. Particularly for faculty with less contact with learners, poor learner recollection may undermine the accuracy of comments; in one study, students completed required evaluations, including narrative comments, for a fictional faculty member [Citation24]. Attention to biases related to gender, under-represented backgrounds, and part-time status is particularly important when SETs are used as a factor in advancement and promotion [Citation1,Citation20,Citation22,Citation25]. An observation that part-time teachers receive more critical comments than full-time teachers, for example, may warrant further investigation: it could reflect learner evaluation fatigue or suggest a threshold number of teaching hours below which evaluation is not productive. While SETs do have the potential to reflect learner outcomes [Citation1,Citation21], faculty and institutions should not interpret learner satisfaction as the sole marker of a faculty member’s pedagogical competence and should be aware of the impact of other variables on satisfaction, such as factors related to innovation (Tip 3), learning context (Tip 6), and even faculty leniency. There is literature to suggest that faculty who challenge their learners or provide needed constructive feedback may receive poorer evaluations [Citation26]. Conversely, faculty perceived by learners as outstanding educators do not necessarily produce better learning outcomes [Citation27].

Tip 3: recognize and account for the challenges of innovation

Curricular innovations deserve special consideration in the design and review of SETs because they are likely to generate critical feedback in the context of change. Implementing new content or practices involves a series of processes that allow the assimilation of an educational activity into an organization [Citation28] and is a critical aspect of the continuous curricular improvement necessary for any educational program. However, the introduction of an innovation may create growing pains and implementation stutters that lead to learner frustration and, in turn, harsh evaluations. The faculty and institution may overestimate learner readiness for new materials and tasks during the implementation phase of a new curriculum or, alternatively, the innovation may simply miss the mark.

While faculty generally recognized and embraced the need for innovations related to COVID-19 and anti-racism, these innovations stretched them to work rapidly in areas outside their prior expertise [Citation9]. In that setting, critical learner evaluations may be related to implementation challenges, and special consideration needs to be given to developing a thoughtful and rigorous evaluation process as well as systems of accountability [Citation29]. Particularly during implementation, learner feedback provides an important, unique, and critical perspective on aspects of the learning environment and hidden curriculum, enabling iterative improvement and the ultimate success of the innovation [Citation30].

Faculty and institutions can take several steps to help ameliorate the impact of implementation challenges on learners and improve the evaluation process: 1) provide clear instructions and set appropriate expectations for new content and methods, 2) explain the reason for the change and why it will help learning, 3) involve learners in the design and planning of the activity to foster learner agency, 4) create opportunities for early feedback that encourage students to separate their evaluation of the faculty from their evaluation of the innovation, and 5) allow enough time for learners to adjust. In addition to these actions, institutions should anticipate the potential for critical feedback from learners during and following innovation and adjust expectations so that this feedback is a valued part of the early phases of adoption. Failure to embrace constructive critical feedback during implementation could otherwise limit positive educational reform or lead institutions to abandon initiatives that would have become valuable in time.

Tips for faculty

Tip 4: observe and reflect on one’s immediate response to feedback

Faculty should understand that it is common to experience strong emotional reactions to critical feedback in SETs, including shame, guilt, and anger [Citation16,Citation31,Citation32], and should develop a mindfulness-based approach to examining these reactions. A mindful approach to feedback includes openness and curiosity about the reaction, perspective-taking, and letting go of judgment [Citation17,Citation33]; it can reduce faculty fragility, in which marked discomfort and defensiveness impede appropriate and productive actions and may lead to disengagement [Citation34,Citation35]. Since most SETs are in written form, faculty can first prepare by controlling the time and place in which they review them. Additional preparation involves assessing whether one is experiencing other feelings that might interfere with the ability to receive the feedback effectively, such as being hungry, angry, lonely, or tired [Citation36]. When faculty take note of an emotional response, they may reflect on the trigger: was it a truth trigger, a relationship trigger, or an identity trigger [Citation37]? A truth trigger arises when a piece of feedback feels discountable because it seems untrue or inaccurate. Relationship triggers lead to discounting or defensiveness because of the perceived intent or characteristics (e.g., trustworthiness or credibility) of the feedback giver. Identity triggers are threatening or discordant with how one sees oneself. In reflecting on feedback, awareness of these triggers enables faculty to see past them to the potential opportunity for growth. As has been described for learner responses to feedback, managing affect and maintaining emotional equilibrium in the face of critical feedback supports a mindset of striving for continuous improvement [Citation38].

Tip 5: don’t overlook the positive

In analyzing these data, faculty should be aware of the significance they give to individual pieces of feedback and the impact this may have on their development actions [Citation4]. Some faculty have a tendency to attribute disproportionate weight to, or fixate on, critical comments [Citation16]. Faculty may exhibit a sense of impostership, giving greater significance to critical comments while discounting positive ones. Feelings of impostership can lock faculty into a static state in which they are less able to take chances, draw meaning from experiences, or continue to improve [Citation39]. In addition, faculty may then introduce unjustified changes into their teaching merely to please students [Citation40].

Focusing on the negative not only results in a skewed perspective on one’s teaching but also represents a missed opportunity to reflect on and build upon strengths. Reinforcing feedback serves to illuminate behaviors that are working well, raising faculty awareness of effective methods and allowing deliberate continuation of these behaviors. Reflecting on reinforcing feedback can also help faculty maintain a positive outlook and a continued growth mindset that enables more productive responses when constructive points arise, and it may even illuminate a strategy for addressing the area of necessary growth [Citation17]. Finally, reinforcing feedback that directly contradicts critical, constructive feedback may point to an area that needs further exploration to understand varied student experiences and guide effective change [Citation4].

Tip 6: examine the learning context

Faculty should be aware that aspects of the learning context, including curricular timing and sequencing, environment (for example, clinical or classroom-based), and even the broader sociopolitical climate may influence learning experiences and SETs. When critical evaluations cite instructional misalignment or problematic timing, faculty should consider how the content relates to the broader curriculum. Has there been a change in a prior curricular component that has rendered the session more challenging (e.g., movement of a session that previously functioned as an introduction to the topic)? When learners question the relevance of a topic, consideration should also be given to the timing and location of the evaluation and whether learners are more likely to value the topic at a later stage [Citation41] or in a different setting (e.g., evaluation of a health systems science curriculum after starting clinical rotations). In other instances, the topic itself may be a factor. Is it innately challenging or polarizing? Is the challenging nature of the topic experienced disproportionately by a subset of students? For example, as mentioned in Tip 3, in the case of anti-racist curricular innovations, content will likely be experienced differently by persons of color than by persons who are white [Citation29]. In that case, discovering, exploring, and understanding the lack of equity in the learning experience will be critical to guiding effective interventions and improvements.

Tip 7: triangulate additional feedback data to make sound judgements about your work

Before taking action towards improvement, faculty should gather and triangulate evaluations of teaching with additional feedback data to make sound judgements about their work. Given the limitations of SETs, faculty benefit from additional data about the quality of their work. These data sources may include learning outcomes from assessments, faculty peer observation or coaching [Citation42,Citation43], or informal discussion with a colleague or mentor who can provide an additional perspective. The latter is not dependent on the availability of formal observation or coaching programs and can encourage socialization among teachers. Because teaching is often a solitary activity, connections with colleagues have myriad potential benefits for educators, enabling the formation of communities of practice, defined as groups of people who interact on an ongoing basis to share concerns and deepen their knowledge and expertise in common practices [Citation44,Citation45]. Finally, faculty should include in the available data their own self-assessment and personal development goals. Does the feedback relate to known areas of personal growth, or to areas within the course that are in need of further development? Engaging local experts or those working in similar content areas for consultation, and learning from their experience, may be helpful.

Tip 8: take action with a focus on learning and learner agency

After taking time to reflect, contextualize the feedback, and gather available data to make sound judgements, faculty should close the loop on the feedback with a focus on learning and learner agency. When the next steps are clear, communicating the actions being taken may be sufficient. When improvement steps are not immediately clear, an invitation for ongoing conversation is critical, and SETs can serve as a reminder that the educational effort benefits from productive communication. Conversations should focus learners on their learning, shifting them away from thinking about ‘likes’ and ‘dislikes’, and should steer educators away from blaming learners for their dissatisfaction. A focus on learning also fosters a partnership with learners to improve the educational activity and better promotes achievement of key knowledge and skills. Moreover, partnership enhances learner agency, defined as learners’ capacity to act purposefully to foster change [Citation46]. By building learner agency, faculty contribute to learners’ identities as changemakers, cultivate engagement in the learning endeavor, build self-efficacy skills, and enhance learning [Citation47]. Over the past decade, learner-led collaboration has increasingly become a core mechanism for curricular revision and innovation, with some institutions formalizing a role for learner representatives in the continuous course review and improvement process [Citation48].

Tips for institutions

Tip 9: provide active oversight of the evaluation process

Institutions or programs should provide active oversight of SETs that fosters alignment with the educational mission and includes mechanisms to learn from critical feedback. The standardized use of a set of core questions can facilitate consistency in data collection and allow for comparative data across courses, rotations, or curricular experiences. Core questions should reflect the value the institution places on certain teaching behaviors and learning outcomes. Attention to the wording of these questions can help focus learner responses on these behaviors and outcomes rather than on general satisfaction or a teacher’s personal characteristics, which are more susceptible to bias [Citation27]. For instance, learners could rate the consistency between stated objectives and the content taught by a lecturer, a lab instructor’s organization of lab content, or a small group facilitator’s ability to guide group discussion and provide feedback.

Consideration should be given to the use of anonymous, confidential, or signed evaluations as well as to the timing of collection. Anonymous evaluations facilitate honest upward feedback [Citation49] from learners who might fear reprisal, but they may also enable comments that feel hurtful [Citation50,Citation51]. Non-anonymous feedback, however, can lead to inflated scores and the loss of critical, constructive feedback [Citation50,Citation52–54]. A mechanism to provide feedback during the course can enable just-in-time adjustments when appropriate. Whichever approach is chosen, it should be accompanied by learner and faculty development that fosters a culture of openness to feedback.

Finally, institutions and programs should actively monitor and review learner comments and have a process for identifying and responding to critical comments that supports both faculty and students. If individual critical comments go unheeded in favor of the ‘majority’ voice, the institution risks losing valuable feedback from learners in minoritized or historically oppressed groups. Institutional processes for responding should include consideration of how critical comments are communicated to faculty so as to provide a supportive environment for reflection (Tip 4). Ultimately, feedback from SETs should help a faculty member grow as a teacher.

Tip 10: develop learners’ ability to provide effective feedback

Institutions should develop learners’ abilities to complete evaluations in a manner that provides effective feedback to teachers. Learners may not understand the purpose of SETs or their own role in continuous quality improvement. They may not consider the impact their comments can have when read by the individual under evaluation (versus a third party) and often do not realize that their evaluations have consequences for faculty promotion. Just as learners are expected to demonstrate competency in practice-based learning and improvement, which requires feedback literacy when responding to feedback, they should demonstrate feedback literacy when providing it. Learners should receive instruction on how to provide specific, skills-based feedback that is professional and honest and helps faculty grow [Citation11]. Creating a culture of bidirectional feedback [Citation35] helps reduce hierarchy while giving faculty who are receptive to feedback an opportunity to learn and grow [Citation55–57].

Institutions and programs may take a variety of approaches to promote feedback literacy. At a minimum, they can provide basic guidelines for constructive and professional comments and train learners in effective ways to express their concerns so that they result in change. Some institutions specifically train a core group of learners to be the designated evaluators for their cohort, ensuring valid interpretations of the rating tool, rater reliability, and constructive comments [Citation33,Citation48,Citation58]. Other institutions engage a learner leadership group in peer review of comments to help improve the quality and professionalism of student comments. Still other institutions have moved away from anonymous or confidential evaluations to signed evaluations to emphasize to learners that they ought not to put in writing anything they would be unwilling to say directly to a faculty member.

Tip 11: develop and support faculty to respond constructively to feedback and provide faculty with resources to help them improve

Institutions should implement robust faculty development programs that foster constructive responses to feedback (see Tips 4–8) [Citation42], improve teaching skills (i.e., the subject of the feedback), and create a growth mindset for continuous development. Care must be taken not to create a punitive culture around critical feedback. Faculty should feel supported in acknowledging difficulties or the need for help or additional skills, and should feel capable of effecting change. Institutions must be sensitive to the challenges of developing faculty to teach in what may be perceived as drastically different ways (e.g., virtual strategies; anti-racist approaches to content and content delivery) while maintaining the accountability demanded of the work. By supporting faculty in their own learning actions (through connections to internal or external resources, such as those within a professional society), the institution can support a reflective practice amongst its educators that fosters improvement [Citation59]. At the same time, institutions should recognize that some faculty may be demoralized by critical evaluations and feel that their identity as an educator is threatened. On rare occasions, both the institution and a faculty member may realize that the faculty member needs to step out of their teaching role. Institutions should have compassionate off-ramps for these circumstances, when teaching is no longer a good fit.

Tip 12: provide faculty with other evidence of teaching outcomes and implement a holistic system for evaluation of teaching

Just as faculty can gather and triangulate data regarding their teaching (Tip 7), institutions can support them by putting processes in place that provide additional evidence of teaching quality beyond learner satisfaction. Decreasing the weight placed upon learner satisfaction ratings can encourage a more holistic evaluation of a faculty member’s teaching contributions. For instance, institutions have data on learning outcomes from local and often national assessments with benchmarking data, which could be provided to faculty. Institutions can support peer observation programs that help faculty collect additional feedback and improve. By allowing the submission of an educator’s portfolio for promotion, institutions can enable faculty to communicate their educational philosophy and make their teaching visible [Citation60]. The educator’s portfolio also allows faculty to provide context for their evaluations by including reflections on their intentional development plan. Expanding teaching data beyond learner evaluations and enabling faculty to articulate their teaching narrative helps ensure a more comprehensive picture of a faculty member’s teaching ability and quality while fostering their overall career development.

Discussion

Learner evaluations of teaching can inform improvement when combined with steps that enable a productive response to critical comments. While studies have shown that SETs on the whole contain more positive, reinforcing comments than critical ones, critical comments may engender negative emotional responses from faculty that have the potential to hinder teaching development. In this time of rapid change in medical education due to the COVID-19 pandemic and amplified calls for anti-racism, creating a shared understanding of the role of feedback in guiding improvement is all the more important. Students, faculty, and institutional leadership should understand the benefits and limitations of SETs, the potential for learning challenges during implementation processes, and the steps necessary to enable productive responses to learner feedback. Faculty should also be supported to develop skills for reflecting on their emotional responses to feedback that hurts, approaches to triangulating sources of data on their teaching, and steps to enable a timely and appropriate response. Finally, institutions should provide oversight of the evaluation process, develop learners and faculty to engage effectively in feedback, and provide faculty with evidence of their teaching outcomes that enables them to share an educational portfolio for advancement. This guide provides a framework for faculty and institutions to manage the counterproductive effects of learner evaluations and to support key improvement actions even in the face of words that hurt.

Acknowledgments

The authors would like to thank the resilient faculty and learners from our various institutions who continue to engage in the necessary partnership to improve medical education for the benefit of our patients.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The author(s) reported that there is no funding associated with the work featured in this article.

References

  • Hammonds F, Mariano GJ, Ammons G, et al. Student evaluations of teaching: improving teaching quality in higher education. Perspectives: Policy Pract in Higher Educ. 2017;21:26–8.
  • Palmer S. The performance of a student evaluation of teaching system. Assess Eval Higher Educ. 2012;37(8):975–985.
  • Sargeant JM, Mann KV, Van Der Vleuten CP, et al. Reflection: a link between receiving and using assessment feedback. Adv Health Sci Educ. 2009;14(3):399–410.
  • Moore S, Kuol N. Students evaluating teachers: exploring the importance of faculty reaction to feedback on teaching. Teach Higher Educ. 2005;10(1):57–73.
  • Wong WY, Moni K. Teachers’ perceptions of and responses to student evaluation of teaching: purposes and uses in clinical education. Assess Eval Higher Educ. 2014;39(4):397–411.
  • Tucker B. Student evaluation surveys: anonymous comments that offend or are unprofessional. Higher Educ. 2014 Sep 1;68(3):347–358.
  • Chou CL. How COVID-19 disrupts—and enhances—my clinical work. J Patient Exp. 2020;7(2):144–145.
  • Chesak SS, Perlman AI, Gill PR, et al. Strategies for resiliency of medical staff during COVID-19. Mayo Clin Proc. 2020;95(9):S56–S59.
  • Herrmann-Werner A, Erschens R, Zipfel S, et al. Medical education in times of COVID-19: survey on teachers’ perspectives from a German medical faculty. GMS J Med Educ. 2021;38(5):Doc93.
  • Quigley D. When I say … feedback literacy. Med Educ. 2021;55(10):1121–1122.
  • Lefroy J, Watling C, Teunissen PW, et al. Guidelines: the do’s, don’ts and don’t knows of feedback for clinical education. Perspect Med Educ. 2015;4(6):284–299.
  • Sargeant J, Mann K, van der Vleuten C, et al. “Directed” self‐assessment: practice and feedback within a social context. J Contin Educ Health Prof. 2008;28(1):47–54.
  • Cutrer WB, Miller B, Pusic MV, et al. Fostering the development of master adaptive learners: a conceptual model to guide skill acquisition in medical education. Acad Med. 2017;92(1):70–75.
  • Alok K. Student evaluation of teaching: an instrument and a development process. Int J Teach Learn Higher Educ. 2011;23(2):226–235.
  • Stufflebeam DL, Coryn CJ. Evaluation theory, models, and applications. 2nd ed. 2014. Chapter 9.
  • Arthur L. From performativity to professionalism: lecturers’ responses to student feedback. Teach Higher Educ. 2009;14(4):441–454.
  • Ryznar E, Levine RB. Twelve tips for mindful teaching and learning in medical education. Med Teach. 2022;44:249–256.
  • Dweck C. Carol Dweck revisits the growth mindset. Educ Week. 2015;35(5):20–24.
  • Molloy E, Boud D, Henderson M. Developing a learning-centred framework for feedback literacy. Assess Eval Higher Educ. 2020;45(4):527–540.
  • Boring A, Ottoboni K, Stark P. Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Res. 2016;0:1–11. doi:10.14293/S2199-1006.1.SOR-EDU.AETBZC.v1.
  • Cohen PA. Student ratings of instruction and student achievement: a meta-analysis of multisection validity studies. Rev Educ Res. 1981;51(3):281–309.
  • Kreitzer RJ, Sweet-Cushman J. Evaluating student evaluations of teaching: a review of measurement and equity bias in SETs and recommendations for ethical reform. J Acad Ethics. 2022;20(1):73–84.
  • Spooren P, Brockx B, Mortelmans D. On the validity of student evaluation of teaching. Rev Educ Res. 2013;83(4):598–642.
  • Uijtdehaage S, O’Neal C. A curious case of the phantom professor: mindless teaching evaluations by medical students. Med Educ. 2015;49(9):928–932.
  • Heffernan T. Sexism, racism, prejudice, and bias: a literature review and synthesis of research surrounding student evaluations of courses and teaching. Assess Eval Higher Educ. 2021;47(1):1–11.
  • Marsh HW, Roche LA. Effects of grading leniency and low workload on students’ evaluations of teaching: popular myth, bias, validity, or innocent bystanders? J Educ Psychol. 2000;92(1):202–228.
  • Uttl B, White CA, Gonzalez DW. Meta-analysis of faculty’s teaching effectiveness: student evaluation of teaching ratings and student learning are not related. Stud in Educ Eval. 2017 Sep 1;54:22–42.
  • Davidoff F. Focus on performance: the 21st century revolution in medical education. Mens Sana Monogr. 2008;6(1):29–40.
  • Ona FF, Amutah-Onukagha NN, Asemamaw R, et al. Struggles and tensions in antiracism education in medical school: lessons learned. Acad Med. 2020;95(12S):S163–S168.
  • Burk-Rafel J, Jones RL, Farlow JL. Engaging learners to advance medical education. Acad Med. 2017;92(4):437–440.
  • Bynum WE IV, Artino AR Jr. Who am I, and who do I strive to be? Applying a theory of self-conscious emotions to medical education. Acad Med. 2018;93(6):874–880.
  • Cain J, Romanelli F, Smith KM. Academic entitlement in pharmacy education. Am J Pharm Educ. 2012;76(10):189.
  • Scott KW, Callahan DG, Chen JJ, et al. Fostering student–faculty partnerships for continuous curricular improvement in undergraduate medical Education. Acad Med. 2019;94(7):996–1001.
  • DiAngelo RJ. White fragility: why It’s so hard for white people to talk about racism. Boston: Beacon Press; 2018.
  • Dudek NL, Dojeiji S, Day K, et al. Feedback to supervisors. Acad Med. 2016;91(9):1305–1312.
  • Ragau S, Hitchcock R, Craft J, et al. Using the HALT model in an exploratory quality improvement initiative to reduce medication errors. Br J Nurs. 2018 Dec 13;27(22):1330–1335.
  • Stone D, Heen S. Thanks for the feedback: the science and art of receiving feedback well. Penguin; 2015.
  • Carless D, Boud D. The development of student feedback literacy: enabling uptake of feedback. Assess Eval Higher Educ. 2018;43(8):1315–1325.
  • Brems C, Baldwin MR, Davis L, et al. The imposter syndrome as related to teaching evaluations and advising relationships of university faculty members. J Higher Educ. 1994;65(2):183.
  • Flodén J. The impact of student feedback on teaching in higher education. Assess Eval Higher Educ. 2017;42(7):1054–1068.
  • Koens F, Mann KV, Custers EJ, et al. Analysing the concept of context in medical education. Med Educ. 2005;39(12):1243–1249.
  • Esterhazy R, De Lange T, Bastiansen S, et al. Moving beyond peer review of teaching: a conceptual framework for collegial faculty development. Rev Educ Res. 2021;91(2):237–271.
  • O’Keefe M, Lecouteur A, Miller J, et al. The colleague development program: a multidisciplinary program of peer observation partnerships. Med Teach. 2009;31(12):1060–1065.
  • Abigail LKM. Do communities of practice enhance faculty development? Health Professions Educ. 2016;2(2):61–74.
  • Farnsworth V, Kleanthous I, Wenger-Trayner E. Communities of practice as a social theory of learning: a conversation with Etienne Wenger. Br J Educ Stud. 2016;64(2):139–160.
  • Nieminen JH, Tai J, Boud D, et al. Student agency in feedback: beyond the individual. Assess Eval Higher Educ. 2022;47(1):95–108.
  • Stenalt MH, Lassesen B. Does student agency benefit student learning? A systematic review of higher education research. Assess Eval Higher Educ. 2022;47(5):653–669.
  • Kumar P, Pickering CM, Atta L, et al. Student curriculum review team, 8 years later: where we stand and opportunities for growth. Med Teach. 2021;43(3):314–319.
  • Walker AG, Smither JW. A five-year study of upward feedback: what managers do with their results matters. Personnel Psychol. 1999;52(2):393–423.
  • Afonso NM, Cardozo LJ, Mascarenhas OA, et al. Are anonymous evaluations a better assessment of faculty teaching performance? A comparative analysis of open and anonymous evaluation processes. Fam Med. 2005 Jan 1;37(1):43–47.
  • Daberkow DW, Hilton C, Sanders CV, et al. Faculty evaluations by medicine residents using known versus anonymous systems. Med Educ Online. 2005;10(1):4380.
  • Pelsang RE, Smith WL. Comparison of anonymous student ballots with student debriefing for faculty evaluations. Med Educ. 2000;34(6):465–467.
  • Olvet DM, Willey JM, Bird JB, et al. Third year medical students impersonalize and hedge when providing negative upward feedback to clinical faculty. Med Teach. 2021;43(6):700–708.
  • Lindahl MW, Unger ML. Cruelty in student teaching evaluations. College Teach. 2010;58(3):71–76.
  • Fluit C, Bolhuis S, Klaassen T, et al. Residents provide feedback to their clinical teachers: reflection through dialogue. Med Teach. 2013;35(9):e1485–e1492. doi:10.3109/0142159X.2013.785631.
  • Menachery EP, Wright SM, Howell EE, et al. Physician-teacher characteristics associated with learner-centered teaching skills. Med Teach. 2008;30(5):e137–e144.
  • Van Wyk J, Mclean M. Maximizing the value of feedback for individual facilitator and faculty development in a problem-based learning curriculum. Med Teach. 2007;29(1):e26–e31.
  • Odera EL. Capturing the Added value of participatory evaluation. Am J Eval. 2021;42(2):201–220.
  • Mandouit L. Using student feedback to improve teaching. Educ Action Res. 2018;26(5):755–769.
  • Shinkai K, Chen C (Amy), Schwartz BS, et al. Rethinking the educator portfolio. Acad Med. 2018;93(7):1024–1028.