
Residents provide feedback to their clinical teachers: Reflection through dialogue

Pages e1485-e1492 | Published online: 22 Apr 2013

Abstract

Background: Physicians play a crucial role in teaching residents in clinical practice. Feedback on their teaching performance to support this role needs to be provided in a carefully designed and constructive way.

Aims: We investigated a system for evaluating supervisors’ clinical teaching and providing them with formative feedback.

Method: In a design-based research approach, the ‘Evaluation and Feedback For Effective Clinical Teaching System’ (EFFECT-S) was examined by conducting semi-structured interviews with residents and supervisors of five departments in five different hospitals about the conditions for feedback, its acceptance, and its effects. Interviews were analysed by three researchers, using qualitative research software (ATLAS-ti).

Results: The evaluation of EFFECT-S supported the principles and characteristics of the design, and all steps of EFFECT-S appeared necessary. A new step, team evaluation, was added. Supervisors perceived the feedback as instructive; residents felt capable of providing feedback. Creating safety and honesty requires different actions for residents and for supervisors. Outcomes included awareness of clinical teaching, residents learning feedback skills, reduced hierarchy and an improved learning climate.

Conclusions: EFFECT-S appeared useful for evaluating supervisors. The key mechanism was creating a safe environment in which residents could provide honest and constructive feedback. Residents learned to provide feedback, a skill that is part of the CanMEDS and ACGME competencies of medical education programmes.

Introduction

Physicians play a crucial role in teaching residents in the clinical workplace. They can be supported to do so effectively by evaluating their clinical teaching and providing them with feedback on their clinical teaching performance (Snell et al. 2000). Four steps are understood to be involved in this process: (1) assessing the performance, (2) providing assessment feedback, (3) reflection and decision-making and (4) using feedback for learning and change (Sargeant et al. 2009). Little has been published about systematic ways to realise these steps in clinical practice and about how this is perceived by both clinical teachers and residents.

Most of the literature on evaluating clinical teaching is restricted to the validation of assessment instruments (Fluit et al. 2010; Nation et al. 2011; Arah et al. 2012). A variety of assessment instruments is available, the most prevalent being a standardized teacher-rating form completed by learners (Snell et al. 2000; Beckman et al. 2003; Fluit et al. 2010). In choosing a questionnaire, it is essential that it is valid, reflects relevant teacher tasks, and is based on learning theory. However, having a good instrument is just the beginning: if feedback is to be used for reflection, decision-making, learning and change, it needs to be provided in a carefully designed and instructive way. Both residents and supervisors need to understand the purpose of such a system and find the way the instrument is used acceptable and useful (Smith & Fortunato 2008). Assessment feedback reports, containing written feedback based on learner ratings combined with self-assessments, are useful and can stimulate improvement of clinical teaching (Stalmeijer et al. 2010). Reflection and decision-making can be enhanced by discussing feedback with a facilitator who helps the recipient (Penny & Coe 2004; Kluger & DeNisi 1996; Nicol & Macfarlane-Dick 2006). This facilitator is often an expert faculty developer or a peer.

In a clinical setting, the facilitator could be a peer, the programme director, the head of the department or an external clinical teaching expert, but also a resident, although it is not common in a traditionally hierarchical environment to choose residents as facilitators for providing feedback. Residents, however, actually experience their supervisors’ teaching qualities, should be able to give practical feedback and suggestions, and are the future peers of their supervisors.

Research outside the medical domain shows that such so-called upward feedback helps supervisors to interpret evaluation results, gives directions for improvement, results in more active additional feedback-seeking behaviour, and has a positive effect on communication (Hall et al. 1996; Waldman 2001; Nicol & Macfarlane-Dick 2006). For the feedback providers, it enhances the feeling that they have a voice in the organisation. Both supervisors and feedback providers increase their awareness of the behaviours that the organisation pursues (Hall et al. 1996; van Dierendonck et al. 2007). Other studies, however, stress that upward feedback can undermine supervisors’ authority and find that upward feedback has only limited power to enhance supervisors’ behaviour (Bernardin & Redmon 2006). Several conditions for upward feedback are described in the literature (Hall et al. 1996; Waldman 2001; Smith & Fortunato 2008). One important prerequisite is rater honesty, as validity may be attenuated when raters provide inaccurate performance ratings. Factors that contribute to honest ratings include a context in which people trust the organization, understand the upward feedback process, have the opportunity to observe their supervisor's performance, perceive the process to be beneficial, have little fear of retaliation, and are able to rate accurately (Smith & Fortunato 2008).

Based on this literature, we conclude that evaluating clinical teachers and providing upward feedback effectively needs a design that meets three principles: (1) a carefully designed evaluation system based on a validated quality assessment instrument, (2) feedback that is useful for learning and (3) acceptability of the evaluation system. For assessing the clinical teachers we used the validated questionnaire Evaluation and Feedback For Effective Clinical Teaching (EFFECT) (Fluit et al. 2012). Table 1 describes the three principles, specified as characteristics, and the way they were applied in the design of the evaluation system (EFFECT-S). This study explores the value of the design principles and their translation into EFFECT-S by posing the following questions: (a) does EFFECT-S realise the conditions for useful feedback by residents?; (b) do supervisors and residents regard EFFECT-S as acceptable for evaluating supervisors in clinical practice? and (c) do the effects of using EFFECT-S support its design?

Table 1  Design principles and characteristics for providing clinical teachers with formative upward feedback, applied in the EFFECT-S

Method

Design-based research approach

Design-based research (DBR) is the systematic study of designing, developing and evaluating educational interventions (such as programmes, teaching–learning strategies and materials, products and systems) as solutions for complex problems in educational practice, which also aims at advancing our knowledge about the characteristics of these interventions and the processes of designing and developing them (Plomp & Nieveen 2010). It focuses on clarifying why a specific design with a specific aim works in a specific context and as such contributes to the advancement of theories and design principles (Dolmans & Tigelaar 2012).

Within this approach, this study employed focus group interviews, a qualitative method that is particularly well suited to exploratory research involving personal and social constructs (Bradley et al. 2002). In this design study we investigated the effectiveness and conditions of an evaluation system for providing clinical teachers with useful feedback. We called this system EFFECT-S.

The system for Evaluation and Feedback For Effective Clinical Teaching

The system for Evaluation and Feedback For Effective Clinical Teaching (EFFECT-S) starts with an introduction meeting with staff and residents at the department to inform (and involve) them about the formative purpose of the evaluation procedure at their department and to make tailor-made appointments. A tailor-made EFFECT-S includes careful planning; a discipline-specific questionnaire; and agreement on who fills in the questionnaires (residents on a voluntary basis, anonymous ratings), how the feedback procedure is organised, and who has access to the results. The evaluation itself consists of (a) an internet-based self-evaluation questionnaire for supervisors, (b) a questionnaire to be completed by residents, (c) a feedback report, including the mean scores per item and domain, a group score (the mean scores of all staff of the particular department) and the written comments, and (d) a face-to-face meeting (dialogue) between the supervisor and two residents (representing their group), guided by a moderator (an experienced educationalist) from outside the department. We used the EFFECT questionnaire, a validated instrument based on workplace learning and consisting of 58 items in 7 domains: (1) role modelling, (2) task allocation, (3) planning, (4) feedback, (5) teaching methodology, (6) assessment and (7) personal support (Fluit et al. 2012).
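To make the structure of the feedback report concrete, the sketch below shows how the mean scores per item and domain, and the department-wide group score, could be computed from resident ratings. This is our own illustration, not code from EFFECT-S (which is administered as an online survey); the domain names come from the EFFECT questionnaire, but all function names, variable names and example values are hypothetical.

    from statistics import mean

    def domain_means(ratings):
        """Mean score per domain from {domain: {item: [resident ratings]}}."""
        return {domain: mean(mean(scores) for scores in items.values())
                for domain, items in ratings.items()}

    def feedback_report(own_ratings, self_scores, all_staff_ratings):
        """Assemble the report described above: the supervisor's mean scores
        per domain, the self-evaluation scores, and the group score (the
        mean of all staff members' domain means)."""
        own = domain_means(own_ratings)
        group = {d: mean(domain_means(r)[d] for r in all_staff_ratings)
                 for d in own}
        return {d: {"residents": round(own[d], 2),
                    "self": self_scores[d],
                    "group": round(group[d], 2)}
                for d in own}

    # Hypothetical example: two anonymous residents rate two items in one domain.
    ratings = {"feedback": {"item 1": [4, 5], "item 2": [3, 4]}}
    print(domain_means(ratings))  # {'feedback': 4.0}

In practice, the written comments are appended verbatim to such a report; as the Results show, scores alone were considered pointless by some staff.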

Participants

We invited residents who had actually conducted the feedback sessions to participate in the focus groups. As it was important for this study that both residents and staff would feel free to answer all questions honestly, we chose not to use mixed focus groups. Groups were stratified on the basis of type of hospital (university or teaching hospital), discipline (surgical, non-surgical) and function (staff or resident). We conducted 11 focus group interviews with two (in a department that had no more residents) to seven members (in larger departments). In total, 21 residents and 52 clinical teachers took part across five departments, each in a different hospital (Psychiatry, Pulmonary Diseases, Orthopaedic Surgery, Radiology and Paediatrics).

Interviews

The interviews took one hour each. An experienced interviewer (SB or TK) guided the session, while the main researcher (CF) took notes and asked clarifying questions where necessary. We designed a semi-structured interview with five guiding questions to explore supervisors’ and residents’ perceptions of and experiences with EFFECT-S and the upward feedback. To start the discussion, the interviewer asked the participants to think of the evaluation procedure and what they felt was its most important element. The interviewer then asked (1) how they valued having clinical supervisors evaluated by residents and (2) what they had learned from the evaluation, the written feedback, the oral feedback and the procedure. All discussions were audio-taped.

Data analysis

All interviews were transcribed and entered into qualitative data analysis software (ATLAS-ti). Coding was accomplished in a series of iterative steps, based on Strauss and Corbin (’t Hart et al. 2005). The main researcher (CF), with a medical and educational background, and two other researchers (TK and MdV), with an educational background, started with an open coding of the transcript of one interview, using a provisional list of codes that was based on the interview questions. Open coding is a process of breaking down, examining, comparing, conceptualizing and categorizing data.

Next, the three researchers (CF, TK, MdV) independently coded four interviews and discussed the results. Disagreements mostly concerned codes that were close in meaning and were resolved through further discussion and sharper definition of the codes. The researchers recorded memos and thoughts during the coding process, which were also discussed. There was initial agreement on about 90% of the codes applied, which indicates good reliability for qualitative research. Six themes, related to the effectiveness of the design, emerged from the analysis (including the discussion of memos).
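As an aside, the paper does not state how the roughly 90% agreement was computed; simple percent agreement over the segments both coders coded is one plausible (assumed) reading, sketched below with hypothetical data.

    def percent_agreement(coder_a, coder_b):
        """Share of jointly coded segments to which both coders assigned
        the same code; coder_a and coder_b map segment IDs to codes."""
        segments = coder_a.keys() & coder_b.keys()
        if not segments:
            raise ValueError("no jointly coded segments")
        same = sum(coder_a[s] == coder_b[s] for s in segments)
        return same / len(segments)

    # Hypothetical example: 9 of 10 segments coded identically -> 0.9
    a = {f"seg{i}": "safety" for i in range(10)}
    b = dict(a, seg9="honesty")
    print(percent_agreement(a, b))  # 0.9

A chance-corrected statistic such as Cohen's kappa would be stricter, but percent agreement is the figure most naturally read from the text.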

Next, an axial coding of the four interviews was performed. During axial coding, data are put back together in new ways after open coding by making connections between categories, with a view to defining the important elements of the research (’t Hart et al. 2005). Researcher CF coded all four interviews; TK and MdV each coded half of them. The code structure and themes were confirmed and related to each other.

During the last phase of our coding process, selective coding, the transcripts of the two remaining interviews were coded in order to gain a deeper understanding of the evaluation process and to define factors for success in relation to the different components of EFFECT-S. As no new information appeared, we concluded that saturation had been reached. The final codes and themes are listed and clarified in Table 2. In three departments we had the opportunity to discuss the results of our study as a form of ‘member checking’.

Table 2  Codes and themes used for analyzing the interviews

Ethical approval

Our institute waived the requirement for ethical approval of this study. All participants were invited by a personal email explaining the design and purpose of the study. Participation was entirely voluntary, and participants received no reward. The researchers obtained verbal consent from clinical faculty and residents at the start of each interview.

Results

Does EFFECT-S realise the conditions for useful feedback by residents?

Written feedback report

The written reports were valued differently: some staff found them very informative, whereas others indicated that ‘having scores only is pointless’. Staff found comparing the self-evaluations with the residents’ scores informative and helpful in preparing for the feedback sessions with the residents, especially in the case of discrepancies. These discrepancies helped residents to raise difficult points during the interview that would otherwise never have been mentioned. Written comments were informative and useful starting points for the feedback interview, unless they had been formulated in too general or too offensive a way.
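To illustrate how such discrepancies could be surfaced automatically, the short sketch below (our own hypothetical code, extending the report structure sketched in the Method section; the one-point threshold is an assumption, as the paper sets no numeric cut-off) flags domains where the self-score and the residents’ mean diverge.

    def flag_discrepancies(report, threshold=1.0):
        """Domains where the self-score differs from the residents' mean
        by more than `threshold` points, with the signed difference."""
        return {d: round(v["self"] - v["residents"], 2)
                for d, v in report.items()
                if abs(v["self"] - v["residents"]) > threshold}

    # Hypothetical report entry: residents rate 'feedback' lower than the
    # supervisor's self-score, so the domain is flagged for the dialogue.
    report = {"feedback": {"residents": 3.2, "self": 4.5, "group": 3.8}}
    print(flag_discrepancies(report))  # {'feedback': 1.3}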

The dialogue

In all departments, staff noticed that residents were well prepared and that they provided feedback in a correct manner: ‘like we learned it ourselves in our teacher training course’. The staff valued that residents clarified results, gave practical tips and suggestions, and helped supervisors to focus on the most important aspects. To be able to do so, the residents providing feedback need to be familiar with the supervisor concerned. Residents also indicated that having a dialogue with a supervisor they were familiar with was preferable, as this allowed them to give more specific feedback.

It was perceived as helpful that the moderator had an educational background, actively guided the dialogue, and played a mediating role when staff members were uncooperative, or, conversely, when residents were too friendly. Some residents and supervisors suggested that a moderator was no longer needed once the EFFECT-S process was embedded in a new culture. One of the staff members said: ‘Junior residents will learn this from senior residents in practice’. Others, however, believed that a moderator would always be needed, as new residents without experience in providing feedback would keep arriving.

Conditions for useful feedback: safety and honesty

The coding process led to the formulation of six themes (Table 2). Honesty of feedback and a safe environment were recognized as characteristics of useful feedback. However, these two conditions appeared to have a different meaning for residents and for supervisors. An overview of items promoting and inhibiting safety and honesty for both residents and supervisors, based on the interviews, is shown in Table 3. For residents, safety is created by anonymous evaluations, but this may impair safety for supervisors. Safety for residents is impaired when the feedback report contains rude written comments made by other residents, as they are asked to explain these comments during the dialogue.

Table 3  Conditions for adequate use of the different elements of EFFECT-S

Honesty of the residents when filling in the questionnaires and honesty during the dialogue were important for creating a safe environment for a supervisor. This honesty, in turn, is encouraged by a positive attitude of supervisors towards the evaluation: ‘In this way, it is a really safe situation. And we thought: how will this work? We didn’t have the easiest supervisors for the dialogue, so we were wondering how this would work. During the dialogue it was fine, although it will be a bit threatening every next time’. Being really honest in the dialogue can be difficult in hierarchical relations, but the written feedback report and the presence of a moderator proved helpful here. Remarkably, supervisors did not make any comments about their own honesty during the interviews; some residents felt their supervisors would be more honest without the presence of a moderator.

Do supervisors and residents regard EFFECT-S as acceptable for evaluating supervisors in clinical practice?

The general appreciation of EFFECT-S was high. All departments were very positive about the questionnaire and the different steps in the evaluation system.

Involving departments

During the introduction of EFFECT-S, staff members were told what the purpose of the evaluation was and what would happen with the results. For residents, this was a sign that the evaluation was taken seriously. It was important to take enough time at this stage, as one of the supervisors observed: ‘If the implementation of EFFECT goes too fast the first time, then forget it for at least the next five years’.

The EFFECT instrument

In all staff interviews, there was general agreement that the self-evaluation was a necessary element of EFFECT-S. ‘It is like trying to look into a mirror’, as one of the participants said. Some supervisors found this the most important element, because the questionnaire offered them such a complete picture of clinical teaching. Some residents indicated they also learned about the important aspects of clinical teaching, e.g. that ‘clinical teaching is more than just bedside teaching and giving feedback’.

Suggestions for improving EFFECT-S

As filling in a number of questionnaires is time-consuming, residents at larger departments suggested making agreements on how many questionnaires each resident should fill in and for whom. When asked what elements were missing from EFFECT-S, some of the supervisors and residents suggested a follow-up team meeting for sharing the results and discussing how clinical teaching could be improved further.

Do the effects of using EFFECT-S support its design?

Awareness of clinical teaching

Residents and supervisors became aware of elements of clinical teaching during the evaluation process (Table 4). Awareness was raised at all steps of the evaluation process. During the introduction, firstly, supervisors realised what was expected of them and residents realised what they could expect from their supervisors. Secondly, completion of the questionnaire made residents and supervisors think about all aspects of good teaching and made supervisors reflect on their own teaching. ‘It creates a frame of reference. It deals with domains of teaching that you wouldn’t think of immediately, but when you see it, you think: Ah right, that's part of teaching too’. Thirdly, the comparison of self-scores with the residents’ scores and with the mean group scores made supervisors aware of their own strong and weak points; residents could read their supervisors’ own view and how other residents appreciated their supervisors. Finally, the feedback dialogue made supervisors aware of how they could improve their teaching, while residents became aware of the supervisors’ thoughts about their teaching and of their own feedback skills. Although this was not a goal of EFFECT-S, residents indicated that they themselves also learned a lot from filling in the EFFECT questionnaire and from the feedback sessions. They learned not only what they can expect from their supervisors, but also how to provide feedback and how to deal with difficult situations, such as when a staff member was unwilling to receive feedback.

Table 4  Effects of EFFECT-S for residents and supervisors

Relations and hierarchy

Supervisor–resident relations play a role in creating safety (Table 3), and these relations may improve as a result of the evaluation process (Table 4). Though the hierarchical relation may impair residents’ honesty, hierarchy can also be lowered as an effect of the evaluation. Some residents had a fear of retaliation, but this was reduced after the dialogue had taken place: ‘When we started this whole evaluation, I thought this was a point that would be criticized, but in hindsight this did not happen’.

In general, staff tended to be more critical of their own performance than the residents were in their evaluations. Residents indicated that they filled in the questionnaires honestly and thought that the staff rated themselves too low. During the interviews, supervisors offered several explanations. Some suggested that supervisors may give themselves low ratings so that the comparison with residents’ scores would turn out positive. Others thought that residents who had only experienced low-quality supervision rated supervisors with intermediate skills very positively. Finally, staff suggested that a lack of clear criteria for good or bad teaching kept them from rating themselves too positively.

Climate

Residents indicated that the learning climate had improved after the evaluation (Table 4): ‘Absolutely, more open, I mean … it's become easier to discuss things. You notice that people are open to feedback afterwards’. Supervisors said that clinical teaching had changed over time, e.g. ‘When I was a resident, I just had to do what the boss told me to do, without any comment’.

Discussion and conclusion

The design principles and characteristics as realised in EFFECT-S contribute to the intended outcomes of reflection and learning, not only for supervisors but also for residents. This study shows that EFFECT-S, a carefully designed system in which residents provide formative feedback to their clinical supervisors, is highly acceptable to supervisors and residents. The feedback provided by residents during the dialogue is the most highly valued element of EFFECT-S. It is the heart of the system, but all other elements are necessary in order to evaluate supervisors effectively. The study yielded a more precise understanding of safety and honesty in providing and receiving upward feedback, the inclusion of a team evaluation, and practical suggestions for implementing the system.

Our findings support the idea that residents are able to facilitate reflective feedback processes by helping supervisors to understand differences between external feedback and self-perceptions, by interpreting feedback content, identifying learning and performance needs, and setting goals and plans for change (Kluger & DeNisi 1996; Sargeant et al. 2009). As our results suggest, the presence of a moderator to guide the dialogue is not always needed, but depends on the extent to which supervisors and residents themselves are ready to realize the different steps in the procedure.

Important conditions for successful implementation include creating a safe environment for both residents and supervisors, and honest feedback provided by residents. However, creating safety for supervisors and for residents requires different actions: whereas anonymous ratings create a safe evaluation environment for residents, they impair safety for supervisors. This can in part be overcome by organizing the dialogue for a supervisor with residents who work(ed) with that particular supervisor. In the introduction of EFFECT-S, residents should be told not only to fill in the questionnaires honestly, but also to formulate their written comments with care, as offensive comments diminish safety both for supervisors and for the residents who provide the feedback.

Supervisors feel safer when residents are honest, but they should realise that they themselves play an important part in fostering residents’ honesty by showing commitment to the evaluation process and willingness to change throughout. Fear of retaliation was mentioned as a condition that could impair residents’ honesty, but experience with the implementation of EFFECT-S took away this fear. Our results indicate that a carefully planned evaluation can improve the learning climate and soften hierarchical relations at a department without undermining the authority of clinical teachers (Bernardin & Redmon 2006).

Interestingly, not only specialists but residents as well learn from filling in the questionnaire and talking about their training. In this way supervisors and residents co-create shared knowledge about what the profession is about, and they start to co-create a shared understanding of learning in practice and of how to optimise this workplace learning. They both learn to maintain an open communication that can be mutually critical while safeguarding respect and trust, which are foundational to all learning (Billett 2001). An unintended but very important learning outcome of EFFECT-S is that it offers an excellent opportunity for residents to practise their feedback skills, which is part of the Scholar role as described in the CanMEDS competencies and of the ‘Practice-Based Learning and Improvement’ domain of the ACGME core competencies (Leach 2000, 2001; Frank & Danoff 2007). This can be seen as a first step towards effective peer feedback and a useful outcome for residents when they supervise clerks and/or themselves become clinical supervisors in the future (Ramsey et al. 1993; Norcini 2003).

Both residents and supervisors proposed to have a team session after the feedback interviews, indicating that, although they felt the individual steps were valuable, they needed to share their experiences and make agreements with the whole group to realize further change. This was the reason for us to add a team evaluation to EFFECT-S. In this session, residents and supervisors are invited to discuss the mean group scores, to compare these with the mean self-evaluation scores and to propose improvements in the teaching programme at the department. The revised EFFECT-S is shown in Figure 1.

Figure 1. The EFFECT system (EFFECT-S).

To encourage the use of EFFECT-S in other institutions, we developed a package consisting of a manual and workshops for residents, staff, facilitators and the technical staff responsible for the online use of the EFFECT questionnaire. A website (www.effectsurvey.nl) was also created to provide background information and to share experiences.

This study is limited by having been conducted with small groups of doctors who participated in EFFECT-S. As they had chosen to engage in such an evaluation, the data may be biased by a positive attitude towards evaluating clinical teachers and having residents provide feedback. Further study of clinical teachers’ experiences with formative assessment and upward feedback from residents, and of its effectiveness, is needed. Such studies can further investigate EFFECT-S and/or build on the three principles to design and study variants. It may be interesting to compare upward feedback with other ways of providing feedback, such as feedback provided by the head of the department or peer feedback. The moderator role may also be investigated further.

Future research is needed to look at the impact of this evaluation on the actual behaviour of supervisors and residents. Furthermore, it is important to examine the interaction between formative feedback for clinical teachers and departmental culture, including the learning climate and (hierarchical) relations. These may affect the implementation of EFFECT-S (or variants), but EFFECT-S may in turn affect the learning climate and supervisor–resident relations.

We conclude that EFFECT-S is a promising approach for improving the quality of clinical teaching as well as the learning climate. A valid questionnaire is mandatory, but the dialogue is at the heart of EFFECT-S. EFFECT-S proved a strong learning tool for both clinical teachers and residents.

Acknowledgements

The authors would like to thank staff and residents of the participating departments. They also want to thank Rikkert Stuve (The Text Consultant) for editing the final version.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.

References

  • Arah OA, Heineman MJ, Lombarts KMJMH. Factors influencing residents' evaluations of clinical faculty member teaching qualities and role model status. Med Educ 2012;46:381–389.
  • Beckman TJ, Lee MC, Rohren CH, Pankratz VS. Evaluating an instrument for the peer review of inpatient teaching. Med Teach 2003;25(2):131–135.
  • Bernardin HJ, Dahmus SA, Redmon G. Attitudes of first-line supervisors toward subordinate appraisals. Hum Resource Manage 2006;32(2–3):315–324.
  • Billett S. Learning in the workplace: Strategies for effective practice. Crows Nest: Allen & Unwin; 2001.
  • Bradley EH, McGraw SA, Curry L, Buckser A, King KL, Kasl SV, Andersen R. Expanding the Andersen model: The role of psychosocial factors in long-term care use. Health Serv Res 2002;37(5):1221–1241.
  • Dolmans DHJM, Tigelaar D. Building bridges between theory and practice in medical education using a design-based research approach: AMEE guide No. 60. Med Teach 2012;34(1):1–10.
  • van Dierendonck D, Haynes C, Borrill C, Stride C. Effects of upward feedback on leadership behaviour toward subordinates. J Manage Dev 2007;26(3):228–238.
  • Fluit CR, Bolhuis S, Grol R, Laan R, Wensing M. Assessing the quality of clinical teachers: A systematic review of content and quality of questionnaires for assessing clinical teachers. J Gen Intern Med 2010;25(12):1337–1345.
  • Fluit CR, Bolhuis S, Grol R, Ham M, Feskens R, Laan R, Wensing M. Evaluation and feedback for effective clinical teaching in postgraduate medical education: Validation of an assessment instrument incorporating the CanMEDS roles. Med Teach 2012;34(11):893–901.
  • Frank JR, Danoff D. The CanMEDS initiative: Implementing an outcomes-based framework of physician competencies. Med Teach 2007;29(7):642–647.
  • Hall JL, Leidecker JK, DiMarco C. What we know about upward appraisals of management: Facilitating the future use of UPAs. Hum Resource Dev Quart 1996;7(3):209–226.
  • Kluger AN, DeNisi A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull 1996;119:254–284.
  • Leach DC. Evaluation of competency: An ACGME perspective. Accreditation Council for Graduate Medical Education. Am J Phys Med Rehabil 2000;79(5):487–489.
  • Leach DC. The ACGME competencies: Substance or form? Accreditation Council for Graduate Medical Education. J Am Coll Surg 2001;192(3):396–398.
  • Nation JG, Carmichael E, Fidler H, Violato C. The development of an instrument to assess clinical teaching with linkage to CanMEDS roles: A psychometric analysis. Med Teach 2011;33(6):e290–e296.
  • Nicol D, Macfarlane-Dick D. Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Stud High Educ 2006;31(2):199–218.
  • Norcini JJ. Peer assessment of competence. Med Educ 2003;37(6):539–543.
  • Penny AR, Coe R. Effectiveness of consultation on student ratings feedback: A meta-analysis. Rev Educ Res 2004;74(2):215–253.
  • Plomp T, Nieveen N, editors. An introduction to educational design research. Enschede: Netzodruk; 2010.
  • Ramsey PG, Wenrich MD, Carline JD, Inui TS, Larson EB, LoGerfo JP. Use of peer ratings to evaluate physician performance. JAMA 1993;269(13):1655–1660.
  • Sargeant JM, Mann KV, van der Vleuten CP, Metsemakers JF. Reflection: A link between receiving and using assessment feedback. Adv Health Sci Educ Theory Pract 2009;14(3):399–410.
  • Smith AFR, Fortunato VJ. Factors influencing employee intentions to provide honest upward feedback ratings. J Bus Psychol 2008;22:191–207.
  • Snell L, Tallett S, Haist S, Hays R, Norcini J, Prince K, et al. A review of the evaluation of clinical teaching: New perspectives and challenges. Med Educ 2000;34(10):862–870.
  • Stalmeijer RE, Dolmans DH, Wolfhagen IH, Peters WG, van Coppenolle L, Scherpbier AJ. Combined student ratings and self-assessment provide useful feedback for clinical teachers. Adv Health Sci Educ Theory Pract 2010;15(3):315–328.
  • ’t Hart H, Boeije H, Hox J, editors. Onderzoeksmethoden [Research methods]. Amsterdam: Boom Onderwijs; 2005.
  • Waldman DA. Attitudinal and behavioral outcomes of an upward feedback process. Group Org Manage 2001;26(2):189–205.
