
Feedback in formative OSCEs: comparison between direct observation and video-based formats

Article: 32160 | Received 06 May 2016, Accepted 21 Sep 2016, Published online: 08 Nov 2016

Abstract

Introduction

Medical students at the Faculty of Medicine, University of Geneva, Switzerland, have the opportunity to practice clinical skills with simulated patients during formative sessions in preparation for clerkships. These sessions are given in two formats: 1) direct observation of an encounter followed by verbal feedback (direct feedback) and 2) subsequent review of the videotaped encounter by both student and supervisor (video-based feedback). The aim of the study was to evaluate whether the content and process of feedback differed between the two formats.

Methods

In 2013, all second- and third-year medical students and clinical supervisors involved in formative sessions were asked to take part in the study. A sample of audiotaped feedback sessions involving supervisors who gave feedback in both formats was analyzed (content and process of the feedback) using a 21-item feedback scale.

Results

Forty-eight audiotaped feedback sessions involving 12 supervisors were analyzed (2 direct and 2 video-based sessions per supervisor). When adjusted for the length of feedback, there were significant differences in content and process between the two formats; the numbers of communication skills and clinical reasoning items addressed were higher in the video-based format (11.29 vs. 7.71, p=0.002 and 3.71 vs. 2.04, p=0.010, respectively). Supervisors engaged students more actively during video-based sessions than during direct feedback sessions (self-assessment: 4.00 vs. 3.17, p=0.007; active problem-solving: 3.92 vs. 3.42, p=0.009). Students made similar observations and tended to consider the video-based feedback more useful for improving some clinical skills.

Conclusion

Video-based feedback facilitates discussion of clinical reasoning, communication, and professionalism issues while at the same time actively engaging students. Different time and conceptual frameworks may explain observed differences. The choice of feedback format should depend on the educational goal.

Formative objective structured clinical examinations (OSCEs) are considered useful opportunities for medical students to improve clinical skills such as history taking, physical examination, and communication. External feedback plays a crucial role in enhancing appropriate learning, correcting deficiencies, and monitoring students' self-learning (1, 2), since self-assessment alone is not enough to ensure accurate identification of areas for improvement and to support effective learning (3, 4).

Formative feedback can be given to students immediately after the clinical encounter or subsequently at a scheduled time. Immediate feedback after directly observed OSCE stations appears to improve subsequent student performance in a quick and durable way (5) and to enhance students' self-assessment skills (6). Whereas students' video review of their own performance does not by itself lead to increased self-assessment accuracy (1, 2, 7, 8), video review combined with expert feedback appears to be more effective in terms of students' satisfaction and performance and is superior to expert feedback alone (9, 10).

In 2013, we conducted a study among preclinical medical students to evaluate how they perceived the quality of the feedback received from their supervisors in two different formative OSCE formats: one based on direct observation and the other on subsequent video review. Students rated the quality of feedback received during the video-based feedback session higher than feedback received during the direct observation session (difference of 0.39 on a 1–5 Likert scale; p<0.001). This difference was independent of the clinical content of the OSCE and the characteristics of the supervisors involved in the feedback session (Junod Perron et al).

The aim of the study was to evaluate differences in the content and process of feedback in direct observation versus video-based OSCE formats.

Methods

Design, setting and participants

We conducted the study at the Faculty of Medicine, University of Geneva, Switzerland. The medical school offers a 6-year curriculum divided into three preclinical years (bachelor) and three clinical years (master) to 150 medical students per class. During the second and third preclinical years, medical students practice clinical skills during formative OSCEs in two different formats:

  1. A 20-min interaction with a simulated patient, followed by 15 min of feedback from a supervisor (direct observation) and 2–3 min of feedback from the simulated patient.

  2. A 20-min videotaped interaction with a simulated patient, followed by immediate feedback from the simulated patient and a delayed 40-min feedback session with the supervisor 1 week later (including observation of the videotaped consultation), after the student has reviewed and analyzed his or her own performance (video-based feedback).

Data collection

During the 2012–2013 academic year, all feedback sessions given to second- and third-year medical students during formative OSCEs were audiotaped. For each supervisor who had given both types of feedback, two feedback sessions given in the ‘direct observation’ format and two feedback sessions given in the ‘video-based format’ were randomly selected.

Instrument

We adapted a previously developed instrument to assess the effectiveness of training on faculty feedback skills (11). The instrument, which assesses the way feedback is given, follows the structure of the MAAS-Global score, a well-known communication skills coding instrument, and includes specific items, five global dimensions, and a global rating item (12). Content validity was ensured by an extensive review of the literature on effective feedback (5, 13–17). Construct validity had previously been demonstrated by the instrument's ability to detect improvements in faculty feedback skills (11). We added four items related to content, derived from the OSCE checklist used by supervisors, and three additional items specifically evaluating clinical reasoning and communication/professionalism.

Outcome measures

Content of the feedback

Content analysis consisted of identifying and quantifying all themes mentioned and/or discussed during the feedback session. Themes were divided into seven categories (Table 1): global, history taking, physical examination, content of the explanation/end of encounter, communication, elaboration on clinical reasoning, and elaboration on communication and professionalism. 'Elaboration' referred to the number of times the supervisor elaborated, in a directive or facilitative way, on the importance of collecting elements of the history or physical examination in relation to the clinical reasoning process, or underlined the importance of communication skills or attitudes for future practice.

Table 1 Content of the feedback given by the supervisors according to the two types of formative OSCE (12 supervisors, 48 audiotaped feedback sessions)

Process of the feedback

The process of feedback, focusing on the structure of the feedback and the teaching skills used by the supervisors, was coded by N.J.P. and M.L.S. according to the 14 themes displayed in Table 2, using a 0–5 Likert scale (0 being absent and 5 optimal).

Interrater reliability, which was calculated on the basis of 10% of the audiotaped feedback sessions, was good (intraclass correlation coefficient=0.89).
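
The paper does not give the details of this computation. As a minimal illustration only (not the authors' code), an intraclass correlation of this kind could be obtained in R with the irr package, assuming the double-coded sessions are arranged with one row per session and one column per rater; the scores and the choice of a two-way, absolute-agreement, single-measures model below are hypothetical:

# Minimal sketch, not the authors' code; requires the 'irr' package (install.packages("irr"))
library(irr)

# Hypothetical example: one row per double-coded session, one column per rater
ratings <- data.frame(
  rater1 = c(3.8, 4.2, 2.9, 3.5, 4.0),   # illustrative scores only
  rater2 = c(3.6, 4.3, 3.0, 3.4, 4.1)
)

# Two-way model, absolute agreement, single measures (assumed ICC variant)
icc(ratings, model = "twoway", type = "agreement", unit = "single")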

In addition, all students who received feedback from supervisors involved in both formats were asked to rate the utility and quality of the feedback immediately after the feedback session. The self-administered questionnaire consisted of 14 items used in a previous study (Table 3) (Junod Perron et al).

Table 2 Process of the feedback given by the supervisors according to the two types of formative OSCE (12 supervisors, 48 audiotaped feedback sessions)

The overall project was approved by the research ethics committee of the University Hospitals of Geneva, which waived the need for a full review.

Analysis

Sociodemographic data were summarized as percentages, means, and standard deviations (SD). Feedback content data were expressed as the mean number of items addressed per feedback session. Feedback process data were summarized as mean Likert-scale scores and SD. Comparisons of feedback scores (for both content and process) were made with multivariate analysis of variance including a supervisor effect and a feedback-duration effect categorized into four groups (quartiles used as cut-off values). Differences in students' perceptions of the quality of feedback between the two formats were analyzed using analysis of variance (ANOVA).

All analyses were run on R 2.15.3 (the R Foundation for Statistical Computing), and TIBCO Spotfire S+® 8.1 for Windows (TIBCO Software Inc).
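
As an illustration only (this is not the authors' code; the data frame, variable names, and simulated values are hypothetical), the adjusted comparison described above could be set up in R roughly as follows:

# Minimal sketch, not the authors' code; simulated stand-ins for the 48 sessions
set.seed(1)
fb <- data.frame(
  format       = rep(c("direct", "video"), each = 24),
  supervisor   = factor(rep(1:12, times = 4)),
  duration_min = c(rnorm(24, mean = 13.5, sd = 4), rnorm(24, mean = 26.2, sd = 8)),
  n_comm       = c(rpois(24, 8), rpois(24, 11)),   # communication items addressed
  n_reasoning  = c(rpois(24, 2), rpois(24, 4))     # clinical-reasoning elaborations
)

# Feedback duration categorized into four groups, with quartiles as cut-off values
fb$duration_q <- cut(fb$duration_min,
                     breaks = quantile(fb$duration_min, probs = seq(0, 1, 0.25)),
                     include.lowest = TRUE)

# Multivariate analysis of variance: format effect adjusted for supervisor and duration
fit <- manova(cbind(n_comm, n_reasoning) ~ format + supervisor + duration_q, data = fb)
summary(fit)       # overall multivariate tests
summary.aov(fit)   # univariate follow-up tests per outcome

Differences in students' perceptions between formats would then be tested with a one-way ANOVA, for example aov(perceived_quality ~ format, data = fb_students), where fb_students is a hypothetical data frame of questionnaire scores.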

Results

Twelve supervisors were involved in both direct observation and video-based feedback sessions. For each supervisor, two feedback sessions per format were randomly selected and analyzed (n=48).

Supervisors included seven women (7/12, 58%), and their mean age was 45 years (SD 10). Three worked in hospital-based general internal medicine, four in ambulatory general internal medicine, three as educationalists, and one as a hospital specialist. They had on average 18.2 years (SD 9.9) of clinical experience, had been clinical supervisors for an average of 9.4 years (SD 8.7), and had been involved as supervisors in the formative OSCEs for an average of 4.8 years (SD 5.7).

Mean length of feedback was 26.2 min (SD 7.9) for the video-based sessions and 13.5 min (SD 4.0) for the direct observation sessions (p<0.001). Length of feedback in video-based sessions was measured as the actual time of verbal interaction, excluding video observation time.

During video-based feedback sessions, supervisors addressed communication issues more often and elaborated more often on clinical reasoning and on communication and professionalism issues than during direct observation sessions, independent of the length of feedback (Table 1).

Supervisors were also more learner centered during video-based feedback sessions (Table 2), scoring higher on several elements such as students' self-assessment, students' participation in the problem-solving process, and checking of students' understanding, as well as on transversal dimensions such as feedback structure and the amount of verbal interaction.

The 140 students who received feedback from supervisors involved in both formats perceived that, in the video-based format, the supervisor made them more active by evaluating their learning needs and by involving them more actively in problem-solving. There was also some evidence that the video format was more effective than the direct observation format at improving their physical examination and communication skills (Table 3).

Table 3 Students’ perceptions of the quality of feedback according to the type of feedback format (12 supervisors and 140 students)

Discussion

The results show that supervisors' feedback differed in both content and process between the video-based and direct observation formats, independent of the duration of feedback. In the video-based format, supervisors were more learner centered, addressed more communication and professionalism issues, and made more explicit links between history taking or physical examination and clinical reasoning. Students also perceived the video-based format as more learner centered, and there was some evidence that it was more effective at improving their communication and physical examination skills.

We found no literature on how the format in which a formative OSCE takes place influences the way supervisors give feedback. In the field of simulation, the use of video-assisted debriefing is not superior to non-video-assisted debriefing for the acquisition of technical and non-technical skills (18). However, regarding non-technical skills, as stated in two reviews, no study has specifically analyzed the impact of the debriefing structure on learners' perceptions and performance (19, 20).

Several factors may explain such differences between feedback formats. Direct observation and video-based formats use different time frames: the first takes place immediately after the observed performance, while the second is delayed. During the direct observation format, supervisors have a rather short and rigid time slot for feedback and teaching and may feel under pressure, preventing them from discussing and matching their observations to students' needs. The video-based format provides a longer and quieter time during which the supervisor may have more freedom and flexibility in how to organize time between observation, discussion, and teaching.

More importantly, supervisors may be encouraged to rely on different conceptual frameworks of learning according to the type of format. Argyris and Schön described two ways of learning inside an organization (21): single-loop learning, in which errors are detected and reflection is directed towards an immediate solution (problem-solving), and double-loop learning, in which errors are detected and reflection results more in questioning, and even changing, the overall framework than in finding the most effective strategy. In the direct observation format, the fact that the supervisor spends more time talking and giving advice suggests a single-loop type of learning/teaching. Because the video-based format offers the student the opportunity to self-assess and reflect on his or her own performance, the supervisor may be more likely to act as a facilitator and to question students' assumptions and frames of reference in a double-loop approach rather than providing immediate solutions. This results in more time for reflection, exchange, and elaboration of concepts and frames.

The reflective practice framework may also offer an additional explanation for our observations (22, 23). In direct observation feedback, the student's self-assessment and reflection 'in action' may be insufficient to provide the supervisor with elaborated material to discuss. In the video-based feedback session, the students may have had sufficient time and opportunity to reflect on their actions, allowing for discussion of more elaborated self-reported material during the feedback session.

Finally, because the video displays concrete and observable behaviors in a neutral way, it reduces sources of misunderstanding between the student and the supervisor (9) and may stimulate the sharing of perceptions. This can be especially useful for teaching dimensions such as communication and professionalism. Such issues often remain unaddressed during feedback, not only because of supervisors' lack of training or frames of reference (24) but also perhaps because of their reluctance and fear of being threatening or intrusive. This is important since several studies have shown a decline in empathy among students and residents during undergraduate and graduate medical education (25). Self-reflection during review of student videos linked to supervisor or peer feedback seems to be an effective and valued way to learn communication and professional attitudes (26, 27) and is more beneficial than traditional feedback for students' communication skills (9). Video-based stimulated recall has also been shown to be useful for stimulating clinical reasoning and facilitating the development of shared cognition (28, 29). Thus, video-based feedback, by stimulating self-assessment, seems to improve both self-awareness and skills (9, 30), especially if feedback is coupled with explicit performance expectations and benchmarks (8).

There are several limitations to our study. First, the sample of supervisors involved and the number of feedback sessions analyzed per supervisor were small. Second, the analysis of the feedback content, focused on counting the elements addressed during feedback, gives only a limited view of what was discussed during the session. Third, we only linked supervisors' feedback to students' perceptions of improvement and not to objective outcomes such as skill improvement or performance change. Finally, the fact that the study was conducted in only one medical school may limit the generalizability of our findings.

Implications for practice

Our findings suggest that these two feedback formats may be used differently according to the goals of the teaching session. For daily clinical supervision in busy clinics, short, immediate feedback following direct observation and focusing on 'single-loop' reflection appears to be the most suitable format given time constraints. It is facilitated by the widespread use of the Mini-Clinical Evaluation Exercise (Mini-CEX), a work-based assessment used to evaluate a trainee's clinical performance in real-life settings (31). Delayed, video-based feedback is better suited to longer and less frequent sessions addressing issues such as professionalism, communication, and clinical reasoning, where prior self-assessment, analysis and discussion of performance, and questioning of frames of reference may both increase students' self-awareness and facilitate pedagogical diagnosis and remediation.

Conflict of interest and funding

The authors declare that they have no competing interests. The Edmond J. Safra Philanthropic Foundation supported the cost of the data collection.

References

  • Hasnain M, Connell KJ, Downing SM, Olthoff A, Yudkowsky R. Toward meaningful evaluation of clinical competence: the role of direct observation in clerkship ratings. Acad Med. 2004; 79: S21–4.
  • Sandars J. The e-learning site. Educ Prim Care. 2011; 22: 443–4.
  • Eva KW, Regehr G. Self-assessment in the health professions: a reformulation and research agenda. Acad Med. 2005; 80: S46–54.
  • Ward M, Gruppen L, Regehr G. Measuring self-assessment: current state of the art. Adv Health Sci Educ Theory Pract. 2002; 7: 63–80.
  • Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007; 77: 81–112.
  • Reiter HI, Rosenfeld J, Nandagopal K, Eva KW. Do clinical clerks provide candidates with adequate formative assessment during Objective Structured Clinical Examinations? Adv Health Sci Educ Theory Pract. 2004; 9: 189–99.
  • Backstein D, Agnidis Z, Regehr G, Reznick R. The effectiveness of video feedback in the acquisition of orthopedic technical skills. Am J Surg. 2004; 187: 427–32.
  • Srinivasan M, Hauer KE, Der-Martirosian C, Wilkes M, Gesundheit N. Does feedback matter? Practice-based learning for medical students after a multi-institutional clinical performance examination. Med Educ. 2007; 41: 857–65.
  • Hammoud MM, Morgan HK, Edwards ME, Lyon JA, White C. Is video review of patient encounters an effective tool for medical student learning? A review of the literature. Adv Med Educ Pract. 2012; 3: 19–30.
  • Ozcakar N, Mevsim V, Guldal D, Gunvar T, Yildirim E, Sisli Z, et al. Is the use of videotape recording superior to verbal feedback alone in the teaching of clinical skills? BMC Public Health. 2009; 9: 474.
  • Junod Perron N, Nendaz M, Louis-Simonet M, Sommer J, Gut A, Baroffio A, et al. Effectiveness of a training program in supervisors’ ability to provide feedback on residents’ communication skills. Adv Health Sci Educ Theory Pract. 2013; 18: 901–15.
  • Van Thiel J, Kraan HF, Van Der Vleuten CP. Reliability and feasibility of measuring medical interviewing skills: the revised Maastricht History-Taking and Advice Checklist. Med Educ. 1991; 25: 224–9.
  • Cantillon P, Sargeant J. Giving feedback in clinical settings. BMJ. 2008; 337: a1961.
  • Kluger AN, DeNisi A. The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull. 1996; 119: 254–84.
  • Hewson MG, Little ML. Giving feedback in medical education: verification of recommended techniques. J Gen Intern Med. 1998; 13: 111–6.
  • Kurtz S, Silverman J, Draper J. Teaching and learning communication skills in medicine. 2005; Oxford: Radcliffe Publishing.
  • Richardson BK. Feedback. Acad Emerg Med. 2004; 11: e1–5.
  • Cheng A, Eppich W, Grant V, Sherbino J, Zendejas B, Cook DA. Debriefing for technology-enhanced simulation: a systematic review and meta-analysis. Med Educ. 2014; 48: 657–66.
  • Garden AL, Le Fevre DM, Waddington HL, Weller JM. Debriefing after simulation-based non-technical skill training in healthcare: a systematic review of effective practice. Anaesth Intensive Care. 2015; 43: 300–8.
  • Sawyer T, Eppich W, Brett-Fleegler M, Grant V, Cheng A. More than one way to debrief: a critical review of healthcare simulation debriefing methods. Simul Healthc. 2016; 11: 209–17.
  • Argyris C, Schön D. Organizational learning: a theory of action perspective. 1978; Reading, MA: Addison-Wesley.
  • Mamede S, Schmidt HG. The structure of reflective practice in medicine. Med Educ. 2004; 38: 1302–8.
  • Schön DA. The reflective practitioner: how professionals think in action. 1983; New York: Basic Books.
  • Kogan JR, Conforti LN, Iobst WF, Holmboe ES. Reconceptualizing variable rater assessments as both an educational and clinical care problem. Acad Med. 2014; 89: 721–7.
  • Neumann M, Edelhauser F, Tauschel D, Fischer MR, Wirtz M, Woopen C, et al. Empathy decline and its reasons: a systematic review of studies with medical students and residents. Acad Med. 2011; 86: 996–1009.
  • Eeckhout T, Gerits M, Bouquillon D, Schoenmakers B. Video training with peer feedback in real-time consultation: acceptability and feasibility in a general-practice setting. Postgrad Med J. 2016; 92: 431–5.
  • Kalish R, Dawiskiba M, Sung YC, Blanco M. Raising medical student awareness of compassionate care through reflection of annotated videotapes of clinical encounters. Educ Health (Abingdon). 2011; 24: 490.
  • Balslev T, de Grave W, Muijtjens AM, Eika B, Scherpbier AJ. The development of shared cognition in paediatric residents analysing a patient video versus a paper patient case. Adv Health Sci Educ Theory Pract. 2009; 14: 557–65.
  • Nendaz MR, Gut AM, Perrier A, Reuille O, Louis-Simonet M, Junod AF, et al. Degree of concurrency among experts in data collection and diagnostic hypothesis generation during clinical encounters. Med Educ. 2004; 38: 25–31.
  • Lane JL, Gottlieb RP. Improving the interviewing and self-assessment skills of medical students: is it time to readopt videotaping as an educational tool? Ambul Pediatr. 2004; 4: 244–8.
  • Norcini JJ, Blank LL, Duffy FD, Fortna GS. The mini-CEX: a method for assessing clinical skills. Ann Intern Med. 2003; 138: 476–81.