Abstract
Background: The faculty development community has been challenged to assess program impact more rigorously and to move beyond the traditional outcomes of knowledge tests and self-ratings.

Purpose: The purpose was to (a) assess our ability to measure supervisors' feedback skills as demonstrated in a clinical setting and (b) compare the results with traditional outcome measures of faculty development interventions.

Methods: A pre–post study design was used. Resident and expert ratings of supervisors' demonstrated feedback skills were compared with traditional outcomes, including a knowledge test and participant self-evaluation.

Results: Pre–post knowledge increased significantly (pre = 61%, post = 85%; p < .001), as did participants' self-evaluation scores (pre = 4.13, post = 4.79; p < .001). Participants' self-evaluations were moderately to poorly correlated with resident ratings (pre r = .20, post r = .08) and expert ratings (pre r = .43, post r = −.52). Residents and experts would need to evaluate 110 and 200 participants, respectively, for the observed effects to reach statistical significance.

Conclusions: It is possible to measure feedback skills in a clinical setting. Although traditional outcome measures showed a significant effect, demonstrating change in teaching behaviors used in practice will require larger scale studies than are typically undertaken.
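The required-sample-size figures in the Results section come from a power calculation. As a minimal sketch (not the authors' actual method), the sample size needed for a Pearson correlation of a given magnitude to reach significance can be estimated with the Fisher z-transformation; the correlation values, alpha, and power below are illustrative assumptions, not the study's computed inputs.

```python
import math
from statistics import NormalDist  # standard-library normal quantiles

def required_n(r, alpha=0.05, power=0.80):
    """Approximate N needed to detect correlation r with the given
    two-sided alpha and power, via the Fisher z-transformation:
        n = ((z_alpha/2 + z_power) / atanh(r))^2 + 3
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=.05
    z_power = NormalDist().inv_cdf(power)          # e.g. 0.84 for power=.80
    fisher_z = math.atanh(abs(r))                  # Fisher z of the effect size
    return math.ceil(((z_alpha + z_power) / fisher_z) ** 2 + 3)

# Illustrative: a pre-intervention resident-rating correlation of .20
# (as reported in the abstract) would require roughly 194 participants.
print(required_n(0.20))
```

Smaller correlations drive the required N up sharply, which is consistent with the abstract's conclusion that detecting behavior change in practice demands larger studies than faculty development evaluations typically mount.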
This study was funded by University of Ottawa Education Initiatives in Residency Education Fund and Royal College of Physicians and Surgeons of Canada Faculty Development Grant.
Notes
a, b, c: N = 10
∗p < .05
∗∗p < .001