Original Research

Is Asking Questions on Rounds a Teachable Skill? A Randomized Controlled Trial to Increase Attendings’ Asking Questions

Pages 921-929 | Published online: 01 Dec 2020

Abstract

Background

Morning bedside rounds remain an essential part of Internal Medicine residency education, but rounds vary widely in terms of educational value and learner engagement.

Objective

To evaluate the efficacy of an intervention designed to increase the number and variety of questions attendings ask at the bedside, and to assess its impact on learners.

Design

We conducted a randomized, controlled trial to evaluate the efficacy of our intervention.

Participants

Hospitalist attendings on the general medicine service were invited to participate. Twelve hospitalists were randomized to the experimental group and ten hospitalists to the control group.

Intervention

A one-hour interactive session that taught and modeled a method of asking questions using a non-medical case, followed by practice with role-plays of medical cases.

Main Measures

Our primary outcome was the number of questions asked by attendings during rounds. Blinded reviewers evaluated audio-video recordings of rounds to quantify the number of questions asked, the type of each question, and to whom it was addressed. Using anonymous surveys of residents, patients, and nurses, we assessed whether learners found rounds worthwhile.

Key Results

Blinded analysis of the audio-video recordings demonstrated significantly more questions asked by attendings in the experimental group compared to the control group (mean number of questions 23.5 versus 10.8, p<0.001), with significantly more questions asked of the residents (p=0.003). Residents rated morning bedside rounds with the experimental attendings as significantly more worthwhile compared to rounds with the control group attendings (p=0.009).

Conclusion

Our study findings highlight the benefits of a one-hour intervention to teach faculty a method of asking questions during bedside rounds. This educational strategy brought significantly more resident voices into the conversation at the bedside. Residents who rounded with attendings in the experimental group were more likely to “strongly agree” that bedside rounds were “worthwhile”.

Introduction

Morning bedside rounds remain an essential part of Internal Medicine residency education, but bedside rounds vary widely in terms of educational value and learner engagement.Citation1–Citation6 When done well, attendings impart evidence-based, relevant teaching that encourages clinical reasoning and impacts patient care.Citation1 At other times, rounds can be lengthy and task-oriented, with little team engagement.Citation2 Merritt et al reported that more than half of the learners in their study were dissatisfied with the teaching on rounds.Citation2 With increasingly high patient censuses and time pressures, others have noted that it is important for education during rounds to be focused and high-yield.Citation7,Citation8

In 1997, Jack Ende, M.D. suggested that an effective teaching strategy was to ask questions as a discussion leader at Harvard Business School might do.Citation7,Citation9,Citation10 The conceptual framework that underlies this strategy is the case method of teaching, which uses the “question, listen, respond” technique in schools of business, law and education.Citation9,Citation10 These schools have emphasized formalized training for their faculty members’ role as discussion leaders, using the teaching strategy of “question, listen, respond”, summarizing and providing key takeaways.Citation9–Citation13 Our conceptual framework translates this teaching strategy to the inpatient bedside, where the attending acts as a discussion leader to encourage the frank exchange of data, differential diagnoses, explanations, rationale and prioritization of planned procedures and/or studies among all participants at the bedside, including the patient, resident, nurse, pharmacist and medical student.

In 2018, we published a pilot study assessing whether asking questions on rounds could improve learner engagement and whether learners felt rounds were worthwhile.Citation11 We demonstrated that the “Asking Questions” intervention using the “question, listen, and respond” techniqueCitation9,Citation10,Citation12 was associated with a significant improvement in perceived educational value and engagement on rounds compared to pre-intervention data.Citation11 However, because there was no randomization, no definitive relationship could be established between the intervention and the improvement in ratings.Citation11

To demonstrate that “Asking Questions on Rounds” is a teachable skill and to evaluate the efficacy of this teaching strategy, we designed a randomized, controlled trial of this intervention for hospitalist attendings.

Methods

Setting, Participants and Randomization

All 31 hospitalists scheduled to rotate on Internal Medicine resident teaching services at Brigham and Women’s Hospital during the period beginning September 28, 2018, and ending March 28, 2019, were invited by email to be part of the study. Hospitalists were told that they would receive an honorarium of $200 if they participated. Three hospitalists did not wish to participate and “opted out”. Faculty were excluded from the study if they were directly involved as authors (LW, CR; N = 2) or had attended an intervention session during the pilot study two years previously (N = 2). The remaining 24 hospitalists who rotated on a teaching service during the 6-month study window were randomized to the experimental group or the control group and were notified of their assignment. Two hospitalists randomized to the control group never responded to the email invitation and were assumed not to wish to participate. All remaining attendings in the experimental group (N = 12) and the control group (N = 10) agreed to be audio-video recorded during one of their morning bedside rounds. Those in the control group received no training in the intervention of asking questions. Those in the experimental group attended one 60-minute intervention session prior to their scheduled rotation.

Description of the Intervention

The intervention “Asking a Variety of Questions on Rounds” was a one-hour interactive program delivered in a hospital conference room during four time slots to accommodate the twelve experimental group hospitalists’ schedules. The program consisted of four parts:

1. The program was led by Dr. James Honan, Ed.D., an expert teacher of the question, listen, and respond method of discussion leadership.Citation11,Citation12 As in the pilot study, Dr. Honan started the hour with a 30-minute discussion of “The French Lesson”, a copyrighted Harvard Business School teaching case (Parts A and B) by Abby Hansen (Case No. 9–384-066).Citation11 The case describes a French class that goes significantly awry for one student, and Dr. Honan’s questions revolved around the reasons why.

2. Following the interactive case discussion, Dr. Honan outlined the lesson plan he had used, with the exact sequence of questions. He then gave examples of each type of question on PowerPoint slides (Appendix 1). Each attending was given a laminated card (4 × 6 inches) listing each type of question with examples (Appendix 2).

3. For the next 20 minutes, participants practiced asking questions in two role-play scenarios modeling an intern’s presentation to the attending and resident on morning bedside rounds. During the role-play, each participant was asked to use one or two new types of questions. To make this task easier, each type of question was printed in large font on 5 × 7 inch cardstock, creating a set of ten cards, one for each of the ten question types. Each attending was given a set of question cards to role-play the scripted case presentations of Frank Miller (Appendix 3) and Laura Chen (Appendix 4).

4. During the last 5 minutes, attendings had the opportunity to ask questions about the intervention and how to successfully implement it into their own bedside rounds.

Protocol and Consents for Evaluation of Bedside Rounds

Partners Human Research Committee (IRB), Boston, Massachusetts, approved this research project on May 1, 2018 (Protocol # 2015P001422BWH, Amendment 7).

The IRB verbal consent form was read and given to all team members (attendings, pharmacists, residents, nurses, medical students) and patients. Each team member was asked to sign the Brigham and Women’s Hospital Audio-Video Recording consent form in order to be audio-video recorded during morning bedside rounds.

Audio-Video Recording Protocol and Surveys

Morning bedside rounds were audio-video recorded by one of the authors (HS) for every attending in the control group (N = 10) and the experimental group (N = 12). Attendings in the experimental group had completed the intervention by the time of their recorded rounds.

Prior to the scheduled recording of rounds, patients who would be appropriate and willing to participate in audio-video recording of bedside rounds were identified. In total, 68 patients were invited to participate in one of the 22 audio-video recorded morning bedside rounds; 48 patients accepted and were audio-video recorded, and 20 declined.

On the day of recording, HS arrived before the start of rounds to ensure patient and team member consents were signed. The attendings were instructed to round at the bedside and were audio-video recorded for the entire bedside rounds of their first and second consented patients only.

There were two exceptions: 1) because only one patient consented on the first day, one attending was recorded on two different days to capture two separate patients; and 2) in one attending’s case, sensitive and key issues that could not be discussed inside the room were discussed outside it, and this discussion was included in the audio-video recording in addition to the bedside portion of rounds.

After consultation with our statistician (SP), the decision was made to include only the first and second patients for each attending to standardize the audio-video recordings.

Audio-Video Recording Analysis by Two Independent Judges

All audio-video recordings were reviewed independently by HS for quality and completeness and edited with Angel Ayala (AA) of the Audio-Visual Department, Brigham and Women’s Hospital, to uniformly raise the volume on difficult-to-hear conversations at the bedside. A random number (from a random number table) was inserted onto each audio-video recording along with the date of the rounds.

Two independent physician judges (JG from Pathology and RM from Radiology), who were not familiar with the hospitalist attendings, reviewed the audio-video recordings. Each recording was analyzed for 1) the number of questions asked, 2) the types of questions asked, and 3) to whom the questions were addressed (Appendix 5). Before analyzing all the recordings independently, each judge submitted to HS a preliminary analysis of the same randomly chosen attending recording using the scoring sheet, which allowed each judge to ask one-on-one questions about scoring and receive answers from HS.

Team Surveys

All team members, including residents, nurses, and pharmacists, as well as each patient who was filmed, were asked by HS to complete an anonymous survey (Appendix 6). Nurses and pharmacists were asked to complete their surveys following rounds and leave them face down at the nursing station for anonymity. Residents were asked to leave their completed surveys face down in the residents’ work room for anonymity.

Patient Surveys

Each patient who was audio-video recorded was asked by HS to complete a paper survey immediately following rounds.

Statistical Methods

The reviews of audio-video recordings by the two independent judges were averaged for data analysis. There were no significant differences in question counts between Judge #1 and Judge #2.
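The paper does not state how agreement between the two judges was tested before averaging; as a minimal, hypothetical sketch, one plausible check is a paired t-test on the per-recording counts (all counts below are illustrative, not the study’s data):

```python
import numpy as np
from scipy import stats

# Hypothetical per-recording question counts from each judge (illustrative only).
judge1 = np.array([12, 24, 9, 18, 30, 11])
judge2 = np.array([13, 22, 10, 19, 28, 11])

# Paired t-test: do the two judges' counts differ systematically across recordings?
t, p = stats.ttest_rel(judge1, judge2)
print(f"paired t = {t:.2f}, p = {p:.3f}")

# Averaged counts carried forward into the group comparisons, as described above.
mean_counts = (judge1 + judge2) / 2
```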

Box plot visualizations of the distribution of scores indicated that three attendings (two in the experimental group and one in the control group) asked markedly more questions than all other attendings. Data from these three outlier attendings were removed prior to the statistical analysis of the questions. The mean numbers of 1) total questions asked, 2) each type of question asked, and 3) questions asked of specific members of the team were compared between experimental and control attendings using Independent Samples t-Tests.
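A minimal sketch of this pipeline, assuming the standard box-plot rule (points beyond 1.5 × IQR flagged as outliers) and the default equal-variance t-test; the paper specifies neither choice, and the counts below are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical per-attending question counts (not the study's data).
experimental = np.array([18, 25, 22, 30, 21, 27, 24, 19, 26, 23, 55, 60])
control = np.array([9, 12, 8, 11, 13, 10, 9, 12, 11, 45])

def iqr_outliers(x):
    """Flag points beyond 1.5 * IQR of the quartiles (the usual box-plot rule)."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - 1.5 * iqr) | (x > q3 + 1.5 * iqr)

# Remove outlier attendings before the group comparison, as described above.
exp_clean = experimental[~iqr_outliers(experimental)]
ctl_clean = control[~iqr_outliers(control)]

# Independent-samples t-test on the cleaned per-attending counts.
t, p = stats.ttest_ind(exp_clean, ctl_clean)
print(f"t = {t:.2f}, p = {p:.4f}")
```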

All survey data measuring perceptions of rounds were analyzed using two-proportion Z-tests.
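A minimal sketch of a two-proportion z-test, using “strongly agree” counts inferred from the resident percentages reported in the Results below (59.5% of 42 ≈ 25; 29.2% of 24 ≈ 7); these inferred counts may not match the raw data, and for reference a one-sided version of this test on these counts gives p ≈ 0.009:

```python
from math import sqrt
from scipy.stats import norm

# Counts inferred from reported percentages (an assumption, not the raw data).
x1, n1 = 25, 42   # experimental: "strongly agree" count, respondents
x2, n2 = 7, 24    # control

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                        # pooled proportion under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error under H0
z = (p1 - p2) / se
p_two_sided = 2 * (1 - norm.cdf(abs(z)))
print(f"z = {z:.2f}, two-sided p = {p_two_sided:.3f}")
```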

The lengths of the audio-video recordings (in seconds) for experimental and control group rounds were compared using an Independent Samples t-Test of the means.

Results

Twenty-two Internal Medicine teams participated in audio-video recorded morning bedside rounds from 9/28/2018 to 3/28/2019. After removal of the three outlier attendings noted above, data from 19 teams were included in the final audio-video recording analyses (N = 10 attendings in the experimental group and N = 9 attendings in the control group for the question analyses).

Table 1 Number of Questions Asked

Table 2 Types of Questions Asked During Bedside Rounds

Table 3 To Whom Questions Were Asked During Bedside Rounds

Surveys were collected from the residents (N = 42 experimental, N = 24 control), nurses (N = 18 experimental, N = 19 control), and pharmacists (N = 5 experimental, N = 6 control) on all twenty-two teams. Three residents did not complete the surveys due to patient care duties. One nurse and two pharmacists declined to be audio-video recorded and stayed outside the patient’s room. The higher number of surveys for the experimental group was partly attributable to one experimental attending requiring two separate recording sessions because only one patient consented on the first day of rounds. On this particular service, the resident teams, with two residents and three interns, were also larger than on other medical floors.

Question Analysis

The experimental group attendings asked significantly more questions overall during rounds compared to control attendings (mean 23.5 versus 10.8, p<0.001) (Table 1).

Experimental attendings asked more questions of every type compared to control attendings, and this difference was significant for open-ended questions (p<0.001), extension questions (p=0.044), and challenge questions (p=0.021) (Table 2 and Appendix 2). While experimental attendings directed their increased number of questions to all members of the team, the numbers of questions directed at residents (p=0.003) and at patients (p=0.042) were significantly greater in the experimental group compared to the control group (Table 3). No significant difference between the experimental and control attendings was noted in the number of questions directed to nurses (Table 3).

Length of Rounds

The length of rounds was compared between the experimental and control group attendings. Experimental rounds averaged 965.2 seconds (16.09 minutes) and control rounds averaged 777.3 seconds (12.96 minutes), a mean difference of 187.9 seconds (3.13 minutes) that was not statistically significant (p=0.225).

Anonymous Survey Data on Perceptions of Rounds from Residents, Nurses, Pharmacists and Patients

Based on anonymous paper surveys after rounds, a greater percentage of residents on bedside rounding teams with attendings in the experimental group (N = 42) strongly agreed that morning bedside rounds were “worthwhile” compared to residents with attendings in the control group (N = 24) (59.5% versus 29.2%, p=0.009) (Table 4). There was no significant difference between groups in the percentage of residents who rated rounds as “engaging” (83.3% versus 66.7%, p=0.121) (Table 4).

Table 4 Resident Perception of Rounds Based on Anonymous Survey Data

Based on anonymous paper surveys, nurses rated morning bedside rounds with attendings in the experimental group similarly to rounds with attendings in the control group (N = 37; 18 experimental, 19 control) (Appendix 7).

There were also no significant differences in the patients’ survey data between the experimental and control groups (N = 48; 27 experimental, 21 control) (Appendix 8).

Survey data from pharmacists were not included in the statistical analyses due to small numbers (N = 11, 5 experimental and 6 control).

Survey Comments from Residents, Nurses and Pharmacists

Table 5 displays representative verbatim comments from anonymous residents, nurses and pharmacists in response to the survey question, “What would make morning rounds more vibrant, inclusive and high yield?” Comments are shown for both the experimental and control group attendings’ bedside rounds.

Table 5 Verbatim Resident, Nurse and Pharmacist Comments

Survey Comments from Patients

Appendix 9 shows verbatim anonymous survey comments from patients in response to the question “What did you like best about today’s bedside rounds?” Out of the many comments written on the paper surveys, we focused on those that mentioned the use of questions and answers during bedside rounds for both the experimental and control group attendings.

Discussion

This randomized controlled trial of a one-hour faculty development intervention significantly increased the number of questions experimental attendings asked on rounds, demonstrating that asking questions during bedside rounds is a teachable skill. Residents who rounded with attendings in the experimental group were more likely to “strongly agree” that bedside rounds were “worthwhile” compared to residents who rounded with attendings in the control group.

While significantly more “open-ended” or clarifying questions were asked by the experimental attendings, “challenge” and “extension” questions, which are “higher level” and “analytic”, were also significantly increased in the experimental group. Berbano et al studied the impact of the Stanford Faculty Development Program on ambulatory teaching.Citation14 Their results differ from ours in that they demonstrated significantly fewer questions asked of standardized learners after their ambulatory faculty development program.Citation14 However, their results are similar to ours in that Berbano et al noted an increase in the proportion of “higher-level”, “analytic” questions.Citation14 Our data also show a significant increase in “higher level”, “analytic” questions of the “challenge” and “extension” type after faculty development.

In 1997, Dr. Jack Ende encouraged attendings at the bedside to consider themselves discussion leaders who engage and educate through a series of questions that encourage a thoughtful and inclusive dialog on important issues.Citation7,Citation9,Citation10 The underlying conceptual framework for the “question, listen, respond” method of teaching derives from business, law and education schools, where the case method is the standard approach for discussion and seminar classes to involve everyone in the discussion process.Citation9,Citation10,Citation13 In 2003, Professor David Garvin discussed the differences and similarities among Harvard Law School, Harvard Business School and Harvard Medical School in their use of the case method for teaching.Citation13 He noted: “The case-based, guided inquiry approach uses questions to steer the discussion of pre-identified learning issues and assigned preparatory readings” and

All professional schools face the same difficult challenge: how to prepare students for the world of practice. Time in the classroom must somehow translate into real-world activity: how to diagnose, decide, and act.

Faculty development interventions have been studied extensively, but few studies have used rigorous methods to evaluate these efforts. SteinertCitation15 and LeslieCitation16 provide detailed and systematic reviews of faculty development initiatives. Of the 53 studies evaluated by Steinert, only six were randomized controlled trials, including three that used audio-video recordings of teaching sessions pre- and post-feedback and/or seminars to improve teaching skills.Citation15,Citation17–Citation19 Leslie’s review noted that self-reported changes in teaching were the most frequently assessed outcomes of faculty development programs.Citation16 Rabinowitz recommended improving attending teaching strategies and understanding the purpose of rounds.Citation20,Citation21 Rabinowitz specifically noted the paucity of data guiding faculty development with regard to conducting bedside rounds to “maximize clinical education while minimizing inefficiencies” in today’s hurried and fragmented inpatient climate.Citation20

To our knowledge, our study is the first to use a randomized controlled design and rigorous evaluation methods to assess the impact of a faculty development intervention that teaches attendings to ask an increased number and variety of questions of more team members during bedside rounds. We did not rely on self-reported assessments or resident survey data alone. Instead, we used audio-video recordings and blinded, independent physician judges to evaluate the number of questions, the type of each question, and to whom it was asked, using an objective assessment tool.

This study has several limitations. The intervention was delivered and evaluated at a single institution. Surveys were distributed by a senior educator (HS), and although they were collected anonymously, residents, nurses and pharmacists might have been less likely to decline participation. Although the study was sufficiently powered for our primary outcome, the number of participants was still small and susceptible to individual variations within our hospital medicine group. We did not collect self-assessment data from the attendings and did not conduct pre-intervention and post-intervention analyses of the number of questions asked. The intervention spanned several months, and since many hospitalists work alongside one another, there is a risk of cross-contamination that could bias our results towards the null. While our study increased the number of questions asked overall, we did not classify the quality of each question asked by attendings into analytic versus clarifying/recall questions, as was previously done by Berbano et al.Citation14

In summary, our study used rigorous design and evaluation methods to show that a one-hour faculty development session focused on “Asking Questions” was associated with significantly more questions asked by attendings during rounds. Residents who rounded with attendings in the experimental group were more likely to “strongly agree” that bedside rounds were “worthwhile” compared to residents who rounded with attendings in the control group. In the future, we hope to tailor the program to emphasize the importance of routinely including nurses with questions about their patients as part of rounds. We look forward to scaling up our one-hour “Asking Questions” program so that all attendings and residents have the opportunity to learn how to ask more questions of each other, nurses and patients on morning rounds.

Compliance with Ethical Standards

Partners Human Research Committee (IRB), Boston, Massachusetts, approved this research project on May 1, 2018.

Acknowledgments

We are extremely grateful to the outstanding and collegial hospitalists at Brigham and Women’s Hospital listed below who joined us in the randomized controlled trial as experimental group or control group attendings. Their names, in alphabetical order, are: Ali Bahadori, M.D., Ebrahim Barkoudah, M.D., M.P.H., Ralph Blair, M.D., Anuj Dalal, M.D., Matthew DiFrancesco, M.D., Morgan Esperance, M.D., M.P.H., Matthew Gartland, M.D., Clare Horkan, M.B., B.Ch., William Martin-Doyle, M.D., M.P.H., Michelle Morse, M.D., M.P.H., Stephanie Mueller, M.D., M.P.H., Ranganath Papanna, M.D., M.S., Rajesh Patel, M.D., M.P.H., Joseph Rhatigan, M.D., John Ross, M.D., David Rubins, M.D., Agustina Saenz, M.D., M.P.H., Adam Schaffer, M.D., M.P.H., Jeffrey Schnipper, M.D., M.P.H., Evan Shannon, M.D., Anant Vasudevan, M.D., and Matthew Vitale, M.D.

We thank members of the Brigham and Women’s Hospital Teaching on Rounds Committee for their excellent advice and suggestions: Dr. Nadaa Ali, Dr. Christopher Cannon, Dr. James Colbert, Dr. Julia Caton, Dr. Elliot Israel, Dr. Joel Katz, Dr. Natalia Khalaf, Dr. Walter Kim, Dr. Anthony Komaroff, Patricia McCormick, MBA, Dr. Vanessa Mitsialis, Dr. Jennifer Nayor, Dr. Alyssa Perez, Dr. Taha Sonny Qazi, and Dr. Marshall Wolf.

We greatly appreciate the expert help of Angel Ayala, A.A., Department of Audio-Visual Communications at Brigham and Women’s Hospital in Boston.

Disclosure

Jeffrey D Goldsmith and Rachna Madan contributed equally as co-third authors for this study. The authors report no conflicts of interest in this work.

References

  • Stickrath C, Noble M, Prochazka A, et al. Attending rounds in the current era: what is and is not happening. JAMA Intern Med. 2013;173(12):1084–1089. doi:10.1001/jamainternmed.2013.6041
  • Merritt FW, Noble MN, Prochazka AV, et al. Attending rounds: what do the all-star teachers do? Med Teach. 2017;39(1):100–104. doi:10.1080/0142159X.2017.1248914
  • Janicik RW, Fletcher KE. Teaching at the bedside: a new model. Med Teach. 2003;25:127–130.
  • Abdool MA, Bradley D. Twelve tips to improve medical teaching rounds. Med Teach. 2013;35(11):895–899. doi:10.3109/0142159X.2013.826788
  • Gonzalo JD, Heist BS, Duffy BL, et al. Art of bedside rounds: a multi-center qualitative study of strategies used by experienced bedside teachers. J Gen Intern Med. 2013;28(3):412–420. doi:10.1007/s11606-012-2259-2
  • Crumlish CM, Yialamas MA, McMahon GT. Quantification of bedside teaching by an academic hospitalist group. J Hosp Med. 2009;4(5):302–307. doi:10.1002/jhm.540
  • Ende J. What if Osler were one of us? J Gen Intern Med. 1997;12(Supplement 2):S41–S48. doi:10.1046/j.1525-1497.12.s2.6.x
  • Carlos WG, Kritek PA, Clay AS, et al. Teaching at the bedside: maximal impact in minimal time. Ann Am Thorac Soc. 2016;13:545–548.
  • Christensen CR. Premises and practices of discussion teaching. In: Christensen CR, Garvin DA, Sweet A, editors. Education for Judgment: The Artistry of Discussion Leadership. Boston, MA: Harvard Business School Press; 1991:15–33.
  • Christensen CR. The discussion teacher in action: questioning, listening, and response. In: Christensen CR, Garvin DA, Sweet A, editors. Education for Judgment: The Artistry of Discussion Leadership. Boston, MA: Harvard Business School Press; 1991:153–170.
  • Shields HM, Pelletier SR, Roy CL, Honan JP. Asking a variety of questions on walk rounds: a pilot study. J Gen Intern Med. 2018;33(6):969–974. doi:10.1007/s11606-018-4381-2
  • Shields HM, Guss D, Somers SC, et al. A faculty development program to train tutors to be discussion leaders rather than facilitators. Acad Med. 2007;82(5):486–492. doi:10.1097/ACM.0b013e31803eac9f
  • Garvin D. Making the case: professional education for the world of practice. Harv Mag. 2003;106:56–65.
  • Berbano EP, Browning R, Pangaro L, et al. The impact of the Stanford Faculty Development Program on ambulatory teaching behavior. J Gen Intern Med. 2006;21(5):430–434. doi:10.1111/j.1525-1497.2006.00422.x
  • Steinert Y, Mann K, Centeno A, et al. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME Guide No. 8. Med Teach. 2006;28(6):497–526. doi:10.1080/01421590600902976
  • Leslie K, Baker L, Egan-Lee E, et al. Advancing faculty development in medical education: a systematic review. Acad Med. 2013;88(7):1038–1045. doi:10.1097/ACM.0b013e318294fd29
  • Skeff K. Evaluation of a method for improving the teaching performance of attending physicians. Am J Med. 1983;74(3):465–470. doi:10.1016/0002-9343(83)90351-0
  • Skeff KM, Stratos G, Campbell M, et al. Evaluation of the seminar method to improve clinical teaching. J Gen Intern Med. 1986;1(5):315–322. doi:10.1007/BF02596211
  • Litzelman DK, Stratos GA, Mariott DJ, et al. Beneficial and harmful effects of augmented feedback on physicians’ clinical-teaching performance. Acad Med. 1998;73(3):324–332. doi:10.1097/00001888-199803000-00022
  • Rabinowitz R, Farnan J, Hulland O, et al. Rounds today: a qualitative study of internal medicine and pediatrics residents’ perceptions. J Grad Med Educ. 2016;8(4):24–31. doi:10.4300/JGME-D-15-00106.1
  • Hulland O, Farnan J, Rabinowitz R, et al. What’s the purpose of rounds? A qualitative study examining the perception of faculty and students. J Hosp Med. 2017;12(11):892–897. doi:10.12788/jhm.2835