
High-fidelity simulation is superior to case-based discussion in teaching the management of shock

Pages e1003-e1010 | Published online: 05 Nov 2012

Abstract

Background: Case-based discussion (CBD) is an established method for active learning in medical education. High-fidelity simulation has emerged as an important new educational technology. There are limited data from direct comparisons of these modalities.

Aims: The primary purpose of this study was to compare the effectiveness of high-fidelity medical simulation with CBD in an undergraduate medical curriculum for shock.

Methods: The subjects were 85 third-year medical students in their required surgery rotation. Scheduling circumstances created two equal groups. One group managed a case of septic shock in simulation and discussed a case of cardiogenic shock, the other group discussed septic shock and experienced cardiogenic shock through simulation. Student comprehension of the assessment and management of shock was then evaluated by oral examination (OE).

Results: Examination scores were superior for the type of shock experienced through simulation in all comparisons, regardless of which shock type it was. Scores associated with patient evaluation and invasive monitoring, however, showed no difference between groups or in crossover comparison.

Conclusions: In this study, students demonstrated better understanding of shock following simulation than after CBD. The secondary finding was the effectiveness of an OE with just-in-time deployment in curriculum assessment.

Introduction

High-fidelity simulation is coming of age in medical education and now requires critical evaluation. Some 40 years after early implementation in anesthesiology, there has been explosive growth of medical simulation across specialties, disciplines, and missions. The computer-based, model-driven, full-scale simulator is a technology at the forefront of this movement. This tool has matured precisely at a time when medical educators seek methods of active learning, competency demonstration, and individualized education within constantly evolving curricula. The purported advantages of simulation (SIM) are myriad and include learner-centered educational experiences in an environment free of risk to patients, repeated and controlled exposure to clinically rare events, individualization of learner experiences with standardization of competencies, and team development opportunities. The success of broad and long-standing applications in aviation, the military, and industry is cited in further support of SIM.

Developments thus far are similar to the evolution of other technologies in medical education and clinical practice. The first phase involved adaptation, with the development of SIM devices, task training, and crisis resource management by anesthesiologists, often translating practices from aviation (Howard 1992). Next, surveys of learners demonstrated the positive impact of realistic practice on learner confidence and attitudes toward both routine and rare, high-stakes events. Other specialties and disciplines are now voraciously adopting SIM at essentially all levels of education, training, and practice (Grenvik 2004; de Leng 2006; Gordon 2006; Shukla 2007; Fraser 2009; Fernandez 2010; Schout 2010). The current era is one in which simulation is being evaluated critically, sometimes in comparison with other educational methods, for educational outcomes, learner competencies, and clinical outcomes (McGaghie 2010; Cook 2011). With such investigation, this resource-intensive modality can be used with evidence-based prioritization and cost-effectiveness. In an effort to continue the critical analysis of simulation education, we report data comparing the comprehension of shock demonstrated by medical students following a SIM experience to that demonstrated after case-based discussion (CBD). Such comparison is particularly relevant now, in an era of frequent curricular reform and widespread simulation implementation.

Methods

All described activities occurred during one day of our third-year medical student (MS3) surgical clerkship rotations. The day started with an introductory lecture on the basic presentation, evaluation, and hemodynamic monitoring of shock. Students were then divided alphabetically by last name into two equal groups and attended an airway workshop, a CBD, and a SIM session. The schedule of the student groups is detailed in Table 1, wherein students who attended SIM for cardiogenic shock and CBD for septic shock are designated the SIMcardiac group, while students undergoing the contraposed assignments are designated SIMsepsis.

Table 1  Student schedule

The SIM and CBD sessions were based upon the same patients. The cardiogenic case was a patient with a previous cardiac history who developed nausea and dyspnea following a complex cholecystectomy. The sepsis case was that of an elderly man with confusion following admission for urinary obstruction. The formats of the two sessions were necessarily different. CBD sessions began with the case presentation, followed by group development of (1) differential diagnoses, (2) a patient assessment and diagnostic strategy, and (3) both immediate and intermediate management plans. The format of this portion of the experience was that of a problem-based learning discussion (PBLD). Unlike a typical PBLD, however, the students did not receive preparation materials, including the patient presentation, before the activities. Additionally, the faculty facilitator concluded the session with a review of key elements of shock.

For the SIM experience, the student team was given a brief patient history by their busy supervising house officer. They were then sent to evaluate the patient on the ward, whose symptoms had alarmed the floor nurse. At the patient's bedside, the students were presented initially with non-specific signs and symptoms in a patient whose condition deteriorated over about 10–15 minutes, first requiring stabilization and then transfer to a critical care unit. After alerting the supervising housestaff and during patient transfer, the students briefly regrouped to discuss management. A faculty member facilitated discussion of the group's working diagnoses and strategy. The group then attended to the patient in the critical care setting, during which they further developed and tested their differential diagnoses, implemented immediate management, and planned intermediate management strategies. A final debriefing was then conducted, focusing on key elements of shock recognition, stabilization, assessment, and treatment. Team function was sometimes addressed but not emphasized. The simulation scenarios were conceived by faculty and implemented by simulation specialists and educators. High-fidelity simulators (ECS simulator, formerly METI®, now CAE® Healthcare, Sarasota, Florida, USA) were utilized in mock ward and ICU bays.

Data collection

After completion of all sessions, students underwent an oral examination (OE). The OE was implemented to familiarize our students with this type of examination and for program evaluation. Most MS3s had no experience with OEs, yet their final surgery clerkship evaluation would include an OE at the end of the rotation six weeks later. Those students going on to surgery or anesthesiology would eventually undergo specialty OEs following residency training. As a program evaluation tool, we specifically sought to determine whether students could explain key elements of patient evaluation in crises, invasive monitoring, and the pathophysiology and management of shock following educational activities designed around these issues. The students were assured that the OE was “no-stakes” in terms of individual grading for the clerkship.

The OE was based on a written case presented to the student 5–10 minutes before the examination. The presentation (Appendix) included distinct possibilities of hypovolemia, ischemic cardiac dysfunction, and systemic inflammatory response. Anesthesiology faculty and senior residents administered the OE over about 30 minutes. By intent, the examiners' introduction to the scoring instrument (Appendix) was minimal. Examiners were told that a score of 3 represented MS3 level-appropriate understanding or decision-making, while scores of 2 or 1 represented progressively inferior performance. The expected performance of a new medical school graduate would be scored as 4, and any higher-level performance as 5. A few questions had specific scoring guidelines that superseded this schema. The examiners were asked to complete as many of the questions as possible, except those for hypovolemic shock. Examiners then reviewed the examination with the student, discussing relative strengths and weaknesses and reinforcing key concepts.

The results of the examination were separated into four topics for analysis. The first six questions regarded initial patient evaluation, prioritization, diagnostics, and differential diagnoses. These questions were termed the evaluation (EVAL) section, as marked in the Appendix. The next five questions, marked as MON, compared the monitoring modalities of central venous pressure and saturation, pulmonary artery catheter and mixed venous saturation, and echocardiography. Four questions each were then devoted to septic shock (SEP) and cardiogenic shock (CRD).

Data treatment

The results from all examinations were entered into statistical software (SPSS 16.0; SPSS, an IBM Company, Chicago, IL). Data were entered into a duplicate file and inconsistencies resolved by review of the original scoring sheets. These data were used for graphic representations of group performance during curricula evaluation. Institutional Review Board exemption was granted for the retrospective in-depth analysis of de-identified data. An administrative assistant stripped identifying data, and the de-identified and randomly reordered data were provided to author KEL for analysis.

Student scores were calculated both as mean topic scores and as indexed topic scores. The arithmetic mean of all non-missing scores, except those for hypovolemia, was calculated to determine each student's ALLavg score. The scores from each separate section described above were averaged to generate EVALavg, MONavg, SEPavg, and CRDavg. Individually indexed scores were calculated by dividing a student's mean score for each module by the arithmetic mean of all students for that section to generate, respectively, EVALi, MONi, SEPi, and CRDi. All calculated values were evaluated by the Kolmogorov–Smirnov (K-S) test. Results indicated that all values were normally distributed, and parametric methods were therefore utilized for data analysis.
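
As an illustration, the indexing and normality screen described above can be reproduced in a few lines of Python. This is a minimal sketch using invented scores, not the study data, and the scipy routines named here are one reasonable choice, not the software the authors used (they report SPSS).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical per-student mean scores for one section (e.g., SEPavg):
    # 85 students, on the 1-5 scale used by the examiners.
    sep_avg = rng.normal(loc=3.2, scale=0.5, size=85).clip(1, 5)

    # Indexed score (SEPi): each student's section mean divided by the
    # arithmetic mean of all students for that section.
    sep_i = sep_avg / sep_avg.mean()

    # Kolmogorov-Smirnov test against a normal distribution fitted to the
    # data, as a screen before applying parametric methods.
    d_stat, p_value = stats.kstest(
        sep_i, "norm", args=(sep_i.mean(), sep_i.std(ddof=1))
    )
    print(f"K-S D = {d_stat:.3f}, p = {p_value:.3f}")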

The data were first analyzed by Student's t-test to compare individual raw and indexed scores between groups, as summarized in Table 2. Since the circumstances of scheduling provided a crossover assignment pattern, the data were then analyzed to compare the indexed scores of students by paired t-test (Table 3). This analysis was performed for all students, and separately for the two different assignment groups (SIMcardiac versus SIMsepsis), blinded versus non-blinded examiners, and SIM-first versus CBD-first groups, as will be discussed below. The variable names CBDi and SIMi denote the indices for the types of shock experienced by a student through CBD and SIM, respectively. While this analysis was performed for all pair permutations for completeness, Table 3 includes only the EVALi/MONi and SIMi/CBDi pairings. This is consistent with (1) the expectation that scores for evaluation and monitoring would be similar since they were taught in lecture format and (2) the null hypothesis that there should be no difference between SIMi and CBDi if the sessions were equally effective. (There were, incidentally, no findings of statistical differences for any pairs not shown in Table 3.)
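
The crossover comparison itself reduces to a paired t-test of each student's SIMi against the same student's CBDi. The sketch below, again with invented data, shows the form of that test.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 85

    # Hypothetical paired indices: one CBDi and one SIMi per student.
    cbd_i = rng.normal(1.00, 0.15, n)
    sim_i = cbd_i + rng.normal(0.08, 0.12, n)  # SIM index shifted upward

    # The paired test exploits the crossover design: each student serves
    # as his or her own control.
    t_stat, p_value = stats.ttest_rel(sim_i, cbd_i)
    print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")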

Table 2  Summary of OE topic results

Table 3  Paired sample t-test of all subjects and various sub-groups as noted

Results

Figure 1 presents a graphical representation of SIMi versus CBDi. A summary of the parameters calculated as described above is presented in Table 2. The results of the analysis are summarized in Table 3. No difference was demonstrated between EVALi and MONi in any comparison. In all comparisons, however, SIMi and CBDi were statistically different.

Figure 1. Graph of SIMi versus CBDi labeled by the SIM group. The diagonal line represents SIMi = CBDi and the size of each marker is proportional to the number of cases at that point. Only 8 of the 85 subjects fall below this line, reflecting the predominant pattern of superior performance following simulation as compared to CBD.

Impact of study design

Critical issues of study design must be considered in the interpretation of our results; we attempted to evaluate their impact as described below.

Chronology of SIM and CBD experiences

A possible confounding factor is that three-fourths of the students experienced CBD before SIM. This raises the concern that the superior performance associated with the second experience, usually SIM, actually reflected positive impact from the first experience, usually CBD. The second experience could be more productive because of any number of factors, such as priming, activation of prior knowledge, cross-knowledge, or situational and team acclimation. To address this possibility, we separately considered the group of students who experienced CBD first (n = 64) and the group who experienced SIM first (n = 21). As shown in Table 3, both groups demonstrated statistically superior performance on the index for the shock type experienced through simulation (SIMi) compared to that for case discussion (CBDi).

Non-blinded examiners

The majority of examinations were performed by faculty from the SIM and/or CBD sessions and a minority by faculty from the airway workshop. The possibility of examiner bias is obvious. For this reason, the results of blinded examiners were compared with those of non-blinded examiners. As shown in Table 3, the same pattern of a statistical difference between SIMi and CBDi and no statistical difference between EVALi and MONi remained despite the smaller sample size. The effect size for SIMi compared to CBDi was, in fact, higher amongst blinded examiners.

OE instrument

We used a new OE instrument without prior validation, which demands analysis of the instrument's performance. Face and content validity are reflected in the examination design. The primary purposes of the OE were to prepare students for later high-stakes examinations and to support program evaluation. The questions were derived from the educational goals and objectives of this medical student experience. The tool was thus not adapted from a prior use but created de novo for this experience.

Construct validity can be approached more quantitatively. Authors KEL and CJS began using the OE as part of their educational activities with both senior medical students and housestaff during the study period. Senior medical students and junior off-service residents were examined during rotations on the anesthesiology service without the benefit of focused educational experiences immediately preceding the OE. As shown in Figure 2, there is a strong association between learner level and mean raw score. Pearson's correlation coefficient was 0.71 for ALLavg, 0.65 for EVALavg, 0.61 for MONavg, 0.54 for SEPavg, and 0.66 for CRDavg. These findings were statistically significant, with p < 0.0001 for all parameters.
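
The association reported here is an ordinary Pearson coefficient between an ordinal coding of learner level and mean raw score. A minimal sketch follows; the level codes and score pairings are hypothetical stand-ins for the actual examinees.

    from scipy import stats

    # Learner level coded ordinally (e.g., MS3 = 3, MS4 = 4, PGY1 = 5,
    # and so on); the pairing with ALLavg scores below is invented.
    level   = [3, 3, 3, 4, 4, 5, 5, 6, 6, 7]
    all_avg = [2.8, 3.1, 3.0, 3.4, 3.6, 3.9, 3.7, 4.1, 4.3, 4.4]

    r, p = stats.pearsonr(level, all_avg)
    print(f"Pearson r = {r:.2f}, p = {p:.4f}")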

Figure 2. Mean scores for overall OE and individual topics for different learner levels. MS = Medical Student, PGY = Post Graduate Year of housestaff. Examinations by authors CJS and KEL.

Discussion

The chief finding of this study is that students demonstrated better understanding of shock following a SIM experience than following a CBD experience. The secondary finding is that a new OE tool with just-in-time implementation and without faculty development provided meaningful information for program evaluation. Our study design carries both limitations and advantages. The study is retrospective, and there was no randomization of subjects. Conversely, the results reflect the normal behavior of faculty and students in small group sessions, without self-selection for study participation or the likelihood of variously described Hawthorne effects (Holden 2001).

Our data include results from non-blinded examiners. For this reason, blinded and non-blinded examiner results were separated and analyzed as described above. Blinded examiners produced the same pattern of statistically significant differences between SIMi and CBDi and non-significant differences between EVALi and MONi as did the non-blinded examiners. The effect size within the blinded group was much larger, as it must be for significance to persist in the smaller group. These findings indicate that non-blinded examiners, at the very least, did not demonstrate bias towards SIM when compared to the patterns of blinded examiners.

Similarly, statistically superior performance was associated with the SIM experience regardless of the order in which the two different methods were experienced, although the difference was less when SIM preceded CBD.

By convention, educational encounters with effect sizes of 0.2, 0.5, and 0.8 are considered to have had small, moderate, and large impact, respectively, in qualitative terms (Colliver 2000). Despite a long-standing and central role in the active learning movement in medical education, PBLD has typically been found to have a small or moderate effect size (Hartling 2010). The effect size of SIM compared to CBD was large in our data (Cohen's d was 0.68 for septic shock and 0.89 for cardiogenic shock). Because the actual implementation of PBLD in medical education is as variable today as when these inconsistencies were discussed a decade ago (Lloyd-Jones 1998; Smits 2002), it is impossible to compare our CBD to a non-existent “standard” PBLD. If the CBD in this study is considered a PBLD, then our study shows large differences between SIM and PBLD, a gold standard of learner-centered education. Considering the interactive, small-group nature of the CBD, it would be extreme to consider the experience merely a lecture. But even in this most conservative interpretation, the effect size of SIM is still greater than that demonstrated by PBLD in most studies.
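
For concreteness, one common form of Cohen's d is the difference in means over the pooled standard deviation. The sketch below computes it for hypothetical SIMi and CBDi arrays; the paper does not specify which variant of d was used, so this form is an assumption.

    import numpy as np

    def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
        """Cohen's d: mean difference over the pooled standard deviation."""
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        return (a.mean() - b.mean()) / pooled_sd

    rng = np.random.default_rng(2)
    sim_i = rng.normal(1.05, 0.15, 85)  # hypothetical SIM indices
    cbd_i = rng.normal(0.95, 0.15, 85)  # hypothetical CBD indices
    print(f"d = {cohens_d(sim_i, cbd_i):.2f}")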

These findings corroborate recent work demonstrating improved educational outcomes following SIM as compared to PBLD, also amongst medical students (Steadman 2006). Further, our study complements prior studies with (1) a large number of subjects, (2) a method of assessment different from the educational modalities being investigated, and (3) the enhanced statistical power of a crossover analysis. Critical analysis of SIM's effectiveness, particularly in comparison to CBD, is timely. The current exponential growth in simulation centers will be associated with wider integration of SIM experiences into medical education. Of note, institutional pioneers of the PBLD movement have recently described the integration of SIM into their educational programs (McMahon 2005; Neville 2007; Gordon 2010). However, the extensive equipment, space, time, and faculty resources required by SIM demand selective integration. It is therefore imperative that curriculum design be based on data regarding areas of proven efficacy (and non-efficacy) for SIM, especially where it can be compared to other active-learning methodologies.

The secondary finding is that a simple OE with just-in-time deployment demonstrated adequate performance for program evaluation. The development of such an instrument is also timely. We live in an era of seemingly constant curricular refinement (Hecker 2009; MacCarrick 2009; Patel 2009; Snelgrove 2009), even as calls are made for fundamental retooling of medical education (Irby 2010; Prislin 2010; Taylor 2010). Educators will need metrics that rapidly and reliably measure the impact of curriculum changes. The OE utilized in this study appeared to meet these requirements, as well as providing an experience identified by faculty as educationally important for our students in its own right. It is important to note that the OE was not developed as a grading tool for individual students and is not recommended as such.

In summary, our data show that SIM resulted in a markedly superior demonstration of understanding of key clinical concepts compared with CBD, and that a simple OE provided meaningful data for curricular evaluation.

Acknowledgments

This study evolved from KEL's Scholar Project at the Harvard-Macy Institute's Program for Educators in Health Professions. It was funded in part by the Department of Anesthesiology, Academy of Distinguished Educators, and School of Medicine of the University of Virginia.

The authors thank former anesthesiology residents Aric Jorgenson, Shawn Tritt, and Christopher Wyatt for serving as faculty and examiners, Patty Jenkins for administrative support, and the Harvard Macy Institute faculty and scholars who generously provided advice and feedback.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the article.

References

  • Colliver JA. Effectiveness of problem-based learning curricula: Research and theory. Acad Med 2000; 75: 259–266
  • Cook DA, Hatala R, Brydges R, Zendejas B, Szostek JH, Wang AT, Erwin PJ, Hamstra SJ. Technology-enhanced simulation for health professions education: A systematic review and meta-analysis. JAMA 2011; 306: 978–988
  • de Leng BA, Dolmans DHJM, Muijtjens AMM, van der Vleuten CPM. Student perceptions of a virtual learning environment for a problem-based learning undergraduate medical curriculum. Med Educ 2006; 40: 568–575
  • Fernandez A. Simulation in perfusion: Where do we go from here? Perfusion 2010; 25: 17–20
  • Fraser K, Peets A, Walker I, Tworek J, Paget M, Wright B, McLaughlin K. The effect of simulator training on clinical skills acquisition, retention and transfer. Med Educ 2009; 43: 784–789
  • Gordon JA, Brown DFM, Armstrong EG. Can a simulated critical care encounter accelerate basic science learning among preclinical medical students? A pilot study. Simul Healthc J Soc Med Simul 2006; 1: 13–17
  • Gordon JA, Hayden EM, Ahmed RA, Pawlowski JB, Khoury KN, Oriol NE. Early bedside care during preclinical medical education: Can technology-enhanced patient simulation advance the Flexnerian ideal? Acad Med 2010; 85: 370–377
  • Grenvik A, Schaefer J. From Resusci-Anne to Sim-Man: The evolution of simulators in medicine. Crit Care Med 2004; 32: S56–S57
  • Hartling L, Spooner C, Tjosvold L, Oswald A. Problem-based learning in pre-clinical medical education: 22 Years of outcome research. Med Teach 2010; 32: 28–35
  • Hecker K, Violato C. Medical school curricula: Do curricular approaches affect competence in medicine? Fam Med 2009; 41: 420–426
  • Holden JD. Hawthorne effects and research into professional practice. J Eval Clin Pract 2001; 7: 65–70
  • Howard SK, Gaba DM, Fish KJ, Yang G, Sarnquist FH. Anesthesia crisis resource management training: Teaching anesthesiologists to handle critical incidents. Aviat Space Environ Med 1992; 63: 763–770
  • Irby DM, Cooke M, O'Brien BC. Calls for reform of medical education by the Carnegie Foundation for the Advancement of Teaching: 1910 and 2010. Acad Med 2010; 85: 220–227
  • Lloyd-Jones G, Margetson D, Bligh JG. Problem-based learning: A coat of many colours. Med Educ 1998; 32: 492–494
  • MacCarrick G. Curriculum reform: A narrated journey. Med Educ 2009; 43: 979–988
  • McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. A critical review of simulation-based medical education research: 2003–2009. Med Educ 2010; 44: 50–63
  • McMahon GT, Monaghan C, Falchuk K, Gordon JA, Alexander EK. A simulator-based curriculum to promote comparative and reflective analysis in an internal medicine clerkship. Acad Med 2005; 80: 84–89
  • Neville AJ, Norman GR. PBL in the undergraduate MD program at McMaster University: Three iterations in three decades. Acad Med 2007; 82: 370–374
  • Patel VL, Yoskowitz NA, Arocha JF. Towards effective evaluation and reform in medical education: A cognitive and learning sciences perspective. Adv Health Sci Educ 2009; 14: 791–812
  • Prislin MD, Saultz JW, Geyman JP. The generalist disciplines in American medicine one hundred years following the Flexner Report: A case study of unintended consequences and some proposals for post-Flexnerian reform. Acad Med 2010; 85: 228–235
  • Schout BMA, Hendrikx AJM, Scheele F, Bemelmans BLH, Scherpbier AJJA. Validation and implementation of surgical simulators: A critical review of present, past, and future. Surg Endosc 2010; 24: 536–546
  • Shukla A, Kline D, Cherian A, Lescanec A, Rochman A, Plautz C, Kirk M, Littlewood KE, Custalow C, Srinivasan R, et al. A simulation course on lifesaving techniques for third-year medical students. Simul Healthc J Soc Med Simul 2007; 2: 11–15
  • Smits PBA, Verbeek JHAM, de Buisonje CD. Problem based learning in continuing medical education: A review of controlled evaluation studies. BMJ (Clin Res Ed.) 2002; 324: 153–156
  • Snelgrove H, Familiari G, Gallo P, Gaudio E, Lenzi A, Ziparo V, Frati L. The challenge of reform: 10 Years of curricula change in Italian medical schools. Med Teach 2009; 31: 1047–1055
  • Steadman RH, Coates WC, Huang YM, Matevosian R, Larmon BR, McCullough L, Ariel D. Simulation-based training is superior to problem-based learning for the acquisition of critical assessment and management skills. Crit Care Med 2006; 34: 151–157
  • Taylor CR. Perspective: A tale of two curricula: A case for evidence-based education? Acad Med 2010; 85: 507–511

Appendix

Instructions: Please read the case below. You will then spend 10–15 minutes discussing both this case and several general concepts that were addressed during today's activities.

Your resident asks you to evaluate a patient whose nurse has called from the intensive care unit. The 62-year-old patient is POD #1 from emergency surgery for drainage of an abdominal abscess. She had a partial colon resection two weeks ago for cancer and was admitted yesterday with progressive fevers and malaise; workup resulted in a radiographic diagnosis of the abscess. Yesterday's surgery was uneventful, but the patient's status has deteriorated today with declining blood pressure, elevated heart rate, and poor urine output. Vital signs are reported as BP 92/60 and HR 124. Urine output over the last 4 hours has totaled 100 ml.

Past medical history includes:

  • Coronary artery disease – Right coronary bare metal stent placed two years ago for acute ischemia. Good exercise tolerance (>5 METs) since and a negative stress test three months ago.

  • Adenocarcinoma of colon – Discovered by colonoscopy and resected as described with negative nodes and biomarkers.

  • Outpatient medications – Metoprolol, lisinopril, and baby aspirin.
