Education and Practice

Compassionate Options for Pediatric EMS (COPE): Addressing Communication Skills

Pages 334-343 | Received 30 Aug 2016, Accepted 10 Nov 2016, Published online: 19 Jan 2017

Abstract

Introduction: Each year, 16,000 children suffer cardiopulmonary arrest, and in one urban study, 2% of pediatric EMS calls were attributed to pediatric arrests. This indicates a need for enhanced educational options for prehospital providers that address how to communicate with families in these difficult situations. In response, our team developed a cellular phone digital application (app) designed to assist EMS providers in self-debriefing these events, thereby improving their communication skills. The goal of this study was to pilot the app using a simulation-based investigative methodology. Methods: Video and didactic app content was generated using themes developed from a series of EMS focus groups and evaluated using volunteer EMS providers assessed during two identical nonaccidental trauma simulations. Intervention groups interacted with the app as a team between assessments, and control groups debriefed during that period as they normally would. Communication performance and gap analyses were measured using the Gap-Kalamazoo Communication Skills Assessment Form. Results: A total of 148 subjects divided into 38 subject groups (18 intervention groups and 20 control groups) were assessed. Comparison of initial intervention group and control group scores showed no statistically significant difference in performance (2.9/5 vs. 3.0/5; p = 0.33). Comparisons made during the second assessment revealed a statistically significant improvement in the intervention group scores, with a moderate to large effect size (3.1/5 control vs. 4.0/5 intervention; p < 0.001, r = 0.69, absolute value). Gap analysis data showed a similar pattern, with gaps of −0.6 and −0.5 (values suggesting team self-over-appraisal of communication abilities) present in both control and intervention groups (p = 0.515) at the initial assessment.
This gap persisted in the control group at the time of the second assessment (−0.8), but was significantly reduced (0.04) in the intervention group (p = 0.013, r = 0.41, absolute value). Conclusion: These results suggest that an EMS-centric app containing guiding information regarding compassionate communication skills can be effectively used by EMS providers to self-debrief after difficult events in the absence of a live facilitator, significantly altering their near-term communication patterns. Gap analysis data further imply that engaging with the app in a group context positively impacts the accuracy of each team's self-perception.

Introduction

Each year in the United States, 16,000 children suffer cardiopulmonary arrest.Citation1 Pediatric out-of-hospital (OOH) deaths represent nearly one third of pediatric deaths in the United States, and in one urban study 2% of pediatric Emergency Medical Services (EMS) calls were attributed to pediatric OOH arrests, most of which have poor outcomes.Citation1–4 For the families of these children, much of the initial medical communication will be delivered by EMS personnel or other pre-hospital providers and will have a significant impact on their ability to cope with the loss. In addition, stressful events such as this can have lasting psychological effects on prehospital providers, potentially resulting in anxiety, depression, and post-traumatic stress disorder (PTSD).Citation5 Such reactions may contribute to the high rates of premature career abandonment and suicide observed among this population.Citation6–9

Previous literature has shown that simulation-based instructional methodologies can significantly enhance the communication skills needed to address cardiopulmonary arrest and difficult medical situations with families.Citation10–13 Perhaps the most critical component of these encounters is the post-session debriefing, during which the participants' frames of reference are explored and active self-reflection is promoted.Citation14–17 Unfortunately, the number of locations offering such programs is relatively small, and developing such programs can be resource intensive, requiring skilled facilitators, trained standardized patients (SPs) and family members, and simulated environments with sufficient fidelity to replicate the conversations. These factors effectively prevent a majority of prehospital providers from engaging in this valuable form of education. The advent of smartphone “app” technology, however, provides a possible alternative.

With this in mind, our team developed an app intended to address the communication needs of EMS providers. The initial phases of this project involved the qualitative analysis of eight focus groups containing 98 EMS providers and 3 interviews of parents of fatally injured or ill children cared for by EMS.Citation18 These data were then used to construct a smartphone app designed to assist EMS teams as they debrief following difficult communication situations in the field. The objective of this study was to pilot the app among a group of prehospital providers using simulated scenarios depicting a pediatric arrest to determine whether it could effectively improve provider communication skills. We hypothesized that team engagement with the app as an explicit debriefing tool would result in enhanced performance as measured by a validated communication skills assessment tool when compared to controls.

Methods

This prospective randomized block interventional educational trial was approved by the University of Louisville Institutional Review Board.

Focus Group Results and App Development

As previously reported, focus groups were conducted that included 98 EMS providers and 3 parents whose children had been fatally injured or critically ill.Citation18 These data were analyzed via a grounded-theory-based qualitative methodology and, after triangulation, resulted in an array of themes that address optimal prehospital communication skills and provider coping mechanisms. Themes addressing prehospital communication skills were then distilled via an iterative series of discussions into 8 “principles of communication” that were used to construct the app. These principles are explored in Table 1.

Table 1. Principles of communication incorporated into the app

As previously stated, it was felt that a smartphone app was a promising way to approximate some aspects of the simulated learning environment. Given the ubiquity of smartphone technology, such an app would be accessible to a majority of providers, and thus have a far greater availability than live simulation-based events. It was ultimately determined that a combination of didactic modules, video recordings of simulated difficult encounters in EMS followed by detailed scripted debriefing, and hyperlinks to local and national resources would best encapsulate the focus group results.

Script development resulted in 3 simulations that were included in the final app: Sudden Unexplained Infant Death (SUID), Motor Vehicle Accident with death in the field, and Suicide. All recorded simulations focused on at-the-scene management of the patient coupled with difficult family conversations. After filming, the videos were analyzed by the investigator team and broken up into segments based on the original principles of communication. Each segment was then given a voice-over in which the content was explained and debriefed. Finally, two didactic presentations, one discussing general strategies for navigating difficult communication situations, and one giving provider coping strategies, were created. These materials were then synthesized into an app by Advertek, Inc (now DOORN Software Architects).Citation19 A demonstration app was previewed at an EMS educators' leadership conference and a national EMS educational conference as well as a HRSA grant site visit. Critical comments were collected at these events and incorporated as the budget allowed. While information regarding coping mechanisms for prehospital providers was also included, further exploration of this aspect of the app is beyond the scope of the current study.

App Pilot Testing

The final app was pilot-tested during a scheduled EMS educational exercise at a local EMS station. Subjects were drawn from the same local EMS group. Subjects were first divided into teams of 2–4. Team members were deliberately chosen to represent a heterogeneous population with regard to EMS experience. These teams were then randomized to either control or intervention categories. Once randomization was complete, these groups then experienced a simulated case of an inconsolable infant who rapidly becomes unresponsive and experiences a bradycardic arrest. The case utilized SPs as the parents and a low-fidelity mannequin as the infant. Prerecorded cries were used to simulate an infant's cry. The case was highly scripted, allowing for standardization of SP responses and overall case progression. As with the simulations presented in the app, the evaluation simulation focused on the scene management only. Both the initial and second simulations used the same case to ensure that the same educational material was being assessed.

Subsequent to the initial simulation, intervention teams were given the app and provided a quiet environment for review and reflection. They were then instructed to review the material in the app, discussing its contents and using it to self-debrief the case. Control teams were given a period of free time and instructed to use whatever methods would be typical for them to debrief the event and/or to relieve stress (internet, peer-to-peer conversation, etc.). Following this, both groups re-experienced the same simulated case. Approximately 1–2 hours were provided between simulations.

Structure of the Simulated Case

Subject teams were introduced to the simulation outside of the room. Facilitators initiated the case by saying: “You have been called to the home of an infant by the mother, who has just returned from work and found the child to be fussy and inconsolable.” After this, subject teams would enter the simulated environment and interact with two SPs: one portraying the mother and one portraying the father. The recorded cry was played continuously during this period to indicate that the infant could not be consoled. Approximately 2–3 minutes after the simulation began, the audio recording was stopped and the SP portraying the mother was instructed to give the following scripted prompt: “Why is he shaking like this?” The facilitator was then instructed to say, “The child appears to be breathing but is shaking.” Approximately 30–45 seconds later, the facilitator was instructed to follow this statement with the following scripted phrase: “The child is no longer breathing, has a weak pulse, and a heart rate of 58.” This phrase was intended to signal the development of symptomatic bradycardia requiring CPR as recommended by the American Heart Association's Pediatric Advanced Life Support (PALS) curriculum.Citation20 If asked for further clinical symptoms, the facilitator was instructed to state that the child had a fixed, dilated right pupil. The case concluded when the team left to transport the child to definitive medical care.

Evaluative Methodology

Team performance during the simulated cases was evaluated using the Gap-Kalamazoo Communication Skills Assessment Form (GKCSAF).Citation21–23 This tool contains nine domains of communication (Builds a Relationship, Opens the Discussion, Gathers Information, Understands the Patient's and Family's Perspective, Shares Information, Reaches Agreement, Provides Closure, Demonstrates Empathy, Communicates Accurate Information), each rated on a 1 (poor) to 5 (excellent) Likert-type scale. In addition, the tool contains two forced choice questions: one asking the rater to indicate the three top domains of communication, and one asking the rater to indicate the three domains most in need of improvement. This form has been studied extensively and shown to be a valid means of assessing difficult interactions in the simulated pediatric environment, with high internal consistency (Cronbach's Alpha of 0.84) and faculty inter-rater reliability (Intra-Class Correlation [ICC] of 0.83).Citation22 While the validation studies of this tool did not incorporate EMS providers, no other validated instruments for the assessment of communication skills in this population could be located, and thus the GKCSAF seemed the best option. Additional measures of internal consistency and inter-rater reliability were calculated based on our assessments to address its validity in this context.
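Internal consistency statistics of the kind reported for the GKCSAF can be computed directly from domain-level ratings. The sketch below implements the standard Cronbach's alpha formula; the rating matrix is purely illustrative (rows are rated encounters, columns are the nine GKCSAF domains) and is not study data.

```python
# Minimal sketch of Cronbach's alpha, the internal-consistency statistic
# cited for the GKCSAF. Ratings are illustrative, not study data.
import statistics

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(rows[0])                     # number of items (nine GKCSAF domains)
    items = list(zip(*rows))             # column-wise item scores
    item_vars = [statistics.pvariance(col) for col in items]
    totals = [sum(row) for row in rows]  # total score per rated encounter
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Four hypothetical encounters rated across the nine domains on a 1-5 scale.
ratings = [
    [4, 4, 5, 4, 4, 5, 4, 4, 4],
    [3, 3, 3, 2, 3, 3, 3, 2, 3],
    [5, 5, 4, 5, 5, 4, 5, 5, 5],
    [2, 3, 2, 2, 2, 3, 2, 2, 2],
]
print(round(cronbach_alpha(ratings), 2))
```

Because the hypothetical domain ratings move together across encounters, the resulting alpha is high; domains that varied independently would pull the statistic down.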

An additional useful feature of the GKCSAF is its ability to calculate gap analyses. This figure, calculated by subtracting self-scores from the aggregate faculty scores, allows for the quantitative measurement of learner self-insight and self-appraisal.Citation12,24,25 Gaps of 0.5 to −0.5 indicate largely accurate self-appraisal. Gaps greater than 0.5 indicate self-under-appraisal and may correspond to areas of unrecognized strength, while gaps less than −0.5 indicate self-over-appraisal and may correspond to unrecognized weaknesses. Each simulation was assessed in real-time by two study staff with backgrounds in the behavioral and social sciences (e.g., psychology, social work, counseling, public health) as well as by the SPs. Teams were also asked to collaborate on a self-evaluation score. Raters were trained prior to the pilot via discussion of the tool's cognitive content and explanation of the rating scale as it pertained to the simulated case to be used. Training continued until all raters felt comfortable using the tool in the specific context of this study.
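The gap calculation and its cutoffs can be expressed compactly. The sketch below follows the definition above (aggregate faculty score minus team self-score, with ±0.5 as the appraisal thresholds); the function names and example scores are illustrative, not part of the GKCSAF itself.

```python
# Sketch of the GKCSAF gap-analysis calculation described above.
# Function names and example values are illustrative, not study data.

def gap_score(faculty_scores, self_score):
    """Gap = aggregate (mean) faculty score minus the team's self-score."""
    aggregate = sum(faculty_scores) / len(faculty_scores)
    return aggregate - self_score

def interpret_gap(gap, threshold=0.5):
    """Classify a gap using the +/-0.5 cutoffs cited in the literature."""
    if gap > threshold:
        return "self-under-appraisal (possible unrecognized strength)"
    if gap < -threshold:
        return "self-over-appraisal (possible unrecognized weakness)"
    return "accurate self-appraisal"

# A team rating itself 4.0 while two faculty raters average 3.2 has a gap
# of -0.8, i.e., self-over-appraisal:
g = gap_score([3.0, 3.4], 4.0)
print(round(g, 2), "->", interpret_gap(g))
```

A negative gap thus flags a team that rates its own communication more favorably than trained observers do, which is the pattern reported for both groups at baseline.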

Additional data were collected from both groups after each simulation. As a control check, control group participants were asked if they had sought out or been exposed to any information between the first and second simulations that could have helped them improve their performance. Intervention group participants were asked questions regarding the app's quality, relevance, importance, applicability, perceived effect on the second simulation, perceived future communication skills, and perceived effect on the family. Intervention group participants were also asked how long their respective teams had spent with the app as an experimental check to assure that they did indeed use the app before the second simulation. All participants were asked whether the cases were seen as realistic and gave participants adequate opportunity to demonstrate their family management skills, and whether participation had caused any discomfort or distress that required mitigation. All items were measured on a 1 (not at all) to 5 (very much) Likert-type scale. Counseling was available on-site should the latter be answered in the affirmative. Figure 1 depicts this progression.

Figure 1. Assessment structure of the study. This chart depicts the temporal relationship between the simulations, usage of the app, and learner assessments. Communication assessments were conducted by trained observers using the GKCSAF form. Intervention groups engaged with the app immediately after the initial simulation.

Data Analysis

Given the typical nature of EMS interactions, the team was chosen as the unit of analysis. Participant demographics and intervention group survey data regarding app usability were summarized descriptively. Psychometric characteristics of the GKCSAF (internal consistency and inter-rater reliability) were calculated using Cronbach's Alpha and Intra-Class Correlation Coefficients (ICC) (two-way random model, absolute agreement, average measures), respectively. Pre-scores were used exclusively to calculate psychometrics to avoid inadvertent inflation of the sample size. Statistical differences between intervention and control group communication assessments and gap analysis scores at the initial and second simulation were assessed using the Mann-Whitney U test, and changes in score within each group between the two simulations were assessed using the Wilcoxon Signed-Ranks Test. Gaps of 0.5 and −0.5, per the literature, were used as cutoffs for meaningful self-under- and self-over-appraisal.Citation24–26 Effect size was calculated using the r-value. An initial power analysis at a 0.8 power and a 0.05 level of alpha error indicated that a sample size of approximately 20 intervention and 20 control teams would be required to detect a 20% (i.e., 1 point on a 1–5 Likert scale) difference in communication scores.
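The team-level comparisons described above can be sketched in a few lines with SciPy. The score arrays below are made-up stand-ins for team-level GKCSAF means, not study data, and the effect-size r is derived from the common normal approximation of the U statistic (r = |z| / √N), which we assume matches the study's approach.

```python
# Illustrative sketch of the between- and within-group tests described above.
# Score arrays are hypothetical stand-ins for team-level GKCSAF means.
import math
from scipy.stats import mannwhitneyu, wilcoxon

control_post      = [3.0, 3.2, 2.9, 3.4, 3.1, 3.0, 3.3, 2.8]
intervention_post = [3.9, 4.1, 4.0, 3.8, 4.2, 4.0, 3.7, 4.3]

# Between-group comparison at the second simulation (Mann-Whitney U).
u_stat, p_between = mannwhitneyu(control_post, intervention_post,
                                 alternative="two-sided")

# Effect size r from the normal approximation of U: r = |z| / sqrt(N).
n1, n2 = len(control_post), len(intervention_post)
mu_u = n1 * n2 / 2
sigma_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
r_effect = abs((u_stat - mu_u) / sigma_u) / math.sqrt(n1 + n2)

# Within-group (paired) comparison: first vs. second simulation scores
# for the same teams (Wilcoxon signed-rank).
intervention_pre = [2.8, 3.0, 2.9, 2.7, 3.1, 2.9, 3.0, 2.8]
w_stat, p_within = wilcoxon(intervention_pre, intervention_post)

print(f"between-group p={p_between:.4g}, r={r_effect:.2f}, within-group p={p_within:.4g}")
```

Nonparametric tests are the natural choice here because Likert-derived team means are ordinal and the number of teams per arm is small.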

Results

Demographics

A total of 148 EMS providers, divided into 18 (68 subjects) intervention and 20 (80 subjects) control teams, participated in the study. Teams consisted of 2–4 participants. Demographic variables are summarized in Table 2. No significant differences were found between the intervention and control groups, and no crossover occurred between them during the course of the study. No pilot study subjects participated in the formational focus groups.

Table 2. Participant demographic characteristics

Participant Perceptions of the Simulated Environment

There were no significant differences between control and intervention groups regarding the first simulation in terms of realism, demonstration of skills or feelings of comfort or distress. The mean participant comfort score was 3.99/5 (st dev 1.01). Most saw the simulation as realistic (mean 3.99/5, st dev 1.15) and as a viable venue for demonstrating skills (mean 3.34/5, st dev 1.12). Only 13% of participants indicated that the simulation was initially distressing. Those who expressed extreme distress were debriefed. All left the study in a positive emotional state.

All control group participants indicated that they had not received external information in-between simulations that would have helped improve their performance. All intervention teams had at least one member that interacted fully with the app (rating a 4 or 5 on a 5-point scale). Sixteen percent of intervention teams indicated that app interaction had been delegated to one member, but in 84% of groups, all team members actively engaged with the app. Most teams spent between 20 and 40 minutes interfacing with the app, indicating adequate exposure for testing. While control (3.40/5, st dev 1.15) and intervention (3.28/5, st dev 1.10) groups did not significantly differ regarding the realism and fidelity of the simulated environment as a venue for demonstrating their ability to interact with families (p = 0.53) after the first simulation, a significant difference was noted after the second simulation (control group mean 3.38/5, st dev 1.20; intervention group mean 3.77/5, st dev 0.53; p < 0.04).

GKCSAF Validity in the EMS Context

Analysis of study personnel scores revealed an internal consistency between 0.88–0.89 (Cronbach's Alpha) and an overall inter-rater reliability of 0.75 (ICC). Domain-specific inter-rater reliability scores were as follows: 0.40 (Builds a Relationship), 0.49 (Opens the Discussion), 0.58 (Gathers Information), 0.50 (Understands the Patient's and Family's Perspective), 0.69 (Shares Information), 0.65 (Reaches Agreement), 0.56 (Provides Closure), 0.60 (Demonstrates Empathy), and 0.57 (Communicates Accurate Information).

Comparison of Communication Skills

The intervention and control groups had mean GKCSAF scores of 2.9/5 (st dev 0.58) and 3.0/5 (st dev 0.46) (p = 0.331), respectively, at the time of the first simulation. At the time of the second simulation, the intervention and control groups' scores had means of 4.0/5 (st dev 0.49) and 3.1/5 (st dev 0.50), respectively (p < 0.001). The absolute effect size of this difference was 0.69 by r-value, indicating a moderate to large effect in the intervention group. When changes within each group were assessed, the difference between the intervention first and second simulation scores was also measured at p < 0.001 (effect size r = 0.69). Figure 2 depicts these scores.

Figure 2. Comparison of communication scores at pre and post-test intervals. This figure depicts overall GCKSAF scores during the pre- and post-test intervals. Pre-test scores were not significantly different. Post-test scores showed a statistically significant difference (p < 0.001) by Mann-Whitney U. Effect size was moderate by r-value (0.69).

Gap analysis scores at the time of the first assessment were −0.65 (st dev 1.08) for the intervention group and −0.48 (st dev 0.99) for the control group, indicating self-over-appraisal of skill on the part of the intervention group, with no significant between-group difference (p = 0.515). At the time of the second assessment the intervention group's gap score had risen to 0.04 (st dev 0.72) while the control group's gap score dropped to −0.80 (st dev 0.85), a number indicating self-over-appraisal (p = 0.013). When changes in score were compared within groups, no significant change was noted in the control group (p = 0.14), while a statistically significant change was noted in the intervention group (p = 0.01). The absolute effect size for this change indicated a moderate to large effect (r = 0.41). Figure 3 depicts these scores.

Figure 3. Comparison of gap analysis scores at pre and post-test intervals. This figure depicts gap analysis scores from the pre- and post-test intervals. Pre-test scores were not significantly different. Post-test scores showed a statistically significant difference (p = 0.013) by Mann-Whitney U. Effect size was moderate by r-value (0.41).

App Usability and Structure

Intervention group participants thought the app was of high quality, relevant to their work with children dying in the field, important, and applicable (Table 3). Intervention group participants also indicated that they would use the information during future cases. Ninety percent said they would recommend the app to their colleagues and 67% wanted a copy of the app when it becomes available. Intervention group participants noted that the contents of the app were also relevant for those working with the families of adults dying in an OOH setting.

Table 3. Interventional group perceptions of the COPE app

Discussion

The differences noted in GKCSAF scores after the second simulation provide strong preliminary evidence that the app, when used as a tool to enhance intra-team debriefing following an EMS call in which a child is critically ill or dies, can positively impact the overall communication skills of the team. The simulation literature is replete with examples of how the active learning process is enhanced significantly by a skilled debriefing, and this finding applies equally well regardless of the subject matter being addressed.Citation14,16,17,27–30 This literature, however, primarily addresses debriefings conducted by live facilitators, a luxury not available to many pre-hospital providers. A tool that can even partially approximate this effect could thus benefit a large number of clinicians as well as the patients they serve.

Because of this, we tested our app in the context of team self-debriefing, instructing teams to use it as a tool by which to process and learn from a recent experience involving the delivery of difficult information (in this case our simulation). While the cognitive content of the app may be useful in isolation, our hope was to enhance the overall learning effect by taking advantage of both the community context of the team and the emotional engagement encouraged by the simulated event. The data gathered during this pilot support the use of the app in this way and suggest that cognitive aids such as this can, at least in part, take the place of traditional debriefing in the active learning process. If borne out by larger studies, this has important educational implications and could enhance the accessibility of effective communication skills education for prehospital providers.

The gap analysis measurements give an additional window into the effects of the app. Initial gap measurements suggest an overall pattern of self-over-appraisal among EMS providers, meaning that they were, to some extent, unaware of deficits in their own communication patterns as measured by faculty observers. While this pattern persisted in the control group, the gap narrowed significantly in the intervention group, suggesting that self-led debriefing using the app resulted in enhanced self-awareness. Accurate self-reflection is a crucial part of communication skills that can have a significant impact on both an educator's and a clinician's ability to effectively guide a difficult conversation. Encouraging individuals to reflect on their recent educational experience is an important role of the debriefing process, and the effects of the app suggest that at least some of this value can be conveyed without live facilitation.Citation28,29,31–33

Participants exposed to the app indicated that it was of high quality, relevant to their work with dying children and adults, applicable to the second simulation, and generalizable to future cases. An overwhelming majority also indicated that they would recommend the app to their colleagues. These data provide additional evidence of its potential value to practicing pre-hospital providers. A further observation was the significant enhancement in the intervention group's perception of the realism and fidelity of the simulated environment after app use. While the survey data is not specific enough to enable a clear interpretation of this finding, one possibility is that the intervention groups were better able to perceive the communication skills addressed by the app in real time. If so, this would suggest that the app is capable of encouraging ongoing reflection after use, another goal of the debriefing process.

This leads us to speculate on the optimal strategy for knowledge distribution using the app. We suggest, based on our findings, that the best way to make the greatest initial difference is simply to disseminate the tool along with basic instructions regarding its use as a team-based post-event self-debriefing tool to as many EMS providers as possible. A second phase might then involve encouraging local EMS educators to develop communication skills training programs based on the app's contents. By applying the app on a variety of scales and in a number of different environments it will have the greatest possibility of meaningful impact.

Key to any study such as this is the assessment methodology. While the GKCSAF has good validity data in the pediatric critical care context, there are no published instances of its use among prehospital providers, raising the question of the tool's validity in that context. When discussing validity it is helpful to follow an accepted model, a commonly accepted example of which was proposed by Messick.Citation34–36 In this framework validity is subdivided into 5 major categories: Content, Response Process, Internal Consistency, Relationship to Other Variables, and Consequence. The tool was chosen based on the specific content's applicability to our educational context, effectively addressing the Content category. Response Process was addressed via the rater training described above. While the internal consistency (Cronbach's Alpha of 0.88–0.89) and the overall inter-rater reliability (intra-class correlation of 0.75) were good, the domain-specific inter-rater reliability statistics unfortunately showed only moderate correlation (0.40–0.69) between raters. These data support the use of this tool to provide an aggregate communication score in the prehospital provider population but imply that further work may be needed to adapt the individual domains to this context. We could not address the other aspects of the validity argument. Given that our comparative statistical testing used only aggregate scores, the tool appears to have sufficient reliability in this population to support our use in this study.

Limitations

The chief limitation of this pilot study is our inability to fully blind our raters. Based on space and staffing considerations, we were forced to designate specific simulation rooms for intervention and control subjects to assure that these groups did not inadvertently cross. While raters were not informed which groups would be routed to which rooms and did not leave rooms between assessments, there was no way to fully prevent them from determining which group they had been assigned to. This concern is somewhat mitigated by the fact that none of the raters were study investigators and that their primary specialty is the accurate assessment of social interactions, but the possibility of bias cannot be excluded.

A second issue concerns the generally lower domain-specific reliability statistics for the assessment tool used. Here we would note both the lack of any other validated tool that could be used in its place, and the fact that our analysis depended solely on the aggregate score (for which adequate reliability data exist). Nevertheless, the tool would benefit from further context-specific refinement to improve its accuracy among prehospital providers if it is to be used further in this population.

A third issue concerns the intervention team sample size, which was just below the level determined by our power calculations. The statistically significant changes noted in the intervention group, however, suggest that the initial calculations may have overestimated the needed number of subject teams. While the main analysis was thus not affected by this issue, the lower study power did preclude any subgroup analyses, which would have been helpful in determining any potential effects attributable to gender, race, or family status. Given the overall homogeneity of our subjects and the potential importance of these variables, further study will be needed to assess their impact.

Fourth, we note that the current study did not include a group that received traditional post-simulation debriefing, nor do we have data regarding participants' past debriefing experience. Thus, we cannot meaningfully speculate on whether the results of the app are comparable to those of a typical facilitated communication skills program. Indeed, we strongly suspect that these live programs possess distinct advantages over the app-based technique. What we can say, however, is that the use of this tool, in the given context, appears significantly better than the typical actions most EMS providers take to relieve stress or discuss their experiences after a difficult case. We also did not gather data regarding the various debriefing and stress relief practices used by the control group, a potentially fruitful area of further study.

Finally, both the app and the assessment simulations focused primarily on family communications occurring at the scene and did not include debriefing regarding the stress of ongoing care. Here we would again note that the app does include substantial material addressing provider coping, but this component was not examined in our pilot study and hence is not explored here. We are presently planning for a wider dissemination of the product during which we intend to gather focused data on this aspect of the app.

Conclusion

This study provides strong preliminary evidence that a process of self-debriefing facilitated by a smartphone application can have a significant impact on the communication skills of prehospital providers. Given the logistical and feasibility issues surrounding the provision of traditional facilitated simulation-based instruction in communication skills, the app represents a viable alternative with the potential to improve communication skills as they pertain to the delivery of difficult news. The app was also viewed positively by front-line EMS providers. Based on these data, we intend to begin dissemination of the app to the national EMS community. Further study will be needed to assess the impact of this process on the communication families receive in the live setting and on prehospital providers' ability to cope with emotionally traumatic events.
