Web Papers

Simulation in healthcare: A taxonomy and a conceptual framework for instructional design and media selection

Pages e1380-e1395 | Published online: 02 Nov 2012

Abstract

Background: Simulation in healthcare lacks a dedicated framework and supporting taxonomy for instructional design (ID) to assist educators in creating appropriate simulation learning experiences.

Aims: This article aims to fill the identified gap. It provides a conceptual framework for ID of healthcare simulation.

Methods: The work is based on published literature and authors’ experience with simulation-based education.

Results: The framework for ID itself presents four progressive levels describing the educational intervention. Medium is the mode of delivery of instruction. Simulation modality is the broad description of the simulation experience and includes four modalities (computer-based simulation, simulated patient (SP), simulated clinical immersion, and procedural simulation) in addition to mixed, hybrid simulations. Instructional method describes the techniques used for learning. Presentation describes the detailed characteristics of the intervention.

The choice of simulation as a learning medium is guided by a matrix relating the acuity (severity) and opportunity (frequency) of events, with a corresponding zone of simulation. An accompanying chart assists in the selection of appropriate media and simulation modalities based on learning outcomes.

Conclusion: This framework should help educators incorporate simulation in their ID efforts. It also provides a taxonomy to streamline future research and ID efforts in simulation.

Background

Simulation is increasingly being used for teaching and training in healthcare, not only in technical skills and patient management but also in competencies related to patient safety and teamwork, a more novel yet increasingly important use (Raemer 2004).

Although the field is growing rapidly, there are still significant gaps in the empirical evidence related to a wide variety of questions surrounding the methodologies, the value, and the outcomes of simulation in healthcare (Cook et al. 2011). This evidence gap becomes a particular challenge when faculty, clinicians, and educators are asked to integrate simulation into existing curricula and training programs. Furthermore, in a context in which simulation is being considered for use in national licensure examinations, alignment between simulation centers and programs in curriculum design is desirable. Hence, an appropriate framework must be made available to purveyors of simulation-based training. The Canadian Network for Simulation in Healthcare (CNSH) believes that the design of such a framework must be based on a comprehensive instructional framework that supports, as a minimum requirement, an alignment of unidisciplinary and/or interprofessional educational methodologies, definitions, theories, and best practices that can, in turn, support a more universal direction for research and research outcomes.

With the field still in its infancy, there has been no formal theory of instructional design (ID) for simulation in healthcare. Moreover, while theoretical frameworks that relate educational methods to desired objectives do exist, none is specifically designed for simulation in healthcare. There are some frameworks tailored to specific types of simulations, such as computerized screen-based simulations (Williams 2003), that are outside the field of healthcare. Most of the existing and accepted ID theories either do not take the specifics of simulation into consideration (such is the case with Kern et al.'s (2009) six-step approach to curriculum development), or use simulation as one single, very broad category, for example, in Robert Gagné's principles of ID (Gagné et al. 2005). It is thus challenging to implement these theories for the design of simulation curricula in learning institutions. Such concerns have presumably hampered local efforts to build appropriate and effective curricula for simulation in healthcare.

A crucial issue when discussing ID is to clearly distinguish the tools used for learning from the actual educational modalities. In simulation, more than in any other field, this has been a persistent problem in the discussions and publications on simulation training and education. The actual tool that is the patient simulator is often considered an educational method, even though it might be used in widely different educational experiences, for example, to reproduce an encounter with a patient (Lee et al. 2003; Hassan & Sloan 2006), to simulate an adverse event in a dynamic environment (Kobayashi et al. 2006), or to facilitate training in a single technique (Monti et al. 1998).

In this article, we propose a conceptual framework for the ID of educational activities using simulation in healthcare. The framework is linked to learning outcomes and assists educators in selecting characteristics for the best design of simulation training interventions. It positions simulation within the breadth of potential uses. This article also provides a model for selecting appropriate simulation modalities, inspired by Robert Gagné's extensive work on instructional media selection (Reiser & Gagné 1983; Gagné & Medsker 1996).

A definition of healthcare simulation

Although simulation in healthcare is not a monolithic concept, the expression “healthcare simulation” seems appropriate to describe the wide range of simulation experiences. Healthcare simulation is an instructional medium used for education, assessment, and research, which includes several modalities that have in common the reproduction of certain characteristics of clinical reality. Simulation-based educational activities rely on experiential learning. As a fundamental requirement, they must allow participants to affect, to different degrees, the course of the educational experience through verbal or physical interaction with the simulated components or patients.

The scope of the framework presented in this article is deliberately restricted: it excludes simulation modalities that target healthcare systems rather than healthcare providers (e.g., computer simulations of emergency room patient flow for bed management purposes).

A framework for ID

The framework for ID in healthcare simulation is based in part on Cook's model for research on e-learning (Cook 2005). Its purpose is to provide a solid foundation on which to build the characteristics of simulation for a given educational intervention.

This framework describes an educational activity or intervention using four levels of ID (Figure 1). Each consecutive level encompasses a defined set of characteristics that correspond to one level of detail of the simulation activity. Stated otherwise, an educational activity consists of several characteristics grouped into distinct levels based on their specific impact on the overall quality and design of the intervention. The choices for any characteristic at a given detail level usually depend on the choices made at the previous levels. At each level, the choices made are dependent on the actual learning needs and goals of the activity.
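The nesting of the four levels lends itself to a minimal data-structure sketch. The following is illustrative only; the class and field names are ours, chosen for illustration, and are not part of the framework's terminology.

```python
from dataclasses import dataclass, field

@dataclass
class EducationalActivity:
    """One educational intervention, described by the four levels of ID."""
    medium: str                 # Level 1: mode of delivery, e.g. "simulation" or "lecture"
    modality: str = ""          # Level 2: broad simulation experience (when the medium is simulation)
    method: str = ""            # Level 3: "self-instruction" or "instructor-based learning"
    presentation: dict = field(default_factory=dict)  # Level 4: feedback, fidelity, simulator type, scenario, team

# A hypothetical crisis-management session expressed in these terms:
session = EducationalActivity(
    medium="simulation",
    modality="simulated clinical immersion",
    method="instructor-based learning",
    presentation={"feedback": "immediate expert debriefing", "simulator": "patient simulator"},
)
```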

Figure 1. The levels of ID for an educational experience using healthcare simulation. Each progressive level constitutes the building blocks for the level directly above it.

Level 1: Medium

The most encompassing level, medium constitutes the most basic choice in the ID process. It describes the principal mode of delivery of instruction (Cook 2005), which determines all the educational characteristics of the activity. Examples of media include textbook learning, lectures, and computer-based training. Within this framework, simulation constitutes one specific medium. Two of its core characteristics, the imitation of reality and its interactive nature, distinguish it from the other delivery media (McGuire 1999).

While simulation methodologies have the potential to support virtually any learning opportunity, simulation is an expensive and often scarce medium. Judiciously selecting simulation as an appropriate medium for a given educational activity entails an examination of specific characteristics of that activity, which can be defined in a “zone of simulation matrix.”

The Zone of Simulation Matrix

The decision to use simulation as an instructional medium should be based on the analysis of two characteristics of the specific events, series of events, or conditions that are the desired focus of training: acuity and opportunity. Acuity is defined as the potential severity of an event or a series of events and their subsequent impact on the patient. Opportunity is defined as the frequency with which a particular department or individual is actively involved in the management of the event. It also includes the likelihood of discovering a particular problem: a latent problem that is not easily discovered in the actual setting is akin to an event that rarely occurs.

These two characteristics define a matrix that can be divided into four quadrants: high-acuity low-opportunity (HALO), high-acuity high-opportunity (HAHO), low-acuity low-opportunity (LALO), and low-acuity high-opportunity (LAHO) (Figure 2). HALO encompasses clinical situations that have a high potential to severely impact the patient but are not common occurrences among the targeted group of learners. Examples of HALO would be the management of malignant hyperthermia in the operating room (OR), or a mass casualty incident in the emergency department. HAHO includes clinical situations that have a high potential to severely impact the patient but are common occurrences among the targeted group. Examples of HAHO include managing a postpartum hemorrhage in a postpartum unit, and initiating cardiopulmonary bypass in the OR. LALO includes clinical situations that have a lower potential to severely impact the patient but are not a common occurrence among the targeted group. An example of LALO is manual disimpaction of fecal mass. Finally, LAHO involves clinical situations that have a lower potential to severely impact the patient if not managed appropriately and are common occurrences among the targeted group. Examples of LAHO include many of the tasks related to routine patient care and uncomplicated induction of anesthesia in the OR.

The “zone of simulation” is that area in which simulation may be advantageous over other instructional media. Within this zone, simulation can serve as an acceptable substitute or complement to other, less expensive, media and methods. The zone of simulation encompasses all HALO situations, and, when feasible, most HAHO and LALO situations (Figure 2). While simulation could be used for acquiring introductory skills in LAHO situations, it is probably not the most efficient method in that context.
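As a rough illustration of how the matrix can be applied, the sketch below classifies an event by its two characteristics. The function names are ours, and the boolean zone test necessarily oversimplifies the "when feasible" judgment the text attaches to HAHO and LALO situations.

```python
def quadrant(acuity: str, opportunity: str) -> str:
    """Place an event in the acuity/opportunity matrix ("high" or "low" for each)."""
    labels = {("high", "low"): "HALO", ("high", "high"): "HAHO",
              ("low", "low"): "LALO", ("low", "high"): "LAHO"}
    return labels[(acuity.lower(), opportunity.lower())]

def in_zone_of_simulation(acuity: str, opportunity: str) -> bool:
    # The zone covers all HALO situations and, when feasible, most HAHO and
    # LALO situations; LAHO is usually better served by less expensive media.
    return quadrant(acuity, opportunity) != "LAHO"

print(quadrant("high", "low"))               # HALO (e.g., malignant hyperthermia in the OR)
print(in_zone_of_simulation("low", "high"))  # False (e.g., routine patient care)
```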

Figure 2. The zone of simulation matrix. The model is based on two characteristics of clinical situations, acuity (severity) and opportunity (frequency) that define four areas of varying dynamics. The zone of simulation identifies those situations where healthcare simulation may be advantageous over other instructional media.

Level 2: Simulation Modality

The second level (called “configuration” in Cook's original framework) describes the simulation modality used for teaching and learning. The choice of modality determines a broad set of characteristics that dramatically alter the learning experience. As such, simulation modalities represent the high-level description of the simulation activity. Their categorization provides clues to their adaptation for specific learning outcomes (see section ID in simulation and the outcomes of learning).

This description is based partly on the concept of the “immersive learning environment” that is often associated with simulation. This concept can range from a focused application such as the use of a virtual reality simulator, to a broad spectrum of dynamic environments such as the replication of an operating theatre or an emergency department. From an instructional framework perspective, we have defined immersive learning as any situation which is highly interactive and engages the learner in such a way that disbelief is suspended and the learner becomes an active participant in the experience.

The shortcoming of the above definition when applied to a simulation instructional framework is that it does not reflect that there is a broad spectrum of “degrees of immersion.” In simulation, we frequently apply an experiential mode to learning, as defined by Appelman (2004, p. 73): “Experiential Modes (EMs) are components of a learning environment that focus on the learner's perception while experiencing any experiential mode, and through a micro analysis bridges the gap between instructional development and learner cognition.” Within simulation, we can extend Appelman's definition of modes to identify the various modalities used to deliver simulation, and classify simulation modalities into four categories with an additional methodology of hybrid (or blended) simulation (see Figure 3 and Table 1).

Figure 3. The four simulation modalities, which are broad descriptions of the simulation experience. Modalities show areas of overlap that constitute hybrid simulations.

Table 1  Description of the simulation modalities that constitute level 2 of the ID framework

Computer-based simulation is a modality that allows the user to interact with the simulation experience through the screen-based interface of a computer. It can be used, for example, to simulate encounters with patients that are programmed to respond to user input. Simulated Patients (SPs) are also a simulation modality that can be used to replicate encounters with real patients (Bokken et al. 2008; Cleland et al. 2009). They have been used for more than three decades in medical education (Barrows 1993). With this modality, elements of history taking, physical examination, and clinical reasoning can be learned either using an actual patient or a surrogate playing the role of an actual patient. The encounter and behavior of the patient are often standardized (hence the frequent denomination “standardized patients”). However, contrary to simulated clinical immersion, the environment does not, to any large extent, affect the way the educational experience unfolds or the occurrence of specific events. In fact, SPs can be used “in classrooms and in many nonclinical areas” (Barrows 1993).

In simulated clinical immersion, the learners are exposed to particular patient problems. Its distinctive characteristic, however, is that the environment reproduces the actual clinical or work environment. Interaction with a patient usually takes place in a fashion similar to SPs, but the environment takes on an important role in achieving the learning outcomes, or directly affects the educational experience and the occurrence of events. For example, the dynamic nature of the environment and the high data flow of a simulated Emergency Department can be exploited to teach crisis management in complex clinical settings; alternatively, an equipment failure may be the trigger for a specific teaching case in a simulated operating theatre. In simulated clinical immersion, the environment can be real (using the actual clinical setting) or simulated; it can further be small in scale – reproducing, for example, a single operating room – or can have a very large scope – a battlefield, a building or even a city. Importantly, however, the concept of environment includes not only the physical setting but also the equipment, teammates, and other individuals involved in reproducing the desired situation. It also includes elements necessary to the way it is perceived and experienced (eventually even believed) by participants. Thus, simulated clinical immersion is clearly a social experience (Dieckmann et al. 2007) rather than the more individual experience that constitutes interaction with SPs.

Finally, procedural simulation focuses on acquiring and improving procedures and technical skills. Its main characteristic is that it allows the learner to replicate specific behaviors and movements inherent in the real-life counterpart. It also allows the learner to train in the specific sequence of actions – procedures – that are required to appropriately perform a specific technical skill.

The classification of simulation activities into these four modalities shows some overlap. However, their nature and the learning outcomes they each help achieve are different enough to warrant their consideration as separate modalities. Nonetheless, in some situations, multiple outcomes can be achieved by using different simulation modalities at the same time, which can be described as hybrid (or blended) simulation. Such hybrid simulations allow, for example, training in technical skills combined with communication proficiency (Kneebone et al. 2002).

It is important to underline the fact that simulation modalities and simulator types (see level 4) are distinct concepts that are unfortunately all too often conflated (Schiavenato 2009). The same simulator can serve very distinct purposes. For example, a patient simulator can be used to train specific technical skills or be used as a patient surrogate in a scenario of medication error. These are two very different educational experiences, and the objectives they tackle are inherently different. Level 2: Simulation Modality aims to answer the paramount question “how is simulation being used?” rather than “what simulator is being used?” (a question addressed by Level 4: Presentation of the current framework).

Level 3: Instructional method

The instructional method (or “mode”) represents the specific techniques used for learning (Gagné & Medsker 1996; Cook 2005). Most methods can be used no matter which simulation modality is chosen. However, they have an important impact on the educational experience, its perception and its effectiveness. Two instructional methods can be used with simulation: self-instruction and instructor-based learning.

Self-instruction (or self-directed learning) allows learners to determine the timing and rhythm of learning. Learners can also choose their desired learning objectives (Gagné & Medsker 1996). This method is well adapted to procedural and computer-based simulations.

Instructor-based learning is the usual method for learning with healthcare simulation. It requires instructor supervision and includes varying degrees of instructor involvement, from debriefing sessions to direct participation by the instructor in the training session.

LeFlore and Anderson (2009) have recently studied the effectiveness of teaching sessions in which instructors modeled appropriate behaviors for the learners. They concluded that instructor-modeled learning and instructor-based learning with debriefing were equally effective. While this would suggest that there is a third type of instructional method, consisting of observation, this method is irrelevant to simulation, since trainees have no hands-on experience and do not interact with the situation. What is more, observation relies on different learning mechanisms than simulation. Learning theories that are appropriate for simulation but have no bearing on learning through observation include Kolb's experiential learning (Kolb 1984) and deliberate practice (Ericsson 2004). In fact, from an educational standpoint, observation constitutes a medium (level 1 in this framework) distinct from simulation (Gagné & Medsker 1996).

Level 4: Presentation

Presentation includes characteristics that define exactly how the simulation activity is shaped and designed, but that do not constitute instructional methods per se. Although they usually involve small differences within each instructional method, they can have a tremendous impact on actual learning effectiveness. Of course, the level of detail that defines the presentation of a given simulation activity can be nearly infinite. Yet, several characteristics are integral to the simulation experience, among which are the nature and quality of feedback, simulation fidelity, simulator type, scenario, and team composition (Table 2). In addition, the duration of individual training sessions and the interval between them certainly constitute important aspects of presentation, although they are not unique to simulation. Finally, other factors such as the location of the educational intervention could be important, but further research is still required to assess their exact role.

Table 2  Important elements of presentation for simulation experiences, along with their major characteristics and descriptors

Table 3  Descriptions and examples of available simulator types

Feedback

Feedback is an essential characteristic of simulation training and has been identified as the single most important presentation element for simulation, with a direct impact on learning (Issenberg et al. 2005). It is defined as a particular type of communication in which a sender (the source) conveys a message to a recipient that includes information about the recipient's behavior (Ilgen et al. 1979). Feedback provides the ability to reflect on the educational experience to enhance learning (Salas et al. 2005; Fanning & Gaba 2007). Studies of feedback in medicine, as well as in fields such as psychology and management, have shown that specific feedback has a better potential of improving performance than vague or no feedback, because it increases knowledge about performance (Johnson et al. 1993; Hewson & Little 1998) and, when no goals exist, allows the learner to set specific goals for learning (Ilgen et al. 1979; Early et al. 1990). These goals in turn enhance performance by increasing motivation, effort, and persistence, and by improving the way strategies for success are devised (Locke et al. 1981; Early et al. 1990).

Feedback can take several forms, but its essential attributes can be grouped into type, source, timing, the level to which it is aimed (the individual or the group), and other attributes (including the medium through which feedback is delivered) (Chase & Houmanfar 2009). The main characteristics will be addressed here.

The psychology and management literature usually recognizes two types of feedback: outcome feedback and process feedback (Early et al. 1990; Johnson et al. 1993). Outcome feedback – also called performance-oriented feedback – provides participants with the knowledge of their results. While it is posited that such a form of feedback would allow the individual to improve his or her performance by altering the strategies used to implement the task, some studies suggest that outcome feedback is ineffective for complex, uncertain tasks (Jacoby et al. 1984; Johnson et al. 1993).

Process feedback – also called learning-oriented, descriptive or cognitive feedback – aims at facilitating learning and has an explanatory value (Johnson et al. 1993). It provides descriptive information on how to perform a specific task or on how to improve performance. Contrary to outcome feedback, evidence suggests that process feedback improves the strategies used to achieve an outcome, and enhances performance, especially on complex tasks (Johnson et al. 1993). It is also likely that the effects of both types of feedback are additive (Early et al. 1990).

In addition to its type, feedback can be classified based on its source. There are three potential sources of feedback: the task environment itself, the individual performing the task, and other individuals observing the performance. The source has an effect on the response to feedback, through the source's credibility and its power over sanctions and rewards imparted to participants (Ilgen et al. 1979).

The first source of feedback is the task and its environment. By its very nature, feedback is an inherent characteristic of simulation, even when no external feedback is provided. The appropriate replication of physical characteristics provides guidance to the user, called task-inherent feedback or natural feedback (Friedman 1995). This occurs, for example, when an appropriate technique leads to success upon inserting a thoracic drain, or when inappropriate management decisions lead to worsening of the patient's condition. Such feedback emanating from the task environment can be enhanced by additional mechanisms that supplement the information provided through natural feedback. This type of feedback, called augmented feedback (Ilgen et al. 1979), is often embedded into virtual reality simulators.

Another source of feedback is the individual performing the task, who can provide his or her own feedback on the processes involved in the performance or its outcomes. However, studies do not generally support the ability of individuals to self-assess (Davis et al. 2006), although self-feedback could have value as a motivational or development tool (Campbell & Lee 1988).

External subject-matter expert observers can provide feedback called expert-directed feedback. In healthcare simulation, this often takes the form of after-the-fact debriefing sessions (Dreifuerst 2009). Such sessions can be conducted using several styles or models of feedback, such as non-judgmental debriefing (Rudolph et al. 2006), plus-delta (Fanning & Gaba 2007) or target-focused feedback (Wallin et al. 2007). Unfortunately, few published studies have described the effectiveness of various styles of debriefing (Fanning & Gaba 2007). It is beyond the scope of this article to describe those styles, and readers are referred to the appropriate references.

Finally, feedback can be provided by the other learners involved in the training experience. Such peer feedback may be integrated in the debriefing session. When used as a learning method (rather than an assessment), it can be particularly useful for providing insight into one's performance (Falchikov 1995), and has been successfully used in areas of healthcare such as nursing, especially in pregraduate training (Morris 2001).

Timing is also an important element of feedback. When feedback is provided during simulation, it is described as synchronous. Feedback can also be immediate, when it is given after each simulation session (e.g., debriefing after each scenario during simulated clinical immersion), or delayed, when there is a time gap between the end of the simulation session and feedback. There is some evidence that, while synchronous feedback is appropriate for procedural simulation, immediate feedback is more appropriate when the aim of simulation is to integrate a particular concept or model (Schär et al. 2000; Williams 2003). Furthermore, synchronous feedback, when it is provided by an external source, could intrude on the learning experience (Williams 2003). Finally, delayed feedback, although often more feasible, may not be as effective as synchronous or immediate feedback (Salas et al. 2008).

Other elements that might determine the effectiveness of feedback include duration of the feedback session relative to the simulation session, and the provision of individual and team-oriented feedback when appropriate (Salas et al. 2008).
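The feedback attributes discussed above (type, source, timing, and level) lend themselves to a small structured model. The sketch below is ours, intended only to show how a course designer might record feedback decisions at the presentation level; the enum members paraphrase the categories in the text.

```python
from dataclasses import dataclass
from enum import Enum

class FeedbackType(Enum):
    OUTCOME = "performance-oriented: knowledge of results"
    PROCESS = "learning-oriented: how to perform or improve"

class FeedbackSource(Enum):
    TASK_ENVIRONMENT = "natural or augmented feedback from the task"
    SELF = "the individual performing the task"
    EXPERT = "expert observer (e.g., a debriefing session)"
    PEER = "other learners in the session"

class FeedbackTiming(Enum):
    SYNCHRONOUS = "during the simulation"
    IMMEDIATE = "after each simulation session"
    DELAYED = "after a time gap"

@dataclass
class FeedbackPlan:
    kind: FeedbackType      # outcome vs. process feedback
    source: FeedbackSource
    timing: FeedbackTiming
    level: str              # "individual" or "team"

# Immediate, process-oriented expert debriefing of a whole team:
plan = FeedbackPlan(FeedbackType.PROCESS, FeedbackSource.EXPERT,
                    FeedbackTiming.IMMEDIATE, level="team")
```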

Fidelity

Fidelity, defined as the realism of the experience, is an intrinsic characteristic of simulation and an element of presentation that can affect learning (Issenberg et al. 2005; Gaba 2007). As such, it is important to be able to define and measure it. In 1999, the Fidelity Implementation Study Group formed by the Simulation Interoperability Standards Organization (SISO) presented a first report describing some of the major conceptual frameworks for fidelity and sketching a fidelity taxonomy (Gross 1999). The report highlighted the difficulties in establishing standards for fidelity in simulation. Still, it argued, “if we are to make fidelity a useful concept, then we must make it measurable.” Simply using broad descriptors such as “high fidelity” or “low fidelity” is not enough and is misleading since fidelity is a multidimensional construct (Rehmann et al. 1995; Maran & Glavin 2003; Beaubien & Baker 2004). In order to make fidelity measurable, four steps are required: (1) articulation of the standard, (2) identification of the fidelity taxonomy, (3) measurement of fidelity features, and (4) determination of the required level of fidelity.

The standard against which fidelity should be measured (the “fidelity referent”) is not the real world. The latter is too large and impossible to adequately describe. More importantly, most of the real world would be irrelevant to a particular learning outcome. Instead, the fidelity referent should include the minimal characteristics of real world features that are needed for a given educational experience (Gross 1999). For example, an arm used for the insertion of a peripheral venous catheter might have adequate fidelity if the position and aspects of the veins as well as the feel of the techniques performed are realistic, independently of other features such as color of the skin or the ability of the simulator to react to pain. Of course, if the objectives of a training session also include communication skills, then some aspects of fidelity would be lacking. Hence, the fidelity referent should be established by closely analyzing the desired learning outcomes and matching them to the real world features. This would usually entail a thorough (and, ideally, standardized) description of the target domain (Gross 1999).

The second step involves determining the appropriate dimensions and features of fidelity to analyze. In the field of aviation, important dimensions include physical fidelity, visual fidelity, audio, motion, environment, temporal fidelity, behavior and aggregation (Gross 1999). Two of these, physical and environment fidelity, have been used in healthcare simulation (Maran & Glavin 2003; Beaubien & Baker 2004). Behavior fidelity (the way features of the simulation react), while important, can be subsumed under these two domains. Physical fidelity in healthcare simulation refers to the realism of the patient or of the component that is simulated. As such, it could also be called patient fidelity. Environment fidelity refers to the realism of all elements not directly connected to the patient, including the setting and the personnel. To these two domains, we should add temporal fidelity, which refers to the way time flows during the simulation session. At the high end, time flows unimpeded. At the low end, temporal contractions or pauses take place.

The third step is to measure the agreement between features of the simulation and the fidelity referent. One fidelity framework describes two metrics: resolution and accuracy (Gross 1999). Resolution, a dichotomous value, refers to whether the feature of the referent is reproduced in the simulation. Accuracy refers to the degree to which the feature of the referent is closely reproduced in the simulation. It would be expressed through a numeric index, but the actual measures used to generate it vary. Several techniques should be used depending on the specific feature, including error estimation techniques. Of course, the complexity increases tremendously when a human element is added to the simulation, which may require resorting to an analysis by subject matter experts (Gross 1999).
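To make the two metrics concrete, here is a toy sketch (ours, not drawn from the SISO report): resolution is scored as presence or absence of each referent feature, while the accuracy index shown is a naive normalized error for numeric features, standing in for the feature-specific measures and expert judgment the text describes.

```python
def fidelity_metrics(referent: dict, simulation: dict) -> dict:
    """Score simulation features against a fidelity referent.

    referent: feature -> value required for the learning outcome
    simulation: feature -> value the simulation reproduces (absent if not reproduced)
    """
    report = {}
    for feature, required in referent.items():
        present = feature in simulation        # resolution: is the feature reproduced at all?
        if not present:
            accuracy = 0.0                     # resolution fails, so accuracy is moot
        elif isinstance(required, (int, float)) and required != 0:
            # Naive stand-in for a feature-specific accuracy measure.
            accuracy = max(0.0, 1.0 - abs(simulation[feature] - required) / abs(required))
        else:
            accuracy = 1.0 if simulation[feature] == required else 0.0
        report[feature] = {"resolution": present, "accuracy": round(accuracy, 2)}
    return report

# The IV-arm example from the text: vein placement and feel are in the referent;
# skin colour is not, so it is simply never scored.
referent = {"vein_depth_mm": 4.0, "vein_palpable": True}
print(fidelity_metrics(referent, {"vein_depth_mm": 5.0, "vein_palpable": True}))
# {'vein_depth_mm': {'resolution': True, 'accuracy': 0.75},
#  'vein_palpable': {'resolution': True, 'accuracy': 1.0}}
```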

The final step for analyzing fidelity is to determine the appropriate and required level of both fidelity metrics. This determination will depend on the targeted learning outcomes (Salas et al. 1998) and on the participants' level of expertise (Maran & Glavin 2003). However, studies using a strict definition of fidelity are lacking (Cook et al. 2011), and the issue of the extent of fidelity required for a given educational experience is still unresolved.

Fidelity is not inherent to the specific simulation setting used, but varies from one “case” to another. Moreover, fidelity metrics do not apply broadly to a fidelity category or domain (such as the environment), but to individual elements from each domain that are essential to the specific educational experience. Although the process of determining fidelity is expensive and time-consuming, it is necessary both to ascertain the quality of the simulation activity and to standardize research. Terms such as “high fidelity mannequin” or “high fidelity simulation” are overreaching and misleading. Unfortunately, even in a field with extensive experience in simulation, such as aviation, fidelity is still lacking precise metrics (Rehmann et al. 1995).

Simulator type

The type of simulator can alter the effectiveness of the learning experience, partly through its effects on the available instructional methods and on other aspects of presentation such as fidelity and feedback. The available simulators depend mostly on the chosen simulation modality, and several simulator types can be used with more than one modality. Table 3 provides a definition for each simulator type.

Table 4  Adaptation of simulation modalities to specific learning outcomes domains

Organic simulators (animals, tissues or cadavers) and synthetic simulators (including the so-called “part-task trainers” as well as patient simulators when used for this purpose) are mostly used for procedural simulation. Virtual reality simulators are types of synthetic simulators where interfacing with the simulator is done through highly realistic (natural) means, and a computer controls nearly all the outputs of simulation events (Gorman et al. 1999; Kaufmann & Liu 2001).

Computer-based simulation, as a modality, can make use of a more limited set of simulator types. These include virtual patients, virtual worlds, and computer (or web) applications. Virtual patients are becoming increasingly important in healthcare simulation (Cook & Triola 2009; Huwendiek et al. 2009). They enable health learners to practice a large number of medical skills, including communication and history taking (Bernard et al. 2006). Virtual worlds that replicate complete environments and diverse events on a computer screen have also seen a surge in popularity. They use platforms dedicated to education in healthcare such as Virtual ED, a virtual emergency department (Youngblood et al. 2008), gaming platforms including environments dedicated to “massively multiplayer online games” or MMOG (Youngblood et al. 2007), or virtual communities such as Second Life® (Linden Lab, San Francisco) (Phillips & Berge 2009). Computer-based simulations can also involve computer applications that mimic the functions of real-world systems, such as web-based anesthesia machine simulators (Lampotang 2003).

Simulated clinical immersion and SPs can be designed using multiple simulator types, including actors, patients, patient simulators, and any of the simulators usually used for technical skills training. This is an important point when discussing simulator types. Most of the terminologies used today in the literature, such as “mannequin-based simulation” (Tsai et al. 2006) or “high-technology patient simulation” (Lareau et al. 2010), emphasize the patient simulator (the “mannequin”) to the detriment of other aspects of the experience. Not only does this put the emphasis on the technology rather than the educational experience, it also suggests a specific role for the patient simulator in learning. Yet, the learner's actions that are part of the educational experience are similar, whether the patient is reproduced by an actor, an actual patient, another learner, or a patient simulator (Collins & Harden 1998).

Scenario

The scenario used for simulation, which describes the patient case, is an important aspect for enhancing learning. Salas and collaborators have suggested that the “scenario is the curriculum” (Salas et al. 2005, p. 364). It is a crucial presentation element since it serves as the foundation upon which learning will take place (Salas et al. 1998; Salas & Burke 2002). It must be based on the intended learning outcomes. It must have different levels of difficulty and allow the learners a certain degree of control over the events.

Team composition

The last presentation element of interest is team composition. It has a direct impact on learning since simulation is often a shared, social experience, contrary to other instructional media (Dieckmann et al. 2007). Aside from actors and confederates, teams can be composed of members of a single discipline. They can alternatively involve interdisciplinary (or interprofessional) units, and work units. In single discipline teams, each team member plays a role required by the actual scenario, whether the role is within or outside his or her professional domain. In interdisciplinary units, each team member plays a role consistent with his or her professional domain. As for work units, they consist of actual clinical teams. Interdisciplinary teams are often favored for learning attitudes and beliefs related to patient safety and teamwork (Buljac-Samardzic et al. 2010). Yet single discipline teams could provide their members with unique insights into other professionals' roles, and be a major impetus for changes in culture and behavior within multidisciplinary clinical teams. Work units may increase transfer of knowledge to the actual clinical setting, although this assertion needs to be supported by future research.

ID in simulation and the outcomes of learning

Instructional media and simulation modalities are each best adapted for specific uses. Indeed, as Gagné and his collaborators asserted: “Technology is not an end in itself; any successful use of training technology must begin with clearly defined educational objectives” (Gagné et al. 2005, p. 231). The intended knowledge, skills, and attitudes (and often the learners' expertise level) dictate the modality best adapted for each learning curriculum and training session (Table 4).

In recent years, healthcare education has moved away from objective-centered approaches toward more outcome-based models for education (Harden 2007). Outcome-based education (OBE) highlights the importance of learning outcomes and competencies in designing curricula, particularly for choosing appropriate instructional methods (Harden 2007). While learning outcomes and competencies can be used interchangeably, the latter typically refers to the end-points of learning expected from the learners, whereas the former also relates to programs (Ellaway et al. 2007).

Several learning outcomes frameworks have been developed for local, national, or international use. Some of the best known systems in medicine are the CanMEDS competency framework developed by the Royal College of Physicians and Surgeons of Canada (Frank 2005), the ACGME outcome project developed by the Accreditation Council for Graduate Medical Education in the US (Swing 2007), and Tomorrow's Doctors developed by the General Medical Council in the UK (General Medical Council 2009). Other healthcare professions have also developed similar systems, and interprofessional competency frameworks have been published (CIHC Competencies Working Group 2010). Finally, some frameworks have been developed to highlight the importance of training in a subset of competencies that are often neglected or ignored. Thus, in Canada, a framework complementary to the CanMEDS model has been published by a licensing body and by a patient-advocate organization, which specifically targets learning outcomes aimed at improving patient safety (Frank & Brien 2008).

In discussing learning outcomes for simulation-oriented training, these published frameworks are too broad (or, in some cases, too specific) to be of practical use. Moreover, most of them are aimed at specific healthcare professions, such as medicine, while the intent of the model described here is interprofessional. Hence, we have opted to discuss specifically the following interprofessional competency domains: (1) rote knowledge; (2) techniques and procedures; (3) history, physical exam and patient counseling; (4) clinical reasoning and patient management; (5) teamwork and crisis management; (6) ethics and beliefs. These domains by no means cover the full scope of the healthcare professions, but highlight some of the most salient knowledge, skills, and attitudes that constitute the health learning continuum, are useful for ID, and are consistent with the way others have approached simulation-based learning (Cook & Triola 2009). It is of course possible to map those domains to existing frameworks, and local educators and program developers who wish to implement simulation-based training should cross-reference those domains to the learning outcomes frameworks that are appropriate to their programs. Examples and discussions of methodologies for cross-referencing different learning outcomes frameworks can be found in the appropriate literature (Ellaway et al. 2007).

Although rote knowledge is best achieved through other media, SPs could be used to that end, especially where clinical knowledge is concerned. SPs, however, are particularly adapted for a wide array of competencies that require direct contact with a patient, including history taking, physical examination, and patient counseling (Cleland et al. 2009).

Along with clinical reasoning, situation awareness – the ability to “read” the ongoing situation and predict its evolution (Endsley 1995) – is an important aspect of clinical management. While controversy exists regarding their nature as actual skills (Patrick & James 2004; Eva 2005; Norman 2005), there is still widespread agreement that they must be developed by learners of health sciences (Schuwirth 2002; Patrick & James 2004). SPs and simulated clinical immersion are both appropriate modalities for developing competencies such as clinical reasoning and patient management. However, the high costs associated with simulated clinical immersion and the increased cognitive burden that interacting with the environment places on the learner suggest that it should probably be reserved for situations where the environment is an important contributing factor in training the intended competencies, such as training in situation awareness, training in patient management in a highly dynamic situation that imposes time constraints and increases the stakes of both diagnosis and treatment, or training in error management (Gaba et al. 2001; Hassan & Sloan 2006). Concurrently, simulated clinical immersion has been shown to be effective for training in safety competencies, especially in dynamic or crisis situations (Wallin et al. 2007; Buljac-Samardzic et al. 2010), and is often used for that purpose (Raemer 2004; Wayne et al. 2008; Smith & Cole 2009). This domain includes several competencies related to ‘crisis management’ or ‘non-technical skills’, such as team management skills (Fletcher et al. 2004; Reader et al. 2006), which are seldom described in the medical education literature. These competencies can be subsumed under the heading patient safety competencies (Frank & Brien 2008). They include skills such as role clarity, communication and resource utilization, which are usually mobilized within entire teams during crisis situations (Raemer 2004).

The acquisition and development of specific perceptual and motor skills, the proper performance of techniques, and the application of their underlying procedures are important skills in healthcare, and are usually targeted extensively through procedural simulation. Self-instruction is possible, especially through the use of virtual reality simulators (Moorthy et al. 2003) or, when motor practice is not necessary, through computer-based simulation. It is necessary, however, to realize that the beliefs and attitudes underlying specific procedures are paramount. For example, dedication to following proper protocol, especially in the face of the time constraints and pressures that are often an integral part of the health professions, is often as important as knowing the appropriate steps in a procedure. As such, simulated clinical immersion and SPs can be a useful adjunct to procedural simulation, since they allow the replication of the actual clinical conditions that prevail in the learners' workplace.

Appropriate beliefs and attitudes, including ethical conduct, are not unique to technical procedures but permeate the continuum of healthcare and are recognized by all major learning outcomes frameworks. Given their close resemblance to actual patient care, SPs and, especially, simulated clinical immersion are valuable learning modalities for developing and maintaining these attitudes and beliefs. SPs have indeed been used for teaching clinical ethics (Edinger et al. 1999), but there still is a paucity of data, especially related to the use of simulated clinical immersion in affecting beliefs and attitudes. One study has shown clinical immersion to be ineffective in changing beliefs toward patient safety (Wallin et al. 2007). This, however, could reflect the inherent difficulty in changing beliefs and attitudes, or it could be an artifact of the assessment method used.

Depending on the type of simulator used, computer-based simulation can be adapted to a wide set of learning outcomes. Computer or web applications are a great asset for learning basic knowledge (Chumley-Jones et al. 2002). Virtual patients are effective for teaching clinical reasoning and patient management in non-dynamic situations (Triola et al. 2006), and may be the modality of choice for these domains (Cook & Triola 2009), although a recent meta-analysis showed no difference with other, non-computer instruction (Cook et al. 2010). Virtual worlds have shown promise in developing teamwork and crisis management skills, and have proven, in one pilot study, as effective as simulated clinical immersion (Youngblood et al. 2008). Computer-based simulators are also effective for learning or developing skills and procedures, especially in novice learners, before hands-on experience is introduced (Vukanovic-Criley et al. 2008).
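The modality recommendations of this section can be condensed into a simple lookup, sketched below. This table is our paraphrase of the preceding discussion, not a rendering of the selection charts in Figures 4 and 5, and real curriculum decisions would also weigh cost, feasibility, and learner expertise.

```python
# Competency domain -> simulation modalities (and simulator types) suggested above.
MODALITIES_BY_DOMAIN = {
    "rote knowledge": ["other media", "computer/web applications", "SP (clinical knowledge)"],
    "techniques and procedures": ["procedural simulation", "virtual reality simulators"],
    "history, physical exam and patient counseling": ["SP", "virtual patients"],
    "clinical reasoning and patient management": ["SP", "virtual patients",
                                                  "simulated clinical immersion (dynamic cases)"],
    "teamwork and crisis management": ["simulated clinical immersion", "virtual worlds"],
    "ethics and beliefs": ["simulated clinical immersion", "SP"],
}

def suggest_modalities(domain: str) -> list:
    """Return candidate modalities for a competency domain, if the domain is known."""
    return MODALITIES_BY_DOMAIN.get(domain, ["consult the selection charts (Figures 4 and 5)"])

print(suggest_modalities("teamwork and crisis management"))
```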

Choosing the appropriate media and simulation modalities

Media selection charts are useful adjuncts to learning theories for ID (Reiser & Gagné 1983; Gagné & Medsker 1996). Previously published charts were dependent on learner characteristics, objectives, and prior decisions made about the instructional methods. We present here a chart inspired by Robert Gagné's seminal work (Reiser & Gagné 1983), and adapted to simulation (Figures 4 and 5). It is based on learning outcomes as a first level, rather than a priori decisions about the instructional mode, as in Gagné's chart.

Figure 4. Media and simulation modalities selection chart A (continued in Figure 5). Diamond shapes correspond to key decision points (questions). Rectangles with dark borders correspond to appropriate media and simulation modalities. Text in red and bold corresponds to preferred modalities. AND/OR decision points suggest that several competency domains can be considered at the same time (potentially leading to the use of hybrid modalities). CBS: Computer-Based Simulation; pt: Patient; SCI: Simulated Clinical Immersion; SP: Simulated Patient; VR: Virtual Reality.

Figure 5. Media and simulation modalities selection chart B (continued from chart A in Figure 4). Diamond shapes correspond to key decision points (questions). Rectangles with dark borders correspond to appropriate media and simulation modalities. Text in red and bold corresponds to preferred modalities. AND/OR decision points suggest that several competency domains can be considered at the same time (potentially leading to the use of hybrid modalities). CBS: Computer-Based Simulation; PE: Physical exam; Hx: History; pt: Patient; SCI: Simulated Clinical Immersion; SP: Simulated Patient.

Determining the desired learning outcomes for a given learning activity, based on the learners' profession, on local practices, and on published competency frameworks, is a fundamental step in ID. Then, through a series of questions provided in the charts (and by referencing Table 4 if needed), the educator can choose the suitable medium and simulation modalities (when appropriate) in order to foster better learning and to achieve the outcomes set forth in the curriculum.

We acknowledge that this chart is a work in progress and will need to be expanded by future research and publications. This could include refining the level of detail by adding aspects of presentation, informed by the published research.

Discussion

There is a great need, in the nascent field of healthcare simulation, for an ID framework. A framework is necessary to structure future research, and to provide direction to educators and simulation proponents in the design and creation of effective and innovative curricula that optimize what simulation has to offer. Such an instructional framework has to be flexible enough to ensure it can meet the wide variety of training demands it must support, which include technical skills development (airway management, lumbar puncture, etc.), non-technical skills (teamwork, communications, leadership), train-the-trainer courses (debriefing, creation of learning environments), and specialized skills (hospital evacuation, mass casualty management). We believe the framework presented here achieves these goals and, further, that it can be easily implemented in both ID and research.

Simulation, by itself, is not a guarantee that adequate learning will occur. Educational sessions using simulation must rely on appropriate ID based on learning theories (Salas et al. 2005). The framework presented here can help in designing such effective simulations.

The framework classifies characteristics of the educational experience into four levels: medium, simulation modality, instructional method, and presentation. It offers guidance to adapt each level to the appropriate educational goals and objectives, and provides grounds for evidence-based practices in ID. The designer of a training course can determine whether simulation constitutes an appropriate medium for delivery of instruction by mapping the severity and likelihood of exposure of the targeted events to the zone of simulation matrix. The exact simulation modality can in turn be determined by the desired learning outcomes and through the use of the media and modality selection charts. After determining the required instructional methods, the program designer would then give specific attention to the items of presentation, which include type of simulator, fidelity, feedback, scenario, and so on. This tool is thus useful for the simulation educator who aims to develop effective training sessions.

The framework can further be used to direct research design. In his paper describing the framework for e-learning, David Cook makes a persuasive argument that media-comparative research – that is, research comparing the learning effectiveness of two different media – is logically impossible given that there are no valid comparison groups (Cook 2005). Other authors had previously made such claims (Keane et al. 1991; Clark 1992; Friedman 1994). These authors argue that the diverse elements involved in each instructional medium introduce confounding variables when comparing two groups in which different media are used. Instead, they argue that it is best to compare, within a single medium, the features that promote optimal learning, in order to inform the field of education. Cook extended this claim by arguing that research for e-learning is best done within each level of his four-level framework rather than between levels. While a full discussion of this issue is beyond the scope of this article, we would argue that the same holds true for simulation. By providing a stepwise framework of simulation features, we hope that future research can be tailored to determine the specific features of simulation that are most conducive to learning, by designing studies that compare elements within each level of the framework rather than across it.

In developing this model, it was inevitable that a supporting taxonomy be adapted to its use. We believe this taxonomy will be useful for educators and users of simulation alike, and it is our hope that it will be broadly adopted. Other taxonomies have been developed elsewhere. One such effort was conducted by Alinier (2007), who described a fairly extensive and valuable typology of simulation tools. In contrast to that typology, however, our model does not focus on technology, but on the educational experience that is the end-product of the ID process. As was discussed earlier, there is no inherent characteristic of a patient mannequin that would uniquely affect the final learning outcomes compared, for example, to an actual patient, provided that the tool used (simulator or patient) allows adequate reproduction and training of the targeted competencies. What is more, a drawback of a technology-focused taxonomy is that it often assigns important presentation attributes, such as fidelity, to the tool itself (the simulator) rather than to the pedagogical end-point, the individual educational experience.

This framework has several limitations. While it rests on solid theoretical grounds and relies on published studies, there is still a paucity of available literature specifically aimed at best practices for ID in healthcare simulation. Many of the concepts presented here are grounded in experience and still need to be formally studied and supported. As such, several other presentation elements (such as group size), which probably have an effect on the educational experience, still need to be researched and analyzed. The media selection charts could be expanded in the future (e.g., by adding presentation aspects such as fidelity) in order to improve the model. Given the available literature, it is impossible for such a framework to be entirely prescriptive, although future studies may refine it and move it closer to a fully-prescriptive model.

Furthermore, the terminology used here is bound to be somewhat idiosyncratic. While the authors are educators and simulation experts from across Canada, we acknowledge that local practices may affect the way some definitions and terms are used, especially where the literature is not conclusive on their use.

Finally, the conceptual framework presented here is seen through the lens of education. It is explicitly presented as an ID framework with clearly defined parameters (simulation centered on a patient). As such, it does not address some aspects of simulation that are undoubtedly important endeavors, such as the epistemology of human factors and systems dynamics. Such efforts should be grounded in other frameworks and approached differently.

Conclusion

We have presented a framework for ID in healthcare simulation that provides the simulation educator with tools to appropriately design training sessions based on learning outcomes and instructional intent. The framework is grounded in previously published studies, although many more studies are needed to refine it.

Despite some limitations, this framework fills a void in the areas of ID and research for healthcare simulation. The CNSH hopes that the model provided in this article will be adopted by simulation proponents in Canada and elsewhere to design effective curricula and standardize research. It is further our hope that the framework can serve as a catalyst for the simulation community – which includes clinicians, educators, and experts in other fields – to engage in a discussion about the educational characteristics of simulation and to encourage future research in this field.

Acknowledgments

The authors acknowledge the invaluable contribution of Mr Murray Doggett, both for providing clerical support and for offering important feedback. The authors also recognize the insightful input of Ms Jacqueline Lyndon. Finally, the authors thank two anonymous reviewers whose insights were very helpful in improving this article.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of this article.

References

  • Alinier G. A typology of educationally focused medical simulation tools. Med Teach 2007; 29: e243–e250
  • Appelman R. Designing experiential modes: A key focus for immersive learning environments. TechTrends 2004; 49: 64–74
  • Barrows HS. An overview of the uses of standardized patients for teaching and evaluating clinical skills. Acad Med 1993; 68: 443–451
  • Beaubien JM, Baker DP. The use of simulation for training teamwork skills in health care: How low can you go? Qual Saf Health Care 2004; 13(Suppl 1): i51–i56
  • Bernard T, Stevens A, Wagner P, Bernard N, Oxendine C, Johnsen K, Dickerson R, Raji A, Lok B, Duerson M, et al. A multi-institutional pilot study to evaluate the use of virtual patients to teach health professions students history-taking and communication skills. Simul Healthc 2006; 1: 92
  • Bokken L, Rethans JJ, Scherpbier AJ, van der Vleuten CP. Strengths and weaknesses of simulated and real patients in the teaching of skills to medical students: A review. Simul Healthc 2008; 3: 161–169
  • Buljac-Samardzic M, Dekker-van Doorn CM, van Wijngaarden JD, van Wijk KP. Interventions to improve team effectiveness: A systematic review. Health Policy 2010; 94: 183–195
  • Campbell DJ, Lee C. Self-appraisal in performance evaluation: Development versus evaluation. Acad Manage Rev 1988; 13: 302–314
  • Chase J, Houmanfar R. The differential effects of elaborate feedback and basic feedback on student performance in a modified, personalized system of instruction course. J Behav Educ 2009; 18: 245–265
  • Chumley-Jones HS, Dobbie A, Alford CL. Web-based learning: Sound educational method or hype? A review of the evaluation literature. Acad Med 2002; 77: S86–S93
  • CIHC Competencies Working Group. A national interprofessional competency framework. Canadian Interprofessional Health Collaborative, Vancouver, Canada 2010
  • Clark RE. Dangers in the evaluation of instructional media. Acad Med 1992; 67: 819–820
  • Cleland JA, Abe K, Rethans JJ. The use of simulated patients in medical education: AMEE Guide No 42. Med Teach 2009; 31: 477–486
  • Collins JP, Harden RM. AMEE Medical Education Guide No. 13: Real patients, simulated patients and simulators in clinical examinations. Med Teach 1998; 20: 508–521
  • Cook DA. The research we still are not doing: An agenda for the study of computer-based learning. Acad Med 2005; 80: 541–548
  • Cook DA, Erwin PJ, Triola MM. Computerized virtual patients in health professions education: A systematic review and meta-analysis. Acad Med 2010; 85: 1589–1602
  • Cook DA, Hatala R, Brydges R, Zendejas B, Szostek JH, Wang AT, Erwin PJ, Hamstra SJ. Technology-enhanced simulation for health professions education: A systematic review and meta-analysis. JAMA 2011; 306: 978–988
  • Cook DA, Triola MM. Virtual patients: A critical literature review and proposed next steps. Med Educ 2009; 43: 303–311
  • Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. JAMA 2006; 296: 1094–1102
  • Dieckmann P, Gaba D, Rall M. Deepening the theoretical foundations of patient simulation as social practice. Simul Healthc 2007; 2: 183–193
  • Dreifuerst KT. The essentials of debriefing in simulation learning: A concept analysis. Nurs Educ Perspect 2009; 30: 109–114
  • Earley PC, Northcraft GB, Lee C, Lituchy T. Impact of process and outcome feedback on the relation of goal setting to task performance. Acad Manage J 1990; 33: 87–105
  • Edinger W, Robertson J, Skeel J, Schoonmaker J. Using standardized patients to teach clinical ethics. Med Educ Online 1999; 4: 1–5
  • Ellaway R, Evans P, McKillop J, Cameron H, Morrison J, McKenzie H, Mires G, Pippard M, Simpson J, Cumming A, et al. Cross-referencing the Scottish Doctor and Tomorrow's Doctors learning outcome frameworks. Med Teach 2007; 29: 630–635
  • Endsley MR. Toward a theory of situation awareness in dynamic systems. Hum Factors 1995; 37: 32–64
  • Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med 2004; 79: S70–S81
  • Eva KW. What every teacher needs to know about clinical reasoning. Med Educ 2005; 39: 98–106
  • Falchikov N. Peer feedback marking: Developing peer assessment. Innov Educ Train Int 1995; 32: 175–187
  • Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc 2007; 2: 115–125
  • Fletcher G, Flin R, McGeorge P, Glavin R, Maran N, Patey R. Rating non-technical skills: Developing a behavioural marker system for use in anaesthesia. Cogn Tech Work 2004; 6: 165–171
  • Frank JR, ed. 2005. The CanMEDS 2005 Physician Competency Framework. Better standards. Better physicians. Better care. Ottawa, Canada: The Royal College of Physicians and Surgeons of Canada.
  • Frank JR, Brien S. The Safety Competencies Steering Committee. The safety competencies: Enhancing patient safety across the health professions. Canadian Patient Safety Institute, Ottawa, Canada 2008
  • Friedman CP. The research we should be doing. Acad Med 1994; 69: 455–457
  • Friedman CP. Anatomy of the clinical simulation. Acad Med 1995; 70: 205–209
  • Gaba DM. The future vision of simulation in healthcare. Simul Healthc 2007; 2: 126–135
  • Gaba DM, Howard SK, Fish KJ, Smith BE, Sowb YA. Simulation-based training in anesthesia crisis resource management (ACRM): A decade of experience. Simul Gaming 2001; 32: 175–193
  • Gagné RM, Medsker KL. The conditions of learning: Training applications. Wadsworth, Belmont, CA 1996
  • Gagné RM, Wager WW, Golas KC, Keller JM. Principles of instructional design, 5th ed. Wadsworth, Belmont, CA 2005
  • General Medical Council. Tomorrow's doctors: Outcomes and standards for undergraduate medical education. GMC, London 2009
  • Gorman PJ, Meier AH, Krummel TM. Simulation and virtual reality in surgical education: Real or unreal? Arch Surg 1999; 134: 1203–1208
  • Gross DC. Report from the Fidelity Implementation Study Group. Simulation Interoperability Standards Organization, Orlando, FL 1999
  • Harden RM. Outcome-based education – The ostrich, the peacock and the beaver. Med Teach 2007; 29: 666–671
  • Hassan Z-U, Sloan P. Using a mannequin-based simulator for anesthesia resident training in cardiac anesthesia. Simul Healthc 2006; 1: 44–48
  • Hewson MG, Little ML. Giving feedback in medical education: Verification of recommended techniques. J Gen Intern Med 1998; 13: 111–116
  • Huwendiek S, De Leng BA, Zary N, Fischer MR, Ruiz JG, Ellaway R. Towards a typology of virtual patients. Med Teach 2009; 31: 743–748
  • Ilgen DR, Fisher CD, Taylor MS. Consequences of individual feedback on behavior in organizations. J Appl Psychol 1979; 64: 349–371
  • Issenberg SB, McGaghie WC, Petrusa ER, Gordon DL, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Med Teach 2005; 27: 10–28
  • Jacoby J, Mazursky D, Troutman T, Kuss A. When feedback is ignored: Disutility of outcome feedback. J Appl Psychol 1984; 69: 531–545
  • Johnson DS, Perlow R, Pieper KF. Differences in task performance as a function of type of feedback: Learning-oriented versus performance-oriented feedback. J Appl Soc Psychol 1993; 23: 303–320
  • Kaufmann C, Liu A. Trauma training: Virtual reality applications. Stud Health Technol Inform 2001; 81: 236–241
  • Keane DR, Norman GR, Vickers J. The inadequacy of recent research on computer-assisted instruction. Acad Med 1991; 66: 444–448
  • Kern DE, Thomas PA, Hughes MT, eds. 2009. Curriculum development for medical education: A six-step approach, 2nd. Baltimore, MD: Johns Hopkins University Press.
  • Kneebone R, Kidd J, Nestel D, Asvall S, Paraskeva P, Darzi A. An innovative model for teaching and learning clinical procedures. Med Educ 2002; 36: 628–634
  • Kobayashi L, Suner S, Shapiro MJ, Jay G, Sullivan F, Overly F, Seekell C, Hill A, Williams KA, et al. Multipatient disaster scenario design using mixed modality medical simulation for the evaluation of civilian prehospital medical response: A “dirty bomb” case study. Simul Healthc 2006; 1: 72–78
  • Kolb DA. Experiential learning: Experience as the source of learning and development. Prentice-Hall, Englewood Cliffs, NJ 1984
  • Lampotang S. Virtual anesthesia machine has worldwide impact. APSF Newslett 2003; 18: 57
  • Lareau SA, Kyzer BD, Hawkins SC, McGinnis HD. Advanced wilderness life support education using high-technology patient simulation. Wilderness Environ Med 2010; 21: 166–170.
  • Lee SK, Pardo M, Gaba D, Sowb Y, Dicker R, Straus EM, Khaw L, Morabito D, Krummel TM, Knudson MM. Trauma assessment training with a patient simulator: A prospective, randomized study. J Trauma 2003; 55: 651–657
  • LeFlore JL, Anderson M. Alternative educational models for interdisciplinary student teams. Simul Healthc 2009; 4: 135–142
  • Locke EA, Saari LM, Shaw KN, Latham GP. Goal setting and performance: 1969–1980. Psychol Bull 1981; 90: 125–152
  • Maran NJ, Glavin RJ. Low- to high-fidelity simulation – A continuum of medical education? Med Educ 2003; 37: 22–28
  • McGuire CH. Simulation: Its essential nature and characteristics. Innovative simulations for assessing professional competence: From paper-and-pencil to virtual reality, A Tekian, CH McGuire, WC McGaghie. University of Illinois at Chicago, Chicago 1999; 3–6
  • Monti EJ, Wren K, Haas R, Lupien AE. The use of an anesthesia simulator in graduate and undergraduate education. CRNA 1998; 9: 59–66
  • Moorthy K, Smith S, Brown T, Bann S, Darzi A. Evaluation of virtual reality bronchoscopy as a learning and assessment tool. Respiration 2003; 70: 195–199
  • Morris J. Peer assessment: A missing link between teaching and learning? A review of the literature. Nurse Educ Today 2001; 21: 507–515
  • Norman G. Research in clinical reasoning: Past history and current trends. Med Educ 2005; 39: 418–427
  • Patrick J, James N. A task-oriented perspective of situation awareness. A cognitive approach to situation awareness: Theory and application, SP Banbury, S Tremblay. Ashgate Publishing, Hampshire, England 2004; 61–81
  • Phillips J, Berge ZL. Second life for dental education. J Dent Educ 2009; 73: 1260–1264
  • Raemer DB. Team-oriented medical simulation. Simulators in critical care education and beyond, WF Dunn. Society of Critical Care Medicine, Des Plaines, IL 2004; 42–46
  • Reader T, Flin R, Lauche K, Cuthbertson BH. Non-technical skills in the intensive care unit. Br J Anaesth 2006; 96: 551–559
  • Rehmann AJ, Mitman RD, Reynolds MC. A handbook of flight simulation fidelity requirements for human factors research. U.S. Department of Transportation, Federal Aviation Administration, Atlantic City, NJ 1995
  • Reiser RA, Gagné RM. Selecting media for instruction. Educational Technology Publications, Englewood Cliffs, NJ 1983
  • Rudolph JW, Simon R, Dufresne RL, Raemer DB. There's no such thing as “nonjudgmental” debriefing: A theory and method for debriefing with good judgment. Simul Healthc 2006; 1: 49–55
  • Salas E, Bowers CA, Rhodenizer L. It is not how much you have but how you use it: Toward a rational use of simulation to support aviation training. Int J Aviat Psychol 1998; 8: 197–208
  • Salas E, Burke CS. Simulation for training is effective when…. Qual Saf Health Care 2002; 11: 119–120
  • Salas E, Klein C, King H, Salisbury M, Augenstein JS, Birnbach DJ, Robinson DW, Upshaw C. Debriefing medical teams: 12 Evidence-based best practices and tips. Jt Comm J Qual Patient Saf 2008; 34: 518–527
  • Salas E, Wilson KA, Burke CS, Priest HA. Using simulation-based training to improve patient safety: What does it take? Jt Comm J Qual Patient Saf 2005; 31: 363–371
  • Schär SG, Schluep S, Schierz C, Krueger H. Interaction for computer-aided learning. Interact Multimedia Electr J Comput-Enhanced Learn 2000; 2, [Accessed 1 February 2012] Available from http://imej.wfu.edu/articles/2000/1/03/
  • Schiavenato M. Reevaluating simulation in nursing education: Beyond the human patient simulator. J Nurs Educ 2009; 48: 388–394
  • Schuwirth L. Can clinical reasoning be taught or can it only be learned? Med Educ 2002; 36: 695–696
  • Smith JR, Cole FS. Patient safety: Effective interdisciplinary teamwork through simulation and debriefing in the neonatal ICU. Crit Care Nurs Clin North Am 2009; 21: 163–179
  • Swing SR. The ACGME outcome project: Retrospective and prospective. Med Teach 2007; 29: 648–654
  • Triola M, Feldman H, Kalet AL, Zabar S, Kachur EK, Gillespie C, Anderson M, Griesser C, Lipkin M. A randomized trial of teaching clinical skills using virtual and live standardized patients. J Gen Intern Med 2006; 21: 424–429
  • Tsai T-C, Harasym PH, Nijssen-Jordan C, Jennett P. Learning gains derived from a high-fidelity mannequin-based simulation in the pediatric emergency department. J Formos Med Assoc 2006; 105: 94–98
  • Vukanovic-Criley JM, Boker JR, Criley SR, Rajagopalan S, Criley JM. Using virtual patients to improve cardiac examination competency in medical students. Clin Cardiol 2008; 31: 334–339
  • Wallin CJ, Meurling L, Hedman L, Hedegard J, Fellander-Tsai L. Target-focused medical emergency team training using a human patient simulator: Effects on behaviour and attitude. Med Educ 2007; 41: 173–180
  • Wayne DB, Didwania A, Feinglass J, Fudala MJ, Barsuk JH, McGaghie WC. Simulation-based education improves quality of care during cardiac arrest team responses at an academic teaching hospital: A case-control study. Chest 2008; 133: 56–61
  • Williams V. Designing simulations for learning. E-J Instruct Sci Technol 2003; 6, [Accessed 11 November 2011] Available from http://www.ascilite.org.au/ajet/e-jist/docs/Vol6_No1/contents2.htm
  • Youngblood P, Harter PM, Srivastava S, Moffett S, Heinrichs WL, Dev P. Design, development, and evaluation of an online virtual emergency department for training trauma teams. Simul Healthc 2008; 3: 146–153
  • Youngblood P, Heinrichs L, Cornelius C, Dev P. Designing case-based learning for virtual worlds. Simul Healthc 2007; 2: 246–247
