
Frameworks for learner assessment in medicine: AMEE Guide No. 78

Pages e1197-e1210 | Published online: 16 May 2013

Abstract

In any evaluation system of medical trainees there is an underlying set of assumptions about what is to be evaluated (i.e., which goals reflect the values of the system or institution), what kinds of observations or assessments are useful to allow judgments, and how these are to be analyzed and compared to a standard of what is to be achieved by the learner. These assumptions can be conventionalized into a framework for evaluation. Frameworks encompass, or “frame,” a group of ideas or categories to reflect the educational goals against which a trainee's level of competence or progress is gauged. Different frameworks provide different ways of looking at the practice of medicine and have different purposes. In the first place, frameworks should enable educators to determine to what extent trainees are ready for advancement, that is, whether the desired competence has been attained. They should provide both a valid mental model of competence and also terms to describe successful performance, either at the end of training or as milestones during the curriculum. Consequently, such frameworks drive learning by providing learners with a guide for what is expected. Frameworks should also enhance consistency and reliability of ratings across staff and settings. Finally, they determine the content of, and resources needed for, rater training to achieve consistency of use. This is especially important in clinical rotations, in which reliable assessments have been most difficult to achieve. Because the limitations of workplace-based assessment have persisted despite the use of traditional frameworks (such as those based on knowledge, skills, and attitudes), this Guide will explore the assumptions and characteristics of traditional and newer frameworks. In this AMEE Guide, we make a distinction between analytic, synthetic, and developmental frameworks. Analytic frameworks deconstruct competence into individual pieces, to evaluate each separately. Synthetic frameworks attempt to view competence holistically, focusing evaluation on performance in real-world activities. Developmental frameworks focus on stages of, or milestones in, the progression toward competence. Most frameworks have one predominant perspective; some have a hybrid nature.

The importance of frameworks

Imagine yourself being a clinical specialist, recently appointed as a training director for a clerkship or clinical attachment at a teaching hospital. Students and residents will all visit your department for clinical training. Your institution has asked you to have your faculty evaluate them at the end of their rotations and to report a valid mark for each. Here is where you find yourself somewhat uncomfortable. Teaching is your passion, but assessing students has simply not been easy for you as a teacher, and overseeing the assessments of your fellow teachers seems very complicated. The students’ school and the residents’ program each have their own assessment forms and frameworks for evaluation, and you have trouble understanding these yourself. Explaining them to others and overseeing their evaluations may expose your own lack of experience with educational principles. Students and residents are usually perceived by you and your clinical colleagues as likeable and as “deserving to pass,” but grading them on a scale does not make much sense to you. You yourself like giving all learners “above expectations” marks, because students clearly seem to do their best. You worry that all grading is subjective in any case, and do not feel you know how to get “objective” evaluations from your colleagues. Where can you get help?

A consideration of educational frameworks, as this Guide provides, can help you be clearer in your own thinking and in communicating expectations to your students and faculty. Understanding the basic terminology and principles of some common frameworks can assist you in your own assessment of trainees, and help you to guide the teachers at your clinical site in theirs. You and your colleagues, as inexperienced clinician educators, are not rare. In fact, most clinicians are trained to manage patient conditions, not to judge trainees. Evaluating patient problems may have some resemblance to evaluating trainees, but there are vast differences in theory and practice. Current medical practice builds on abundant evidence, and sources are quickly found to evaluate patients and support decisions. For the evaluation of trainees, many clinicians just use their own experience as a benchmark. However, their judgments about trainees can easily be structured by using a common language and mental model of what is to be expected. This is the goal of a framework.

In the past decades, the assessment of trainees in the workplace has become recognized as an essential component of evaluation, as performance in the workplace is the core of medical competence. George Miller made medical educators aware that competence can and should be evaluated at different levels of proficiency: “knows,” “knows how,” “shows how,” “does” (Miller 1990). The simple four-layered framework he provided, widely known as the Miller Pyramid, alerted educators that if doctors are to be assessed on their clinical ability, there is a higher, more valid level than written tests and even than standardized skills tests.

Miller's Pyramid is an excellent example of a model that frames the minds of educators when assessing students and planning learning experiences (Figure 1).

Figure 1. Miller's pyramid.

Goals for assessment and goals for teaching and curriculum development should be fully intertwined. Educational goals are intended to drive the design of a curriculum, the learning of individuals, and their assessment. When widely shared, a framework of objectives becomes a convention, an agreement between leaders, teachers, and learners about what is important. It establishes a culture of teaching and assessment. It also enables those overseeing educational programs, such as teachers and course directors, to establish categories about which observations are to be collected for the purpose of assessment. Table 1 provides an overview of common frameworks with which educators may be familiar.

Table 1.  Overview of common frameworks to guide teaching and assessment in medical education

The primary assessment effect of frameworks is, in fact, to guide the teachers in their observations—what to look for in a trainee, when, and in what order of importance. Blueprints to choose items for written tests or to devise forms for observation of trainees in practice can be derived from such frameworks. The effect of aligning teaching with assessment is to drive learning in these categories, because students will focus on the categories if they realize these have been designated as the drivers of grading.

Frameworks are powerful in their effects upon the organization of curricula and upon what is learned. Frameworks set up a priori what students are supposed to learn. Although it must be admitted immediately that students learn many things outside the intentions of the formal curriculum, the categories within a framework are the primary expression of an institution's educational values and expectations for learners.

Secondary effects related to frameworks are the consistency and accuracy with which they can be applied by those expected to use them (students and teachers, as well as course directors). Successful application relates to the clarity of the categories, the ease of use of the framework, and the acceptability of its values by the user. Fairness to learners and ultimately to society will depend upon how well, that is, how consistently, reliably, and validly the framework can be applied. This will depend upon both the intrinsic characteristics of the framework (clarity, simplicity, and acceptability) and the resources spent to instruct and train teachers and others to use it. Frameworks, then, serve as a frame of reference for all involved in the curriculum. This Guide views the common frameworks seen in Table 1 through the kind of mental model that is provided, and also gives the definitions, assumptions, and advantages of three kinds of frameworks—analytic, synthetic, and developmental (Table 2).

Table 2.  Summary of frameworks for assessment of competence: Definitions, examples, assumptions, advantages, and limits

Short history of major frameworks to inform teaching and assessment

How did the idea of frameworks arise in education? Ever since educational scientist Ralph Tyler published in 1949 what became known as the “Tyler Rationale,” education started to orient toward outcomes (Tyler 1949). This Rationale poses four simple but powerful questions:

  1. What educational purposes should a school seek to attain?

  2. What educational experiences can be provided that are likely to attain these purposes?

  3. How can these educational experiences be organized?

  4. How can we determine whether these purposes are being attained?

The first and fourth questions, on objectives and assessment, lead to the idea of frameworks. Since Tyler, many educationalists have expanded on this idea, most prominently Benjamin Bloom, whose taxonomy of educational objectives described a cognitive domain (knowledge), a psychomotor domain (manual skills), and an affective domain (attitudes), and has since dominated most of the world's thinking about educational objectives. Bloom's work elaborated on the cognitive domain (Bloom et al. 1956), and other authors have followed with the other domains (Simpson 1972; Krathwohl et al. 1973; de Landesheere 1997; Krathwohl 2002) (see Appendix 2). Since that time, “KSA” (for knowledge-skills-attitudes) has been the dominant, if not exclusive, mental model of generations of teachers.

In the 1980s, educationalists started focusing not only on final objectives of education, but also on developmental milestones. The model devised by Dreyfus and Dreyfus (Dreyfus & Dreyfus 1986), distinguishing five stages (novice, advanced beginner, competent, proficient, and expert), has recently been applied as a developmental framework for medical training (Carraccio et al. 2008).

In medical education, many national and international bodies have devised extensive descriptions of the objectives for undergraduate medical education over the past two decades. Well-known examples are the analytic frameworks of the USA's Medical School Objectives Project (Anderson 1999), the UK's Tomorrow's Doctors (General Medical Council (GMC) 2009), the Scottish Doctor (Scottish Deans’ Medical Curriculum Group 2009), and the Dutch Framework for Undergraduate Medical Education (van Herwaarden et al. 2009). The “RIME” framework (Reporter-Interpreter-Manager-Educator) (Pangaro 1999) has a developmental dimension but is synthetic at the same time, as it integrates Bloom's KSA into the learner roles in clinical practice.

Recently, postgraduate medical education has been renewed in many countries with frameworks of objectives, two of which have become widely known: the Canadian Medical Education Directions for Specialists, in short “CanMEDS” (Frank 2005), and the framework of the Accreditation Council for Graduate Medical Education, the “ACGME framework” (ACGME 1999). The CanMEDS framework now serves to guide medical education development in many countries, both for postgraduate and increasingly for undergraduate education, and the ACGME framework is dominant in postgraduate training in the United States.

Assessment tools in the workplace reflect frameworks on a micro level. There is a wide variety of checklists that focus on the objectives of measurement. Checklists used in Objective Structured Clinical Examinations, in direct observation in clinical settings (Norcini & Burch 2007), and in multi-source feedback tools (Lockyer 2003) all reflect implicit or explicit objectives, but are not always derived from overarching frameworks at the curriculum level.

The difficulty of workplace assessments

Our initial text example of the challenges for teachers in workplace assessment was meant to illustrate how difficult such assessments can be, and to lead to our point that the application of a clear framework can help solve the problem. Research shows that few assessments are so fraught with threats to validity and reliability as workplace-based assessments (Williams et al. 2003). Traditional reliability requirements of assessment cannot easily be met in the workplace. Assessors differ in expertise and experience, the tasks in the workplace that are being assessed differ, and circumstances differ continuously. In addition, “medical competence” includes many different facets, most of which are not visible at a moment of observation. How, then, can judgments about a trainee ever be reliable, or an evaluation of progress be valid?

Assessors are considered to be a major source of measurement error in workplace assessment (Govaerts et al. 2007). There are both systematic and random errors. A systematic error is the widespread tendency to rate medical trainees in the workplace too highly and to “fail to fail” (Dudek et al. 2005). This has been called leniency bias or generosity error, and is caused by several factors, such as a lack of having or applying standards (Albanese 1999). Particularly disturbing is the observation that with increased emphasis on workplace assessment, grades appear to become “inflated” over the years, resulting in lowered standards (Speer et al. 2000).

Halo effects and low intra- and interrater reliability are ubiquitous among untrained assessors of medical trainees (Albanese 2000; Williams et al. 2003). This may in part be caused by the lack of a mental frame of reference (Holmboe et al. 2011), but also by the complexity of the assessment task, or by the tendency of humans to categorize others into predefined groups. Such subjective, socially constructed frameworks that individuals have built over many years may interfere with frameworks that aim to maximize objectivity in assessment (Gingerich et al. 2011). It has also been suggested that the many aspects on which learners must be evaluated, in a busy, distracting clinical setting, simply demand too much of supervisors' cognitive capacity to judge them accurately (Tavares & Eva 2012). Any framework that serves to reduce the cognitive load of assessors is likely to improve the accuracy of ratings.

Frameworks for assessment address precisely this issue. They are one key to achieving common mental models across teachers and settings, which are needed to reduce threats to reliability in workplace ratings.

The primary theoretical and research question is why the availability of frameworks has not been able to overcome the workplace problems inherent in the rater (halo, leniency, etc.) or inherent in the circumstances (changes of case content, complexity, and context). Does rater error stem from frameworks that ask raters to carry out judgments incongruent with what they are judging (Gingerich et al. 2011)? In other words, would different frameworks be better for different assessment tasks? Or do we need more resources and training to employ the same framework in various circumstances? Objectives of education must be translated into frameworks for assessment, which teachers can apply properly in one-on-one situations. This is a major responsibility of training programs, and a major task for clerkship and residency directors. Evaluation of students in a workplace setting can only approach a level of validity and reliability if, first of all, the rater has a frame of reference to benchmark two questions:

  1. What are relevant facets of competence to be taken into account? and

  2. What is superior, adequate and unacceptable performance in each of these aspects?

Secondly, the assessment system must provide the resources to be sure that the available framework is actually used and applied by teachers. This will take training, monitoring, and feedback.

Types of frameworks in medical education—Theory explained

Analytic frameworks, describing aspects of competence

Since the times of Tyler and Bloom, a shift has become apparent from a focus on what happens in a medical school to what is needed in practice. Teachers and schools were the first to devise their own objectives, but increasingly, bodies outside individual schools have been involved in determining the purpose of education. The national frameworks mentioned earlier all focus on an ideal image at the end stage of training, a horizon that should guide teachers and learners. This movement has evolved into what has become known as Outcome-Based Education (OBE) (Harden et al. 1999a; Harden et al. 1999b). Because these approaches are focused upon measurement of outcomes, they divide the desired competence into domains or aspects, for example, knowledge, skills, and attitudes, which preferentially facilitate measurement. Figure 2 illustrates how attributes of a physician's competence are taken apart and allocated into domains (such as the “roles” within the CanMEDS framework or the “competencies” within the ACGME framework), and then even into more specific subunits.

Figure 2. How competence is pictured in an “analytic” model, here using terms from the CanMEDS framework (Royal College of Physicians and Surgeons of Canada 2012).

We have available methods to quantify knowledge retention, whether under the rubric Medical Expert (CanMEDS) or Medical Knowledge (ACGME), as an end-point separated from the skill that may be needed in applying it, and separated from, for example, how knowledge might be incorporated into obtaining a patient's informed consent. It is a feature of analytic frameworks that the relevant dimensions of competence are all encompassed within the framework, and a successful analytic framework will do so clearly for those who have to use it. Fully analytic frameworks focus on description of all facets of competence, which makes them detailed and often hierarchical. Major competencies may be expressed as domains or as roles, and these, in turn, include “sub-competencies” or “enabling competencies,” each of which may be described in further detail. In their initial formulations, many of the national systems mentioned above have, to be complete, listed more than 100 separate abilities or competencies to be assessed. We encourage program and clerkship directors to provide teachers a simple structure on which to hang their terms. This can be done with “KSA,” or even more concisely by using Pellegrino's definition of professionalism (Pellegrino 1979) as a promise of duty (attitude) and expertise (skill and knowledge).

It is an assumption of analytic frameworks that the domains of competence, whether given as abstractions (e.g., ACGME 1999) or roles (e.g., CanMEDS), can be measured discretely. Most outcome-oriented frameworks have an analytic nature; that is, they start with a general set of abstract domains of interest (knowledge, skill, attitude) or a profile of what a graduate of education should look like, usually defined as a set of qualities, for example, a doctor should be a content expert, a communicator, a good collaborator, a scholar, a manager, a health advocate, and a professional (Frank 2005). These aspects, intrinsic to the concept of competence, are then simply unpacked or taken apart (“analyzed”), rather than derived from empirical observation. Next, each of these descriptors is defined on a more detailed level, as these domains of competence are considered too general for teaching and assessment purposes. In many cases, a further level of detail is added. The CanMEDS framework has 7 roles, 134 “elements,” 28 “key competencies,” and 125 “enabling competencies.” The Manager role, for instance, includes 21 elements, 3 of which are “collaborative decision making,” “health human resources,” and “negotiation.” One of its four key competencies is “physicians are able to allocate finite healthcare resources appropriately,” and one of its 13 subordinate enabling competencies is “physicians are able to recognize the importance of just allocation of healthcare resources, balancing effectiveness, efficiency and access with optimal patient care.” The strength of this approach is that it nears a fully comprehensive description of what we expect a physician to be. But the difficulty of highly analytic frameworks is that they lead to long and very detailed lists of objectives that tend to lose clarity. Frameworks are abstractions of the real world that need to be remembered and applied by those who use them. Many people can remember a set of four (RIME; Pangaro 1999), six (ACGME 1999), or seven (CanMEDS 2005) units. More elaborate frameworks with dozens of units are usually not retained by the bulk of the users. This results in what we would call “secondary effects” of the frameworks, which directly affect their reliability in use, such as the ease of their use by the educational community, and the resources needed to train people to use the framework. We know of no studies comparing frameworks with one another in secondary effects, but there is some evidence that simpler frameworks are more effective (Battistone et al. 2002). Further, analytic frameworks assume that, together, the domains of the framework encompass competence, and as a consequence, measuring each domain is essential. This leads to the secondary effect that resources must be committed for each domain to be assessed and documented.

Synthetic frameworks, integrating facets or domains of competence

Frameworks with a synthetic nature are grounded in the practice on which they focus. This approach is essentially integrative and less measurement-oriented than is the case with analytic frameworks (Pangaro 1999). The grounding question is: What activity or task can be entrusted to a trainee, once sufficient competence has been reached? Such tasks, which have been designated “entrustable professional activities” or EPAs (ten Cate 2005), invariably combine multiple domains or facets of competence. In an EPA, such as performing a thoracentesis, multiple attributes (competencies or roles) are required and must be brought together (synthesized), as seen in Table 3.

Table 3.  Facets of competence (required skills) that must be synthesized for successful performance of thoracentesis, and their location within two common analytic frameworks

As seen in Figure 3, EPAs are synthetic in the sense that they combine knowledge, skill, and attitudes (Pangaro 1999).

Figure 3. (a) How competence is pictured in a “synthetic” model, here using the terms from the CanMEDS framework. The seven “roles” combine to allow a given task, here an entrustable professional activity. (b) How competence is pictured in a “synthetic” model, here using the terms from the RIME framework (Pangaro 1999), which synthesizes the elements of expertise and duty (knowledge, skills, and attitudes) into the roles of reporter, interpreter, manager, and educator.

Synthetic frameworks may combine elements of any other given framework, such as expertise in the cognitive domain, communication skills, collaboration skills, and management skills. Several authors have recently presented examples of this approach. For instance, a pediatric resident's being entrusted with managing an adolescent's high-risk health behavior would combine knowledge, communication skills, professionalism, and systems-based practice (Jones et al. 2011). Synthetic, activity-based frameworks are not an alternative to analytic frameworks, but rather complement them. Directors of clinical clerkships and residencies should be quite adept at moving between them. The ACGME competencies and sub-competencies may, for simplicity, each be mapped to the RIME framework (Table 4).

Table 4.  Example of correspondences: Analytic (ACGME) with synthetic (RIME) models

Underlying a discussion of frameworks is a conception of what is being described or “framed” by the terminology used. Currently, major frameworks for medical training are often called “competency frameworks.” We explore this issue here as a way of exploring the uses of frameworks, rather than as a definitive discussion of “competence.” Competency-based medical education has been proposed to link the outcome of education more strongly to what schools believe society expects from a doctor (Carraccio et al. 2002; Frank et al. 2010). The terms “competence” and “competency” have been used in differing ways, and this has resulted in some confusion. Any type of outcome for education in the medical domain has been called a “competency,” and several authors have sought to clarify what it is and how competencies differ from regular educational objectives (ten Cate & Scheele 2007; Albanese et al. 2008; Frank et al. 2010). An authoritative publication proposed as a definition of medical competence:

“The habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served.” (Epstein & Hundert 2002)

Judged by this definition, competence is clearly multi-dimensional, utilizing Bloom's KSA elements to serve the practice of medicine, and grounded in practice. “Competency” is linguistically similar to “competence.” “Competence” is often used in the singular, reflecting a state of the individual's general ability. “Competency,” however, is often used in the plural, as “competencies.” What many people call competencies are components or facets of integrative competence; from our perspective, they reflect an underlying analytic approach, implying multiple facets or skills that must be put together by a learner to be successful. More importantly, “competencies” tend to be abstractions and therefore do not seem to be the most natural units for assessment, unless they are linked to concrete activities which can be observed. This is seen in Table 5, which lists the activities that can be observed to allow the inference that a competency has been achieved.

Table 5.  Example of correspondences between an analytic model (CanMEDS) and a synthetic model (EPAs). (The dots serve as examples and are not the only correct placements.)

Competence should therefore be considered the integrative ability to do something successfully or efficiently (Oxford Dictionaries). Phrased another way, competence brings to each situation or each patient what is required by the situation, with little excessive use of effort or resources (Pangaro 2000). Thus, competence is reflected in a concrete act of the profession in daily practice. The ability to execute an EPA can thus be designated a competency, because that is exactly what an EPA is: an important, perhaps essential, activity that a professional has demonstrated by performing in a way that allows future trust.

To repeat, it would be sensible to call the ability to communicate or collaborate, or to perform any other role, a “domain of competence,” rather than a competency, as is often done, and to call more detailed sub-skills “facets of competence.” Finally, a trainee may be able to technically perform a specific activity, such as placing a chest tube, consistently and reliably, but would not be entrusted to do so unsupervised unless and until this EPA is mastered in a broad and integrative sense, embracing the communication skills, professional attitude, and situational overview that allow patient-safe management in various situations. Any use of the term “competent” or “competency” before a trainee is ready for unsupervised practice is therefore provisional and limited.

It is an assumption of synthetic frameworks that functioning in a social situation, such as in patient care, requires the real-time combination of knowledge, skills, and attitudes. A trainee is not competent until he/she can put the right combination together without having been given a clue in the assessment instructions as to what the essential task at hand is, much less what the right mix is. Competence is a final end-point after years of training, but in the meantime learners must be incorporated into the community of practice (Lave & Wenger 1991) through increasing, real responsibility. The approach is essentially social, in that performance has a clear practice context; it is not behavioral (measurement-oriented, in that it can be observed independent of situation), as is the case with analytic frameworks. Synthetic approaches move from the “cognitive” question of what the student has learned, and even beyond the “behavioral” question of what the student can do (or demonstrate) under test conditions, to what the student “does do,” in a situation with real responsibility, over time, at the top of Miller's Pyramid (Miller 1990). While it is possible to measure functioning in a simulated situation as a “competency” to be demonstrated, we would rather call this a skill; once demonstrated in an actual practice situation, a skill can be called a competency. Thus, the social approach, implicit in synthetic frameworks, also makes clear the difference between “shows how” and “does” in Miller's Pyramid.

It is a further assumption of the synthetic model that performance is sustained over time and over multiple patients to enable entrustment of on-going responsibility for the task or role. Entrustment decisions for unsupervised practice, taken after a threshold of minimum competence has been passed (ten Cate et al. 2010), usually require a certain duration of sustained practice to consolidate this competency.

The RIME model (Pangaro 1999) is an example of a synthetic framework. It was designed to describe minimum expectation levels of medical students in the setting of their clerkships (or attachments) in the clinical workplace. The model describes levels of function in the clinical setting: (1) Reporter, (2) Interpreter, (3) Manager, and (4) Educator (Figure 3b). A student, for instance, who did not demonstrate consistent reliability as a “reporter” in gathering an accurate daily description of their patients’ symptoms, physical findings, and laboratory studies would not be allowed to progress to a higher level of responsibility, such as advancement to the next year of training, without remediation. These “RIME levels” correspond to a simple rhythm of observation, reflection, and action, with managing and educating seen as two levels of proficiency in the realm of action. In a sense, the RIME framework is a simple elaboration of what patient care encompasses: gathering clinical findings, interpreting them, proceeding to a plan for the patient (diagnostic, therapeutic, and counseling), and educating and leading the health care team. The framework has been presented as a vocabulary, stressing the fact that much of the communication and consensus about education, assessment, and milestones is a linguistic issue. Finding the right words to express student progress is hugely important for learners, teachers, and administrators. After its introduction, the RIME vocabulary quickly caught on in North American medical education (Hemmer et al. 2008), and was found feasible in a wide variety of settings (Battistone et al. 2002). One reason may well be that its synthetic nature is recognized as directly related to patient care responsibilities, and thus is more congruent with clinicians’ usual judgment systems (Gingerich et al. 2011).

Synthetic terminologies typically use concrete terms, are less often expressed in generic abstractions, and often describe roles. The terms “Medical Expert” and “Advocate” from the CanMEDS framework, for instance, imply a task or role to be filled, just as “reporter” in the RIME scheme is a role to be entrusted. The performance dimensions of a synthetic framework cannot typically be unpacked from the concept of competence, but are derived from actual practice. This analytic-synthetic distinction was recognized as far back as Immanuel Kant in philosophy (Rey 2008). Analytic propositions are logically true by virtue of the meaning of the words alone, while synthetic propositions are known to be true from how their meaning relates to the world. Synthetic frameworks, such as RIME, depend upon workplace observation of the tasks and roles that physicians perform, rather than being abstractions derived from, or “analyzed” from, a prior concept of what competence would include.

It makes sense that a mental model derived from the actual practice of those using a framework would have advantages. Gathering and communicating clinical information (reporting), reaching conclusions (interpreting), and formulating plans (managing) are part of the daily work of physicians. Whether the person is in training or in subsequent practice, the underlying construct (mental model) reflects the daily workplace tasks of physicians, and thus is more easily available than one derived from abstractions, such as Bloom's knowledge-skills-attitudes approach. The synthetic approach takes advantage of two abilities which physicians apply in patient care—pattern recognition and reaching conclusions from messy sets of findings. The RIME scheme asks raters to collect observations about a student's performance on a patient or series of patients over time, and to compare these to an image in their own mind of what an “interpreter” looks like. This fits with what we know about pattern recognition skills in physicians (Elstein et al. 1978) and rating as a categorization process (Gingerich et al. 2011). Because the student's abilities may not completely fit a pure pattern, and may show some aspects of the interpreter (e.g., providing a good differential diagnosis) but be deficient in reporting (e.g., contradictory documentation of key findings), the rater can still make a judgment about how to describe the student despite some pieces of the picture that do not quite fit. This fits with our understanding of judgments by expert raters.

Developmental frameworks, focused on progression

A different approach to frameworks is offered by developmentally oriented models. Social theories of learning deal explicitly with this social-contextual dimension and with how the learner, starting as a novice first-year medical student, is progressively more included in a community of medical practice (Lave & Wenger 1991). In a developmental framework, the learner progresses step-wise up a ladder toward independence. Developmental frameworks always mention stages or milestones in the development of the learner, as opposed to the more static outcome-based frameworks mentioned above.

The growth of children has often been used as an image or metaphor for the growth of students in an educational process. In fact, the etymology of the Greek term “pedagogy” is “leading a child”—it became the overall term for instructional methods. “Education” comes from “leading out of” (Latin: e–ducere) and also visualizes a leading out of dependence. Seeing progress and growth as the basis of the learning process is quite old. Plato describes psychological growth as progress from an awareness of superficial, concrete details toward a perception of the true meaning and form underlying them (Kenny 2004). This is directly analogous to moving from signs and symptoms to an underlying concept of a pathological process, the diagnosis. Similarly, Piaget, founder of developmental psychology, describes a scheme in which children progress from sensation of the concrete to abstraction and understanding (Piaget & Inhelder 1969). A frequently cited developmental framework in higher education, devised for expertise development by Dreyfus and Dreyfus, includes five stages: novice, advanced beginner, competent, proficient, and expert (Dreyfus & Dreyfus 1986). The model has recently been translated and adapted to medical education (Carraccio et al. 2008).

The assumption of the developmental model is that there are stages, or steps of progression, in a logical order, and that each step is required for progression. Once one is an advanced beginner in a task, one no longer looks or behaves like a novice. The model is essentially organic in nature, and the final developmental stage is the end-product of the series. In a developmental model, the term “competence” is used as one step, probably the most important, but not necessarily the final step, as the Dreyfus model shows. “Competent” can at least be viewed as a threshold that should permit a certain independence of the learner (ten Cate et al. 2010). The developmental model provides a framework or scaffold to which educators must add considerable detail to convey what is expected. The Dreyfus terms in particular are intentionally generic and do not give learners or teachers a concrete picture of what is expected. To the extent that the Dreyfus steps are generic and derived from an understanding of the basic concept, they are not dependent on empirical observations of what competence looks like in practice. On the other hand, the use of “milestones” to document progression toward independent practice is clearly empiric, with the objectives chosen by the observation of experts. To achieve the consistency of use that allows reliable application of the framework to specific students in specific settings, a lot of work must still be done. A first attempt has been made by Carraccio and colleagues (Carraccio et al. 2008), who have provided some terms for what progress in medical expertise looks like, for example, from “novice” (for whom performance is rule driven) to “advanced beginner” (uses both analytic reasoning and pattern recognition) to “expert” (recognizes the limits of pattern recognition). Recently, a full document was completed describing the pediatric milestones for each of the ACGME sub-competencies in behavioral terms, based on this framework (Schumacher et al. 2013). It enables the construction of detailed observational frames of reference for evaluation.

Though not an intrinsic assumption, it is often true that in a developmental framework trainees leave earlier stages behind as they progress. To function again as a “novice” after having achieved “expert” status would be seen as a relapse. This is one reason that RIME is not a fully developmental framework; residents who are “managers” continue to acquire and interpret clinical findings. In fact, those at the expert level in the RIME scheme typically perform all four roles in the same patient interview. What happens is an accumulation of stages that integrates into the full range of necessary elements of clinical practice.

The hybrid nature of most frameworks

Most frameworks can be labeled predominantly as analytic, synthetic, or developmental, but have features of the other models.

Within the seven analytic CanMEDS roles, the “medical expert” role is explicitly central. The CanMEDS logo shows Medical Expert as a central role, overlapping with all other six roles, which is an attempt to synthesize; such visual appearance conveys an important message (Zibrowski et al. 2009). Teachers and learners need to have a concrete, and rich, idea of what a successful “medical expert” looks like, and what feedback to enhance this role would sound like. Although not yet made explicit by the ACGME, we would argue that the competency domain of “patient care” is clearly the dominant domain, which all others really support. “Patient care” is itself a synthetic, multidimensional term for which faculty development efforts must focus on developing a shared meaning, across teachers and settings. One cannot be superb in patient care while at the same time mediocre in the other domains. Others have argued that the “professional” role, distinguished in both the CanMEDS and ACGME frameworks, should rather synthesize all other roles, or that reflection should be added as a central role (Gans 2009).

Developmental features of analytic and synthetic frameworks are also apparent. Because medical education may span well over a decade of training, it is clear that educators must spend effort to articulate the developmental aspect of any framework that they use. Since 2009, considerable effort has been expended to translate the ACGME competencies into milestones which can benchmark progress (Green et al. 2009); these are five levels of “developmentally based, specialty-specific achievements that residents are expected to demonstrate at established intervals as they progress through training,” which will be mandatory for all postgraduate medical training from 2014 onward (Nasca et al. 2012) (Table 6).

Table 6.  Milestones levels as reflecting stage of training

As the starting point of the ACGME model was analytic, the developmental aspect was not intrinsic, but is now under development in the form of “milestones” (Green et al. 2009). The combination of competency domains with milestones now clearly results in a hybrid framework. Table 7 provides an example of how milestones may be related to, or hung upon, the RIME framework.

Table 7.  Correspondence of synthetic framework (here RIME) with milestones that benchmark learner progress

On the other hand, the synthetic RIME framework has a developmental aspect, which has allowed it to be widely used in clerkships in the United States to guide judgments on advancement to the next year of training (Hemmer et al. 2008). Yet it is not strictly developmental, in that those who have earned interpreter “status” do not leave reporting tasks behind. In fact, they get better at reporting. At the final stage of competence in RIME, physicians in practice typically gather information from patients, interpret, manage, and educate their patients simultaneously.

Most educators have the role of fostering independence over time, and program and clerkship directors must be able to describe and communicate developmentally appropriate goals. They do not have an either/or choice of formative versus summative assessment goals, and the merging of approaches is most useful. We need to have both the final goal, the outcome, in mind and the level-appropriate expectations for each stage of training. While the eventual goal of training is independent, unsupervised practice, tools that include a developmental aspect allow us to structure the expectations for this student, this year, and this day; to be efficient in our use of time and effort for the student at hand; and to create focus on the “next step” for this learner, without distracting their attention with goals or responsibilities that are beyond their current ability. Two related methods within our discussion illustrate this. Entrustable professional activities (ten Cate 2005) are specific tasks that are chosen by educators as level-appropriate responsibilities that can be trained to, assessed, and then conferred, as a trainee acquires more and more legitimate roles, for which he/she will be accountable, in the social setting of patient care. Within the RIME scheme, the role of the “reliable reporter” is used in clinical clerkships as a demonstrable EPA for having less immediate oversight of one's daily gathering of patient findings, and for progressing to the next step in clinical training.

Guiding teachers—the use of frameworks for the assessment of learners—Theory in practice

Program and clerkship directors must be conversant with different frameworks, and use them as needed. This Guide includes several tables that demonstrate the correspondences between different kinds of frameworks. Being aware of the correspondence between frameworks may lead to enhanced understanding of each. The developmental stages within one dimension (the cognitive) of the analytic knowledge-skill-attitude framework can be used to reflect the progressively higher levels required within a synthetic framework such as RIME (Table 8). As learners progress from “reporter” to “interpreter” roles within the RIME framework, they must not only possess remembered, factual knowledge, but acquire understanding and conceptual knowledge.

Table 8.  Example of correspondences between the RIME roles and aspects of knowledge from the Bloom taxonomy (Rodriguez R after Krathwohl, used with permission)

Within the assessment process it is important to realize that specific tasks or activities can be observed and documented, and that the competencies or skills required to perform the task are inferred from these observations. This was illustrated above in Table 5.

This may aid in understanding the approach of each, and also allow educators to guide the assessments of their teaching faculty. One's own role in the educational process, and the timing and purpose of the assessment—for instance, formative or summative—will determine the kind of framework that will be most useful. When we must determine whether a trainee is ready for independent practice, or when we prepare physicians for licensing, then our emphasis must be on an outcomes-oriented framework. This requires a dichotomous pass–fail focus, and the learner's attainment of intermediate milestones in a developmental framework is then less important. Following the analytic framework approach, there is a need to assess and document all aspects of the domain. Checklists of goals for assessments are to be applied, including elements in vivo with real patients, and in vitro with simulations or written exams, leading to valid pass–fail standards to be met. Following the synthetic framework approach, the focus is on functioning in the workplace with real patients and practical tasks and varying circumstances that require a holistic view of the situation, leading to trust in a trainee to work with no more than backstage supervision by those legally responsible for patient care. The workplace requires the trainee to work with real patients and to adapt their general textbook knowledge or prior skills and attitudes to specific patient contexts and practice circumstances. This requires “situated cognition” and an assessment rubric that is robust in the real world (in vivo) setting. We believe that the assessment terms should be broad enough to allow teachers to apply their own expert judgment. Specifically, they would assess the general competency domains of Patient Care (ACGME) or Medical Expert (CanMEDs), perhaps using the reporter-interpreter-manager-educator model.

By contrast, if our educational role is to foster growth over a long process from undergraduate to graduate medical education, then an explicitly developmental framework becomes essential. Structured observation and feedback are designed for improvement and advancement, not a summative decision.

If the framework of our culture or institution does not provide it, then we must articulate the expectations at each level that are required to fulfill a role with increasing responsibility and decreasing supervision, or to advance to the next level of training. The particular problem posed by synthetic approaches is that the time-honored available methods, judgments by raters presumed to be expert, have not been systematically studied. In fact, the analytic approaches of recent decades have emphasized the importance and highlighted the difficulty of psychometrically defensible quantified measurements (Lurie et al. 2011), which have de-emphasized and perhaps devalued more descriptive evaluations (Pangaro 2000).

In addition to the availability of proven in vivo assessment approaches for the workplace, there is a need to deploy them in the care of actual patients. The question is: How do we structure the observation of a trainee close to independent practice, or rather a set of observations to sample their consistency over time, in a way that allows judgment of “independent” function and still does not compromise patient care? Studies are underway, but this will remain a field for further research for quite some time, in which trust seems a key element (Kennedy et al. 2005; Sterkenburg et al. 2010; Wijnen-Meijer et al. 2013). Newer frameworks and approaches, like RIME, may have a higher burden of proof than more traditionally accepted approaches like “KSA.” However, there have been some encouraging studies of the reliability of the RIME approach (Durning et al. 2003), its validity (Tolsgaard et al. 2012), and its feasibility (Battistone et al. 2002).

As we have mentioned earlier, assessment of learners in the clinical workplace is a difficult task, and the community of medical educators is currently only at the beginning of finding answers to the many psychometric challenges it imposes. However, we believe that a frame of reference to evaluate trainees is a necessary, although not sufficient, prerequisite to arrive at defensible decisions to entrust trainees with the responsibility for unsupervised practice. This mental frame of reference, likely a combination of analytic, synthetic, and developmental approaches, stems from more than a document. Rather, it is a shared educational culture, grounded in clear language and supported by training (Holmboe et al. 2011), that will eventually justify decisions that need to be taken about trainee progress and certification.

Finally, we would like to emphasize that there is no one “correct” or “best” framework for all situations. This may be true even when a particular framework has been prescribed by a regulatory body. A framework reflects a vision within its own time and context. We believe that regularly reflecting on the strengths and weaknesses of a framework is extremely useful. The Dutch Framework of objectives for undergraduate medical education (van Herwaarden et al. 2009) has been updated roughly every eight years since 1994. In guiding the work of teachers and learners, we urge those leading the educational process to look at the advantages and limits of the alternative frameworks, and to decide what seems best for the purpose at hand, and for those who will use the framework. Viewed in this manner, frameworks are a means to an end, rather than the end itself.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the article.

The opinions herein do not represent official positions of the Uniformed Services University or the Department of Defense of the United States.

Notes

1. For purposes of this paper, we will use the word Assessment to refer to the process of making observations about the learner's proficiency and comparing these to a standard, and Evaluation to mean the process of making a judgment that gives meaning to observations about the learner's proficiency, usually by comparing to expectations. Grading will refer to the action of making decisions that allow advancement (Pangaro & McGaghie 2012). The three terms yield the rhythm of observation, judgment, and action.

References

  • Accreditation Council for Graduate Medical Education (ACGME). 1999. ACGME core competencies. [Accessed 21 July 2012]. Available from http://www.mcw.edu/MedicalSchool/EducationalServices/GraduateMedicalEducation/ACGMECoreCompetencies.htm
  • Albanese M. Challenges in using rater judgements in medical education. J Eval Clin Pract 2000; 6(3)305–319
  • Albanese M. Rating educational quality: Factors in the erosion of professional standards. Acad Med 1999; 74(6)652–658
  • Albanese MA, Mejicano G, Mullan P, Kokotailo P, Gruppen L. Defining characteristics of educational competencies. Med Educ 2008; 42(3)248–255
  • Anderson B. Learning objectives for medical student education—Guidelines for medical schools: Report I of the Medical School Objectives Project. Acad Med 1999; 74(1)13–18
  • Battistone MJ, Milne C, Sande MA, Pangaro LN, Hemmer PA, Shomaker TS. The feasibility and acceptability of implementing formal evaluation sessions and using descriptive vocabulary to assess student performance on a clinical clerkship. Teach Learn Med 2002; 14(1)5–10
  • Bloom BS, Engelhart MD, Furst EJ, Hill WH, Krathwohl DR (Eds). Taxonomy of educational objectives: The classification of educational goals; Handbook I: Cognitive domain. Longmans, Green & Co. Ltd, London 1956
  • Carraccio C, Wolfsthal SD, Englander R, Ferentz K, Martin C. Shifting paradigms: From Flexner to competencies. Acad Med 2002; 77(5)361–367, Available from http://www.ncbi.nlm.nih.gov/pubmed/12010689
  • Carraccio CL, Benson BJ, Nixon LJ, Derstine PL. From the educational bench to the clinical bedside: Translating the Dreyfus clinical skills. Acad Med 2008; 83(8)761–767
  • de Landesheere V. Taxonomies of educational objectives. In: Keeves J, editor. Educational research, methodology and measurement—An international handbook. Pergamon/Elsevier Science Ltd, Oxford 1997; 803–812
  • Dreyfus HL, Dreyfus SE. Mind over machine. Free Press, New York 1986
  • Dudek NL, Marks MB, Regehr G. Failure to fail: The perspectives of clinical supervisors. Acad Med 2005; 80(10 Suppl)S84–87
  • Durning S, Pangaro L, Denton GD, Hemmer P, Wimmer A, Grau T, Gaglione MA, Moores L. Inter-site consistency as a standard of programmatic evaluation in a clerkship with multiple, geographically separated sites. Acad Med 2003; 78: S36–S38
  • Elstein AS, Shulman LS, Sprafka SA. Medical problem solving. An analysis of clinical reasoning. Harvard University Press, Cambridge, Massachusetts 1978
  • Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA 2002; 287(2)226–235
  • Frank JR. The CanMEDS 2005 physician competency framework: Better standards, better physicians, better care. Royal College of Physicians and Surgeons of Canada, Ottawa 2005
  • Frank JR, Mungroo R, Ahmad Y, Wang M, De Rossi S, Horsley T. Toward a definition of competency-based education in medicine: A systematic review of published definitions. Med Teach 2010; 32(8)631–637
  • Gans ROB. Mentoring with a formative portfolio: A case for reflection as a separate competency role. Med Teach 2009; 31(10)883–884
  • Gingerich A, Regehr G, Eva KW. Rater-based assessments as social judgments: Rethinking the etiology of rater errors. Acad Med 2011; 86(10 Suppl)S1–7
  • Govaerts MJ, van der Vleuten CP, Schuwirth LW, Muijtjens AM. Broadening perspectives on clinical performance assessment: Rethinking the nature of in-training assessment. Adv Health Sci Educ Theory Pract 2007; 12(2)239–260
  • General Medical Council (GMC). 2009. Tomorrow's Doctors—The duties of a doctor registered with the General Medical Council. London: General Medical Council. Available at http://www.gmc-uk.org/TomorrowsDoctors_2009.pdf_39260971.pdf
  • Green ML, Aagaard EM, Caverzagie KJ, Chick DA, Holmboe E, Kane G, Smith CD, Iobst W. Charting the road to competence: Developmental milestones for internal medicine residency training. J Grad Med Educ 2009; 1(1)5–20
  • Harden RM, Crosby JR, Davis MH, Friedman M. AMEE Guide No. 14. Outcome-based education: Part 5—From competency to meta-competency: A model for the specification of learning outcomes. Med Teach 1999a; 21(6)546–552
  • Harden RM, Crosby JR, Davis MH, Fuller T. AMEE Guide No. 14: Outcome-based education: Part 1—An introduction to outcome-based education. Med Teach 1999b; 21(1)7–14
  • Hemmer PA, Papp KK, Mechaber AJ, Durning SJ. Evaluation, grading, and use of the RIME vocabulary on internal medicine clerkships: Results of a national survey and comparison to other clinical clerkships. Teach Learn Med 2008; 20(2)118–126
  • Holmboe ES, Ward DS, Reznick RK, Katsufrakis PJ, Leslie KM, Patel VL, Ray DD, Nelson EA. Faculty development in assessment: The missing link in competency-based medical education. Acad Med 2011; 86(4)460–467
  • Jones Jr MD, Rosenberg AA, Gilhooly JT, Carraccio CL. Perspective: Competencies, outcomes, and controversy—Linking professional activities to competencies to improve resident education and practice. Acad Med 2011; 86(2)161–165
  • Kennedy TJ, Regehr G, Baker GR, Lingard LA. Progressive independence in clinical training: A tradition worth defending?. Acad Med 2005; 80(10 Suppl)S106–111
  • Kenny A. Ancient philosophy. Oxford University Press, New York 2004; 49–53
  • Krathwohl DR. A revision of Bloom's taxonomy: An overview. Theor Pract 2002; 41(4)212–218
  • Krathwohl DR, Bloom BS, Masia BB. Taxonomy of educational objectives, the classification of educational goals. Handbook II: Affective domain. David McKay Co, Inc, New York 1973
  • Lave J, Wenger E. Situated learning: Legitimate peripheral participation. Cambridge University Press, Cambridge 1991
  • Lockyer J. Multisource feedback in the assessment of physician competencies. J Contin Educ Health Prof 2003; 23(1)4–12
  • Lurie SJ, Mooney CJ, Lyness JM. Commentary: Pitfalls in assessment of competency-based educational objectives. Acad Med 2011; 86(4)412–414
  • Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990; 65(9 Suppl)S63–S67
  • Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—Rationale and benefits. N Engl J Med 2012; 366(11)1051–1056
  • Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Med Teach 2007; 29(9)855–871, Also available from http://www.ncbi.nlm.nih.gov/pubmed/18158655
  • Oxford Dictionaries. [Accessed 10 December 2012]. Available at http://oxforddictionaries.com
  • Pangaro L. A new vocabulary and other innovations for improving descriptive in-training evaluations. Acad Med 1999; 74(11)1203–1207
  • Pangaro LN. Investing in descriptive evaluation: A vision for the future of assessment. Med Teach 2000; 22(5)478–481
  • Pangaro LN, McGaghie WC. Evaluation of students. In: Morgenstern BZ, editor. Guidebook for clerkship directors. 4th ed. Gegensatz Press, North Syracuse, New York 2012
  • Pellegrino ED. Humanism and the physician. University of Tennessee Press, Knoxville 1979; 222–224
  • Piaget J, Inhelder B. The psychology of the child. Basic Books, New York 1969
  • Rodriguez R, Pangaro L. Mapping the ACGME competencies to the RIME framework. Acad Med 2012; 87(12)1781
  • Rey G, 2008. The analytic/synthetic distinction. Stanford encyclopedia of philosophy. Available at: http://plato.stanford.edu/entries/analytic-synthetic/
  • Scottish Deans’ Medical Curriculum Group. 2009. The Scottish doctor. 3rd ed. [Accessed 21 July 2012]. Available from http://www.scottishdoctor.org/index.asp
  • Schumacher DJ, Lewis KO, Burke AE, Smith ML, Schumacher JB, Pitman MA, Ludwig S, Hicks PJ, Guralnick S, Englander R, et al. 2013. The pediatrics milestones: Initial evidence for their use as learning road maps for residents. Acad Pediatr 13(1):40–7
  • Simpson EJ. The classification of educational objectives in the psychomotor domain. Gryphon House, Washington, DC 1972
  • Speer AJ, Solomon DJ, Fincher RME. Grade inflation in internal medicine clerkships: Results of a national survey. Teach Learn Med 2000; 12(3)112–116
  • Sterkenburg A, Barach P, Kalkman C, Gielen M, ten Cate O. When do supervising physicians decide to entrust residents with unsupervised tasks?. Acad Med 2010; 85(9)1408–1417
  • Tavares W, Eva KW. Exploring the impact of mental workload on rater-based assessments. Adv Health Sci Educ Theory Pract 2012, doi: 10.1007/s10459-012-9370-3
  • ten Cate O. Entrustability of professional activities and competency-based training. Med Educ 2005; 39(12)1176–1177
  • ten Cate O, Scheele F. Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice?. Acad Med 2007; 82(6)542–547
  • ten Cate O, Snell L, Carraccio C. Medical competence: The interplay between individual ability and the health care environment. Med Teach 2010; 32(8)669–675
  • Tolsgaard MG, Arendrup H, Lindhardt BO, Hillingsø JG, Stoltenberg M, Ringsted C. Construct validity of the reporter-interpreter-manager-educator structure for assessing students' patient encounter skills. Acad Med 2012; 87(6)799–806
  • Tyler RW. Basic principles of curriculum and instruction. University of Chicago Press, Chicago 1949
  • van Herwaarden CLA, Laan RFJM, Leunissen RRM. 2009. The 2009 framework for undergraduate medical education in the Netherlands, Utrecht. Available from http://www.vsnu.nl/Media-item/Raamplan-Artsopleiding-2009.htm
  • Wijnen-Meijer M, Van der Schaaf M, Nillesen K, Harendza SD, ten Cate O. 2013. Essential facets of competence that enable trust in graduates: A Delphi study among physician educators in the Netherlands. J Grad Med Educ 5(1)46–53
  • Williams RG, Klamen DA, McGaghie WC. Cognitive, social and environmental sources of bias in clinical performance ratings. Teach Learn Med 2003; 15(4)270–292
  • Zibrowski EM, Singh SI, Goldszmidt MA, Watling CJ, Kenyon CF, Schulz V, Maddocks HL, Lingard L. The sum of the parts detracts from the intended whole: Competencies and in-training assessments. Med Educ 2009; 43(8)741–748
