Methods to assess students’ acquisition, application and integration of basic science knowledge in an innovative competency-based curriculum

S. Beth Bierer, Elaine F. Dannefer, Christine Taylor, Phillip Hall & Alan L. Hull
Pages e171-e177 | Published online: 03 Jul 2009

Abstract

Background: The Cleveland Clinic Lerner College of Medicine was designed to encourage medical students to pursue careers as physician investigators. Our faculty decided that assessment should enhance learning and adopted only formative assessments to document student performance in relation to nine broad-based competencies. No grades are used to judge student performance throughout the 5-year program. Instead, assessments are competency-based, relate directly to performance standards, and are stored in e-Portfolios to track progress and document student achievement. The class size is limited to 32 students a year.

Aims: Schools with competency-based curricula must provide students with formative feedback to identify performance gaps and monitor progress. We describe a systematic approach to assess medical knowledge using essay-type questions (CAPPs) and multiple choice questions (SAQs) to provide medical students with weekly, formative feedback about their abilities to acquire, apply and integrate basic and clinical science concepts.

Method: Processes for developing performance standards, creating assessment items, training faculty, reporting student performance and monitoring outcomes are described. A case study of a Year 1 course is presented with specific examples of CAPPs and SAQs to illustrate how formative assessment data are interpreted and reported in students’ e-Portfolios.

Results: Preliminary evidence suggests that CAPPs and SAQs have a positive impact on students’ education, a justifiable cost in light of obtained benefits and growing acceptance among stakeholders. Two student cohorts performed significantly above the population mean on USMLE Step 1, which suggests that these assessment methods have not disadvantaged students. More evidence is needed to assess the reliability and validity of these tools for formative purposes.

Conclusions: Using assessment data for formative purposes may encourage application and integration of knowledge, help students identify performance gaps, foster student development of learning plans and promote student responsibility for learning. Discussion provides applications for institutions with larger classes to consider.

Introduction

Despite repeated calls over the past 25 years for new approaches to assess students’ medical knowledge (GPEP Citation1984; ACME-TRI Report Citation1992; GMC Citation1993), many faculties continue to rely solely on traditional approaches, such as multiple choice questions, to make summative decisions about student performance or to assign grades to students. Unfortunately, using assessment methods for only summative performance decisions may emphasize and reinforce rote memorization, thereby undermining the habits of self-directed learning and reflection we hope to instill in medical students (Cooke et al. Citation2006; Epstein Citation2007). The impetus to use assessment tools differently to assess medical knowledge is growing, however, as more medical schools adopt competency-based curricula. When students are expected to meet explicit competencies, schools have the responsibility to provide students with formative feedback about gaps in knowledge and areas for improvement to help students track progress (Ben-David Citation1999; Smith et al. Citation2003; Goldstein et al. Citation2005; Fishleder et al. Citation2007; Litzelman & Cottingham Citation2007).

In this article, we describe a systematic approach using two methods to assess medical knowledge in a competency-based curriculum. Both essay questions and multiple-choice questions are used to provide medical students with weekly, formative feedback about their abilities to acquire, apply and integrate basic and clinical science concepts essential to medical practice. We describe all assessment system components that guided the planning, faculty development and implementation of the essay and multiple-choice question assessments before presenting the Renal I course as a case example.

Background

Curriculum structure in Years 1 & 2

The Cleveland Clinic Lerner College of Medicine (CCLCM) was created in 2002 with the mission to train medical students to pursue careers as physician investigators (Fishleder et al. Citation2007). The basic science curriculum was developed by integrating learning objectives from curricular threads representing basic science disciplines (e.g. anatomy, physiology, etc.) with applications of these learning objectives to core clinical disciplines (e.g. cardiology, pulmonary, renal, etc.). The building blocks of the curriculum in Years 1 and 2 are organ-based system courses in which a problem-based learning (PBL) case serves as the core learning activity each week and is augmented with interactive seminars focusing on basic science, research and clinical science concepts. Interwoven with these activities are longitudinal clinical experiences and clinical skill development sessions. Organizationally, each organ-system course has a planning committee that consists of representatives from the clinical specialty and each of the basic science threads. The class size is limited to 32 students a year, with the first class graduating from the 5-year program in May 2009.

Assessment and competencies

To achieve the goal of helping students develop a critical approach to self-assessment and self-improvement, we needed an educational environment that challenged students intellectually and engaged them actively in both learning and assessment processes. Our faculty decided the goal of assessment should be to enhance learning. Consequently, the faculty committed to develop assessment tools to provide students with only formative, diagnostic feedback in relation to nine broad-based competencies (e.g. medical knowledge, research, clinical skills, clinical reasoning, communication, professionalism, health care systems, personal development and reflective practice) required of physician investigators. No grades or class ranks are used to document student performance during any curricular experience (e.g. courses, clinical rotations, research experiences, electives, etc.) throughout the 5-year program. Instead, assessments are competency-based in relation to performance standards and student assessment evidence is stored in e-Portfolios to track progress and document student achievement (Dannefer & Henson Citation2007).

Groups of faculty experts defined developmentally appropriate standards on which to base judgments of student performance for each of the nine competencies. The medical knowledge competency, which is defined as the ability to ‘Demonstrate and apply knowledge of human structure and function, pathophysiology, human development and psychosocial concepts to medical practice,’ has three performance standards. Two standards are the same for Years 1 and 2: ‘Identifies and acknowledges gaps in knowledge and develops and implements plans to correct,’ and ‘Achieves breadth and depth of knowledge in the curricular threads (e.g., physiology, pharmacology, etc.).’ The third standard reflects the curricular focus on normal physiology in Year 1, ‘Applies basic science principles to new problems in the basic and clinical sciences relevant to medicine,’ and pathology in Year 2, ‘Applies core concepts of pathophysiology to new problems in the basic and clinical sciences relevant to medicine.’

Methods to assess medical knowledge

A faculty Student Assessment Committee (SAC) was formed to develop institutional policies and recommend assessment tools. The SAC sought assessment tools that provided students with high-quality, formative feedback in relation to the nine competencies and associated performance standards. Following much discussion, the SAC identified two methods to document the depth and breadth of students’ medical knowledge. These methods, known as concept appraisals (CAPPs) and self-assessment questions (SAQs), were adopted for all courses in Years 1 and 2 of the CCLCM program.

CAPPs

These are essay-type questions designed to challenge a student's ability to integrate and apply core concepts presented during the week or from previous weeks/courses when applicable. Students receive 2–3 CAPPs on Wednesday and are required to submit their responses electronically by the following Monday. Students get two sources of feedback for each CAPP response: a standard faculty ‘answer’ and individualized faculty feedback. Faculty-generated answers are posted on the school's curriculum portal after Monday's submission deadline to help students assess the accuracy and comprehensiveness of their CAPP responses and focus independent study of unclear concepts during the following week. A faculty member also reads all CAPP responses and provides each student with individualized narrative feedback within seven days of the submission deadline. This feedback is automatically added to each student's e-Portfolio, which also includes the student's response to the CAPP question to provide documentation of ‘depth’ of medical knowledge. Course directors and faculty raters are blinded to the identity of students. Only the physician advisor, a faculty mentor who regularly coaches students to self-assess their performance and develop appropriate learning plans, has access to monitor students’ CAPP responses and accompanying faculty feedback.

SAQs

This second measure of medical knowledge uses multiple-choice items designed by course faculty to help students monitor their understanding of important concepts in the organ-based curriculum. Thus, SAQs function as formative measures of ‘breadth’ of knowledge. Students complete approximately 30 SAQs each week to gauge their acquisition of key concepts addressed that week in the curriculum. Because a secure testing environment is unnecessary for these formative assessments, students receive raw scores and answer explanations immediately after completing SAQs electronically. This timely feedback is intended to help students determine whether they grasped major concepts emphasized that week. It also provides opportunities for students to review challenging material before instruction continues the next week. Though course directors are blinded to individual students’ SAQ scores, they do receive weekly reports of aggregate student performance and item-level statistics (including student feedback on SAQ quality). These reports help course directors identify concepts the class has not grasped or detect items that require revision.

Some faculty members were uncertain about their ability to create high-quality SAQs and CAPPs. Accordingly, early faculty development focused on strategies for writing multiple-choice (MCQ) and essay questions. The NBME guidelines for USMLE-type questions were used as the ‘gold standard’ (NBME Citation2002), and a question-writing manual was developed and distributed to all course directors. The goals of the CAPPs were shared with the faculty through a series of half-day retreats where desired item formats for higher-level thinking (Marzano Citation2001) and guiding principles for feedback were discussed. Groups of faculty, organized by course, then wrote, discussed and shared examples of CAPPs. Peers provided feedback on the extent to which CAPPs met the goals of promoting integration and application of new knowledge. The remainder of this article presents how CAPPs and SAQs are used in one course, Renal I, to assess students’ acquisition, application and integration of medical knowledge during a given week within the curriculum.

Assessment snapshot: Renal I

Renal I is a 3-week course in the Year 1 curriculum that follows a basic and translational research block (10 weeks) and a cardiovascular and pulmonary organ-systems course (9 weeks). Like other organ-based courses, Renal I is organized around weekly learning themes. The individual seminars as well as the PBL cases for the course are intended to provide opportunities for students to achieve competence in the learning objectives for the organizing theme of the week. The curriculum is reinforced because the various course components (e.g. PBL, seminars, anatomy labs) have learning objectives that cross-link with and reinforce one another. For example, in the week in which the theme is sodium homeostasis, the PBL case is that of a 20-year-old woman with sudden onset of severe swelling and proteinuria consistent with nephrotic syndrome. The seminars for the example week include:

  1. Seminar on the cellular and molecular mechanisms of the renin-angiotensin-aldosterone system's regulation of Na+ excretion that is complemented by discussion of clinical cases to explore signs, symptoms and urinary indices of volume depletion;

  2. Seminar on renal clearance, glomerular filtration processes and renal blood flow with discussion of regulation of renal microcirculation and its effect on renal clearance;

  3. Seminar on histology in which normal glomerular, tubular and upper urinary histology are presented and discussed using cases;

  4. Anatomy session using prosections (Drake Citation2007) to learn the gross anatomy with emphasis on the organization of the anterior and posterior abdominal walls, the relationship of the kidney to other structures and the external anatomy of the kidney.

In terms of assessment, the CAPPs for this week (Table 1) ask students to write essays that demonstrate their understanding of extracellular fluid and sodium processing in the kidney and of glomerular filtration as a means of urine production. These questions are intended to require students to show the depth of their knowledge and to demonstrate the ability to answer questions logically based on the knowledge gained during that week as well as material learned in other weeks of the course and in other organ blocks. In this same week, students complete approximately 30 SAQs that probe understanding of key aspects of sodium homeostasis, histology and anatomy addressed in the respective seminars and allow students to test their breadth of knowledge of the significant learning objectives of the week. Refer to Table 1 for examples of SAQs used during this week.

Table 1.  Examples of CAPPs and SAQs used during Renal I course

Results

Schuwirth & van der Vleuten (Citation2004) recommend that the utility of assessment methods be evaluated using five components: (1) acceptability of method, (2) educational impact, (3) cost, (4) validity and (5) reliability. We use this framework to report the outcomes and limitations of our formative use of CAPPs and SAQs to document students’ abilities to acquire, apply and integrate basic science knowledge.

Acceptability of method

After 4 years, CCLCM continues to use CAPPs and SAQs as originally intended, thereby demonstrating the acceptability of these assessment tools. Of course, both faculty and students voiced concerns that CAPPs and SAQs would not adequately prepare students for licensure examinations, namely USMLE Step 1. Most of our students desired grades to determine their mastery of course content, which seemed quite reasonable as students must obtain high undergraduate GPAs and MCAT scores to enter medical school. Our faculty, many of whom graduated from medical schools with graded curricula, also worried that reliance on formative assessment methods would permit some students to ‘fall through the cracks’ or would not give students the necessary test-taking skills to score well on licensure exams. The educational leadership maintained a dialogue with faculty and students about these concerns and reinforced CCLCM's assessment principles for the medical knowledge competency and associated performance standards. Students were also given time for independent study to prepare for USMLE Step 1 and were offered practice tests and targeted review sessions as desired. Our first two cohorts of students scored well on USMLE Step 1 (mean = 231, SD = 20), with performance significantly above the 2006 USMLE Step 1 population mean (mean = 218, SD = 23). This positive outcome has helped assure both faculty and students that our use of CAPPs and SAQs did not disadvantage students.
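
The size of that difference can be illustrated with a simple one-sample comparison against the published population mean. The sketch below is not the analysis we performed; it assumes the reported means and standard deviations plus a hypothetical combined cohort of n = 64 (two classes of up to 32 students each), since the exact sample size and statistical test are not reported here.

```python
import math

def one_sample_z(sample_mean, pop_mean, pop_sd, n):
    """Two-sided one-sample z-test, treating the population SD as known."""
    se = pop_sd / math.sqrt(n)            # standard error of the mean
    z = (sample_mean - pop_mean) / se     # standardized difference
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value under the normal model
    return z, p

# Reported values from the article; n = 64 is an assumption, not a reported figure.
z, p = one_sample_z(sample_mean=231, pop_mean=218, pop_sd=23, n=64)
print(f"z = {z:.2f}, p = {p:.1e}")        # roughly z = 4.5, p < 0.0001
```

Any plausible cohort size in this range yields the same qualitative conclusion; the exact p-value depends on the true n and on whether the population or sample SD is used.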

Educational impact

Ultimately, assessment should enhance student learning in relation to curricular goals. CAPP reviewers have given us feedback that students provide in-depth, logical CAPP responses that truly demonstrate their abilities to apply and integrate medical knowledge. Those faculty who advise students and who make promotion decisions also cite CAPPs as the best evidence for monitoring and documenting students’ mastery of medical knowledge. The majority of students surveyed at the end of Year 2 (91% response among students who consented to release their data for research purposes) indicated that CCLCM's program placed a ‘moderate’ to ‘a lot’ of emphasis on self-directed learning (100%), on the integration of basic and clinical sciences (96%) and on the application of medical knowledge (96%). A much smaller proportion of students thought the CCLCM program emphasized the memorization of facts (38%).

Cost

Those who use MCQs for summative purposes already know that substantial resources are needed to develop high-quality items that align with ever-changing curricula. Additional challenges may arise if support staff are not adequately trained to develop and score MCQ assessments using computerized item banks. We did not anticipate the time commitment needed to develop high-quality CAPPs and SAQs each week that assessed students’ abilities to acquire, apply and integrate core concepts addressed in the curriculum. Initially, several CAPPs did not elicit higher-level thinking or the integration of concepts as desired, and many SAQs focused on isolated facts or were poorly constructed. These quality issues required the development of a formal faculty review process, from item inception to computerized delivery, for all SAQ and CAPP assessments. For example, each week a core group of faculty reviewed SAQs and CAPPs for technical errors (e.g. spelling errors) before these items were released to students. Then, at the beginning of the following week, course directors received item analysis reports (e.g. item difficulty, aggregate student performance, student ratings of item quality) to monitor the quality of SAQs and the scope of student understanding. We also successfully implemented a formal review process for course faculty to identify CAPPs that required revision (Bierer et al. Citation2008). Course directors and faculty continue to refine existing SAQs and CAPPs and create new ones each year to reflect curricular changes and improvements.
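
To make the kind of weekly item analysis described above concrete, the following sketch computes item difficulty (proportion correct) and a point-biserial discrimination index from a set of SAQ responses. It is a minimal Python example; the data structure, function name and toy data are illustrative assumptions and do not reflect CCLCM's actual reporting software.

```python
from statistics import mean, pstdev

def item_statistics(responses):
    """
    responses: list of per-student dicts mapping item id -> 1 (correct) or 0 (incorrect).
    Returns per-item difficulty (proportion correct) and a point-biserial
    discrimination index (correlation of the item score with the total score).
    """
    items = sorted(responses[0])
    totals = [sum(r.values()) for r in responses]
    stats = {}
    for item in items:
        scores = [r[item] for r in responses]
        difficulty = mean(scores)
        sd_item, sd_total = pstdev(scores), pstdev(totals)
        if sd_item == 0 or sd_total == 0:
            discrimination = 0.0            # no variance, so correlation is undefined
        else:
            cov = mean(s * t for s, t in zip(scores, totals)) - difficulty * mean(totals)
            discrimination = cov / (sd_item * sd_total)
        stats[item] = {"difficulty": round(difficulty, 2),
                       "discrimination": round(discrimination, 2)}
    return stats

# Toy data for three students on two items (illustrative only).
weekly = [{"Q1": 1, "Q2": 0}, {"Q1": 1, "Q2": 1}, {"Q1": 0, "Q2": 1}]
print(item_statistics(weekly))
```

A report of this kind, generated from the previous week's SAQ submissions, is enough for a course director to spot items the class found difficult or items whose discrimination suggests they need revision.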

We also faced technical challenges when implementing CAPPs and SAQs because most commercial testing packages assume that multiple-choice and essay items are used in a traditional manner to inform summative decisions about student performance. This proved particularly problematic when implementing CAPPs because of our formative reporting requirements. Consequently, we needed to design an in-house, computerized program that allows faculty to assess students’ CAPP responses anonymously and uploads students’ CAPP responses and associated feedback into their e-Portfolios.
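
A minimal sketch of what such an in-house workflow might look like is shown below: each submission carries a blinded review token so raters never see student identities, and the identity is re-attached only when the response and narrative feedback are filed to the e-Portfolio. All class and function names here are hypothetical illustrations, not CCLCM's actual system.

```python
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class CappSubmission:
    student_id: str                 # visible only to the physician advisor
    question_id: str
    response_text: str
    review_token: str = field(default_factory=lambda: uuid4().hex)  # blinded id shown to raters

@dataclass
class CappFeedback:
    review_token: str
    rater_comments: str             # individualized narrative feedback

def release_to_rater(submission: CappSubmission) -> dict:
    """What a faculty rater sees: the response keyed by a blinded token, no identity."""
    return {"review_token": submission.review_token,
            "question_id": submission.question_id,
            "response_text": submission.response_text}

def file_to_eportfolio(submission: CappSubmission, feedback: CappFeedback, portfolio: dict) -> None:
    """Re-attach the student identity and store response plus narrative feedback together."""
    assert feedback.review_token == submission.review_token
    portfolio.setdefault(submission.student_id, []).append(
        {"question": submission.question_id,
         "response": submission.response_text,
         "faculty_feedback": feedback.rater_comments})
```

The essential design point is the separation of the rater-facing view from the portfolio record, which is what commercial summative-testing packages did not readily support.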

Validity

We do not have evidence that CAPPs and SAQs provide authentic representations of each student's independent work. Students may complete these assessments at their leisure, and they may discuss their responses with other students or faculty. Students may also review resources to help them answer the questions as secure testing conditions are not used. Although we require students to submit CAPPs and SAQs independently, we view these assessments as learning tools that also provide performance evidence for medical knowledge. In a few isolated cases, physician advisors were able to identify students with deficient medical knowledge after examining performance on CAPPs across courses. We continue to monitor these few occurrences of knowledge deficits to explore if patterns emerge with other knowledge measures (e.g. MCAT scores) or observed behaviors (e.g. lack of professionalism). Additional research is needed to determine if CAPPs and SAQs assess content areas that are representative and proportional to our targeted domain of curricular objectives (Kane et al. Citation1999).

Reliability

The classic test theory definition of reliability as the reproducibility of observed scores in relation to true scores poses an interesting dilemma. Commonly used indices of reliability, such as internal consistency estimates or test–retest coefficients, are not appropriate for our use of SAQs because students may take SAQs multiple times in a non-secure testing environment. We also cannot compute numeric indices of the consistency of faculty ratings because all faculty feedback on CAPPs consists of narrative comments. Focus groups with students have underscored the importance of receiving consistent, high-quality feedback from faculty on CAPPs (narrative feedback) and SAQs (answer rationales) across courses, and current faculty development initiatives focus on this area.
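
For readers unfamiliar with the indices being set aside here, the sketch below shows a conventional internal-consistency estimate (Cronbach's alpha) computed from an item-score matrix. It is included only to make concrete what such an estimate assumes, namely a single, standardized administration of the same items to all students, which our non-secure, repeatable SAQ format does not provide. The code and toy data are illustrative assumptions.

```python
from statistics import pvariance

def cronbach_alpha(score_matrix):
    """
    score_matrix: list of per-student lists of item scores (same item order for everyone).
    Returns Cronbach's alpha, a conventional internal-consistency estimate.
    """
    k = len(score_matrix[0])                          # number of items
    item_vars = [pvariance([row[i] for row in score_matrix]) for i in range(k)]
    total_var = pvariance([sum(row) for row in score_matrix])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Illustrative scores for four students on three items.
print(round(cronbach_alpha([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]), 2))
```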

Discussion

Most medical schools use multiple-choice questions to assess knowledge, and many use some form of essay-type questions in the first two years of medical school (Mavis et al. Citation2001). Thus, these tools are not new to medical education. The discussion that follows focuses on what is unique about the formative interpretation of performance data collected with the essays and multiple-choice questions described in this article, and it addresses applications of the model for other institutions to consider.

In the CCLCM system, weekly CAPPs and SAQs are used to provide feedback to students on their depth and breadth of knowledge in relation to the curriculum's core concepts. The feedback is formative and specifically designed to help students identify gaps in their knowledge base. The feedback is timely in that correct answers are available upon student submission of SAQs and faculty-generated standardized responses to CAPPs are available every Monday morning. Students also receive individualized feedback from faculty about their CAPP responses within the week. Key components of this system are the faculty, the student, and the student advisor. Faculty members are trained to construct USMLE-type questions that assess core concepts and to write essay questions that require application and integration of knowledge. Those who review CAPPs are also trained to give students useful, individualized feedback. With regard to students, the literature suggests that the effectiveness of formative assessment depends on students who are motivated and able to accurately self-assess (Biggs Citation1998). In this assessment model, student motivation comes from the externally-imposed responsibility to document performance in the curriculum's nine competencies as part of the promotions process. Internal motivation is nurtured by having clear standards of performance coupled with feedback that allows students to track their progress. Trained physician advisors contribute further by regularly reviewing feedback to help students identify knowledge gaps accurately and implement appropriate learning plans (Rushton Citation2005).

Feedback from students and their advisors reveals that weekly formative assessments encourage students to engage in a continuous approach to learning rather than cyclical, intense study periods just prior to exams. Nevertheless, some beginning medical students still accustomed to traditional assessment methods require periodic reassurance that grades are not the sole criterion on which to judge their knowledge, particularly in a field where knowledge changes often with medical advances. Our use of formative feedback requires students to make judgments about their ability to acquire, apply and integrate knowledge. The inability to answer CAPPs and SAQs in a given week should help students identify areas of superficial understanding.

There are aspects of CCLCM's formative assessment model that can be readily used by other programs with larger class sizes or existing assessment programs. One aspect is to develop and implement multiple-choice questions for formative, self-assessment purposes. This provides a relatively easy way to give students additional responsibility to monitor their learning beyond reliance on review books or commercial item banks targeted for licensure exam preparation. The act of creating formative multiple-choice questions should also help faculty identify and prioritize core concepts for students to learn throughout a course, assuming the questions are well written and aligned with learning objectives. Too often faculties are forced to develop examinations which emphasize minutiae to differentiate among students for grading purposes. Costanzo (Citation1998) argues that ‘good teaching’ and ‘good testing’ can coexist if teachers develop test questions that measure essential concepts all students should master rather than isolated facts only a few students may know. This advice should guide those seeking tools to assess medical knowledge in a competency-based rather than a norm-based assessment model.

Other aspects include using essay questions as an assessment tool, a learning experience, or both, depending upon goals. Although individual feedback to students might be considered ideal if essays are used as assessment tools, one can easily imagine a weekly debriefing class meeting in which smaller groups of students review the faculty-prepared answer and compare and contrast their answers and understanding. Remaining questions could be addressed to faculty either in person at the debriefing or electronically. This type of self and peer reflection is entirely consistent with models of adult learning and reflective practice and may be more helpful in developing those skills than the individual feedback model we have adopted. If essays are used as learning tools, small groups of students could meet to discuss CAPP-type questions as a means to integrate and apply basic science knowledge addressed in the curriculum. Taking this potential use one step further, CAPPs could be designed with no single right answer in mind, allowing groups to report unique answers back to the larger group.

Conclusion

With medical education's move towards competency-based education, assessment is being approached more systematically with attention being paid to curriculum content, timing and learning through assessment. Other approaches are being developed to address competencies such as self-directed learning and application of knowledge. The system described in this article provides an example of one school's effort to create a learning environment that values application and integration of knowledge, provides formative feedback to identify students’ gaps in knowledge, fosters student development of learning plans, and promotes student responsibility for their own learning. We have preliminary evidence that this assessment approach appears to have a significant impact on the education of students, a justifiable cost in light of benefits obtained, and growing acceptance among faculty and students. While additional data are necessary to strengthen these conclusions, we believe other institutions should consider the use of multiple-choice and essay questions as formative assessment tools aimed at helping students to assess their breadth of knowledge and to apply this knowledge to practical problems and situations.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the article.

Additional information

Notes on contributors

S. Beth Bierer

Dr BIERER is Director of Evaluation, Cleveland Clinic Lerner College of Medicine of CWRU, Cleveland, Ohio.

Elaine F. Dannefer

Dr DANNEFER is Director of Student Assessment and Medical Education Research, Cleveland Clinic Lerner College of Medicine of CWRU, Cleveland, Ohio.

Christine Taylor

Dr TAYLOR is Director of Faculty Development, Cleveland Clinic Lerner College of Medicine of CWRU, Cleveland, Ohio.

Phillip Hall

Dr HALL is Course Director for Renal I, Cleveland Clinic Lerner College of Medicine of CWRU, Cleveland, Ohio.

Alan L. Hull

Dr HULL is Associate Dean for Curricular Affairs, Cleveland Clinic Lerner College of Medicine of CWRU, Cleveland, Ohio.

References

  • ACME-TRI Report. Educating Medical Students: Assessing Change in Medical Education–The Road to Implementation. Association of American Medical Colleges, Washington, DC 1992
  • Ben-David MF. Outcome-based education: Part 3–Assessment in outcome-based education. Med Teach 1999; 21: 23–25
  • Bierer SB, Taylor C, Dannefer EF, Hull AL. Evaluation of essay questions used to assess medical students’ application and integration of basic and clinical science knowledge. Paper presented at American Educational Research Association, New York, NY, March 24–28, 2008
  • Biggs J. Assessment and classroom learning: A role for summative assessment?. Assess Educ 1998; 5: 103–110
  • Costanzo LS. Good teaching and good testing: Examples from renal physiology. Adv Physiol Educ 1998; 275: 217–220
  • Cooke M, Irby DM, Sullivan W, Ludmerer KM. American medical education 100 years after the Flexner report. N Engl J Med 2006; 355: 1339–1344
  • Dannefer EF, Henson LC. The portfolio approach to competency-based assessment at the Cleveland Clinic Lerner College of Medicine. Acad Med 2007; 82: 493–502
  • Drake RL. A unique, innovative, and clinically oriented approach to anatomy education. Acad Med 2007; 82: 475–478
  • Epstein RM. Assessment in medical education. N Engl J Med 2007; 356: 387–396
  • Fishleder AJ, Henson LC, Hull AL. Cleveland Clinic Lerner College of Medicine: An innovative approach to medical education and the training of physician investigators. Acad Med 2007; 82: 390–396
  • General Medical Council [GMC]. Tomorrow's doctors: Recommendations on undergraduate medical education. General Medical Council, London 1993
  • Goldstein EA, Maclaren CF, Smith S, Mengert TJ, Maestas RR, Foy H, Wenrich MD, Ramsey PG. Promoting fundamental clinical skills: A competency-based college approach at the University of Washington. Acad Med 2005; 80: 423–433
  • Kane M, Crooks T, Cohen A. Validating measures of performance. Educational Measurement: Issues and Practice 1999; 18: 5–17
  • Litzelman DK, Cottingham AH. The new formal competency-based curriculum and informal curriculum at Indiana University School of Medicine: Overview and five-year analysis. Acad Med 2007; 82: 410–421
  • Marzano RJ. Designing a new taxonomy of educational objectives. Sage, Thousand Oaks, CA 2001
  • Mavis BE, Cole BL, Hoppe RB. A survey of student assessment in U.S. medical schools: The balance of breadth versus fidelity. Teach Learn Med 2001; 13: 74–79
  • National Board of Medical Examiners (NBME). Constructing written test questions for the basic and clinical sciences. 3rd ed. 2002, Retrieved 7 July 2007, from http://www.nbme.org/PDF/ItemWriting_2003/2003IWGwhole.pdf
  • Panel on the General Professional Education of the Physician and College Preparation for Medicine (GPEP). Physicians for the twenty-first century: The GPEP report. Association of American Medical Colleges, Washington, DC 1984
  • Rushton A. Formative assessment: A key to deep learning?. Med Teach 2005; 27: 509–513
  • Schuwirth LW, Van der Vleuten CP. Changing education, changing assessment, changing research?. Med Educ 2004; 38: 805–812
  • Smith SF, Dollase RH, Boss JA. Assessing students’ performances in a competency-based curriculum. Acad Med 2003; 78: 97–107
