A Model for the Assessment of Medical Students’ Competency in Medical Ethics

Pages 68-83 | Received 26 Mar 2012, Accepted 09 Oct 2012, Published online: 26 Aug 2013
 

Abstract

Background: This article focuses on the goals of our medical ethics education program and our formative assessments of students' competency at various points during this education. Methods: Because of the critical relationship between a program's goals and the design of an assessment strategy, we provide an overview of the theoretical basis of our curriculum, our program's objectives, and our teaching methods. To verify that our students had achieved minimum competency in the objectives of our ethics curriculum, we developed assessments that evaluated their ability to identify and apply ethical principles to clinical cases and to use moral reasoning to resolve dilemmas. We verified the reliability of these assessment instruments by comparing the scores that two different Mount Sinai raters assigned to the same assessments, and we verified their validity through review by external experts. Results: For interrater reliability, paired raters scored the same student written exercise within 5 points of each other on 119 of the exercises (87% rater consensus). We therefore found our assessment tools to be reliable. Regarding validity, all three expert external reviewers agreed that our instruments were well suited for evaluating medical student competency in medical ethics and that they measured what we intended to measure. Conclusions: Our efforts in medical ethics education and competency assessment have produced an integrated model of goals, methodology, curriculum, and competency assessment. The entire model is directed at providing students with the ethical knowledge, skills, and attitudes required of an exemplary physician. We have developed reliable and valid assessment tools that allow us to evaluate students' competency in medical ethics and to identify students who require remediation, and that should be useful to other ethics programs.

Acknowledgments

A. Favia and L. Frank are equal first authors; E. Friedman and R. Rhodes are equal last authors.

Notes

This view is an explicit departure from the most commonly accepted view that medical ethics is common morality applied to clinical practice. Most notably, that standard view has been espoused in all of the several editions of Beauchamp and Childress's Principles of Biomedical Ethics (2001) and Gert, Clouser, and Culver's two editions of Bioethics. Our positive view can be found in Moros and Rhodes (Citation2002), Rhodes (Citation2007), Rhodes and Alfandre (Citation2007), Rhodes et al. (Citation2004), and Swick (Citation2000).

It is important to note that although students are learning ethical principles, they are not learning the “principlism” of common morality as put forth by Beauchamp and Childress and others (see, e.g., Beauchamp and Childress 2001). Instead, our program focuses on an array of principles that govern the medical professions. Thus, we teach more than “the four principles” and emphasize that the principles function as reasons to justify professional action.

The development of the Ethics Project in the Surgery Clerkship is described in greater detail in Gligorov et al. (Citation2009).

Because raters were asked to grade 40 essay exams in less than 1 week while also carrying out their regular teaching and studying responsibilities, we needed a method to limit the distorting effects of rater fatigue, boredom, distraction, frustration, and the like, to ensure that the differences between raters reflected genuine differences in judgment rather than rating errors. When there was a significant disagreement between two raters on one exam, both raters were notified and asked to regrade the exam. To avoid prejudicing reassessment in any direction, raters were not told whether their rating was higher or lower than the other rating. This is not a practice we plan to continue, since our model is intended to employ only one rater per exam. As a standard procedure, however, when a student receives a low score (inadequate or close to inadequate), the exam is regraded by a second rater.

We obtained an exemption letter from our institutional review board (IRB). This project involved educational research on the performance of all students, no recording of identifiable student information, and no interaction with students beyond their regular comprehensive assessment exercises. Thus, it was exempt from informed consent requirements. Students were, however, informed that their exercises were being used in a study aimed at assessing our evaluation tools.

For example, Professor Fleck wrote that his answer to both of the questions we posed “is a strong yes.” According to Professor McCullough, “This is an excellent set of pedagogical materials” that is “nicely complementary with the goal of producing students who will be able to think through ethical challenges in clinical practice in a disciplined, reliable fashion that can be assessed during learning.” Professor Kopelman remarked that “you have developed impressive tools to evaluate your students’ performance and to refine your curriculum.” Specifically, with respect to the first question, Professor McCullough responded that “the core skill that this assessment is designed to measure is the attainment of practical skills of normative ethics reasoning about cases using ethical principles.” McCullough also commented that our assessment questions “reflect well-accepted approaches to normative ethical reasoning in the clinical setting.” In answer to the second question, Professor Fleck stated that “your instrument should allow you to judge reasonably well which students have a well developed capacity to identify morally relevant considerations with regard to a concrete clinical ethics problem from those who fail to identify many considerations which they ought to recognize as having moral relevance.” Professor Kopelman concurred, writing that the “Compass-2A exercise should be very useful in distinguishing students with high competence from students with inadequate understanding of ethical concepts.”

Professor McCullough remarked that the Student Instructions seemed “clear and fair” and that the Rater Evaluation Guide was a “detailed … clear, easy-to-use evaluation sheet for faculty.” Professor Fleck pointed out how difficult it is to achieve interrater agreement in essay-type evaluations, saying, “I like the way you have broken down each item into what are supposed to be identifiable sub-parts which should result in various raters making the same judgment.”

For example, Professor Kopelman wondered whether students have sufficient time in their busy schedules to complete the assignment. Both Professors Fleck and Kopelman suggested that we supplement our assessment by asking students to consider objections or weaknesses of their proposed resolution. Also, Professor Kopelman was skeptical about assessing individual students’ performance on the oral component of the exam. Their overall opinions can be summed up in Professor Fleck's response: “What you have in your evaluation instrument is the best effort I have seen thus far in trying to assure interrater reliability in judging ethics essays from medical students.”

We are grateful to Mark Aulisio, Cheryl Cox, Inmaculada de Melo-Martín, Leonard Fleck, Loretta Kopelman, Laurence McCullough, John Moskop, Wayne Shelton, and Jeffrey Spike for their generous contributions of time and effort in familiarizing themselves with our rating materials and grading our sample exams. We value their contribution to this effort.
