
Use of case-based exams as an instructional teaching tool to teach clinical reasoning

Halil Ibrahim Durak, Suleyman Ayhan Caliskan, Serhat Bor & Cees Van Der Vleuten
Pages e170-e174 | Published online: 03 Jul 2009

Abstract

Background: It is a well-known fact that examinations drive learning to a great extent. The examination program is actually the ‘hidden curriculum’ for students. One option for improving teaching and learning is therefore the strategic use of exams.

Aims: This report on the strategic use of an innovative assessment tool in the clinical problem-solving domain presents the design, format, content, students’ results and one year of evaluation results for instructive case-based exams for 6th-year medical students.

Method: Using a hybrid of the OSCE, PMP and KFE formats, we developed a case-based, station-based exam. Students were treated as advanced beginners in their medical careers and required to apply their clinical knowledge to the cases through guided inquiry. Case discussions and question-and-answer sessions followed the exams. Six exams were held in 2000–2001 and 382 students participated in the study. One or two problems were used for each exam; the mean duration was 27 minutes for 7–11 stations, and 17–19 observers contributed to each exam. The exams were evaluated through questionnaire-based feedback from the students and oral feedback from staff members.

Results: The exams were well received and rated ‘fair’ by the students; the format was found highly ‘relevant for learning’ while the content was ‘instructive’ and ‘not difficult’. The overall unsatisfactory performance rate was 2.36%. Students asked to take a similar test weekly. Although the format was labor intensive, staff members appreciated the collaborative working process.

Conclusions: Instructive case-based exams and the case discussions that follow them appear to be a motivating teaching tool with high potential in the clinical problem-solving domain for 6th-year students.

Introduction

One of the most important domains in clinical competence is the ability to think critically about diagnosis and management, which is usually referred to as ‘clinical problem solving’ (Schuwirth 1998; Boshuizen & Schmidt 2000), and its importance is widely recognized (GMC 1993). It is an ability that requires ‘reasoning’, which involves a combination of thinking and decision-making processes to judge the best course of action to take in the clinical context. The process itself is an intellectual activity which requires obtaining information, thinking critically and integrating new findings with prior knowledge. All the activities in the process aim to reduce uncertainty about the clinical condition, and synthesis is its main feature (Rimoldi & Raimondo 1998).

Clinical problem-solving skills differ between novices and experts. Because of the expert's well-organized knowledge structure (or semantic network) and extensive practice, the higher the level of expertise, the faster the problem solving and the more efficacious the results. The number and connections of ‘illness scripts’ (Charlin 2002) in memory, built up through previous practice, determine the level of expertise in problem solving. In this context, ‘scripts’ and ‘pattern recognition’ are relatively new concepts for explaining the cognitive process of clinical problem solving as a content-specific ability. Careful attention to a patient's initial information is very important in recognizing patterns, and increased expertise, associated with the use of less information, greater speed and more selective use of data, reduces uncertainty in the process (Derry 1990; Schmidt et al. 1990; Vleuten & Newble 1995; Custers & Boshuizen 2002).

In other words, the meaningful construction of content-specific knowledge and the matching of the problem with previous experience relevant to the current case are the key points of clinical problem-solving ability, and they have implications for teaching and learning in medicine using a heuristic approach.

Meanwhile, it is a well-known fact that examinations drive learning to a great extent. The examination program is actually the hidden curriculum for students. The strategic use of exams is therefore very important for improving teaching and learning (Van der Vleuten et al. 2001).

With this innovative use of case-based exams we aimed to help students improve their case experience by confronting them with cases sliced into script segments and asking them to use their clinical problem-solving ability, providing written answers and receiving feedback during the exams. In the remainder of this article, we explain how we use case-based exams to teach clinical problem solving and describe the development process of the exams, their format and content, and the students’ results and feedback from the first six examinations.

Method

At the Ege University Faculty of Medicine, the medical doctor program lasts six years. The last three years are called ‘clinical years’ and clinical teaching methods vary among departments. In the past, no particular attention was generally given to students’ clinical problem-solving ability.

We developed case-based, station-based exams. Clinical problems were selected on the basis of three criteria: first, the case should represent one of the most important health priorities of the community; second, it should be manageable by a general practitioner in a primary healthcare setting; and third, it should meet the integrated educational objectives of the division.

Exam cases were then created, each representing a priority clinical problem. The best clinical approach model for each case was developed through discussion among 2–5 faculty members. The procedure was: (1) select one clinical presentation of one patient and decide on the key features as well as the initial information (age, sex, occupation, symptoms); (2) list the tentative diagnosis options according to the initial information; (3) address the diagnostic differentiation procedures; (4) specify the required medication and patient education; and (5) state the prognosis and patient follow-up strategy.

Each case was divided into clinical sub-problems (student tasks), such as reaching the correct diagnosis or deciding on the correct treatment. Next, the cases were divided into key decisions and each element was structured as a single task in a station. All the relevant patient material, such as X-rays, laboratory results and hospital forms, was provided. For the questions, the team established either all the decision options (suitable/unsuitable for the given situation) or the correct answer (such as a short essay), and provided evidence-based explanations of why each option was or was not suitable. Stations were weighted according to the complexity of their tasks, on a 100-point scale; for instance, reaching the correct tentative diagnosis option was worth 20 points while writing the prescription was worth 25.
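As an illustration of this weighting scheme, the following minimal sketch (in Python, not part of the original exam materials) shows how a student's total for one case could be computed when the station weights sum to 100 points. Apart from the two weights mentioned above, the station names and values are hypothetical.

# Illustrative sketch only: station names and most weights are hypothetical,
# not taken from the article. It shows the kind of weighting described in the
# text, where station weights sum to 100 points and a student's total is the
# sum of the points earned at each station.

STATION_WEIGHTS = {
    "tentative_diagnosis": 20,   # reaching the correct tentative diagnosis option
    "laboratory_orders": 15,     # hypothetical
    "definitive_diagnosis": 20,  # hypothetical
    "treatment_plan": 20,        # hypothetical
    "prescription": 25,          # writing the prescription
}

def total_score(earned_points: dict[str, float]) -> float:
    """Sum the points earned per station, capped at each station's weight."""
    total = 0.0
    for station, weight in STATION_WEIGHTS.items():
        total += min(earned_points.get(station, 0.0), weight)
    return total

# Example: full credit everywhere except a partially correct prescription.
print(total_score({
    "tentative_diagnosis": 20,
    "laboratory_orders": 15,
    "definitive_diagnosis": 20,
    "treatment_plan": 20,
    "prescription": 15,
}))  # -> 90.0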

At some of the task stations, a list of diagnosis or treatment options plausible given the initial situation was presented. The correctness or incorrectness of each option was weighted, and explanations were provided on the back of the option card. Points reflecting the degree of correctness or incorrectness of the options were recorded on the observer checklist. Except at some stations, it was not indicated how many of the proposed decisions should be made. In contrast to the patient management problems assessment procedure, the option(s) selected did not dictate the way the exam would proceed. Because the tasks were linked to each other and accurate knowledge or performance was required to perform correctly at the next station, corrective feedback was provided in the form of the best answer in the initial presentation of the next task (e.g. ‘the most plausible tentative diagnosis at the previous station was diabetes mellitus for the following reasons … . At this station, please look carefully at the patient's X-ray and write further laboratory orders in the given forms’). Gradually more information was revealed, which might require students to reorient their subsequent decisions. The main structure of each case in the exam was as follows (a minimal sketch of this station flow as a data structure is given after the list).

1st station

  • Initial information about the patient and key features of the clinical presentation (age, profession, sex and history as independent variables, and signs, symptoms and their durations as dependent variables). The student has to perceive and appreciate the meaning of the key features.

  • Questions on the cards (what the possible tentative diagnoses are, what to do at the next step, what else she/he wants to know). The student has to identify the possible pathological process that is occurring, know how to differentiate one pathological process from another, and know from epidemiology the most likely causes of a particular pathological process.

  • Options and/or place for writing the answer.

2nd station

  • The best answer to the first task and further information on the card or in the patient record (correct tentative diagnosis, what is necessary for further clinical reasoning, laboratory test results, X-rays, etc.). The student has to evaluate all the pieces of information available, decide on the likely cause and course of the illness, and select an option or write an answer.

3rd station

  • The correct diagnosis and its explanation, plus further questions. The student has to select or write the best treatment and follow-up strategy, indicate additional key features needed for solving the patient's problem and preventing further complications, and write the correct prescription.
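To make the linked, feedback-carrying structure of these stations concrete, the following sketch (in Python, not part of the original exam materials) represents one case as a list of stations, each carrying the best answer to the previous task, the newly revealed information and the new task. All field names and example contents are hypothetical.

# Illustrative sketch only: a minimal data structure for the three-station case
# flow described above. Each station carries the best answer (corrective
# feedback) for the previous task together with new information and a new task.

from dataclasses import dataclass, field

@dataclass
class Station:
    feedback_on_previous: str   # best answer to the previous task, with reasons
    new_information: str        # further history, laboratory results, X-rays, etc.
    task: str                   # what the student must decide or write here
    options: list[str] = field(default_factory=list)  # empty if a written answer is required

case_stations = [
    Station(
        feedback_on_previous="",  # first station: only initial information
        new_information="Initial presentation: age, sex, occupation, key symptoms",
        task="List the possible tentative diagnoses and decide the next step",
        options=["Option A", "Option B", "Option C"],  # hypothetical option card
    ),
    Station(
        feedback_on_previous="Most plausible tentative diagnosis, with reasons",
        new_information="Laboratory test results and X-rays",
        task="Decide on the likely cause and course of the illness",
    ),
    Station(
        feedback_on_previous="Correct diagnosis and its explanation",
        new_information="Remaining patient record",
        task="Select or write the best treatment, follow-up strategy and prescription",
    ),
]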

Observers used checklists in the exams. They only marked the students’ selections and monitored the students through the stations. Every observer was trained in how to use the checklist, in which each option/decision had a previously assigned score. If short written answers were required at a station (e.g. treatment), an answer key sheet was used.

We scheduled time to meet the students immediately after each exam. All the cases, answers and scores for each option/decision were presented on overheads and the correct problem-solving process was discussed. During the session students calculated their own exam scores. The post-exam meeting finished with a question-and-answer session.

We included measures to assess student satisfaction (five-point Likert-scale questionnaires (5 = completely agree, 1 = completely disagree) consisting of six items, plus written comments) and faculty perceptions (oral feedback) in order to judge the educational effects and sustainability of the exams.
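As a simple illustration of how such questionnaire data can be summarized, the sketch below (Python) computes a mean rating per item across students. Only the six-item, five-point scale is taken from the text; the item wordings and the response values are hypothetical.

# Illustrative sketch only: item labels and responses are hypothetical.
# It summarizes a six-item, five-point Likert questionnaire as a mean rating
# per item (5 = completely agree, 1 = completely disagree).

from statistics import mean

# Each inner list holds one student's ratings for the six items.
responses = [
    [4, 5, 4, 3, 4, 5],
    [3, 4, 4, 4, 3, 4],
    [5, 5, 4, 4, 4, 5],
]

item_labels = [  # hypothetical wording of the six items
    "fair to everyone", "relevant for learning", "instructive",
    "not difficult", "clear explanations", "easy adaptation",
]

for i, label in enumerate(item_labels):
    item_ratings = [student[i] for student in responses]
    print(f"{label}: {mean(item_ratings):.2f}")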

Results

In the educational year 2000–2001 we constructed six exams. Table 1 shows the clinical problem, number of stations, duration and number of observers for each one. Different departments played a part in exam preparation and provided between 17 and 19 observers for each exam. The mean duration was 27 minutes and the number of stations ranged from a minimum of 7 to a maximum of 11.

Table 1.  Instructive examinations in 2000–2001

A total of 382 students were assessed and 9 of them (2.36%) were found unsatisfactory, as shown in Table 2. The achievement and previous results of the students with unsatisfactory results were reviewed, and their low level of achievement was found not to be correlated with their past performance. As shown in Table 3, all the exams were well received by the students. The fourth exam was rated highest, and the students rated all the exams ‘fair to everyone’.

Table 2.  Students’ scores

Table 3.  The students’ ratings

Discussion

Patient Management Problems (PMPs) are one way of testing clinical problem-solving ability. PMPs were introduced into the assessment procedures of many medical schools and licensing bodies, particularly in North America (McGuire & Babbott 1967). In PMPs, clinical reasoning was typically measured by asking students to respond to problems presented in a standardized format. The main assumption of this approach was that clinical problem solving is a generic ability.

By the end of the 1970s, the Objective Structured Clinical Examination (OSCE) was introduced with the aim of providing a better solution to the drawbacks of traditional clinical examinations. The OSCE has had a wide impact on the clinical assessment programs of many medical schools during the last twenty-five years (Harden & Gleeson 1979; Mennin & Kalishman 1998; Fowell & Bligh 2001). The OSCE format integrated the assessment of clinical problem-solving ability with clinical competence in a structured fashion.

At the same time, more efficient assessment formats were developed to replace the PMP, such as the ‘key features’ approach, based on the assertion that the successful handling of a clinical case depends on a few critical elements. When testing students’ problem-solving skills, one should focus on these key pieces of patient information. The authors call this type of assessment a Key Features Examination (KFE) (Bordage & Page 1987; Bordage et al. 1995; Hatala & Norman 2002). The KFE enabled the use of more cases and the assessment of much more content within a limited time and cost. This approach fitted the content-specificity attribute of problem solving relatively well.

In order to utilize the above advantages of the OSCE, PMP and KFE, we used a hybrid method for our instructive exams.

Although assessment tasks that provide options are open to the ‘cueing’ effect (Schuwirth 1998), which must be avoided in summative exams, we tried to use this cueing effect to enhance learning and reinforcement within the exams and as motivation for further learning.

Giving the best possible answer to the previous station gave us an unbiased analysis of each subsequent task and of each student's weakness in each particular domain (e.g. tentative diagnosis and/or treatment). These analytic findings were translated into implications for instructional design.

The driving concepts of the post-exam meetings were reinforcement, corrective feedback, elaboration and metacognition. The faculty staff members’ free comments agreed that the post-exam meetings turned into highly motivating case-based teaching sessions. Students were challenged to reflect on their own performance against the explained expert performance, which allowed them to understand what they knew and what they did not know (Bruning et al. 1999).

Both positive items in the student evaluation questionnaire were rated above 3.00 points for each exam. These findings showed us that the format was well received by students. According to the written comments, the fairness of any single clinical exam was the most important issue for our students. The format seems to have the potential to solve, in large measure, the problem of ‘fairness’ from which they still suffer in summative oral exams. Using this format for the first time in the curriculum showed both students and faculty members that ‘it is possible to be fair in exams’.

Pre-exam explanations were given through a combination of a ‘written exam manual’ and ‘oral questioning’, except in the first exam, where the explanation was given only in writing; we agreed that students did not pay much attention to the written material because they were being assessed. We therefore decided to support the written explanations with questioning in the 10 minutes before the exam. This worked, as reflected in the students’ ratings on the explanation and adaptation items.

Generally, when a new exam format is used, adaptation problems rather than the real difficulty of the content tend to be the main cause of any ‘too difficult’ ratings. In our innovation, the results were the opposite: students found the exams very easy, as can be seen in their scores. We interpret this finding to mean that, although no specific attention had been given to teaching clinical problem solving, the students had already achieved some degree of expertise with internal medicine cases.

Students enjoyed all the components of the exams and asked to repeat a similar exam as a drill every Friday. We believe instructive exams will enhance learning in the clinical phases of the medical curriculum. This format helped students to link concepts in their knowledge structure by focusing on enabling conditions, faults and consequences in sequences of medical problem solving (Boshuizen & Schmidt 2000), something they had never been asked to do or shown how to do in any clinical examination they had taken before.

The process and results enhanced our discussion of clinical learning, teaching and the use of assessment methods. We had the chance to share our experience with other faculties (Istanbul University Medical Faculty and Ege University Faculty of Dentistry) as well as with our faculty members (in the clinical assessment faculty development program). At the end of 2000–2001 we started to construct computer based exams.

Conclusions

Instructive case-based assessment at short intervals (e.g. weekly) is a potential learning tool for clerkships. Post-exam meetings are a great challenge for both students and faculty members. Since 2002, we have moved the exam to a computer medium and the Department of Internal Medicine is now benefiting from these exams. We will continue to use and study instructive exams as a teaching tool and to share our experience with other departments.

Additional information

Notes on contributors

Halil Ibrahim Durak

HALIL IBRAHIM DURAK provided the original idea for this study and wrote the manuscript.

Suleyman Ayhan Caliskan

SULEYMAN AYHAN CALISKAN contributed to exam development and analysis processes.

Serhat Bor

SERHAT BOR is clerkship director of the Internal Medicine Department and the content developer.

Cees Van Der Vleuten

CEES VAN DER VLEUTEN provided support, guidance and amendments to the manuscript.

References

  • Bordage G, Brailovsky C, Carretier H, Page G. Content validation of key features on a national examination of clinical decision-making skills. Academic Medicine (supplement) 1995; 70: 276–281
  • Bordage G, Page G. An alternative approach to PMPs: The “key features” concept. Further Developments in Assessing Clinical Competence, IR Hart, RM Harden. Heal Publications, Montreal 1987; 50–75
  • Boshuizen HPA, Schmidt HG. The development of clinical reasoning expertise: Implications for teaching. Clinical Reasoning Skills, 2nd edn, M Jones. Butterworth-Heinemann, Oxford 2000
  • Bruning RH, Schraw GJ, Ronning RR. Cognitive Psychology and Instruction, 3rd edn. Prentice-Hall Inc, Columbus, Ohio 1999
  • Charlin B. Standardized Assessment of Ill-defined Clinical Problems. Datawyse/Universitaire Press Maastricht, Maastricht 2002
  • General Medical Council. Tomorrow's Doctors: Recommendations on Undergraduate Medical Education. GMC, London 1993
  • Custers EJFM, Boshuizen HPA. The Psychology of Learning. International Handbook of Research in Medical Education, DI Newble. Kluwer Academic Publishers, Great Britain 2002; 1: 159–202
  • Derry SJ. Learning Strategies for acquiring useful knowledge. Dimensions of Thinking and Cognitive Instruction, L Idol. Erlbaum, Hillsdale, NJ 1990; 347–379
  • Ertmer PA, Newby T. The expert learner: strategic, self regulated, and reflective. Instructional Sci. 1996; 24: 1–24
  • Fowell S, Bligh J. Assessment of undergraduate medical education in the UK: time to ditch motherhood and apple pie. Med. Educ. 2001; 35: 1006–1007
  • Harden RM, Gleeson FA. Assessment of clinical competence using an objective structured clinical examination. Med. Educ. 1979; 13: 19–22
  • Hatala R, Norman G. Adapting the key features examination for clinical clerkships. Med. Educ. 2002; 36: 160–165
  • McGuire CH, Babbott D. Simulation technique in the measurement of problem solving skills. Journal of Educational Measurement 1967; 4: 1–10
  • Mennin SP, Kalishman S. Student assessment. Acad. Med. (supplement) 1998; 73: s46–s54
  • Rimoldi HJA, Raimondo R. Assessing the process of clinical problem solving. Advan. Health Sci. Educ. 1998; 3: 217–230
  • Schmidt HG, Norman GR, Boshuizen HPA. A cognitive perspective on medical expertise: Theory and implications. Acad. Med. (supplement) 1990; 65(10): 611–621
  • Schuwirth L. An Approach to the Assessment of Medical Problem Solving: Computerised Case-based Testing. Datawyse/Universitaire Pers Maastricht, Maastricht 1998
  • Van der Vleuten CPM, Shatzer J, Jones R. Assessment of clinical competence. Lancet 2001; 357: 45–49
  • Van der Vleuten CPM, Newble DI. How can we test clinical reasoning?. Lancet 1995; 345: 1032–1035
