
Virtual Patients for assessment of medical student ability to integrate clinical and laboratory data to develop differential diagnoses: Comparison of results of exams with/without time constraints

Pages e222-e228 | Published online: 28 Mar 2012

Abstract

Introduction: We evaluated medical student ability in a problem-based learning course using a Virtual Patient (VP)-based exam administered under different conditions for assessment purposes.

Methods: A class of 155 second year medical students was assessed using a VP exam with unlimited access during a 1-week period; 2 years later, the identical exam was administered to 175 students with a 3-h time limit.

Results: Students taking the exam without time constraints used approximately twice as much time as students with the time limit. Without the pressure of a time limit, students made half as many inquiries of the patient history, physical exam, and lab/imaging tests as students having a time constraint, indicating that the time-limited students used a “shotgun approach” to try to collect as many “required” inquiries as possible. Most students (91%) taking the untimed exam were able to correctly diagnose the exam case, but only 31% of the time-limited students correctly diagnosed the VP exam case, despite their higher number of inquiries.

Conclusions: Our results demonstrate that an identical VP exam, administered under untimed versus time-limited conditions, resulted in an unraveling of the students' ability to integrate the data discovered during the process of progressive disclosure.

Introduction

Computer-based simulations of patient cases have long been used for learning and training in healthcare education (Harless et al. 1971; Bergin & Fors 2003). These learning tools are frequently referred to as Virtual Patients (VPs), but have also been labeled interactive simulated patients, simulated cases, or avatars (Bergin et al. 2003; Ellaway et al. 2008; Andrade et al. 2010). VPs may be used throughout the medical school curriculum, including pre-clinical courses (Bergin et al. 2003). VPs are commonly recommended for teaching clinical reasoning and clinical decision making, but have also been used for teaching basic communication skills with patients (learning how to ask about the patient's illness history) and to broach the subject of bioethics (Stevens et al. 2006; Triola et al. 2006; Cook & Triola 2009; Saleh 2010). Furthermore, there are also indications for using VPs to emphasize socio-cultural aspects and cultural differences as they pertain to the practice of medicine (Fors et al. 2009; Perron et al. 2009). Moreover, VPs have been used in both traditional curricula and problem-based learning (PBL) programs (Kamin et al. 2002; Poulton et al. 2009).

Traditional didactic medical education, especially during preclinical coursework, is not structured to provide a reasonable means of presenting alternative choices when approaching clinical decision-making. In general, bodies of work are presented as factual content in a lecture format, thereby limiting a student's exposure to confounding distractors. While this is not a criticism of traditional educational methods (students need to be presented with a basic foundation of information upon which to build), didactic sessions are indeed limited by both structure and time. The use of VPs in a preclinical curriculum offers students an opportunity to explore clinical presentations and begin to integrate didactic subject matter before their clinical training.

VPs for assessment

Today, VPs have been accepted as viable supplements for the delivery of educational resources, with good learning outcomes (Botezatu et al. 2010a, b). The use of VPs for assessment is a more recent application and has been described as a possible way to assess clinical reasoning and/or clinical decision-making abilities among students (Courteille et al. 2008; Waldmann et al. 2008; Gesundheit et al. 2009; Round et al. 2009; Botezatu et al. 2010a, b; Forsberg et al. 2011). VPs have also been used in the United States Medical Licensing Examination Step 3 since 1999 for assessment of clinical skill competency. However, there are few published studies regarding how scoring, grading, and the practical implementation of VPs for assessment should be handled. The Association of American Medical Colleges (AAMC) has suggested that educators using VPs for delivery of clinical content should employ a multiple choice question exam to assess the learner and provide feedback (AAMC 2007).

However, one important issue to consider when applying case-based assessment methods is how much access time students should have for an exam case. Do exam results vary significantly when access to the VP exam is given with or without a time constraint? Much of the literature regarding this subject addresses elementary education, students with learning disabilities, or those taking examinations in a language that is not their native tongue. A Pearson report (Brooks et al. 2003) emphasized that the average raw scores of students tested under timed and untimed conditions vary by less than one raw-score point. A more recent publication (Lee & Chen 2011) relates that most educational assessments are designed to measure knowledge (power tests) and are usually administered with time constraints, and that before the application of computer-based assessment, it was extremely difficult to evaluate cognitive ability.

PBL and assessment

Problem-based instruction was introduced as a pedagogical approach to medical education at McMaster University in 1968 (Neufeld & Barrows 1974). The change of approach was prompted by observations that students were unable to appropriately integrate medical knowledge in the clinical setting. According to Barrows (2000), the classical method of learning medical knowledge via didactic presentation, followed by student study and examination of ability, was too passive and too removed from context to take on meaning. Since then, PBL has gained acceptance as a valuable adjunct method of educational instruction in the healthcare disciplines. Unfortunately, student knowledge and the overall effectiveness of PBL are almost impossible to gauge, owing in part to the inherent variability in how PBL courses are designed and utilized in medical education. Most medical schools that utilize PBL in their curricula employ a self-assessment mode of determining student ability. As stated by Arndt (2002): “Since assessment drives learning, the intended goals of PBL will not be realized unless assessment methods reflect these goals.”

The College of Medicine at the University of Toledo (USA) has utilized PBL as part of its preclinical curriculum since 1998. Our courses were modeled after the McMaster program and were very successful in addressing specific objectives to guide small discussion groups. The objectives of our PBL courses include, in part, the following:

At the end of the course, the student should be able to:

  1. demonstrate mastery of the mechanisms of the pathophysiology underlying common disease processes in real patients. The depth of basic science knowledge will be at the same level as expected in other basic science blocks.

  2. analyze data obtained during interaction with the simulated patient to direct subsequent inquiry and ordering of diagnostic tests.

  3. develop:

    • differential diagnoses to explain the pathophysiologic basis for the presenting signs and symptoms for each case.

    • mechanisms to test each diagnosis (order appropriate laboratory and ancillary tests) for each case.

    • learning issues that are pertinent to what is unknown about the case.

  4. communicate information obtained by independent self-directed learning/research with peers and other health professionals.

We have attempted to assess our students in the course by individual written examination since its inclusion in our curriculum. Unfortunately, we were unable to develop a suitable assessment tool that could objectively examine our students, and therefore the courses were graded on a satisfactory/unsatisfactory basis. Requirements to pass these courses included mandatory attendance, a subjective participation evaluation by facilitators, and completion of a progressive disclosure exam given at the end of each course. Approximately 5 years ago, a deterioration of enthusiasm among both students and faculty was noted, which seemed related in part to student attitudes toward the satisfactory/unsatisfactory nature of the courses and how case information was disseminated. In addition, the process of PBL had become stale and problematic, since it had evolved into document-dominated sessions: case readers would divulge whole sections of clinical history without inquiry for specific information by the discussion group. Therefore, we abandoned paper-based cases in 2006 and started to use a web-based VP environment called Web-SP (originally from Karolinska Institutet in Sweden) (Zary et al. 2006) for our PBL courses. However, there were still questions regarding how to practically implement VPs into the assessment of student ability for these courses, and especially how the time allocated for the exam could influence the outcomes. To evaluate the changes made to our course, we obtained Institutional Review Board approval for protocols governing retrospective study of student performance in PBL.

Aim

This retrospective study was performed to compare medical students’ ability to rationally develop differential diagnoses for a PBL course assessment when using a VP exam with and without time constraints, including low- versus high-stakes grading schemes.

Methods

Course setting and exams

At the University of Toledo (USA), an objective examination process has been established in our preclinical PBL course to assess medical student ability to integrate clinical and laboratory data to develop differential diagnoses. The course is to a certain extent based on VP cases as an introduction to clinical decision making and as practice in developing differential diagnoses. The unique features of Web-SP (the VP platform used) allow us to determine an individual student's approach to VPs, including the specific steps the student made and his or her efficiency in differentially diagnosing a chief complaint. Our medical students have been assessed during PBL courses using a midterm formative exam and, at the conclusion of the course, a final summative exam, and were provided a satisfactory/unsatisfactory grade. Most recently (2010), our PBL courses have been folded into a more comprehensive Clinical Decision-Making Course graded on an Honors, High Pass, Pass, Fail scale, with the results of PBL assessment encompassing only a portion of the student's grade. The major limitation is that the exam was used in both a low-stakes (satisfactory/unsatisfactory grade) and a high-stakes environment (stratified grades), with time limitations incorporated into the high-stakes exam. No other curriculum changes occurred that would have had an impact upon the PBL exam results.

Scoring and grading

The scoring and grading metrics of the VP exam used for this study were based upon one specific VP case, carefully selected to be of intermediate complexity, which had been previously presented as didactic subject matter during the second year of our integrated medical school curriculum. Although a potential limitation of our study, the case had been used in previous years and was very well vetted by our faculty. Thus, the students had been provided lectures in an organ-systems course, including presentation of the relevant pathology, physiology, and pharmacology, prior to the PBL VP exam. The course director identified the clinical presentation/medical history questions, physical exam findings, and laboratory/imaging tests deemed appropriate to making a correct differential diagnosis for the VP, and designated these individual inquiry selections in Web-SP as “required questions” (not visible to the students). Although these inquiries were marked as “required” in Web-SP, students did not need to select them to develop their diagnoses. Once the exam had been administered, the number of required inquiries selected by each student was divided by the total number of inquiries that student selected during the examination, which established an “efficiency of approach to the VP” score. This efficiency score was arbitrarily weighted as 50% of the total attainable score for the exam. In addition, a total of four appropriate differential diagnoses were identified for the VP presentation, with the ultimate diagnosis (with narrative justification) weighted at 20% and the other three appropriate differentials worth 10% each. Additional differential diagnoses could earn as much as a 5% mark if their narrative justifications were compelling enough to sway the course director to grant “points”. Thus, a student correctly diagnosing the exam case automatically passed the exam; a student missing the correct diagnosis, but identifying other potential and appropriate differentials and selecting appropriate required questions, could still achieve a score of 80% or more. However, a student using a shotgun approach to making a diagnosis could potentially fail the exam by being extremely inefficient and not focusing upon the patient's chief complaint. The exam was scored identically under both exam conditions. For the untimed exam given in 2008, a score of 70% or higher was graded as “satisfactory” and a score of 69% or less as “unsatisfactory”. The time-limited exam was graded with the stratified grades of Honors, High Pass, Pass, or Fail based on the total scores (90% and above, 80–89%, 70–79%, and 69% and below, respectively). Comparisons made in this report utilized the percentage scores for each class of second year medical students. Due to the different grading schemes, the untimed exam did not have the same high stakes as the time-limited exam. Regardless, both exams were used as assessment tools for student performance in our second year PBL course, with students requiring a minimum score of 70% to pass the course.
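
To make the composite scoring concrete, the following is a minimal sketch of how such a score could be computed; the function name, set-based bookkeeping, and bonus handling are our illustrative assumptions based on the description above, not the actual Web-SP scoring code.

```python
def vp_exam_score(selected, required, ultimate_correct,
                  n_differentials, bonus_points=0.0):
    """Illustrative composite score (percent) for the VP exam described above.

    selected: set of all inquiries the student made (history, physical, labs)
    required: set of inquiries the course director flagged as "required"
    ultimate_correct: True if the ultimate diagnosis with justification was correct
    n_differentials: how many (0-3) of the other appropriate differentials were listed
    bonus_points: up to 5 points for compelling additional differentials
    """
    # "Efficiency of approach": required inquiries selected / total inquiries selected.
    # A shotgun approach inflates the denominator and lowers this component.
    efficiency = len(required & selected) / len(selected) if selected else 0.0

    score = 50.0 * efficiency                    # efficiency weighted at 50%
    score += 20.0 if ultimate_correct else 0.0   # ultimate diagnosis worth 20%
    score += 10.0 * min(n_differentials, 3)      # three differentials at 10% each
    score += min(bonus_points, 5.0)              # discretionary bonus, capped at 5%
    return score

# Example: a focused student making 100 inquiries, 55 of them required,
# with the correct diagnosis and all three differentials scores
# 50 * 0.55 + 20 + 30 = 77.5, passing the 70% threshold.
```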

Using a VP for assessment is based on the assumption that we might be able to objectively measure the ability of preclinical medical students to appropriately work through a patient's history, physical exam, and laboratory tests to rationalize a differential diagnosis. We have attempted to be as objective as possible, but unquestionably a degree of subjectivity has been included in the assessment of student performance. Unlike the multiple choice exams that are the norm for most course assessments, the structure we have used enables us to effectively assess a student's ability to integrate knowledge to create a reasonable differential impression/diagnosis list, and it also allows us to determine the thought process by which the differentials were created.

The VP case

For this study, one actual and well-vetted patient case was used to create the VP. The patient presented to our hospital emergency department with an acute abdomen and characteristics of acute appendicitis. Mitigating circumstances and the patient's age were also important for the students to consider. The patient had emergency surgery approximately 40 h after the initial chief complaint and subsequently developed Acute Respiratory Distress Syndrome (ARDS). Students had access to hundreds of inquiry options, of which 65 patient history questions, 25 physical exams, and 25 lab/imaging tests were set as “required”. The correct diagnosis for the VP exam case was ARDS, worth 20 “points”, and the students were instructed to list at least three differentials/impressions, each worth 10 “points”; for example, complications of acute appendicitis was considered an appropriate differential. The case had been used as subject content for a number of years in the PBL course, provided in both paper-based and VP formats; thus, it had had extensive review and vetting prior to its use as a VP exam case. It was “retired” for 1 year before use as an exam case.

Time aspects and study set-up

The VP exam case was first used when students had had 2 years of experience using Web-SP, and none of them had experienced a paper-based exam in PBL. The original assessment metrics included unlimited access to the exam case for a period of 1 week. The case was again “retired” to exclude as much student cross-talk among matriculated classes as feasible. Most recently, we decided to limit access to the exam case to a maximum of 3 h. The 3-h limit was deemed appropriate because 95% of the students taking the exam without time constraints had actually used less than 3 h to complete the exam. All other parameters for the exam, including its “required questions” and grading metrics, were exactly the same. Didactic presentation of the case subject material was also provided within weeks of exam administration, as described above for the original VP exam. Students were provided a detailed document outlining the appropriate approach to be successful with the VP exam case; they were specifically instructed to use a focused history and physical exam, and they were also made aware of the scoring and grading metrics to be used. For each exam, students were provided identical instructions except for knowing that they had 1 week (previously) or 3 h (current course) to interact with the VP and develop differential diagnoses, including a rationale/justification statement to support each diagnosis; all students had experience taking VP exams.

Results

A total of 155 second year medical students were administered the ARDS VP exam case with unlimited access during a 1-week period; 2 years later, the identical VP exam was administered to 175 second year medical students with a 3-h time limit. With only a few exceptions, all students were able to complete their VP exam cases within the allotted window of access/time.

Time spent

Students taking the exam with unlimited access time utilized approximately twice as much elapsed time as students with a 3-h constraint. The average time used for solving the case without time limits was 4 h and 13 min, while the students with the time constraint used on average 2 h and 1 min to complete the exam (Figure 1). As shown, very few of the students with time constraints actually used the full 3 h allowed to complete the case.

Figure 1. Comparison of the time used by students with unlimited access to the exam case versus students limited to 3 h maximum.

Approach to the case

Students without the pressure of a time limit utilized half as many inquiries of the patient history, physical exam, and lab and imaging tests as were used by students having a time constraint to complete their VP exam case (Figure 2). The mean inquiry score (the approach to the case, scored objectively as the number of required questions selected divided by the total number of inquiries, i.e., the “efficiency of approach to the VP”, expressed as a percentage) was 54.62 ± 1.24% for students taking the VP exam without time constraint and 48.18 ± 0.97% for students with time limits.
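
Restating the Methods definition of this metric in formula form:

\[
\text{efficiency (\%)} = \frac{\text{number of required inquiries selected}}{\text{total number of inquiries selected}} \times 100
\]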

Figure 2. Comparison of two different second year medical student classes regarding student approaches to the identical exam case with the 2008 class having an untimed exam versus the 2010 class that had a 3 h time limit and graded exam.

The following observations were made, not considering the final test scores:

  • Regardless of the number of VP inquiries made, the efficiency of the approach was similar; however, the accuracy of the differential diagnoses and the respective exam scores were significantly different.

  • Approximately 100–150 inquiries were made by students without time constraints, whereas 250–300 inquiries were performed by the students in the timed exams, despite the shorter time allotted.

Accuracy

Surprisingly, 141 of the 155 “untimed” students (91.0%) were successful in appropriately diagnosing ARDS for the VP, whereas only 54 of the 175 time-limited students (30.9%) were able to correctly diagnose the VP (Figure 3).

Figure 3. Comparison of two different second year medical student classes regarding student accuracy of developing differential diagnoses for the exam case with different time limits and grading schema.

Discussion

The purpose of our study was to compare the performance of second year medical students taking an identical VP-based exam under two conditions: unlimited time with satisfactory/unsatisfactory grading versus a time constraint with a stratified grading scale. We evaluated differences in student performance, including the amount of time spent on the exam case as well as the inquiry approach and accuracy in diagnosing the VP's disease.

Regardless of the exam structure, it is important for a fair examination tool that students be familiar with the assessment model. Previous studies utilizing VPs for assessment of student ability have emphasized this essential condition (Courteille et al. 2008; Waldmann et al. 2008).

Time

As expected, the time utilized by students without a time constraint was greater than the time spent by students having limited time for their exam (Figure 1). However, it was unexpected that students having only 3 h to complete their exam used, on average, only approximately two-thirds of their allotted time (Figure 1). This outcome may have been due to stress induced by the exam time limit; one could surmise that they were more concerned with the constraint than with integrating the medical knowledge they gained during the exam to make an accurate diagnosis. This phenomenon has been described in a number of item response theory studies, in which examinees exhibiting either solution behavior or rapid-guessing behavior are related to time-limited tests that have important consequences for the examinees (high-stakes tests) (Lee & Chen 2011). An apparent trend exists for the exam without a time constraint, in which students who scored higher overall within the “satisfactory” grading scale spent less time than students who had difficulty or received an unsatisfactory score. In contrast, the time spent by students taking the time-limited exam was similar across the spectrum of ability demonstrated by their stratified assessment grades. Notably, the final mean exam scores (inquiry score combined with the differential diagnosis score) demonstrate that students taking the exam in a more “relaxed” environment performed much better than students taking the exam with the added stress of time constraints and stratified grading. Regardless of the ability demonstrated by students on the time-limited exam, there was no real difference in how much time was used.

Student approach to the exam case

Regarding the student approach to the exam VP, it was surprising to discover that students having a limited amount of time to solve the case used significantly more inquiries, including patient history questions and the physical exam, and ordered more lab tests and procedures than students without such a constraint. This may have been due to the grading metrics provided to all students days prior to the exam release, which clearly stated that there would be no penalty for requesting any clinical information they might desire; however, the potential for a lower inquiry score with a “shotgun” approach to the VP was also known, since each additional inquiry increased the denominator against which the required questions discovered were scored. Therefore, an explanation of these results could be related not only to stress, as mentioned before, but also to the stratified grade having more significance to the students than a satisfactory/unsatisfactory assessment. Thus, they were more concerned about “missing” important clinical information, resulting in numerous nonessential inquiries, than about utilizing the medical knowledge gained to efficiently integrate that information and accurately diagnose the case.

Accuracy of the student's conclusions

We were not surprised that 91% of the students taking the untimed VP exam case were able to correctly identify the accurate diagnosis of ARDS. However, the fact that a majority of the students (69%) with a time limit could not correctly diagnose the exam case is striking, especially since they discovered more clinical information than students without the time constraint. Interestingly, students who were limited to a maximum of 3 h of access to the VP exam generated twice the number of inquiries used by untimed students, yet only 31% of time-limited students were able to correctly diagnose the case. The exam failure rate for the unlimited-access exam was minimal (14/155, 9%), whereas when the stakes were much higher and the exam was graded rather than based upon a satisfactory/unsatisfactory performance, more than half the class failed the VP exam case (Figure 3). Apparently, using a time-limited VP exam to assess second year medical students with a simulation of a real patient, in which medical knowledge is progressively disclosed, induced an inability to reason with and integrate the medical knowledge discovered during the exam and correctly diagnose the case. It is apparent that the high-stakes exam condition resulted in rapid-guessing behavior, in contrast to the untimed exam, where the students worked in a more reflective way.

In contrast to our study, in which the students were enrolled in a preclinical medical school curriculum, other published studies regarding VP examination have assessed senior medical students having clinical experience. Those studies reported that the use of VPs was suitable for assessment (Waldmann et al. 2008; Round et al. 2009; Botezatu et al. 2010a, b). Furthermore, those studies did not indicate the time allotted for the exams.

Other considerations that could explain the difference between our two test groups include differences in demographics such as the level of medical education, student age, and clinical exposure. However, both classes were essentially identical in make-up, with the 2010 class actually having better overall MCAT and GPA scores than the 2008 class. The 2010 group of students did have an organ systems exam the same week as the VP exam, which might have increased their stress level.

Conclusions

We believe that the use of VPs for assessing medical student ability is extremely beneficial. Our results demonstrate that an identical VP exam, administered under untimed versus time-limited conditions, elicited a rapid-guessing approach to the exam rather than an exercise of the students' ability to integrate the data discovered during the process of progressive disclosure. Undoubtedly, the higher stakes included the variable of a stratified grade rather than solely the condition of a time-limited exam. These results may indicate a potential pitfall in the structure of a medical education curriculum in which most of the examinations used for assessment are multiple-choice questions: students are apt to study for pattern recognition and “best answer” choices. Utilizing a VP for medical student assessment requires the student to integrate medical knowledge to render differential diagnoses, which should guide their approach to clinical reasoning and decision making.

Future studies should evaluate the best grading metrics for VP-based exams, especially regarding students who indiscriminately use the “shotgun” approach to obtain clinical information. An application of an item response time approach may be warranted, as the concept has been suggested to distinguish between examinees who engage in solution behavior and those engaging in rapid-guessing behavior (e.g., Yamamoto & Everson 1997; Bolt et al. 2002). Potential considerations for study include limiting students to a predetermined number of inquiries for a test case, as well as potentially penalizing students for inappropriate or extraneous inquiries.
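
As an illustration of the response-time concept, the following sketch flags rapid-guessing behavior from per-inquiry response times (Web-SP records the steps a student takes; the 10% cutoff, the assumption of per-inquiry timestamps, and the function name are our hypothetical choices, not values proposed by the cited authors):

```python
def rapid_guess_fraction(inquiry_times, cohort_median_time, cutoff=0.10):
    """Return the fraction of a student's inquiries made faster than a
    cutoff (here, 10% of the cohort's median per-inquiry time); such
    responses are treated as rapid guesses rather than solution behavior.

    inquiry_times: seconds the student spent on each inquiry
    cohort_median_time: median seconds per inquiry across the whole class
    """
    if not inquiry_times:
        return 0.0
    threshold = cutoff * cohort_median_time
    rapid = sum(1 for t in inquiry_times if t < threshold)
    return rapid / len(inquiry_times)

# Example: with a cohort median of 30 s per inquiry, five of this
# student's ten inquiries fall under the 3 s threshold and would be
# flagged for review rather than automatically penalized.
times = [2, 2, 35, 40, 1, 2, 28, 3, 2, 50]
print(rapid_guess_fraction(times, cohort_median_time=30))  # 0.5
```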

Declaration of interest: The authors report no conflicts of interest.

References

  • AAMC. 2007. Effective use of educational technology in medical education. Colloquium on Educational Technology: Recommendations and Guidelines for Medical Educators
  • Andrade AD, Bagri A, Zaw K, Roos BA, Ruiz JG. Avatar-mediated training in the delivery of bad news in a virtual world. J Palliat Med 2010; 13(12):1415–1419
  • Arndt K. Creating a culture of co-learners with problem-based learning. Essays on Teaching Excellence: Toward the Best in the Academy 2002; 14(5): 2002–2003
  • Barrows H. Foreword. In: Evensen DH, Hmelo CH, editors. Problem-based learning: A research perspective on learning interactions. Mahwah, NJ: Lawrence Erlbaum Associates; 2000. pp vii–ix
  • Bergin R, Fors U. Interactive simulation of patients – an advanced tool for student-activated learning in medicine & healthcare. Comput Educ 2003; 40(4):361–376
  • Bergin R, Youngblood Y, Ayers M, Boberg J, Bolander K, Courteille O, Dev P, Hindbeck H, Stringer J, Thalme A, et al. Interactive simulated patient – experiences with collaborative e-learning in medicine. J Educ Comput Res 2003; 29(3):387–400
  • Bolt DM, Cohen AS, Wollack JA. Item parameter estimation under conditions of test speededness: Application of a mixture Rasch model with ordinal constraints. J Educ Meas 2002; 39:331–348
  • Botezatu M, Hult H, Kassaye Tessma M, Fors U. Virtual patient simulation systems: Knowledge gain or knowledge loss? Med Teach 2010a; 32(7):562–568
  • Botezatu M, Hult H, Tessma M, Fors U. As time goes by: Stakeholder opinions on the implementation of a virtual patient simulation system. Med Teach 2010b; 32(11):509–516
  • Brooks TE, Case BJ, Young MJ. Timed versus untimed testing conditions and student performance. Pearson Assessment Report. San Antonio, TX; June 2003
  • Cook DA, Triola MM. Virtual patients: A critical literature review and proposed next steps. Med Educ 2009; 43(4):303–311
  • Courteille O, Bergin R, Stockeld D, Ponzer S, Fors U. The use of a virtual patient case in an OSCE-based exam – a pilot study. Med Teach 2008; 30(3):e66–e76
  • Ellaway R, Poulton T, Fors U, McGee JB, Albright S. Building a virtual patient commons. Med Teach 2008; 30(2):170–174
  • Fors UGH, Muntean V, Botezatu M, Zary N. Cross-cultural use and development of virtual patients. Med Teach 2009; 31(8):732–738
  • Forsberg E, Georg C, Ziegert K, Fors U. Virtual patients for assessment of clinical reasoning in nursing – a pilot study. Nurse Educ 2011; 31(8):757–762
  • Gesundheit N, Brutlag P, Youngblood P, Gunning WT, Zary N, Fors U. The use of virtual patients to assess the clinical skills and reasoning of medical students: Initial insights on student acceptance. Med Teach 2009; 31(8):739–742
  • Harless WG, Drennon GG, Marxer JJ, Root JA, Miller GE. CASE: A computer-aided simulation of the clinical encounter. J Med Educ 1971; 46(5):443–448
  • Kamin C, Deterding R, Lowry M. Student's perceptions of a virtual PBL experience. Acad Med 2002; 77(11):1161–1162
  • Lee Y-H, Chen H. A review of recent response-time analysis in educational testing. Psychol Test Assess Model 2011; 53(3):359–379
  • Neufeld VR, Barrows HS. “The McMaster philosophy”: An approach to medical education. J Med Educ 1974; 49(11):1040–1050
  • Perron NJ, Perneger T, Kolly V, Dao MD, Sommer J, Hudelson P. Use of a computer-based simulated consultation tool to assess whether doctors explore sociocultural factors during patient evaluation. J Eval Clin Pract 2009; 15(6):1190–1195
  • Poulton T, Conradi E, Kavia S, Round J, Hilton S. The replacement of ‘paper’ cases by interactive online virtual patients in problem-based learning. Med Teach 2009; 31(8):752–758
  • Round J, Conradi E, Poulton T. Improving assessment with virtual patients. Med Teach 2009; 31(8):759–763
  • Saleh N. The value of virtual patients in medical education. Ann Behav Sci Med Educ 2010; 16(2):29–31
  • Stevens A, Hernandez J, Johnsen K, Dickerson R, Raijb A, Harrison C, DiPietro M, Allen B, Ferdig R, Foti S, et al. The use of virtual patients to teach medical students history taking and communication skills. Am J Surg 2006; 191:806–811
  • Triola M, Feldman H, Kalet AL, Zabar S, Kachur EK, Gillespie C, Anderson M, Griesser C, Lipkin M. A randomized trial of teaching clinical skills using virtual and live standardized patients. J Gen Intern Med 2006; 21:424–429
  • Waldmann U-M, Gulich M, Zeitler H-P. Virtual patients for assessing medical students – important aspects when considering the introduction of a new assessment format. Med Teach 2008; 30:17–24
  • Yamamoto K, Everson H. Modeling the effects of test length and test time on parameter estimation using the HYBRID model. In: Rost J, Langeheine R, editors. Applications of latent trait and latent class models in the social sciences. New York: Waxman; 1997. pp 89–98
  • Zary N, Johnson G, Boberg J, Fors U. Development, implementation and pilot evaluation of a Web-based Virtual Patient Case Simulation environment – Web-SP. BMC Med Educ 2006; 6:10. Available from: http://www.biomedcentral.com/1472-6920/6/10
