Research Article

Virtual Patient Simulation Offers an Objective Assessment of CME Activity by Improving Clinical Knowledge and the Levels of Competency of Healthcare Providers

Article: 2166717 | Received 26 Aug 2022, Accepted 04 Jan 2023, Published online: 11 Jan 2023

ABSTRACT

The main goal of continuing medical education (CME) is to help healthcare providers (HCPs) improve their knowledge and levels of competency, with an ultimate enhancement of their performance in practice. Despite the long and well-intentioned history of CME, proof of success (based on improved clinical outcomes) is difficult to obtain objectively. In the past several years, the traditional CME world has been disrupted by the replacement of multiple-choice questions with virtual simulation. We utilised an innovative, next-generation virtual patient simulation (VPS) platform to develop objective measures of the success of educational activities that can be applied to CME. This VPS platform was used at five distinct educational events designed to assess learners’ knowledge and competency in the guideline-driven management of type 2 diabetes, hyperlipidaemia, and hypertension. A total of 432 learners (medical doctors, nurse practitioners, and clinical pharmacists) participated in these educational events, of whom 149 went through two consecutive cases with a similar clinical picture and educational goals. Their ability to achieve glycaemic, lipid, and blood pressure control improved significantly as they moved from the first to the second case. The participants improved their test performance in all categories, by between 5 and 38%, achieving statistically significant increases in many of the goals examined. In conclusion, this study employed a pioneering application of technology to produce, collect, and analyse VPS data in order to evaluate educational activities objectively. This VPS platform not only allows an objective assessment of the effectiveness of a CME activity but also provides timely and helpful feedback to both the learners and the providers of a given educational event.

This article is part of the following collections:
Special Collection 2022: Innovation and Impact in CME/CPD

Introduction

The main goal of continuing medical education (CME) is to help healthcare providers (HCPs) improve their knowledge and levels of competency, with an ultimate enhancement of their performance in practice. Despite the long and well-intentioned history of CME, proof of success (based on improved clinical outcomes) has been difficult to obtain. Many traditional CME programmes require the learner to answer a few multiple-choice questions before and after the CME activity. Even when the answers show an improvement in the post-CME testing, it is not trivial to ascertain whether this improvement translates into better clinical performance.

In the past several years, the CME world has been disrupted when direct observation of performance-in-practice, within the interactive digital encounters of virtual simulations, began to replace multiple-choice questions as an assessment of learning and change [Citation1–4]. This technology requires both the knowledge and the skills of programmers who can translate the dry text of a case into an interactive digital encounter.

We developed and implemented an innovative, advanced Virtual Patient Simulation (VPS) application to deliver CME activities and assess outcomes. This platform can be accessed from mobile devices and desktop browsers. Working with this platform, the learner selects patients from a virtual waiting room [Citation5, Citation6] and goes through several clinical encounters in which they can perform a diagnostic work-up and demonstrate their decision-making in the patient’s management and follow-up.

The VPS platform was developed by a group of physician-educators and technology specialists specifically to promote and demonstrate improvement in HCP knowledge and competency [Citation5, Citation6]. The platform enables its customers, medical education providers, to create case-based virtual patient learning modules and distribute them through an application to their end-users (HCPs), who work through these close-to-real-life virtual patient encounters, treat the patients, and receive annotated feedback (and, optionally, CME credits). The medical education providers receive detailed periodic feedback on the learners’ activities, such as clinical goal completion, prescription patterns, and handling of side effects. The platform is based on research on how adults learn: it leverages repeated and spaced learning, personalised debriefing, and reflection, as well as optional comparison with peers. It provides modern, cost-effective, and easy-to-use theory-to-practice learning tools.
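This description implies a simple data flow: a provider authors a case module, learners work through it, and the provider receives per-learner activity records. As a purely illustrative sketch (the field names below are assumptions for this article, not the platform’s actual schema), such a module and feedback record might be organised as follows:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CaseModule:
    """Hypothetical case-based learning module, as a provider might author it."""
    case_id: str
    condition: str                   # e.g. "type 2 diabetes"
    learning_goals: List[str]        # e.g. ["HbA1c < 7%", "refer to a diabetes educator"]
    guideline_references: List[str]  # sources shown in the annotated feedback

@dataclass
class LearnerFeedback:
    """Hypothetical per-learner record returned to the education provider."""
    learner_id: str
    case_id: str
    goals_achieved: Dict[str, bool]  # learning goal -> achieved in this encounter
    prescriptions: List[str]         # medications ordered during the encounter
    tests_ordered: List[str]         # laboratory and imaging tests requested
    referrals: List[str]             # e.g. ["ophthalmology", "diabetes educator"]
```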

We utilised this VPS platform to create objective measures of competency and learning in CME activities. We presented VPS clinical cases to learners (medical doctors, nurse practitioners, and clinical pharmacists) at several educational events in the fields of diabetes, hyperlipidaemia, and hypertension. Learners’ knowledge and practice gaps were assessed by analysing their decision-making processes in two consecutive cases with similar learning goals. Their ability to achieve glycaemic, lipid, and blood pressure control improved significantly as they moved from the first to the second case. These data strongly suggest that this novel VPS platform can be used both to deliver clinical cases and to assess learner progression, thereby evaluating change in knowledge and competence.

Methods

We used the platform’s mobile and desktop applications, which provide the learner with a self-propelled simulation of disease, disease progression, and treatment effects, to assess providers’ knowledge, competence, and performance while offering personalised, detailed feedback on the learner’s actions. A full description of this platform has been published previously [Citation5].
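The platform’s simulation engine is proprietary, so the following is only a deliberately simplified toy sketch of what a self-propelled simulation of disease, disease progression, and treatment effects means in principle: the virtual patient’s state is advanced visit by visit in response to the learner’s orders. The drug classes are real, but the response values are invented round numbers, not clinical estimates.

```python
from typing import List

# Toy illustration of visit-by-visit disease progression; the effects below
# are invented round numbers for demonstration only, not clinical estimates.
TREATMENT_EFFECT_ON_HBA1C = {      # assumed percentage-point drop per visit
    "metformin": 1.0,
    "glp1_receptor_agonist": 1.2,
    "sglt2_inhibitor": 0.8,
    "lifestyle_counselling": 0.4,
}
UNTREATED_DRIFT = 0.2              # HbA1c creeps upward if nothing is done

def next_visit_hba1c(current_hba1c: float, orders: List[str]) -> float:
    """Advance the virtual patient's HbA1c by one visit, given the learner's orders."""
    effect = sum(TREATMENT_EFFECT_ON_HBA1C.get(order, 0.0) for order in orders)
    if effect == 0.0:
        return current_hba1c + UNTREATED_DRIFT   # disease progresses when untreated
    return max(current_hba1c - effect, 5.5)      # floor at a plausible lower bound

# Example: an uncontrolled patient improves over two simulated follow-up visits.
hba1c = 9.1
for visit_orders in (["metformin"], ["metformin", "glp1_receptor_agonist"]):
    hba1c = next_visit_hba1c(hba1c, visit_orders)
print(round(hba1c, 1))   # 5.9 after the two simulated visits
```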

In brief, the analytical data collected by the platform in real time include the completeness of the physical exam, diagnostic considerations, medication prescribing patterns, the appropriateness of laboratory and imaging investigations, referral patterns, and adherence to guidelines and best practices. The VPS platform thus collects individual learner data, which are then analysed to identify knowledge gaps and to compare the performance of a given learner with that of other learners. As in real life, many cases have overlapping or similar medical problems, allowing learners to demonstrate improved performance without repeating the same case. We used a two-sided test to calculate the statistical significance of changes in learners’ competency.
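The article does not name the specific statistical test, so the sketch below should be read as one plausible interpretation of a two-sided test: an unpaired two-proportion z-test on goal-completion rates in the first versus the second case. Because the same 149 learners completed both cases, a paired test such as McNemar’s would also be a natural choice. The example figures are the glycaemic-control rates reported in the Results.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_p_value(achieved_case1: int, n1: int,
                           achieved_case2: int, n2: int) -> float:
    """Two-sided z-test for the difference between two goal-completion proportions."""
    p1, p2 = achieved_case1 / n1, achieved_case2 / n2
    pooled = (achieved_case1 + achieved_case2) / (n1 + n2)  # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))    # pooled standard error
    z = (p2 - p1) / se
    return 2 * norm.sf(abs(z))                              # two-sided p-value

# Illustration with the glycaemic-control rates from the Results
# (68% vs 77% of the 149 learners who completed both cases).
print(two_proportion_p_value(round(0.68 * 149), 149, round(0.77 * 149), 149))
# ~0.07 under this unpaired approximation, consistent with the Results note
# that not every individual goal reached statistical significance.
```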

The VPS platform was used at five distinct educational events designed to assess learners’ knowledge and patient care in the guideline-driven simulated management of type 2 diabetes, hyperlipidaemia, and hypertension. A total of 432 learners (194 medical doctors, 124 nurse practitioners, and 114 clinical pharmacists) participated in these educational events, of whom 149 (67 medical doctors, 43 nurse practitioners, and 39 clinical pharmacists) went through two consecutive cases with a similar clinical picture and similar educational goals for the diagnosis and management of these conditions. The participants completed the first VPS clinical case before being asked to address the second. The time interval between the first and the second case varied from 1 day to 6 weeks (10 days on average). No educational sessions were provided via the platform at any time before accessing the cases. In all instances, the assessment was made after the completion of both the first and the second case.

We compared the learners’ knowledge in treating these two virtual patients, enabling us to assess improvement in their competency. For each case, the learners received detailed, annotated feedback, which they could then incorporate to improve their performance in future cases.

Results

We observed an improvement in achieving the learning goals in all major categories. All individual goals improved substantially, but not all the changes reached statistical significance. While only 68% of learners achieved the goal of glycaemic control (haemoglobin A1c below 7%) in Case 1, this number increased to 77% in the second case of uncontrolled type 2 diabetes (Figure 1(a)). Similar improvements (by 10 to 25 percentage points) were seen in other aspects of diabetes management, such as referring patients to certified diabetes educators, discussing an appropriate diet, prescribing physical activity, and ordering appropriate tests, such as kidney and liver function tests (Figure 1(b–e)). The most significant improvement was in starting patients with diabetes and cardiovascular disease on SGLT2 inhibitors or GLP-1 receptor agonists (Figure 1(f)), with a 25% increase in the use of these medications (p < 0.001). The rate of referring patients with diabetes to ophthalmology also increased, from 60% to 75%.

Figure 1. Percent of learners achieving goals: (a) Glycaemic control; (b) Referring patients to formal patient education; (c) Ordering appropriate tests (* including Haemoglobin A1c, creatinine, and liver function tests); (d) Discussing and advocating a physical activity program; (e) Discussing appropriate diet and meal plan; (f) Prescribing SGLT2i or GLP-1 receptor agonists (**in patients with cardiovascular problems).

Making a diagnosis of hyperlipidaemia and hypertension represented additional goals during this virtual simulation in patients with co-morbidities. While 72% of learners diagnosed hyperlipidaemia in the first case, only 43% made a correct diagnosis of hypertension. Both numbers improved significantly while working with the second case, by 21% for hyperlipidaemia and by 38% for hypertension (Figure 2(a,c); p < 0.03 and p < 0.005, respectively).

Figure 2. Percent of learners achieving goals: (a) Making a diagnosis of hyperlipidaemia; (b) Achieving LDL targets; (c) Making a diagnosis of hypertension; (d) Achieving blood pressure targets.

A significant improvement in test performance was also seen in the management of hyperlipidaemia and hypertension. The percentage of learners who achieved LDL targets using high-intensity statins rose from 41% to 77% (Figure 2(b); p < 0.001), and the percentage achieving blood pressure targets (130/80 mmHg or below) rose from 71% to 89% (Figure 2(d); p < 0.03).
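As a rough cross-check under the same assumed test, reusing the two_proportion_p_value sketch from the Methods section on these figures (and treating the two cases as independent samples of the 149 learners, which is an approximation) yields values well below the reported thresholds:

```python
# Reuses two_proportion_p_value from the sketch in the Methods section.
print(two_proportion_p_value(round(0.41 * 149), 149, round(0.77 * 149), 149))  # ~2e-10 (reported: p < 0.001)
print(two_proportion_p_value(round(0.71 * 149), 149, round(0.89 * 149), 149))  # ~9e-5 (reported: p < 0.03)
```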

Discussion

A fundamental question in CME is whether CME activities improve HCP performance and patient health outcomes. In 2015, Cervero and Gaines [Citation7] analysed eight systematic reviews of CME effectiveness published since 2003 in which physicians’ performance was included as an outcome measure. The authors concluded that these reviews support the notion that CME activity does improve physician performance and patient health outcomes.

Even though this conclusion is widely accepted, the process of assessing the effectiveness of CME is complex and somewhat controversial. Traditionally, effectiveness is evaluated with post-activity exams, yet correct exam answers are not a reliable indicator of better performance in practice [Citation8, Citation9].

It has been frustratingly difficult to measure objectively the relationship between teaching, learning, clinical competence, and patient outcomes [Citation10, Citation11]. In the absence of such direct data, analyses have relied upon self-reported knowledge gains and behaviour change as a result of participation in the CME activity [Citation12–14]. Even though self-reporting is an important tool in evaluating the effectiveness of a CME activity, it should not be the sole basis for such a determination [Citation15–17].

As the controversy about CME effectiveness continues [Citation18–20], a direct and compelling case for the effectiveness of CME is still difficult to make.

Interactive techniques and simulations, in particular, appear to offer an effective approach to learning. Assessing the effectiveness of simulation has been complicated by the diversity of simulation types and platforms [Citation17]. Virtual simulation, however, has been greatly helped by new developments in digital learning [Citation5, Citation6].

The VPS platform used in this study, available on mobile devices and desktop computers, brings fundamentally new advantages both to the CME activity itself and to the assessment of learners’ knowledge gaps and improvement in clinical competency [Citation5, Citation6, Citation21, Citation22]. As with most simulation platforms, the VPS is not solely a method of assessing a CME activity: in addition to assessing a learner’s knowledge or competence, it plays a dual role in promoting learning.

The present communication demonstrates the utility of this platform in measuring the effectiveness of a learning activity. We designed two or more cases in the same field of medicine with similar goals for the diagnosis and management of certain problems. The learner receives detailed feedback after each case and can therefore apply newly acquired knowledge to decision-making in subsequent cases.

The present study demonstrates the effectiveness of this approach and of this VPS platform. The participants improved their test performance in all categories, by between 5 and 38%, achieving statistically significant increases in many of the goals examined. Even when statistical significance was not reached, the improvement ranged between 5 and 11%.

One could argue that the improvement in learners’ competency might be attributable to greater familiarity with the application when advancing from the first to the second case, rather than to learning medicine from the platform. However, because the improvement in competence was observed several days and even weeks after the first case, with no additional educational activities provided via the platform in between, one can surmise that it reflects the acquisition and retention of knowledge rather than a practice effect from repeating the exercise.

The feedback included information on goals achieved and missed, the appropriateness of examinations, testing, diagnoses, and prescriptions, and other teaching goals based on published standards of care. We believe that personalised feedback grounded in the most current literature and guidelines was the most important factor in enhancing learners’ knowledge. That said, this study was designed to estimate the degree of improvement in competency at the group level rather than for individual learners. Although the platform can assess individual results, this was not addressed in the present study. Further studies among learners fully familiar with the platform would help answer this question definitively.

The CME provider collects individual and/or group data to identify knowledge gaps and the degree of improvement. This information is used not only to assess objectively the effectiveness of a given CME activity but also to allow the CME provider to improve the activity and its delivery by observing both the strengths and the weaknesses of the programme, thus providing mutual benefit to the learner and the provider.
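As an illustration of the kind of group-level gap report described here, the short sketch below aggregates hypothetical per-learner goal results into goal-completion rates per case; the record layout is an assumption for this article, not the platform’s actual reporting format.

```python
from collections import defaultdict

# Hypothetical per-learner goal results, in the shape a provider might export:
# (learner_id, case_number, learning_goal, achieved)
records = [
    ("hcp-001", 1, "HbA1c < 7%", False), ("hcp-001", 2, "HbA1c < 7%", True),
    ("hcp-002", 1, "HbA1c < 7%", True),  ("hcp-002", 2, "HbA1c < 7%", True),
    ("hcp-001", 1, "refer to ophthalmology", False),
    ("hcp-001", 2, "refer to ophthalmology", True),
]

def gap_report(rows):
    """Goal-completion rate per (goal, case): the provider's view of knowledge gaps."""
    totals, achieved = defaultdict(int), defaultdict(int)
    for _, case, goal, ok in rows:
        totals[(goal, case)] += 1
        achieved[(goal, case)] += ok
    return {key: achieved[key] / totals[key] for key in totals}

for (goal, case), rate in sorted(gap_report(records).items()):
    print(f"Case {case} | {goal}: {rate:.0%} of learners achieved the goal")
```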

A limitation of our study is that we (like all other assessment platforms) measure improvement in competence rather than a direct improvement in clinical performance. Changes in competence do not necessarily translate into improved performance. However, VPS platforms come as close as possible to real-life performance and are currently the best tools available to enhance providers’ competence and performance [Citation23–26].

In conclusion, this study employed a pioneering application of technology to produce, collect, and analyse VPS data in order to evaluate educational activities objectively. The VPS platform not only allows an objective assessment of the effectiveness of a CME activity but also provides timely and helpful feedback to both the learners and the providers of a given educational event.

Disclosure Statement

All authors confirm that there are no relevant financial or non-financial competing interests to report.

References

  • Ryall T, Judd BK, Gordon CJ. Simulation-based assessments in health professional education: a systematic review. J Multidiscip Healthc. 2016;9:69–6.
  • Berman NB, Durning SJ, Fischer MR, et al. The role for virtual patients in the future of medical education. Acad Med. 2016;91:1217–1222.
  • Kononowicz AA, Woodham LA, Edelbring S, et al. Virtual patient simulations in health professions education: systematic review and meta-analysis by the digital health education collaboration. J Med Internet Res. 2019;21:e14676.
  • Lucero KS, Spyropoulos J, Blevins D, et al. Virtual patient simulation in continuing education: improving the use of guideline-directed care in venous thromboembolism treatment. J Eur CME. 2020;9(1):1836865.
  • Iancu I, Pozniak E, Draznin B. Virtual patient simulation platform challenging traditional CME: identification of gaps in knowledge in the management of type 2 diabetes and hyperlipidemia. J Eur CME. 2021;10:1993430.
  • Draznin B, Iancu I. Use of a novel virtual patient simulation application to assess providers’ knowledge in management of T2 Diabetes. Diabetes. 2021;70(Suppl. 1):22–LB.
  • Cervero RM, Gaines JK. The impact of CME on physician performance and patient health outcomes: an updated synthesis of systematic reviews. J Contin Educ Health Prof. 2015;35(2):131–138.
  • Moore DE. How physicians learn and how to design learning experiences for them: an approach based on an interpretive review of evidence. In: Hager M, Russell S, Fletcher SW, editors. Continuing education in the health professions: improving healthcare through lifelong learning. New York: Josiah Macy, Jr. Foundation; 2007. p. 50–55.
  • Moore DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29(1):1–15.
  • Jordan S. Educational input and patient outcomes: exploring the gap. J Adv Nurs. 2000;31(2):461–471.
  • Mazmanian PE, Davis DA, Galbraith R. Continuing medical education effect on clinical outcomes. Chest. 2009;135(3 Suppl):49S–55S.
  • Davis DA, Mazmanian PE, Fordis M, et al. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. J Am Med Assoc. 2006;296(9):1094–1102.
  • Parker FW, Mazmanian PE. Commitments, learning contracts, and seminars in hospital-based CME: change in knowledge and behavior. J Contin Educ Health Prof. 1992;12(1):49–63.
  • Wakefield J, Herbert CP, Maclure M, et al. Commitment to change statements can predict actual change in practice. J Contin Educ Health Prof. 2003;23(2):81–93.
  • Gordon MJ. A review of the validity and accuracy of self-assessments in health professions training. Acad Med. 1991;66(12):762–769.
  • Eva KW, Regehr G. “I’ll never play professional football” and other fallacies of self-assessment. J Contin Educ Health Prof. 2008;28(1):14–19.
  • Institute of Medicine (US) Committee on Planning a Continuing Health Professional Education Institute. Redesigning continuing education in the health professions. Washington (DC): National Academies Press (US); 2010 [cited 2020 Jun 1]. Available from: https://www.ncbi.nlm.nih.gov/books/NBK219802/
  • Sibley JC, Sackett DL, Neufeld V, et al. A randomized trial of continuing medical education. N Engl J Med. 1982;306(9):511–515.
  • Davis D, O’Brien MA, Freemantle N, et al. Impact of formal continuing medical education: do conferences, workshops, rounds, and other traditional continuing education activities change physician behavior or health care outcomes? J Am Med Assoc. 1999;282(9):867–874.
  • Mazmanian PE, Davis DA. Continuing medical education and the physician as a learner: guide to the evidence. J Am Med Assoc. 2002;288(9):1057–1060.
  • Iancu I, Draznin B. Abstract 9539: treatment of hyperlipidemia: assessment of providers’ competency with edocate® virtual patient simulation application. Circulation. 2021;144(Suppl_1):A9539.
  • Iancu I, Draznin B. Abstract TP73: use of a novel patient simulation application to assess physicians’ knowledge in the management of atrial fibrillation and stroke prevention. Stroke. 2022;53(Suppl_1):73.
  • Sperl-Hillen J, O’Connor PJ, Ekstrom HL, et al. Educating resident physicians using virtual case-based simulation improves diabetes management: a randomized controlled trial. Acad Med. 2015;89(12):1664–1673.
  • Tabatabai S. COVID-19 impact and virtual medical education. J Adv Med Educ Prof. 2020;8(3):140–143.
  • Chernikova O, Heitzmann N, Stadler M, et al. Simulation-based learning in higher education: a meta-analysis. Rev Educ Res. 2020;90:499–541.
  • Fuoad SA, El-Sayed W, Marei H. Effect of different teaching/learning approaches using virtual patients on student situational interest and cognitive load: a comparative study. BMC Med Educ. 2022;22:763–770.