Brief Report

Evaluating the Impact of Continuing Medical Education in the Interdisciplinary Team: A Novel, Targeted Approach

Article: 2161730 | Received 29 Jul 2022, Accepted 19 Dec 2022, Published online: 15 Jan 2023

ABSTRACT

In order to maximise the learning potential of medical education programmes aimed at interdisciplinary or multidisciplinary teams, it is important to understand how the effectiveness of these programmes can vary between healthcare professionals from different specialities. Measuring the impact of educational activities between specialities may facilitate the development of future interdisciplinary and multidisciplinary education programmes, yielding enhanced learner outcomes and, ultimately, improving outcomes for patients. In this analysis, we report on a new approach to measuring change in knowledge and competence among learners from different physician specialities. We did this by tailoring post-activity competency assessments to three specialities – primary care physicians, pulmonologists and immunologists caring for patients with severe asthma. Our findings revealed that, after completing the activity, primary care physicians showed markedly greater improvement in knowledge, measured using assessment questions, than the other specialities. We also report on differences between these specialities in intention to change clinical practice, confidence in clinical practice, and remaining educational gaps. Understanding how different members of the interdisciplinary team have benefited from an educational activity is essential for designing future educational activities and targeting resources.

Introduction

An interdisciplinary team or multidisciplinary team (MDT) approach to patient care – involving healthcare teams composed of multiple healthcare disciplines [Citation1] – is associated with improved management and clinical outcomes of patients with chronic conditions [Citation2,Citation3], including asthma [Citation4]. Conversely, an interprofessional team that does not function effectively could be detrimental to patients, their families and healthcare systems.

Healthcare professionals (HCPs) are required to complete continuing medical education (CME) to stay abreast of the latest developments in their field [Citation5]. CME modules are available in a variety of formats, including online, for busy HCPs. Including multiple specialities, e.g. primary care physicians, pulmonologists and immunologists, in online education programmes can be reflective of real-world patient management. Therefore, we developed the touchMDT format to reflect clinical practice and highlight best practices for inter- and multi-disciplinary working. These educational activities are centred on predefined learning objectives, which address gaps in HCP education identified by thorough literature reviews, discussions with expert faculty, and analysis of the outcomes and learner feedback from previous activities. The format is patient-centric, often including a patient or patient representative to better reflect clinical practice. Medical education featuring patients provides a critical perspective and brings to life the real-world issues of patient management, such as the disease burden and shared decision making. There is strong evidence that patient involvement in the education of HCPs benefits learners, educators and patients [Citation6].

Measuring the impact of CME on HCP learners’ knowledge, competence and performance, and on patient outcomes, is important for all programmes. Assessment of change in competence is required as a minimum by accrediting bodies such as the Accreditation Council for Continuing Medical Education (ACCME) [Citation7,Citation8]. Moore’s seven-level framework is widely used to measure the impact of CME programmes, and is recommended by a number of consensus papers and best-practice guidance [Citation9]. In this study, we used Moore’s framework to measure changes in knowledge (Level 3) and competence (Level 4).

Assessments of HCP knowledge and competence are typically made using a questionnaire based on the content of the educational activity, and, to the best of our knowledge, the same questionnaire is used for all learners. However, fielding the same questionnaire to learners from multiple specialities can present challenges: it fails to take into account their level of baseline knowledge, and may mask differences in the impact of the education between specialities, making it difficult to elucidate speciality-specific requirements. To address this, questionnaires should be developed that are relevant to specific physician specialities, to ensure that all learners are appropriately assessed. For example, pulmonologists and immunologists may be more familiar with clinical trial data on emerging therapies in severe asthma than primary care physicians.

The aim of this study was to measure the educational impact of a faculty-led, online CME activity in severe asthma, and how this differed between three different physician specialities within the interdisciplinary team caring for patients with severe asthma: primary care physicians, pulmonologists and immunologists. We took a novel, targeted approach to assess the change in knowledge and competence between the specialities by tailoring pre- and post-activity questionnaires to each speciality. We also report on the speciality differences in intention to change clinical practice, confidence in patient management, and unmet educational needs.

Methods

Assessment of Educational Outcomes

In order to evaluate the impact of an interdisciplinary educational activity, we applied Moore’s expanded outcomes framework (Levels 1–4) [Citation7] to one of our CME-accredited touchMDT learning activities. The CME accreditation was provided by University of South Florida (USF) Health, an ACCME-accredited provider of continuing professional development. Level 1 of Moore’s framework assesses participation in an activity, Level 2 assesses participant satisfaction, and Levels 3 and 4 assess knowledge and competence, respectively.

To assess Levels 3 and 4 of Moore’s framework, questionnaires were created by touchIME medical directors and approved for scientific and medical accuracy by the faculty. The questionnaires comprised multiple-choice questions on topics relevant to severe asthma. Each question had four possible answers, only one of which was correct. Three distinct questionnaires were created using this format, each targeted to a predefined speciality (primary care physicians, immunologists and pulmonologists). The question topics for each questionnaire are shown in Table 1.

Table 1. Question topics included in the Moore’s Level 3 and 4 assessment questionnaires.

The questionnaires were fielded before and after participation in the touchMDT educational activity. This allowed us to measure change from baseline in knowledge and competence, and to assess the difference in educational outcomes between the three specialities.

Pre-activity questionnaires were circulated to a predefined target audience selected from a database of 188,281 physicians 1–2 weeks before the activity was launched. The target audience was selected using inclusion criteria based on physician speciality (primary care physicians, immunologists and pulmonologists) and country (Canada, E5 [comprising France, Italy, Germany, Spain and the UK], Japan and the USA). Physicians who did not treat patients with asthma were excluded. Participation was closed once 50 questionnaires (the minimum number for a statistically meaningful assessment) had been completed for each speciality, and the scores were recorded. Physicians who completed the questionnaire pre-activity are hereafter referred to as “respondents”.

The educational activity was then launched and hosted on touchRESPIRATORY.com and made freely available for up to 24 months. The activity, entitled “Strategies for the management of severe type 2 asthma: Expert insight into optimizing care”, comprised three videos (each approximately 10 minutes in duration) featuring a primary care physician, an immunologist and a pulmonologist specialising in severe asthma, and a patient with severe asthma. The activity addressed interdisciplinary team management of severe asthma, including the use of biologic therapies, and the burden of disease. The learning objectives are shown in Table 2.

Table 2. Learning objectives of the activity.

Immediately post-launch, the same questionnaires as those fielded pre-activity were circulated to a different set of physicians until 50 had been completed by learners from each speciality. Physicians had to engage with the activity in order to complete the post-activity questionnaire. The post-activity questionnaire included an additional set of questions to assess participant satisfaction with the activity (Moore’s Level 2; Table 3). The Level 2 questions had to be completed immediately after engaging with the activity and before answering the Level 3 and 4 questionnaires. Physicians who completed the post-activity questionnaire are hereafter referred to as “learners”.

Table 3. Satisfaction statements in the questionnaire, scored using a 1–5 Likert scale.

Responses to the pre- and post-activity questionnaires were captured and collated by an independent third party (nuaxia Limited [Richmond, UK]), who provided touchIME with the data for analysis.

Participation data (Moore’s Level 1) were captured throughout the activity using Google Analytics, along with the duration of participation.

Further Assessments

Questions probing physicians’ confidence (included in both the pre- and post-activity questionnaires) and their intention to change clinical practice (post-activity questionnaire only) allowed us to observe the impact of the education. We also asked about topics for further education in the post-activity questionnaire, to identify remaining gaps in education (Table 4).

Table 4. Questions on intention to change and confidence in clinical practice and remaining educational gaps.

Statistical Analysis

Data were analysed using SPSS Statistics version 28.0.1 (IBM Corp., Armonk, NY, USA). To measure satisfaction (Moore’s Level 2), mean scores were calculated across all physicians for each individual question, and an overall satisfaction score was calculated as the average across all satisfaction fields, with a maximum possible satisfaction score of 5 points out of 5. The average score was also converted to a percentage. For the knowledge and competence (Moore’s Levels 3 and 4) analysis, the mean and median numbers of correct answers were calculated for both the pre- and post-activity data sets, and the results were compared using an independent samples t-test. This was calculated first for the data from all specialities combined, and then for each speciality individually. The questions were analysed first using a paired samples t-test and then by one-way analysis of variance (ANOVA). Individual questions were further analysed using a cluster analysis. The data were analysed further with Moore’s Levels split into 3a, 3b and 4; an independent samples t-test was used for each. No statistical analyses were performed for the question on physicians’ confidence.
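The core comparisons described above can be sketched in code. This is an illustrative sketch only: the study used SPSS, the respondent-level data are not published, so the scores below are synthetic, and `scipy` stands in for the SPSS procedures.

```python
# Illustrative sketch of the pre/post comparison described above.
# All scores are synthetic; scipy stands in for SPSS.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Synthetic per-physician percentage-correct scores (n = 50 per group).
# Pre- and post-activity samples are different physicians, so an
# independent samples t-test (not a paired test) is used here.
pre_scores = rng.normal(loc=74, scale=10, size=50)   # respondents (baseline)
post_scores = rng.normal(loc=78, scale=10, size=50)  # learners (post-activity)

t_stat, p_value = stats.ttest_ind(post_scores, pre_scores)
print(f"all specialities: t = {t_stat:.2f}, p = {p_value:.3f}")

# One-way ANOVA comparing synthetic post-activity scores across the
# three specialities.
pcp = rng.normal(loc=72, scale=10, size=50)   # primary care physicians
pulm = rng.normal(loc=82, scale=10, size=50)  # pulmonologists
immu = rng.normal(loc=81, scale=10, size=50)  # immunologists
f_stat, p_anova = stats.f_oneway(pcp, pulm, immu)
print(f"between specialities: F = {f_stat:.2f}, p = {p_anova:.3f}")
```

The independent samples t-test is appropriate here because, as noted in the Methods, the pre- and post-activity questionnaires were completed by different sets of physicians.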

Results

Level 1: Participation

Data collected to date (5.5 months after the touchMDT educational activity was launched) showed that 8,105 participants had engaged with one or more of the videos, with an overall average participation time of 9 minutes 18 seconds. The activity reached 35 countries, with the largest proportion of physicians based in Italy (26.6%), followed by Portugal (18.7%).

Level 2: Satisfaction

The overall satisfaction score for the activity was 85%, and was similar across specialities. Out of a maximum score of 5.0, respondents gave mean scores of 4.3 for the quality of the activity, 4.3 for meeting learning objectives, 4.1 for the content being free from commercial bias, 4.3 for presenter knowledge and effectiveness, 4.3 for relevance to clinical practice, and 3.9 for the impact on management strategies. These scores were also similar across specialities.
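The Level 2 scoring described in the Methods (per-question means averaged into an overall score, then converted to a percentage) can be sketched as follows. The Likert responses here are hypothetical, not the study’s data, so the resulting percentage differs from the 85% reported above.

```python
# Minimal sketch of the Level 2 satisfaction scoring: per-question means
# on a 1-5 Likert scale, averaged into an overall score and converted to
# a percentage. Responses are hypothetical, not study data.
from statistics import mean

responses = {
    "quality_of_activity": [5, 4, 4, 5, 4],
    "learning_objectives": [4, 5, 4, 4, 5],
    "free_from_bias":      [4, 4, 4, 5, 4],
}

question_means = {q: mean(r) for q, r in responses.items()}
overall = mean(question_means.values())  # average across all fields
overall_pct = overall / 5 * 100          # maximum possible score is 5/5

print(question_means)
print(f"overall: {overall:.2f}/5 ({overall_pct:.0f}%)")
```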

Levels 3 and 4: Knowledge and Competence

Among all three specialities combined, learners showed a statistically significant improvement in the percentage of correct answers after participating in the touchMDT educational activity compared with respondents at baseline (74% at baseline versus 78% post-activity; p = 0.041). By speciality, only primary care physicians showed a significant increase (61% versus 72%; p = 0.015), with no significant change for pulmonologists (81% versus 82%; p = 0.703) or immunologists (79% versus 81%; p = 0.503). The distribution of correct answers among learners and respondents within each speciality is shown in Figure 1.

Figure 1. Summary of overall correct answer responses by speciality, before and after completing the touchMDT activity.

The horizontal red line within each box indicates the median, the “x” symbol represents the mean, the boxes indicate the interquartile ranges, and the vertical lines (whiskers) extend to the range of values, excluding outliers. Outliers are defined as values that fall outside a distance of 1.5 times the interquartile range from the upper and lower quartiles, and are represented by empty circles. Respondents and learners are defined as healthcare professionals who completed the pre- and post-activity questionnaire, respectively.
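The outlier rule used in the figure legend (values beyond 1.5 times the interquartile range from the quartiles, often called Tukey’s fences) can be expressed as a short sketch; the scores below are illustrative, not study data.

```python
# Sketch of the 1.5 * IQR outlier rule described in the figure legend.
# Scores are illustrative, not study data.
import statistics

def tukey_fences(values):
    """Return (lower_fence, upper_fence) per the 1.5 * IQR rule."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartiles
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

scores = [55, 60, 62, 65, 66, 68, 70, 72, 74, 95]
lower, upper = tukey_fences(scores)
outliers = [v for v in scores if v < lower or v > upper]
print(outliers)  # → [95]
```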

Looking at differences between the specialities for Moore’s Level 3, the improvement in Level 3a (declarative knowledge) from baseline to follow-up was greatest among primary care physicians (57% at baseline versus 74% after the activity) compared with pulmonologists (81% versus 95%) and immunologists (89% versus 92%; Figure 2A). The improvement in Level 3b (procedural knowledge) was also greatest among primary care physicians (40% at baseline versus 59% after the activity) compared with pulmonologists (76% versus 69%) and immunologists (75% versus 83%; Figure 2B). Level 4 (competence) changes showed minimal variation by speciality (Figure 2C).

Figure 2. Rate of correct answers by speciality before and after completion of the touchMDT activity, according to Moore’s framework (A) Level 3a, declarative knowledge; (B) Level 3b, procedural knowledge; (C) Level 4, competence.


Intention to Change Practice, Confidence and Remaining Educational Gaps

The proportion of learners who reported that they would change their clinical practice following their participation in the activity was highest among pulmonologists (66%), followed by primary care physicians (58%) and then immunologists (56%; p = 0.563 between groups). The proportion who indicated that they would not change their clinical practice was lowest among primary care physicians (10%), compared with pulmonologists (24%) and immunologists (20%).

Of the pre-activity respondents, 66% of primary care physicians, 80% of pulmonologists and 74% of immunologists reported that they were moderately or extremely confident in the management of patients with type 2 asthma. Following participation in the learning activity, these rates decreased among primary care physicians and pulmonologists to 48% and 76%, respectively, and increased slightly among immunologists to 76%.

Considering the most important unmet educational needs, each speciality selected a different topic. The most popular selections for each speciality group were “understanding emerging biologics with new targets for severe type 2 asthma” for primary care physicians, “decision trees for supporting the clinician’s choice of biologic therapy” for pulmonologists, and “how to apply asthma guidelines in clinical practice, including challenges and resources needed” for immunologists.

Discussion

The pooled results from this analysis showed statistically significant improvements in knowledge and competence among physicians who participated in an interdisciplinary educational activity related to severe asthma. However, using a novel approach of tailoring questionnaires for each speciality, we were able to uncover valuable differences in the educational outcomes.

Primary care physician learners were the only group to demonstrate a significant increase in the total number of questions answered correctly after the activity compared with baseline respondents. Improvements in declarative and procedural knowledge were also greatest among primary care physicians, compared with pulmonologists and immunologists. Baseline declarative and procedural knowledge among pulmonologists and immunologists was high; these specialists answered 75–89% of knowledge-based questions correctly at baseline, compared with 40–57% for primary care physicians. This lower baseline knowledge may account for the greater improvement in learner knowledge scores among primary care physicians compared with pulmonologists and immunologists. We suggest this could be because pulmonologists and immunologists receive more detailed asthma training than primary care physicians, and potentially have fewer guidelines to review and implement. In addition, pulmonologists and immunologists would be expected to have higher levels of severe asthma knowledge than primary care physicians, who must address a broad range of different conditions.

The contrast in baseline knowledge levels between primary care physicians and the pulmonologists and immunologists may also explain the difference in confidence reported before and after the activity. Primary care physicians reported lower confidence levels than pulmonologists and immunologists at baseline and after the activity. Following the activity, the proportion of primary care physicians who reported they were moderately or extremely confident in treating patients with severe asthma decreased. This may suggest an overestimation of their confidence at baseline or increased recognition of the evolving treatment landscape after completing the activity. Despite differences between the specialities in levels of knowledge and confidence post-activity, a similar proportion of learners across each speciality indicated they would change their clinical practice after the activity, and a similar proportion reported that additional education would be needed to facilitate practice changes. Interestingly, immunologists reported more practical limitations to changing their practice than pulmonologists and primary care physicians. We suggest that the practical limitations may reflect country-, clinic- or region-specific access issues, or may suggest gaps in inter- or multi-disciplinary communication to support referrals between secondary and primary care for the optimal management of severe asthma. Further research is required to elucidate this finding.

The differences in unmet educational needs reported by each specialist highlight their specific role in the severe asthma interdisciplinary team. While pulmonologists and immunologists prioritised education to support evidence-based, personalised care in the clinic, primary care physicians valued education that may help HCPs integrate emerging treatments into patient management.

The findings from this activity highlight the value of using targeted assessment of inter- and multi-disciplinary education outcomes between physician specialities; notably, differences in the impact of the educational activity, as well as in unmet educational needs, were identified. While the content of the education itself was not targeted according to specialities, the availability of the activity as three short videos meant that physicians could self-direct their learning within the online platform. Furthermore, healthcare teams could use the findings from the targeted outcomes assessments to tailor future activities to the individual unmet educational needs identified in this study, with themes and topics that are relevant to a single speciality. The educational outcomes could also help address the challenges relating to confidence among primary care physicians in the clinical management of severe asthma. These important insights for subsequent activities would have been masked if the usual approach of a single outcomes questionnaire had been used in the study. One limitation of the study is that the comparisons of change in knowledge or competence between specialities were based on different sets of questions for each speciality, and it is possible that these questions differed in their difficulty for that speciality, despite extensive input from expert faculty within each speciality, and medical writers and directors at touchIME. Another limitation is self-selection bias, whereby those who consider their knowledge lacking in a topic may be more likely to participate in education than those with a greater knowledge base. However, the main objective of this analysis was to compare change between specialities, and it is likely this limitation would have a similar effect on the three specialities included in the activity. 
While aggregated rather than matched data were used for assessing Levels 3 and 4, a previous study of CME outcomes indicated these data are comparable and likely to be sufficiently accurate for many programme evaluation purposes [Citation10]. As with any analysis of this type, subgroup analyses were limited by the sample size (50 respondents pre- and post-activity) and as such, may not be generalisable to a larger population of physicians. In future studies, a larger sample size may allow an increase in the statistical power of subgroup analyses.

Another limitation was that respondents and learners came from North America, Europe and Japan, so could have had different baseline education and clinical experience. To ensure the content of the education and the questionnaires was relevant to all learners, we focused on international guidelines in use across the countries we targeted, and the clinical research included had been published in international journals and presented at international conferences.

This educational activity, using the touchMDT format, improved knowledge among learners from different disciplines within the interdisciplinary team, who had varying base levels of knowledge. This format allows for multiple entry points, the most appropriate of which can be selected by the individual learner.

This analysis showed that measuring educational outcomes with Moore’s framework-based assessments, using an approach tailored to each physician speciality, revealed clear differences in learners’ educational outcomes after completing this touchMDT activity in severe asthma. Understanding how different members of the interdisciplinary team have benefited from an educational activity is essential for designing future educational activities and targeting resources.

Acknowledgments

We would like to thank Professor Ioana Agache, Professor Alberto Papi and Ms JoJo O’Neal for their contributions to the educational content of the touchMDT activity; Sola Neunie, ISMPP CMPP™ for her medical direction of the educational activity and helpful review of the manuscript; the touchIME (touch Independent Medical Education) audience outreach team (Joel Turner, Will Tabraham and Hannah Morton-Fishwick) for collection and analysis of the Level 1 data; Alex Noble from touchIME for statistical analyses; and Joanna MacDiarmid for copy editing.

Disclosure statement

BY reports advisory board or panel fees from Novartis, AstraZeneca, Boehringer Ingelheim, GSK and TEVA; consultancy fees from AstraZeneca and Boehringer Ingelheim; and grant/research support from GSK. BJ, KL, AN, HS and AS are employees of touch Independent Medical Education Ltd.

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

References

  • Chamberlain-Salaun J, Mills J, Usher K, et al. Terminology used to describe health care teams: an integrative review of the literature. J Multidiscip Healthc. 2013;6:65–7.
  • Winters DA, Soukup T, Sevdalis N, et al. The cancer multidisciplinary team meeting: in need of change? History, challenges and future perspectives. BJU Int. 2021;128(3):271–279.
  • Kesson EM, Allardice GM, George WD, et al. Effects of multidisciplinary team working on breast cancer survival: retrospective, comparative, interventional cohort study of 13 722 women. BMJ. 2012;344(apr26 1):e2718.
  • Burke H, Davis J, Evans S, et al. A multidisciplinary team case management approach reduces the burden of frequent asthma admissions. ERJ Open Res. 2016;2. DOI:10.1183/23120541.00039-2016
  • Ahmed K, Wang TT, Ashrafian H, et al. The effectiveness of continuing medical education for specialist recertification. Can Urol Assoc J. 2013;7(7–8):266–272.
  • The Health Foundation. Can patients be teachers? Involving patients and service users in healthcare professionals’ education. Oct 2011 [cited 11 Jul 2022]. Available from: https://www.health.org.uk/sites/default/files/CanPatientsBeTeachers.pdf
  • Moore DE Jr, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29(1):1–15.
  • Accreditation Council for Continuing Medical Education. The ACCME accreditation requirements. 2021 [cited 10 Jul 2022]. Available from: https://www.accme.org/sites/default/files/2021-12/626_20211221_Accreditation_Requirements.pdf
  • Bannister J, Neve M, Kolanko C. Increased educational reach through a microlearning approach: can higher participation translate to improved outcomes? J Eur CME. 2020;9(1):1834761.
  • Fagerlie SR, Heintz AA, Haas M, et al. A comparison of matched and aggregated group outcomes data for evaluating continuing education of hematology and oncology health care professionals. J Contin Educ Health Prof. 2014;34(1):S23–9.