
Increased Educational Reach through a Microlearning Approach: Can Higher Participation Translate to Improved Outcomes?

Article: 1834761 | Received 09 Sep 2020, Accepted 05 Oct 2020, Published online: 20 Oct 2020

ABSTRACT

The quality of continuing medical education (CME) is frequently measured using Moore’s framework of outcome levels, with higher-level outcomes (Level 5 and above) perceived as more valuable than lower-level outcomes (such as Level 3, knowledge). Higher-level outcomes require more rigorous evaluation, increasing the time demands of an interaction; however, adult learners show a growing preference for shorter, more informal education such as microlearning, which allows for greater reach but prevents evaluation at higher outcome levels. We explored the utility of combining microlearning with more traditional eLearning formats (a “microlearning programme”) to increase participation while retaining the ability to measure knowledge- and competence-level outcomes. Compared with two similar programmes run previously (“comparator programmes”), the microlearning programme was associated with a slight improvement in completion of evaluation activities. However, the considerable reach that microlearning affords presents a clear need to bridge the gap between participation and evaluation. Considering these two cases, we conclude that future microlearning initiatives should incorporate evaluation at the point of education, combining microlearning with microevaluation to drive knowledge gain in a form that is measurable in terms of educational outcomes.


Introduction

Since the publication of Moore and colleagues’ proposed framework for outcomes evaluation in continuing medical education (CME) [1], “Moore’s Levels” have become a vital aspect of instructional design across CME programmes. Recognising the wide utility of this framework in measuring the outcomes of an educational programme, Moore’s Levels have been recommended in numerous consensus papers and best-practice recommendations, including the Good CME Practice Group’s position paper on guiding principles for medical education and the International Council of Ophthalmology guide to effective CPD/CME [2,3].

This has led to the current state of the CME industry, in which Moore’s Levels are used ubiquitously and detailed outcome plans are a prerequisite for most grant applications to support the development of CME programmes. However, there is also a perception that higher Moore’s Level ratings equate to higher-quality education; programmes that demonstrate higher-level outcomes are seen as intrinsically more valuable than those that deliver lower-level outcomes [4]. This presents a particular challenge for CME providers, because programmes must be designed and disseminated so that the education both reaches the largest number of relevant learners and can have its educational value measured (which typically requires a time-consuming evaluation step).

Alongside this preference for higher-level learning among CME professionals and financial supporters, there has been an observed change in the preferred learning habits of adult learners. Social, economic, and technological developments over the last decade have led to a preference for smaller, incremental educational interactions (e.g. 5–15 minutes in length) rather than the traditional format of longer, formal educational activities [5,6]. This so-called “microlearning” is characterised by a dynamic, flexible structure that allows learners to explore topics at an individualised pace [5], and has been associated with greater effects on knowledge-based outcomes than formalised, long-form interactions [6].

There are a number of opportunities to capitalise on this preference for microlearning. For example, social media is now routinely used by a notable proportion of the world’s population, and its use has been associated with behavioural changes in areas such as civic engagement and political participation [7–9]. Developing medical education for delivery within social media platforms is an intriguing opportunity that is increasingly being taken up [9]; however, to date, these channels have not offered an intrinsic method of measuring outcomes beyond Level 1 (participation) and potentially Level 2 (satisfaction).

A blended approach may therefore be a solution, with the combination of formal education and informal microlearning potentially bridging the gap between reach and evaluability. We tested this hypothesis by designing a microlearning accompaniment to two of our recent, more conventional educational initiatives, disseminating the education through the wider distribution channels of social media with a URL linking back to the full programme, including its evaluation form. In theory, this approach would both increase the overall impact of the educational initiative and increase the number of learners responding to outcome evaluation forms.

The aim of this case analysis was to compare the outcomes of two recent microlearning programmes (each including links to the full educational activity) against two eLearning programmes without microlearning components. In doing so, we sought to determine whether the microlearning component contributed to increased participation in the full programme and completion of outcome evaluation forms, and whether outcomes differed between microlearning plus eLearning and eLearning alone.

Methods

Educational Content Development

Two programmes were selected for this trial of microlearning: 1) a set of three 15-minute, interactive, case-based online modules (“case study clinics”) and 2) a 15-episode series of 15-minute audio clips (“podcasts”). Each programme followed best-practice approaches to CME design: subject matter experts were selected to chair individual modules and worked alongside an experienced instructional designer to develop content responding to previously identified educational needs and the learning objectives subsequently specified from them.

The case study clinics were submitted for CME accreditation through an ACCME-accredited provider and were subsequently recognised as worth 0.25 AMA PRA Category 1 Credits each upon completion.

The podcast episodes were not submitted for accreditation, but followed the same accreditation criteria (disclosure of conflicts of interest, clear pre-activity information identifying target audience and funding statement, etc.).

Both programmes were hosted on a dedicated online resource for health-care professionals that specialised in the relevant therapy area (individualising treatment in type 2 diabetes).

Microlearning Component

Educational segments of the case study clinics and podcasts were identified and converted into 30–180-second media clips to be posted on social media as standalone pieces of microlearning. These clips were accompanied by a brief written description of the education and a call to action for learners to follow a URL to complete the full programme (either a 15-minute case study clinic or a 15-minute podcast) and earn CME credits (where relevant).

Each microlearning unit was distributed through the social media platforms Twitter and LinkedIn over three months, at a rate of two posts per platform per week. Using each site’s individual promotion settings, the posts were targeted specifically to users who matched the desired criteria: an interest in healthcare, an interest in diabetes, a job title matching the target audience (health-care professionals, ideally working in the relevant therapy area), and residence in a target country (France, Germany, Italy, Spain, the UK, or the USA).

In addition, the podcast series was disseminated through a third-party provider, which distributed the relevant RSS feeds to eight common podcast applications: Anchor, Apple Podcasts, Breaker, Google Podcasts, Overcast, Pocket Casts, RadioPublic and Spotify.

Comparator Educational Content

To allow comparison of the outcomes collected in these programmes, similar programmes run previously were identified based on the following criteria: subject matter/topic area, format, duration, and posting within the previous 12 months. Two comparator programmes were identified: 1) another set of case study clinics released earlier in 2019 (“CSC-2019”) and 2) a set of eight 15-minute plenary-style videos with accompanying multiple-choice questions (“digital symposia”). Both CSC-2019 and the digital symposia were designed to be completed in a single sitting of 15–60 minutes; as such, they were not considered examples of microlearning for this analysis.

Evaluation of Programmes

Both the case study clinics and the podcasts included an evaluation form and a commitment-to-change form hosted alongside the content on the main eLearning website, which were designed to measure outcomes up to Level 4 (competence). In addition, the case study clinics featured multiple-choice pre- and post-assessments, which allowed for measurement of Level 3 (knowledge).

Outcome Measures

eLearning participants were defined as individual users who completed part or all of an eLearning module (case study clinics, podcasts, CSC-2019, or digital symposia) on the host website. Only individuals who completed the full module were eligible to participate in evaluation activities; these comprised a post-assessment, a feedback form, and a commitment-to-change activity, and evaluated the effect of the education up to self-reported Moore’s Level 4 (competence).

Microlearning participants were defined as individuals who watched or listened to part or all of a social media clip or podcast episode, regardless of whether they subsequently followed the URL through to the full programme on the host site.

The effect of microlearning on learning outcomes was evaluated by comparing the total number of eLearning participants and the proportion of learners completing evaluation forms between the two groups: microlearning programmes (case study clinics plus podcasts) versus the comparator programmes (previous case study clinics plus digital symposia).

Satisfaction was measured within the post-activity evaluation form using the question “How satisfied were you with the program?” on a 5-point Likert scale (where 1 represented complete dissatisfaction and 5 represented complete satisfaction). Knowledge gain was measured as the improvement in score between pre- and post-assessment. Competence improvement was measured as the proportion of learners committing to the best practice statement accompanying each activity.
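As an illustration of how these three measures could be computed from raw response data, a minimal sketch follows. All variable names and inputs are hypothetical, and it is not the scoring code used for these programmes; in particular, “% increase in score” for knowledge gain is interpreted here as the difference in mean percentage scores, which is one plausible reading of the description in Table 1.

```python
# Minimal, illustrative sketch of the three outcome measures described above.
# All data and field names are hypothetical.

def mean_satisfaction(likert_responses):
    """Mean rating on the 1-5 Likert satisfaction scale."""
    return sum(likert_responses) / len(likert_responses)

def knowledge_gain(pre_scores, post_scores):
    """Percentage-point improvement between mean pre- and post-assessment
    scores (scores expressed as % correct) -- an assumed interpretation
    of '% increase in score'."""
    mean_pre = sum(pre_scores) / len(pre_scores)
    mean_post = sum(post_scores) / len(post_scores)
    return mean_post - mean_pre

def competence_improvement(commitments):
    """Proportion (%) of learners committing to the best practice statement."""
    return 100 * sum(1 for c in commitments if c) / len(commitments)

# Hypothetical example inputs
print(mean_satisfaction([4, 5, 3, 4]))              # -> 4.0
print(knowledge_gain([50, 60], [75, 85]))           # -> 25.0 percentage points
print(competence_improvement([True, True, False]))  # -> 66.66...
```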

Results

The social media microlearning component reached 71,968 unique individuals over the three-month period. Of these, 18,341 were learners (i.e. participated in the microlearning component), and 292 clicked through to the eLearning platform. A further 3,536 individuals listened to a podcast via the RSS distribution network, giving a pooled total of 21,877 microlearning participants. The eLearning component of the microlearning programmes received 185 participants (69 podcast, 116 case study clinic), of whom 64 (21 podcast, 43 case study clinic) went on to complete at least one evaluation activity.

The comparator programmes received 136 participants (49 digital symposia, 87 CSC-2019), and 24 (9 digital symposia, 15 CSC-2019) went on to complete at least one evaluation activity.

Evaluation completion rates were therefore 34.6% (64/185) among eLearning participants in the microlearning programmes, compared with 17.6% (24/136) in the comparator programmes.
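The pooled total and completion rates can be reproduced directly from the participant counts reported above; the short sketch below is purely illustrative of the arithmetic.

```python
# Reproducing the participation funnel and evaluation completion rates
# from the figures reported in the Results text.

social_media_learners = 18_341  # watched a clip on Twitter or LinkedIn
rss_podcast_listeners = 3_536   # listened via a podcast application
microlearning_total = social_media_learners + rss_podcast_listeners
print(microlearning_total)      # -> 21877

micro_elearning, micro_evals = 185, 64  # microlearning programmes
comp_elearning, comp_evals = 136, 24    # comparator programmes

print(f"{100 * micro_evals / micro_elearning:.1f}%")  # -> 34.6%
print(f"{100 * comp_evals / comp_elearning:.1f}%")    # -> 17.6%
```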

Educational outcomes for both groups (up to Moore’s Level 4) are summarised in Table 1.

Table 1. Educational outcomes observed across both pairs of eLearning programmes. Satisfaction was measured using a 5-point Likert scale, with the score shown representing the mean satisfaction rating. Knowledge gain was measured as the % increase in score between pre- and post-assessment, each of which consisted of a multiple-choice questionnaire. Competence was measured as the % of learners identifying and committing to implement a change in practice based on the education.

Discussion

Examining the two groups (microlearning programmes and comparator programmes), there does appear to be a relationship between the increased reach of microlearning and the increase in evaluation form completion observed in this study. However, this misses a much more important lesson from this initiative: the microlearning format saw up to a 100-fold increase in participation compared with traditional eLearning (social media versus case study clinics). Looking at the individual components, this was most apparent with the podcasts: for each learner who accessed an episode through the traditional eLearning website, 51 listened through a dedicated podcast app. So although we did observe an increase in the number of people completing an evaluation form, as many as 99.2% of the total educational participants reached through the microlearning approach did not evaluate the programme.

This presents an interesting conundrum for CME providers as we adapt to the opportunities afforded by modern learning styles. Microlearning clearly increases the reach of a target population and, in doing so, may allow for greater knowledge gain; distributing education through microlearning channels can therefore increase the theoretical benefit of an educational programme. However, unless these learners complete a suitable outcome evaluation activity, this benefit cannot be determined using traditional outcome measurements.

One possible solution could be a form of “microevaluation”. For example, posing a single, short question before and after a social media clip would allow for a measurement of knowledge gain (or, if implemented appropriately, competence improvement). In this theoretical approach, the educational interaction would remain brief and convenient for the learner while allowing more in-depth analysis of the education’s effectiveness in terms of Level 3 or Level 4 outcomes. Unfortunately, this functionality is not currently available within social media or similar distribution platforms; however, the authors are developing alternative solutions to combine microlearning with microevaluation in future programmes.
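To make the idea concrete, the sketch below models one hypothetical microevaluation unit: a single knowledge question asked before and after a clip, with aggregate knowledge gain computed as the change in the proportion of learners answering correctly. This is purely illustrative of the proposal; as noted above, no such mechanism currently exists on the platforms discussed.

```python
from dataclasses import dataclass

@dataclass
class MicroEvalResponse:
    """One learner's pre/post answers to the single question attached
    to a microlearning clip (hypothetical structure)."""
    learner_id: str
    pre_correct: bool
    post_correct: bool

def clip_knowledge_gain(responses):
    """Change in % of learners answering correctly, pre vs post."""
    n = len(responses)
    pre = sum(r.pre_correct for r in responses)
    post = sum(r.post_correct for r in responses)
    return 100 * (post - pre) / n

# Hypothetical responses collected around one clip
responses = [
    MicroEvalResponse("a", pre_correct=False, post_correct=True),
    MicroEvalResponse("b", pre_correct=False, post_correct=True),
    MicroEvalResponse("c", pre_correct=True,  post_correct=True),
    MicroEvalResponse("d", pre_correct=False, post_correct=False),
]
print(f"Knowledge gain: {clip_knowledge_gain(responses):.0f} percentage points")  # -> 50
```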

Another point for consideration is the limitation of microlearning in measuring outcomes at Level 5 or above. While the above describes a possible solution for knowledge- or competence-level outcome measurement, formal evaluation of performance, patient health, or community health improvement (as a result of the educational intervention) requires far more rigorous evaluation than a 5–15-minute interaction allows. Microlearning is therefore inherently limited in how high an outcome on Moore’s scale it can demonstrate. Given the perception that higher-level outcomes are indicative of higher-quality education, what role can microlearning play in CME?

Microlearning can have a clear advantage within a comprehensive instructional design strategy for addressing identified needs. As outlined in the Good CME Practice Group’s position statement on guiding principles, educational activities should be designed to be “appropriate”, addressing specific objectives derived from a coherent and objective needs assessment process [2]. With this in mind, there is a wide range of educational needs for which a change in knowledge or competence is more appropriate than changes in practice or patient health; for example, increasing awareness of a novel oncology therapeutic and its potential applications before the agent has received an approved indication. Such an agent may be the first new treatment option in a particular cancer type for many years, so there is a legitimate educational need for clinicians to understand its safety and efficacy data, and hence its potential role in patient care following approval. This makes knowledge- or competence-level learning objectives appropriate. Performance-level objectives, however, would be inappropriate, as the agent has not been approved and practice cannot change until it becomes available.

These kinds of specific educational needs are where microlearning can play a role as a particularly novel and effective tool, either as a standalone programme or as a component of a wider initiative designed to address higher-level outcomes. The latter is of particular interest: in the same way that blended learning experiences have grown over the last decade, we can expect to see programmes that use a microlearning component to address knowledge-level objectives while other components simultaneously address higher-level needs.

Conclusion

In this case analysis of a combined microlearning and traditional eLearning approach, we observed a trend towards increased learner participation carrying through to evaluation completion compared with similar previous educational programmes. However, a substantial proportion of learners (65.4%) did not complete an evaluation, limiting the scope of outcome evaluation for the programme as a whole. Future microlearning approaches should be designed so that outcome evaluation occurs at the point of education, allowing more reliable analysis of educational effectiveness and impact on knowledge or competence.

Acknowledgments

The authors would like to acknowledge Novo Nordisk A/S and Merck Sharp & Dohme Corp. for the unrestricted educational grants provided to produce the programmes explored in this article. No employee of either company was involved in this article, but it would not have been possible without their original financial assistance.

Disclosure Statement

The authors report no relevant conflicts of interest for this publication.

References

1. Moore DE, Green JS, Gallis HA, et al. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29(1):1–15.
2. Farrow S, Gillgrass D, Pearlstone A, et al. Setting CME standards in Europe: guiding principles for medical education. Curr Med Res Opin. 2012;28(11):1861–1871.
3. Filipe HP, Mack HG, Golnik KC, et al. Continuing professional development: progress beyond continuing medical education. Ann Eye Sci. 2017;2(7):46.
4. Stevenson R, Moore DE. Ascent to the summit of the CME pyramid. JAMA. 2018;319(6):543–544.
5. Buchem I, Hamelmann H. Microlearning: a strategy for ongoing professional development. eLearning Papers. 2010;21(7):1–15.
6. Giurgiu L. Microlearning: an evolving elearning trend. Sci Bull. 2017;22(1):18–23.
7. Boulianne S. Social media use and participation: a meta-analysis of current research. Inf Commun Soc. 2015;18(5):524–538.
8. Gil de Zúñiga H, Jung N, Valenzuela S, et al. Social media use for news and individuals’ social capital, civic engagement and political participation. J Comput Mediat Commun. 2012;17(3):319–336.
9. Cheston CC, Flickinger TE, Chisolm MS, et al. Social media use in medical education: a systematic review. Acad Med. 2013;88(6):893–901.