Research Article

Net Promoter Score (NPS): What Does Net Promoter Score Offer in the Evaluation of Continuing Medical Education?

Article: 2152941 | Received 18 Apr 2022, Accepted 24 Nov 2022, Published online: 29 Nov 2022

ABSTRACT

Net Promoter Score (NPS) has been used as a measure of customer satisfaction in many fields, such as software, clinical care, and websites, since 2003. With a single question, the NPS methodology is thought to determine brand loyalty and intent to act based on experiences with the brand or product. In the current study, accredited continuing medical education or continuing education (CME/CE) was the product. Providers of CME have utilised the NPS rating (the individual score on a scale of 0 to 10) to collect data about the value of a clinician's experience with CME activities, but there has been no research examining what it is actually associated with. This study sought to understand what NPS at the activity level indicates relative to other self-reported and assessment-based outcomes in CME. Across 155 online CME programmes (29,696 target-audience learners with complete data), potential outcomes of CME were examined as predictors of NPS, including whether knowledge or competence improved on assessment, mean post-confidence rating, and whether one intended practice changes and was committed to those changes. NPS is unique in that it cannot be calculated at the individual level; individual scores must be aggregated, and the percentage who selected ratings of 0 to 5 is then subtracted from the percentage who selected 9 or 10. Results showed that the percentage of learners who are committed to change predicts 70% of the variance in NPS, which suggests NPS is a valid indicator of intention to act. These results have implications for how we might, as a field, incorporate a single standardised question to examine the potential impact of online CME, and they call for additional research on whether NPS predicts change in clinical practice.

Introduction

Evaluation of accredited continuing medical education or continuing education (CME/CE) activities is how we understand whether a programme is effective in meeting its learning objectives. Evaluation also allows programme developers to understand how to improve future educational offerings. One outcomes framework commonly used in CME/CE outlines potential outcomes of CME/CE that include participation, satisfaction, knowledge, competence, performance, patient health, and community health, referred to as Levels 1–7, respectively [Citation1]. The majority of evaluations conducted on CME/CE activities assess Levels 1–4 of Moore's Outcomes Framework (1. participation, 2. satisfaction, 3. knowledge, 4. competence) [Citation1]. Most data collection methods for satisfaction, knowledge, and competence are self-report surveys and knowledge- and competence-based assessments.

There has been very little validation of the survey-based and assessment tools commonly used in CME/CE in the published literature, and reporting of validity is uncommon [Citation2]. For the purposes of this study, measurement validity means the extent to which theory and evidence support the intended interpretation of the measure. A contemporary approach to validity was used, which consists of content validity (the assessment captures the construct as intended), response process (minimisation of error in data collection), internal structure (items, if applicable, "hang together" to represent one construct), criterion validity (the assessment predicts outcome measures), and known-group validity (the assessment correlates with related measures; herein "correlational validity") [Citation2].

In their review, Ratanawongsa and colleagues [Citation2] found that only 47 of 136 published articles evaluating CME/CE included at least one method with validity and/or reliability reported, and Hoover and colleagues [Citation3] found that 59 of 198 published articles described both validity and reliability, mostly reporting content validity. Overall, there is little understanding of correlational validity via statistical methods – do the evaluation and outcomes questions that have a conceptual association also have a statistical relationship? Evaluators may assume that if satisfaction is present, one gained knowledge or competence from the CME/CE, but little evidence exists to support that assumption. Furthermore, evaluators in CME/CE examine self-efficacy to enact practices (herein referred to as confidence) and intention to act, but little is known about how these outcomes are associated with other outcomes of participation in CME/CE. There is some evidence to suggest that they may be mechanisms through which knowledge and competence lead to practice change [Citation4–6]. Although Lucero and Chen found that knowledge/competence improvement and reinforcement predict change in confidence and that confidence post-CME/CE predicts commitment to change [Citation4], few published studies have examined the relationships among confidence assessments, intention-to-act measures, and knowledge and competence assessments, so little evidence of validity exists.

Net Promoter Score (NPS) is used in many fields. It has been widely used for software, websites, shopping experiences, patient experience, and more since 2003 [Citation7–9]. The single question – "How likely are you to recommend X to a friend [or colleague]?" – is rated from 0 (not at all likely) to 10 (extremely likely). The responses are put into categories of promoters (9 and 10), passives (6, 7, and 8), and detractors (0 to 5). Promoters are thought to be "loyal" and will speak positively about the company, service, etc.; detractors will speak unfavourably, and passives are thought to do neither [Citation7,Citation8]. After the percentage in each category is calculated, the percentage of detractors is subtracted from the percentage of promoters to get the NPS.
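To make the arithmetic concrete, here is a minimal sketch of the calculation in Python (the study's own analysis used SAS; this function is purely illustrative):

```python
from typing import Iterable

def net_promoter_score(ratings: Iterable[int]) -> float:
    """Compute NPS from individual 0-10 'likelihood to recommend' ratings.

    Promoters rate 9-10 and detractors 0-5; passives (6-8) count in the
    denominator but cancel out of the numerator.
    """
    ratings = list(ratings)
    if not ratings:
        raise ValueError("at least one rating is required")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 5)
    return 100.0 * (promoters - detractors) / len(ratings)

# 6 promoters, 2 passives, 2 detractors out of 10 raters -> NPS = 40
print(net_promoter_score([10, 10, 9, 9, 9, 9, 7, 6, 5, 3]))
```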

NPS is potentially attractive for use in CME/CE because it is a single question with a standard scale, and implementation, analysis, and reporting are straightforward. It can then serve as a benchmark for the field when assessing the value of the experience at an aggregate level for a CME/CE activity, a type of activity, or a single medical education provider. In these ways, NPS could help move the field forward to better understand what works best for whom and when. It also provides a potential standardised measure for Level 2 (satisfaction) of Moore's Outcomes Framework [Citation1], which is collected in programme evaluations but is typically skewed towards high scores. However, NPS does come with some limitations in the context of measuring brand performance, which have been succinctly summarised by Fisher and Kordupleski [Citation7]:

  1. It is only a single rating that provides no information on what can be done to improve

  2. It focuses on keeping customers and not winning new ones

  3. Conceptually, there is no such thing as a “passive” customer

  4. It is not relative to other offerings (no competitive data)

  5. It is internally focused rather than externally focused (focuses on keeping loyal customers vs giving customers – existing and potential – what they want/need)

Some of these limitations (numbers 3 and 4 above) do not apply when thinking about NPS as a way to understand clinician learners' experience with a CME/CE activity. For #3, when considering NPS for a specific CME/CE activity, a detractor may be someone who felt the activity was a waste of time, a passive participant may have felt the activity was worthwhile but has no intention to enact what was learned, and a promoter is someone who intends to do something with the information. The "passive" rater in CME/CE is thus one who may not act, so there may be a truly "passive" consumer of CME/CE; this possibility should be investigated but is beyond the scope of this study. For #4, it is not as essential to understand the NPS at the time of rating relative to other CME/CE activities one has accessed, because the learning has already occurred. It may, however, be useful to compare the aggregated NPS of one learning format to another (e.g. text with case examples vs. text without case examples).

Other limitations may be present for CME/CE (numbers 1, 2, and 5). NPS in CME/CE does have the limitation of being a single rating when collected alone (#1), and other information is necessary to understand what can be improved. In addition, other outcome levels from Moore's Outcomes Framework [Citation1] should be assessed according to the learning objectives. For #2, CME/CE is focused on providing education to target audiences, but what if someone in the target audience does not want to attend? That remains a limitation for CME/CE: NPS alone cannot be used to understand how to improve the chances of a clinician participating in an activity. Finally, NPS for CME/CE activities is more about understanding an outcome of learning from activities that were presumptively built with the intent of giving clinicians what they want and/or need (employing the needs assessments that accompany accredited CME/CE by definition, according to the Accreditation Council for Continuing Medical Education [Citation10] and European Accreditation Council for Continuing Medical Education [Citation11] accreditation standards) and promoted to those who may want and need it most (#2 and #5). To some extent, using NPS in CME/CE does exclude potential participants just as it does in other fields (#5). However, other data can be collected to understand what the target population needs, and this is usually done in the needs assessment process. Despite the limitation of being a single rating without information on what could be improved (#1), NPS may offer a way to understand how valuable a group of participants views their experience with a CME/CE activity. To understand further what this single rating may be associated with, correlational validity was assessed against common outcomes measured in CME/CE evaluation.

In CME/CE evaluation and for the current study, value is equated with how much one has learned, how efficacious one feels in an area of practice, or how much one intends to do something differently in practice. To date, the author could locate only one published study that used NPS for online education, specifically for massive open online courses (MOOCs), although there have been unpublished, sporadic observations of its use in CME/CE. Despite a low course completion rate of 6.8%, 93% of MOOC participants rated themselves as extremely likely to recommend a course on Coursera to a friend or colleague, with an NPS of 56 [Citation12]. Of note, that study did not use the standard 0 to 10 scale but a 5-point scale with each point labelled. The current study focuses on NPS in online, voluntary CME/CE. Transactional, or programme-based, NPS surveys were deployed at the end of each online CME/CE programme.

The objective of this study was to understand which common outcomes in CME/CE are correlated with NPS. Conducting this study may offer some understanding of what NPS indicates in the context of a CME/CE activity rather than a brand. Achieving this goal entails assessing the correlational validity of NPS, that is, its observable relationship with other conceptually similar measures [Citation2,Citation13], by examining its relationships with other variables indicative of the value of one's experience after participation in CME/CE.

Materials and Methods

Sample

One hundred and fifty-five online CME/CE programmes that launched between 1 November 2020 and 28 February 2021 on an online continuing medical education platform were utilised in the analysis. All CME/CE programmes in the sample were text- or video-based, ranging from 15 to 60 minutes in length. All activities were developed through the same process of gap analysis, needs assessment, identification of desired outcomes, translation of outcomes to learning objectives, and then development of content to meet those learning objectives. All activities contained pre- and post-assessment questions and the same set of evaluation questions. Data were pulled on 8 April 2021. A total of 323,381 learners in the primary target audience viewed the content of the activities. Of these, 59,235 physician, nurse, and pharmacist learners in the target audience responded to the pre- and post-assessment and/or evaluation measures, and 29,696 had complete data for this analysis; these learners had to be part of the target audience for the individual programmes (i.e. if cardiologists were not a target audience for a specific programme, they were excluded for that programme). The final sample consisted of nurses/advanced practice nurses (20%), pharmacists (5%), and physicians (75%) representing over 29 specialities. Profession representation differed by less than two percentage points per profession between the missing-data sample and the complete-data sample. Of the participants, 21% had complete data for more than one activity.

Human Subjects Protection

This study was exempt from institutional review board approval because it used deidentified data and is not considered human subjects research under 45 CFR 46.102(e)(1)(ii) and 45 CFR 46.102(e)(4)–(6), US Department of Health and Human Services [https://www.hhs.gov/ohrp/regulations-and-policy/decision-charts-2018/index.html#c1].

Measures

Improved knowledge/competence: each activity contained three knowledge/competence questions asked pre and post; the questions were multiple choice (declarative and procedural for knowledge, and case-based with at least two steps to arrive at the best answer for competence). Activities had all knowledge, all competence, or a mixture of knowledge and competence learning objectives; therefore, the questions were combined for a total score. Each learner's responses to the knowledge and competence assessment questions pre and post were examined to create a score of the number of improvements (i.e. getting a question correct at post but not at pre). If the number of improvements was greater than 0, the learner was considered to have improved their knowledge/competence. Note that a learner who answered a question correctly at pre and incorrectly at post was still considered improved as long as they improved on at least one question.
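A minimal sketch of this scoring rule follows; the data layout and question identifiers are illustrative assumptions, not the study's actual schema:

```python
def improved_knowledge_competence(pre: dict[str, bool],
                                  post: dict[str, bool]) -> bool:
    """True if the learner answered at least one question correctly at
    post that they missed at pre; a regression on another item does not
    cancel an improvement."""
    improvements = sum(1 for q in pre if not pre[q] and post.get(q, False))
    return improvements > 0

# Hypothetical learner: Q1 regresses, Q2 improves -> still "improved"
pre = {"Q1": True, "Q2": False, "Q3": False}
post = {"Q1": False, "Q2": True, "Q3": False}
print(improved_knowledge_competence(pre, post))  # True
```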

Confident post-CME/CE: a Likert-type confidence question was asked at the end of each programme. The stem was "How confident are you right now in …" followed by a global topic (e.g. your ability to make treatment recommendations for patients with severe COVID-19). Learners rated their confidence on a scale of 1 (not at all confident) to 5 (very confident). Those who chose a rating of 4 or 5 were considered "confident".

Commitment to change: at the end of each programme, learners could select up to seven practice changes (including "other", with an option to write in a practice change), which were defined by the content developers. A Likert-type commitment-to-change question followed ("What is your level of commitment to making the changes stated in the previous question?"). Learners rated their commitment to the intended behaviour changes as a whole, as a result of the education, on a scale of 1 (not committed) to 4 (very committed). Those who chose a rating of 3 or 4 were considered "committed" to change.

NPS: assessed by asking at the end of each programme, "How likely are you to recommend this activity to a friend or colleague?" Learners rated on a scale of 0 (not at all likely) to 10 (extremely likely). This individual selection is called the NPS rating: a promoter is a rating of 9 or 10, a passive is a rating of 6, 7, or 8, and a detractor is a rating of 0 to 5. The NPS itself is calculated at the activity level by subtracting the percentage of detractors from the percentage of promoters (equivalently, the difference in proportions multiplied by 100), which yields a single NPS for each activity.
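Putting the four measures together, the following sketch shows how the activity-level variables used in the analysis could be derived from per-learner records; the column names and data are illustrative assumptions, not the study's actual schema:

```python
import pandas as pd

# Hypothetical per-learner records for two activities
df = pd.DataFrame({
    "activity_id": [1, 1, 1, 1, 2, 2, 2, 2],
    "improved":    [1, 0, 1, 1, 0, 1, 0, 0],    # improvement flag (0/1)
    "confidence":  [5, 4, 3, 5, 2, 4, 3, 3],    # post rating, 1-5
    "commitment":  [4, 3, 2, 4, 1, 3, 2, 2],    # post rating, 1-4
    "nps_rating":  [10, 9, 7, 10, 3, 9, 6, 5],  # recommendation, 0-10
})

activity = df.groupby("activity_id").agg(
    pct_improved=("improved", "mean"),
    pct_confident=("confidence", lambda s: (s >= 4).mean()),  # 4 or 5
    pct_committed=("commitment", lambda s: (s >= 3).mean()),  # 3 or 4
    nps=("nps_rating", lambda s: 100 * ((s >= 9).mean() - (s <= 5).mean())),
)
print(activity)
```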

Analysis

Three potential outcomes of learning after participation in CME/CE were examined: the percentage who improved knowledge/competence, the percentage who were confident, and the percentage who were committed to practice change. Their prediction of NPS was modelled using linear regression with PROC REG in SAS 9.2. Models were built to predict activity-level NPS (the dependent variable) in a step-wise approach: the learning outcome that the exploratory analysis found to be most impactful was added first, followed by subsequent predictors. R² was calculated for each model to examine the variance explained by each. Standardised coefficients were examined to assess the magnitude of impact of each variable on NPS. Prior to conducting the regression analysis, exploratory cross-tabulations were calculated to examine the magnitude of the relationship between the dependent and independent variables. Of note, gaining confidence from pre- to post-CME/CE was tested but was not significant in any of the models.
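For readers working in open-source tooling, a rough Python analogue of the step-wise model build described above is sketched below (the study used PROC REG in SAS 9.2; here statsmodels OLS stands in, and the data are synthetic placeholders):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic activity-level data standing in for the 155 programmes
rng = np.random.default_rng(0)
n = 155
d = pd.DataFrame({
    "pct_committed": rng.uniform(40, 90, n),  # % committed per activity
    "pct_confident": rng.uniform(50, 95, n),  # % confident per activity
})
# Synthetic NPS loosely tied to commitment, as the study reports
d["nps"] = -10 + 0.88 * d["pct_committed"] + rng.normal(0, 8, n)

# Model 1: commitment alone; Model 2: add confidence as a second predictor
m1 = sm.OLS(d["nps"], sm.add_constant(d[["pct_committed"]])).fit()
m2 = sm.OLS(d["nps"], sm.add_constant(d[["pct_committed", "pct_confident"]])).fit()
print(m1.rsquared, m2.rsquared)  # variance explained at each step
```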

Results

The 155 online CME/CE programmes generated an average NPS of 52 (minimum 11, maximum 100). Cross-tabulations showed that post-assessment score was positively associated with NPS category (i.e. detractor, passive, promoter), and that learners who improved their knowledge or competence (vs. did not improve; mean individual NPS rating 8.85 vs. 8.28), were confident after the education (vs. were not confident; 8.51 vs. 8.44), or were committed to intended practice changes (vs. were not committed; 8.81 vs. 7.03) had higher mean individual NPS ratings.

Linear regression models confirmed that commitment to change was the best predictor of NPS (Table 1). When this is translated to the full calculation of NPS (promoters minus detractors) for an activity, a regression model at the activity level showed that the percentage of committed learners (rating of committed or very committed) significantly predicted the NPS for the activity, controlling for the percentage who were confident (rating of mostly or very confident). Alone, the percentage of committed learners predicts 70% of the variance in NPS (Table 1). Translated to NPS, when 50% of learners are committed to practice change, the predicted NPS is 34; if 67% of learners are committed to practice change, the predicted NPS rises to 49 (Table 2).
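As a back-of-envelope check, assuming the activity-level model is linear in the percentage committed and using only the two predictions quoted above (not the published coefficients), the implied slope and intercept are approximately

\[
b \approx \frac{49 - 34}{67 - 50} = \frac{15}{17} \approx 0.88,
\qquad
a \approx 34 - 0.88 \times 50 \approx -10,
\]

so predicted NPS ≈ −10 + 0.88 × (% committed): under these assumptions, each additional percentage point of committed learners raises the predicted activity NPS by just under one point.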

Table 1. Linear regression model for Net Promoter Score (NPS) – activity level.

Table 2. Predicted activity Net Promoter Score (NPS) by percentage of learners committed to change.

Conclusions

Implications

The results of this study provide evidence for the correlational validity of NPS for use in assessing the value of online, self-directed CME/CE. They suggest that NPS is correlated with commitment to change practice; therefore, it may be a viable method for CME/CE providers to use when evaluating the impact of their interventions if practice change is an expected outcome. The proportion of learners who rate themselves as committed or very committed to change after participation in CME/CE predicts 70% of the variance in the NPS for the activity, so NPS may be a valid indicator of intention to act after participation in CME/CE. This result suggests that NPS is a strong indicator of the number of participants who will change practice, provided commitment to change is truly linked to behaviour change, as extant research [Citation14] suggests.

Although being confident and improving knowledge/competence post-CME/CE were significant predictors, they predicted very little variance in NPS, just 2%. Therefore, the results suggest that NPS is not a strong indicator of how much participants improve their understanding of a topic or how many feel confident after CME/CE participation. The type of value that NPS assesses from the experience of a CME/CE activity may be described as an experience that compels one to want to change behaviour.

It is important to note that the intent of the question is to assess intention to act after participation in a CME/CE activity, not to assess brand loyalty, which is what NPS was originally developed to assess. This means that the question should be worded such that it is specifically related to the intervention, activity, or whatever term is most appropriate. For example, “How likely are you to recommend this activity to a friend or colleague?” should be used and not “How likely are you to recommend <insert brand or company> to a friend or colleague?”

Limitations

Limitations of this study include that the sample of activities assessed came from only one CME/CE provider. While the length (i.e. 15, 30, 45, or 60 minutes) and format (text, video, or text/video combination) of online, self-directed activities may play a role in the relationships assessed in this study, a two-level, unconditional means model showed that the variation in NPS rating between activities was very small (intraclass correlation = 0.014), so these factors are unlikely to affect the major findings. The measures of knowledge and competence, confidence, and commitment to change have predictive validity with each other, as shown in another study that utilised a sample of activities from the same CME/CE platform [Citation4]. However, a further limitation is that the measures of knowledge and competence and of confidence may not capture all gains in these areas, because knowledge and competence were assessed with only three questions and only one type of confidence was assessed; it could be argued the education produced improvements in domains other than those assessed. In addition, the NPS rating of those who did not respond to the question is unknown, and their would-be scores could not be taken into consideration. Finally, although the percentage of respondents committed to making practice changes predicts 70% of the variance in NPS, NPS is likely indicative of other satisfaction or outcomes measures as well. We suspect that commitment to change practice is linked to actual practice change, but until more work is done in this area, we do not know the validity of commitment to change, as measured in this study, for predicting actual practice in the real world.
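For readers who wish to reproduce the intraclass correlation check, a hedged sketch of a two-level unconditional means model (intercept only, with a random intercept per activity) is below; the data are synthetic and the variable names are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic ratings: 155 activities, 50 raters each, with a small
# between-activity variance component (the study reports ICC = 0.014)
rng = np.random.default_rng(1)
n_act, n_per = 155, 50
activity_effect = np.repeat(rng.normal(0, 0.2, n_act), n_per)
d = pd.DataFrame({
    "activity_id": np.repeat(np.arange(n_act), n_per),
    "nps_rating": np.clip(8.5 + activity_effect
                          + rng.normal(0, 1.5, n_act * n_per), 0, 10),
})

# Unconditional means model: fixed intercept, random intercept per activity
m = smf.mixedlm("nps_rating ~ 1", d, groups=d["activity_id"]).fit()
var_between = m.cov_re.iloc[0, 0]  # between-activity variance
var_within = m.scale               # within-activity (residual) variance
print(round(var_between / (var_between + var_within), 3))  # ICC
```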

Significance

Despite these limitations, this study was robust in its large sample of activities and participants. Since the COVID-19 pandemic, a large portion of CME/CE happens online or in a virtual environment, so the results are relevant for online accredited CME/CE; they are also worth examining in the context of live, in-person events. Future research should also examine other potential outcomes with which the NPS rating is associated. Commitment to change has been shown to be associated with actual behaviours in the real world [Citation14], but taking into consideration Kane's validity framework [Citation13], further research should examine the implications of NPS: is it predictive of the behaviour change one commits to? Finally, future research should examine the predefined cutpoints used in the NPS rating when applying it to experience with a specific CME/CE activity. Do the same cutpoints apply, and should the categories be called something else, such as "agents", "passives", and "critics"? Overall, NPS (the score generated by subtracting detractors [ratings 0 to 5] from promoters [ratings of 9 and 10]) may be a worthwhile key performance indicator for CME/CE, as this study provides evidence to suggest it indicates a CME/CE activity's impact on compelling action in its learners.

Acknowledgments

Rachael Findley and Stacey Murray for copy editing and author guidelines review. The late Thomas Kellner for discussions with me about Net Promoter Score and advocacy for its use in continuing education evaluation. Donald E Moore for inspiration to examine survey- and self-assessment-based tools.

Disclosure Statement

This study has been submitted solely as original research to the Journal of European CME and is neither published nor under consideration elsewhere.

References

  • Moore DE Jr, Green JS, Gallis HA. Achieving desired results and improved outcomes by integrating planning and assessment throughout a learning activity. J Contin Educ Health Prof. 2009;29:1–6.
  • Ratanawongsa N, Thomas PA, Marinopoulos SS, et al. The reported validity and reliability of methods for evaluating continuing medical education: a systematic review. Acad Med. 2008;83(3):274–283.
  • Hoover MJ, Jung R, Jacobs DM, et al. Educational testing validity and reliability in pharmacy and medical education literature. Am J Pharm Educ. 2013;77(10):A213.
  • Lucero KS, Chen P. What do reinforcement and confidence have to do with it? A systematic pathway analysis of knowledge, competence, confidence, and intention to change. J Eur CME. 2020;9(1):1834759.
  • Williams B, Kessler HA, Williams MV. Relationship among practice change, motivation, and self-efficacy. J Contin Educ Health Prof. 2014;34:s5–s10.
  • Lucero KS, Johnson SS. How confident are you that you are maximizing confidence assessments in your outcomes. Presentation at the Annual Meeting of the Alliance for Continuing Education in the Health Professions, San Francisco, California, Jan 9 2020.
  • Fisher NI, Kordupleski RE. Good and bad market research: a critical review of net promoter score. Appl Stoch Models Bus Ind. 2019;35(1):138–151.
  • Reichheld FF. The one number you need to grow. Harv Bus Rev. 2003;81(12):46–54.
  • Krol MW, de Boer D, Delnoij D, et al. The net promoter score – an asset to patient experience surveys? Health Expectations. 2014;18:3099–3109.
  • Accreditation Council for Continuing Medical Education. Educational needs. Accessed 2022 Nov 15. https://www.accme.org/accreditation-rules/accreditation-criteria/educational-needs
  • European Accreditation Council for Continuing Medical Education (EACCME). EACCME criteria for the accreditation of e-learning materials (ELM). Accessed 2022 Nov 15. https://www.uems.eu/__data/assets/pdf_file/0017/40157/EACCME-2.0-CRITERIA-FOR-THE-ACCREDITATION-OF-ELM-Version-6-07-09-16.pdf
  • Palmer K, Devers C. An evaluation of MOOC success: Net Promoter Scores. In: Bastiaens T, Van Braak J, Brown M, et al., editors. Proceedings of EdMedia: World Conference on Educational Media and Technology. Waynesville, NC: Association for the Advancement of Computing in Education (AACE); 2018. p. 1648–1653. Accessed 2021 Aug 23. https://www.learntechlib.org/p/184392/
  • Cook DA, Brydges R, Ginsburg S, et al. A contemporary approach to validity arguments: a practical guide to Kane’s framework. Med Educ. 2015;49:560–575.
  • Wakefield J, Herbert CP, Maclure M, et al. Commitment to change statements can predict actual change in practice. J Contin Educ Health Prof. 2003;23(2):81–93.