Research Article

Outcomes Standardisation Project (OSP) for Continuing Medical Education (CE/CME) Professionals: Background, Methods, and Initial Terms and Definitions

Article: 1717187 | Received 21 Aug 2019, Accepted 10 Jan 2020, Published online: 13 Feb 2020

ABSTRACT

Despite an increased focus and urgency for CE/CME professionals to effectively and systematically assess the impact of their educational interventions, the community has struggled to do so. This struggle is in large part due to the lack of a standardised outcomes language and a set of unified approaches to measure and communicate impact. In the spring of 2018, a group of volunteer educational research scientists and CE/CME professionals established a rigorous consensus-building process in an effort to address this need. This report describes the background, methods and first-year output (Glossary V1) of the Outcomes Standardisation Project (OSP); begins to introduce examples of how the OSP Glossary V1 may support the CE/CME professional community; and concludes with plans for the future establishment of a common framework for the profession.

Background

A profession, specifically one focused on medicine, science or education, is grounded in the establishment of a cadre of like-minded and connected individuals working within a common taxonomy, a shared theoretical framework and an evidence base [Citation1–Citation4]. These essential elements are required for a profession to standardise its practices and efficiently advance its societal contributions. As a result, professions hold an elevated position in most modern societies and are expected to self-regulate, abiding by a core set of ethical principles.

While the profession of continuing education or continuing medical education (CE/CME) was loosely established in the mid-1970s, many of the elements of the profession have struggled to evolve, namely a common or standardised taxonomy and a shared theoretical framework to guide the development and evaluation of educational interventions [Citation5–Citation11].

In September 2006, the Accreditation Council for Continuing Medical Education (ACCME) issued criteria, to be phased in from 2008 to 2012, challenging CME providers operating in the US to employ “assessment or measurement tools … to analyse changes in strategy, performance, or patient outcomes achieved as a result of (their) activities/educational interventions [Citation12].” These requirements are general in nature, asking providers to demonstrate that they are setting predefined goals for each activity, attempting to assess in some way the extent to which those goals are achieved, and taking steps to improve subsequent activities based on this assessment [Citation13]. The challenge that has emerged from these necessary efforts is that the foundational taxonomy of measurements and analyses in continuing education (a.k.a. outcomes) is missing. This challenge has been noted by others, including Marinopoulos et al.: “The CME literature in general lacks standardisation of terminology related to media type, educational techniques, and exposure volume, which makes it difficult to determine the impact of these factors on the effectiveness of CME [Citation14].”

More recently, two catalytic events led to a groundswell of momentum and support coming out of the Alliance for Continuing Education in the Health Professions (Alliance) Industry Summit meeting in the spring of 2018. First, in conversations building off the morning’s keynote and in the midst of a session by Cerenzia and Salinas [Note 1] entitled “Standardisation of Outcomes: Lessons Learned from Attempts to Aggregate”, the idea for an outcomes standardisation project (OSP) emerged. While these initial conversations were energising, shortly thereafter Ruiz-Cordell, DeMatteo and Reilly [Note 2] led a session entitled “The Tower of Babel: Enough Already, Can We Please Speak the Same Language?” and the call for a solution was echoed. At this moment, the OSP Steering Team (OSPST) began to take shape and a number of educational research scientists within the room took responsibility for delivering on the OSP idea.

In the weeks that followed, a plan of action was devised and galvanised around the following initial problem statement:

The community struggles to effectively understand and communicate the value of CE, in part, BECAUSE the community lacks a standardised outcomes language and a set of standardised approaches to measuring impact; as a result, effectively comparing and aggregating outcomes data and insights remains impossible.

Importantly, given the diversity of the community of professionals, this problem statement can be read through three lenses:

  1. CE/CME professionals who design and build educational interventions for healthcare providers (HCPs) need this standardisation to effectively and equitably measure and consistently communicate the impact of their efforts, as well as to facilitate evidence-based analysis of ongoing needs and educational gaps;

  2. CE/CME research scientists need this standardisation to effectively advance our branch of educational research, allowing for comparative analyses and the objective establishment of appropriate best practices; and

  3. CE/CME professionals who provide commercial support need this standardisation to effectively understand and communicate the impact of supported activities in closing educational and practice gaps, as well as persistent needs and/or barriers to optimising patient care, to continually strive to support innovative, high-quality interventions that will best address those needs.

The remainder of this article is organised as follows: first, it provides a detailed explanation of the OSP consensus-building approach; then it presents the definitions for the first 25 standardised terms (the full glossary is available at www.outcomesinCE.org); and finally it concludes with a deeper exploration of lessons learned during the consensus-building process and how the glossary may begin to address the real-life challenges of the CE/CME professional community.

Methods

A rigorous consensus-building approach was designed and applied to ensure both a critical depth of investigation and a contribution from the diverse perspectives and experience across the CE/CME community.

This consensus-building approach included the following six phases (Figure 1):

Figure 1. OSPST Consensus Building Approach


Phase One: June–September 2018

  • Steering Team (OSPST) was formed and the plan of action was established.

  • An initial OSP working document of potential terms was created.

  • A list of nominated CE/CME community thought-leaders (to later be interviewed) was collected.

  • Planning for and initial outreach to the CE/CME community began (e.g. to the 11,000+ members of the LinkedIn CME Group, via Twitter, etc.).

The OSPST was formed initially from a group of volunteers (BSM, WT, JO, GS, AM, KRC). A final volunteer (SM) was added at the request of the Alliance to serve as a liaison in order to facilitate alignment among different groups engaged in addressing challenges resulting from the current lack of standardised measurement and communication. The work done by this group includes hundreds of hours of evidence gathering, consensus-building debate, one-off calls amongst the OSPST members and separate calls with community stakeholders.

In building the initial working document, an initial list of 80+ terms was collated. OSPST members each then selected approximately 15–20 terms to explore and, individually, developed and put forth to the group the following: 1) a standard definition; 2) methods if applicable to the term; 3) reference/source for established evidence (if available) and 4) an example of the term’s application in CE/CME. This effort ensured that all terms were being explored by at least two members of the steering team. Through this initial definitions work, an additional 20+ terms were identified and proposed for standardisation, bringing the complete list to more than 110 terms.
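As a purely illustrative aside, the four elements collected for each candidate term can be pictured as a simple record structure. A minimal sketch in Python follows; the field names are assumptions made for illustration only and are not part of the OSP working document itself.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CandidateTerm:
    """One candidate entry in an OSP-style working document (illustrative field names only)."""
    term: str                                    # the term being considered for standardisation
    standard_definition: str                     # 1) a standard definition
    methods: Optional[str] = None                # 2) methods, if applicable to the term
    evidence_sources: List[str] = field(default_factory=list)  # 3) references for established evidence, if available
    cecme_example: Optional[str] = None          # 4) an example of the term's application in CE/CME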

Even a basic review showed that the complete list of proposed terms, definitions, evidence and examples would be overwhelming, both for the ongoing consensus-building work and for the community’s ability to embrace and employ a standardised glossary. To ensure that the standardisation effort could progress expediently, the list was narrowed to the 25 terms that were determined to be most foundational for the community. This resizing effort was then validated throughout the following phases of the project to ensure that the draft glossary was consumable and practically employable by CE/CME professionals within their work settings.

Phase Two: September–October 2018

  • Interviews with ten community thought-leaders were conducted, recorded and reviewed by the OSPST.

  • Feedback from the thought-leader interviews was explored by the OSPST.

  • The OSPST evolved the OSP draft glossary through consensus building.

  • Outreach to the CE/CME community continued (project status updates, Alliance Almanac article, CMEpalooza session).

  • Planning for the CE/CME community focus groups began.

The interviews were semi-structured and included CE/CME professionals focused on outcomes who work as providers, research scientists and commercial supporters, and who held diverse perspectives on the challenges caused by the lack of a standardised taxonomy. In preparation, interviewees were nominated and debated by OSPST members with the goal of identifying CE/CME professionals who had an established professional track record of medical education research and/or outcomes specialisation, and who represented the diverse roles and responsibilities of the community. Twelve interviewees were invited; ten accepted the invitation.

Interviewees were sent a working draft of the OSP glossary no less than one week before the call and were asked to review the glossary in terms of its definitions, as well as its structure and format. The interviews began by asking interviewees to share their general feedback on the need for the project, the contents and structure of the draft glossary, and to propose any terms or concepts that they found missing. Interviewees were then asked to walk systematically through their thoughts and notes on each of the terms and definitions within the draft glossary. Points of concern were explored in greater depth to ensure they were fully articulated. At the end of each conversation, interviewees were asked to share any additional notes or annotations they had on the draft glossary itself.

Following each interview, the recordings (approximately 80 minutes each) and the interviewee’s notes were reviewed and discussed by all OSPST members. As themes emerged in the interviews, the draft glossary was edited/updated prior to future interviews thereby allowing the OSPST to efficiently collect increasingly focused feedback on the areas of greatest discordance. Following the tenth interview, the OSPST met to refine the draft glossary and prepare for the focus groups.

Phase Three: October–November 2018

  • Three focus groups of CE/CME community volunteers were conducted, recorded and reviewed by the OSPST.

  • Feedback from the focus groups was explored by the OSPST.

  • The OSPST evolved the OSP draft glossary through consensus building.

  • Outreach to the CE/CME community continued.

  • Planning for the CE/CME community call for comments began.

As with each interview, the focus groups were semi-structured and included more than twenty CE/CME professionals working in academic medical centres, hospital and healthcare systems, large medical associations, small medical societies, medical education companies and commercial supporters. In preparation, focus group volunteers were each sent a working draft of the OSP glossary one week before the call and were asked to review the glossary in terms of its definitions, as well as its structure and format. The focus groups each began by asking volunteers to share their general feedback on the need for the project, the contents and structure of the draft glossary, and to propose any terms or concepts that they found missing. Volunteers were then each asked to specifically share thoughts on items of greatest discordance and a discussion amongst all focus group volunteers was facilitated with the goal of allowing debate and beginning consensus building. At the end of each focus group, volunteers were asked to share any additional notes or annotations they had on the draft glossary itself.

Following each focus group, the recordings (approximately 75 minutes each) and the volunteers’ notes were reviewed and discussed by all OSPST members. As with the interviews, as themes emerged in the focus groups, the draft glossary was edited/updated prior to subsequent focus groups, which allowed the OSPST to quickly collect increasingly focused feedback on the areas of greatest discordance. Following the third focus group, the OSPST met to refine the draft glossary and prepare for the Call for Comments.

Phase Four: November 2018–January 2019

  • A public Call for Comments was launched through the www.outcomesinCE.org website and widely promoted to the CE/CME community; responses were collected and reviewed by the OSPST.

  • Feedback from the call for comments was explored by the OSPST.

  • The OSPST evolved the OSP draft glossary through consensus building.

  • Outreach to the CE/CME community continued (project status update).

  • Planning for the launch and advocacy efforts began.

The public call for comments was open for one month and responses were received from ten organisations. Responses offered additional perspectives from providers, educational research scientists, commercial supporters and several of the largest distribution partners. Each Call for Comments response was delivered as annotations within the OSP draft glossary. At the close of the Call for Comments period, the responses were reviewed and discussed by all OSPST members and a final consensus glossary was created.

Phase Five: February–May 2019

  • The OSPST established the final format for the OSP Glossary V1.

  • On 17 April 2019 the OSP Glossary V1 was published (the full glossary is available at www.outcomesinCE.org).

  • Outreach to the CE/CME community continued (project status update (including this publication), CMEpalooza session, Alliance for Continuing Education in the Health Professions (ACEhp) member section webinar).

Members of the OSPST continue to track downloads of the glossary and utilisation within the profession. Case studies of how the glossary is being used and lessons learned from its initial roll-out are helping inform ongoing efforts.

Phase Six: June–December 2019; in progress

  • The consensus-building approach began anew to identify and prioritise the next set of terms, concepts, and definitions to standardise and continue to extend the OSP glossary over time.

Standardised Terms and Definitions

The full list of terms along with the standardised definitions, evidence base, examples and relevant contextual information can be found on the OSP website (www.outcomesinCE.org). The site has been conceived of and designed as a Wikipedia-like experience where viewers can search the glossary, each term has a unique page, and the glossary can evolve and adapt over time.

  1. Participation Funnel – Term used to describe the series of events experienced by an HCP from exposure to an available educational experience through to the request and fulfilment of credit (if available).

  2. Intended Reach – Term used to indicate the number of unique HCPs to whom the availability of educational activities is being promoted.

  3. Participant – This term, perhaps more than any other considered by the OSP Steering Team, has such varied application and understanding by the community that it SHOULD NOT be used in outcomes efforts or reporting without additional context. Absent of this additional context, the term creates ambiguity and confusion.

  4. Start – Term used to describe the action an HCP takes to begin the core educational content/intervention. If an Activity is preceded with CE/CME front matter or a pre-test, a Start occurs AFTER an HCP has navigated through these items.

  5. Learner – Term used to describe an HCP who Starts the core educational content/intervention. The term is designated only for individuals who have progressed beyond the CE/CME front matter and pre-test (if available) and have started to consume/participate in the educational experience.

  6. Completion – Term used to describe when an HCP has finished the core educational content/intervention. Importantly, whether a Learner chooses to participate in the post-test or evaluation that may follow the education activity does NOT impact completion.

  7. Completer – Term used to describe an HCP who has finished the core educational content/intervention. Importantly, whether a Learner chooses to participate in the post-test or evaluation that follows the education activity does NOT impact completion.

  8. Completion Rate – Term used to define the percentage of Learners that completed the core educational content/intervention. The Completion Rate is a ratio of one stage of the Participation Funnel and provides specific insights into the quality of the educational content and experience (a brief illustrative sketch follows this list).

  9. Learning Actions – Term used to describe the behaviour of a Learner while consuming/participating in the core educational content/intervention.

  10. Engagement – Term used to describe the learning actions or behaviours of an HCP while consuming/participating in the core educational content/intervention.

  11. Moore’s Level 1, Participation – The first level in one established outcomes framework, Moore’s Level 1 emphasises the need to count the number of HCPs progressing through each stage of the Participation Funnel.

  12. Moore’s Level 2, Satisfaction – The second level in one established outcomes framework, Moore’s Level 2 emphasises the need to measure the degree to which the expectations of the Learners about the setting and delivery of the CME activity were met.

  13. Moore’s Level 3a, Declarative Knowledge – The third level in one established outcomes framework, Moore’s Level 3a emphasises the need to measure the changes in declarative knowledge that are associated with an educational intervention.

  14. Moore’s Level 3b, Procedural Knowledge – The third level in one established outcomes framework, Moore’s Level 3b emphasises the need to measure the changes in procedural knowledge that are associated with an educational intervention.

  15. Moore’s Level 4, Competence – The fourth level in one established outcomes framework, Moore’s Level 4 emphasises the need to measure the changes in competence that are associated with an educational intervention.

  16. Moore’s Level 5, Performance – The fifth level in one established outcomes framework, Moore’s Level 5 emphasises the need to measure the changes in performance that are associated with an educational intervention.

  17. Moore’s Level 6, Patient Health – The sixth level in one established outcomes framework, Moore’s Level 6 emphasises the need to measure the changes in patient health outcomes that are associated with an educational intervention.

  18. Moore’s Level 7, Community Health – The seventh level in one established outcomes framework, Moore’s Level 7 emphasises the need to measure the changes in community health outcomes that are associated with an educational intervention.

  19. Assessment – Term used to define the measurement of changes in knowledge, competence, performance or healthcare outcomes that are associated with the planned educational intervention.

  20. Pre-test – Term used to define the measures (data from question(s) or data collection) BEFORE educational content is presented. Pre-tests can measure baseline knowledge, competence, present or anticipated behaviour, experienced or observed health outcome, or other topics.

  21. Post-test – Term used to define the measures (data from question(s) or data collection) AFTER educational content is presented. Post-tests can measure resultant knowledge, competence, present or anticipated behaviour, experienced or observed health outcome, or other topics. This is typically intended as a measure of immediate change or impact.

  22. First Post-test Score – Term used to define a measure of a Learner’s performance on his or her first attempt at a post-test, before any feedback or additional rationale is provided.

  23. Final Post-test Score – Term used to define a measure of a Learner’s performance on his or her final attempt at a post-test. With each repeated attempt at a post-test, a Learner’s experience with the test evolves. The final post-test score is a measure of what a Learner was able to achieve through this evolving experience (a brief illustrative sketch follows this list).

  24. Evaluation – Term used to define the measurement of a Learner’s satisfaction with the content and learning experience and/or the perception of bias within the activity. Evaluations can provide a far richer understanding of learning and impact beyond correct/incorrect test questions.

  25. Follow-up Assessments/Evaluation – Term used to define the data collected in the days, weeks or months following an educational experience. While Assessments or Evaluations are typically designed to make a measurement immediately after a learning experience, Follow-up Assessments or Evaluations are designed to make measurements over time.
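To illustrate how the funnel-related terms above (Intended Reach, Start, Learner, Completer, Completion Rate) might be applied consistently to activity data, the following is a minimal sketch in Python. The record counts and field names are hypothetical assumptions for illustration; they are not prescribed by the OSP Glossary.

from dataclasses import dataclass

@dataclass
class ParticipationFunnel:
    """Hypothetical counts for one activity, following the standardised funnel terms."""
    intended_reach: int   # unique HCPs to whom the activity was promoted
    learners: int         # HCPs who Started the core educational content/intervention
    completers: int       # Learners who finished the core educational content/intervention

    def completion_rate(self) -> float:
        """Completion Rate: percentage of Learners who completed the core content."""
        if self.learners == 0:
            return 0.0
        return 100.0 * self.completers / self.learners

# Hypothetical activity data, for illustration only.
activity = ParticipationFunnel(intended_reach=12000, learners=850, completers=510)
print(f"Completion Rate: {activity.completion_rate():.1f}%")  # -> Completion Rate: 60.0%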
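In the same illustrative spirit, the brief sketch below shows how a Pre-test baseline, First Post-test Score and Final Post-test Score might be compared for a single Learner. The 0–100 scoring scale and the simple difference calculation are illustrative assumptions only; the OSP Glossary does not prescribe a particular change metric.

# Hypothetical scores on a 0-100 scale for one Learner (illustration only).
pre_test_score = 40.0         # baseline measure taken BEFORE the core educational content
first_post_test_score = 70.0  # first attempt, before any feedback or additional rationale is provided
final_post_test_score = 90.0  # final attempt, after feedback and any repeated attempts

# Simple difference-based change measures (one of many possible approaches).
immediate_change = first_post_test_score - pre_test_score  # +30.0 points
overall_change = final_post_test_score - pre_test_score    # +50.0 points

print(f"Immediate change (Pre-test to First Post-test): {immediate_change:+.1f} points")
print(f"Overall change (Pre-test to Final Post-test): {overall_change:+.1f} points")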

Discussion and Implications

The outcomes standardisation project was conceived of and operationalised by CE/CME Professionals and research scientists to solve a critical problem that has undermined the profession in general and its ability to evolve as a professional community. This problem statement can most concisely be stated in the following terms: The community struggles to effectively understand and communicate the value of CE in part BECAUSE the community lacks a standardised outcomes language and a set of standardised approaches to measuring impact; as a result, effectively comparing and aggregating outcomes data and insights remains impossible.

What the problem statement perhaps fails to adequately capture is the real-life challenges that result from the lack of standardisation. In the course of this project, these challenges were described repeatedly by those participating in the interviews, focus groups and Call for Comments. For example:

  1. The absence of a standardised outcomes taxonomy means that, despite the 150,000+ CE activities produced and delivered in 2017 [Citation15], there is little-to-no ability to build a collective data set from which necessary educational research questions can be answered. As a result, data-mining and meta-reviews are challenging to complete and their results and conclusions are necessarily tempered [Citation14].

  2. Educational providers, who planned, developed and delivered more than one million hours of instruction in 2017 [Citation15], had limited evidence to guide advancement in the design of these interventions. As a result, status quo approaches to educational interventions are accepted without a complete and adequate understanding of their relative effectiveness.

  3. Supporters of continuing education (including government agencies, commercial supporters and/or non-profit organisations) have limited ability to fully understand the impact of the educational interventions that they are supporting. As a result, it remains challenging to compare interventions and to assess persistent needs and/or the most effective methods to address them.

  4. Communication between CE/CME professionals or organisations is undermined as terms, definitions and methods remain ambiguous. As a result, additional efforts must be made to ensure accurate communication and to avoid disparate, inaccurate and/or misleading interpretations of results and reports.

The bottom line is that the lack of standardisation has both stunted the professional community’s evolution and led to scarce resources being spent without the benefit of consistent, validated evidence.

Moreover, since the ultimate goal of CE/CME professionals is to help improve the quality of healthcare through their educational interventions, these inefficiencies and failures have a ripple effect: by not efficiently and optimally planning, developing, implementing and analysing the educational interventions that are being directed to healthcare providers, the quality of care is also undermined. While this is likely not the primary driver of the healthcare challenges in the US [Citation16], it is undoubtedly a material element of care variation and quality gaps.

The initial 2018 standardisation effort will not solve every problem that the professional community faces, but it creates a strong foundation upon which ongoing work can build. Moreover, it is not the intention of the OSP Steering Team to suggest that all outcomes efforts and reporting must include each standardised item; rather, the idea is that if these terms are used in outcomes efforts and reporting, the standardised definitions should be universally applied.

Importantly, this is just the start. The OSPST has been working with CE/CME professionals who have begun to employ the Glossary V1. This “experience in the field” is critical to informing the science and evidence upon which the glossary is built. These case studies and examples are then being fed back into the online glossary to support the community in their own implementations.

Finally, the OSPST has begun to identify the next set of terms, concepts and best practices that might be standardised, and the consensus-building approach for Glossary V2 will repeat through 2020, with an anticipated launch later in the year.

Acknowledgements

The members of the OSPST would like to recognise the CE/CME professionals that provided meaningful contributions to the Outcomes Standardisation Project including: Michael Reilly - Wendy Cerenzia - Riaz Baxamusa - LB Wong - Hilary Schmidt - Dale Kummerle - Asma Ali - Patty Jassak - Katie Robinson - John Ruggiero - Jamie Reiter - Shunda Irons-Brown - Derek Dietze - Anne Grupe - Amanda Glazer - Katie Lucero - Mindi Daiga - Vanessa Gray - Deborah Augustus - John Juchniewicz - Nick Marzano - Allyson Baer - Molly Mooney - Laurie Mannon - Andy Crim - Greselda Butler - R. Michelle (Tyner) Skidmore - Jill Erickson - Jan Perez - Kim Cheramie - Kenny Cox - Kim Vadas - Mary Faulkner - Maureen Doyle-Scharff - Monique Johnson - Del SanValentin - Denise C. LaTemple - Terry Ann Glauser - Rick Watson - Matt Gagalis - Allison Moran - Lawrence Sherman - Steven Kawczak - Phil Dombrowski - Derek Warnick - Scott Kober - Naomi Moeller - Noreen Duffey - Caroline Pardo - Christine Hoffman - Laurie Kendall-Ellis - Nancy Lutz-Paynter

This list is far from exhaustive as it cannot recognise the countless professionals whose contributions pre-date this project, nor the countless professionals whose work is yet to be done.

Disclosure statement

The members of the Outcomes Standardisation Project Steering Team have contributed in a fully volunteer capacity. Their efforts and contributions are their own and do not necessarily represent their employers.

Notes

1 Chief Executive Officer (Cerenzia) and President (Salinas) at CE Outcomes, LLC; Birmingham, AL.

2 Associate Director, Medical Education Data Analytics (Ruiz-Cordell); Director, Medical Education (DeMatteo); and Senior Director, Medical Education at Regeneron (Reilly); Tarrytown, NY.

References

  • Calman K. The profession of medicine. BMJ. 1994;309:1140–7.
  • Saks M. Defining a profession: the role of knowledge and expertise. Professions Professionalism. 2012;2(1):1–10.
  • Evetts J. The sociological analysis of professionalism occupational change in the modern world. Int Sociology. 2003;18(2):395–415.
  • Freidson E. Professionalism: the third logic. London: Polity Press; 2001.
  • Sajdlowska J, Grant RE, Van Hoof TJ, et al. Context and terminology in continuing education: improving the use of interventions in quality improvement and research. J Contin Educ Health Prof. 2015 Spring;35(Suppl 1):S27–28.
  • Knox AB. Reflections on terminology in the continuing education of health professionals. J Contin Educ Health Prof. 2015 Spring;35(Suppl 1):S43–44.
  • Grant RE, Van Hoof TJ, Sajdlowska J, et al. Terminology in continuing education: a hybrid methodology for improving the use and reporting of interventions in continuing education. J Contin Educ Health Prof. 2015 Fall;35(Suppl 2):S45–50.
  • SACME 1, Van Hoof TJ, Grant RE, Miller NE, et al. Society for academic continuing medical education intervention guideline series: guideline 1, performance measurement and feedback. J Contin Educ Health Prof. 2015 Fall;35(Suppl 2):S51–54.
  • SACME 2, Van Hoof TJ, Grant RE, Campbell C, et al. Society for academic continuing medical education intervention guideline series: guideline 2, practice facilitation. J Contin Educ Health Prof. 2015 Fall;35(Suppl 2):S55–59.
  • SACME 3, Van Hoof TJ, Grant RE, Sajdlowska J, et al. Society for academic continuing medical education intervention guideline series: guideline 4, interprofessional education. J Contin Educ Health Prof. 2015 Fall;35(Suppl 2):S65–69.
  • SACME 4, Van Hoof TJ, Grant RE, Sajdlowska J, et al. Society for academic continuing medical education intervention guideline series: guideline 3, educational meetings. J Contin Educ Health Prof. 2015 Fall;35(Suppl 2):S60–64.
  • Regnier K, Kopelow M, Lane D, et al. Accreditation for learning and change: quality and improvement as the outcome. J Contin Educ Health Prof. 2005;25(3):174–182.
  • Weiner SJ, Jackson JL, Garten S. Measuring continuing medical education outcomes: a pilot study of effect size of three CME interventions at an SGIM annual meeting. J Gen Intern Med. 2009 May;24(5):626–629.
  • Marinopoulos S, Dorman T, Ratanawongsa N, et al. Effectiveness of continuing medical education. Evidence Report/Technology Assessment No. 149. AHRQ Publication No. 07-E006. Rockville, MD: Agency for Healthcare Research and Quality; 2007 Jan.
  • ACCME. ACCME data report – growth and diversity in continuing medical education – 2017. [cited 2019 Jun 20]. Available from: http://www.accme.org/sites/default/files/2018-07/778_20180712_2017_Data_Report.pdf
  • GBD 2016 Healthcare Access and Quality Collaborators. Measuring performance on the healthcare access and quality index for 195 countries and territories and selected subnational locations: a systematic analysis from the global burden of disease study 2016. Lancet. 2018;391:2236–2271.