Validation

A Target Population Derived Method for Developing a Competency Standard in Radiograph Interpretation

Pages 167-177 | Received 17 Jan 2020, Accepted 16 Apr 2021, Published online: 17 May 2021
 

Abstract

Construct

For assessing skills of visual diagnosis such as radiograph interpretation, competency standards are often developed in an ad hoc manner, with a poorly delineated connection to the target clinical population.

Background

Commonly used methods for assessing competency in radiograph interpretation are potentially biased: they rely on small samples of cases, subjective evaluations, or an expert-generated case mix rather than a representative sample from the clinical field. Further, while digital platforms are available to assess radiograph interpretation skill against an objective standard, they have not adopted a data-driven competency standard that assures educators and the public that a physician has achieved adequate mastery to enter practice, where they will make high-stakes clinical decisions.

Approach

Operating on a purposeful sample of radiographs drawn from the clinical domain, we adapted the Ebel Method, an established standard setting method, to ascertain a defensible, clinically relevant mastery learning competency standard for the skill of radiograph interpretation, as a model for deriving competency thresholds in visual diagnosis. Using a previously established digital platform, emergency physicians interpreted pediatric musculoskeletal extremity radiographs. Using one-parameter item response theory, these data were used to categorize radiographs into interpretation difficulty terciles (i.e., easy, intermediate, hard). A panel of emergency physicians, orthopedic surgeons, and plastic surgeons rated each radiograph with respect to clinical significance (low, medium, high). These data were then used to create a three-by-three matrix in which radiographic diagnoses were categorized by interpretation difficulty and clinical significance. Subsequently, a multidisciplinary panel that included medical and parent stakeholders determined the acceptable accuracy for each of the nine cells. An overall competency standard was derived from the weighted sum. Finally, to examine the consequences of implementing this standard, we report on the types of diagnostic errors that may occur under the derived competency standard.
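The Ebel-style aggregation described above can be sketched in a few lines of code. Note that the per-cell acceptable accuracies and cell weights below are illustrative placeholders, not the study's actual panel judgments.

```python
# Illustrative sketch of an Ebel-style competency standard calculation.
# Rows: interpretation difficulty (easy, intermediate, hard); columns:
# clinical significance (low, medium, high). All values are hypothetical.

# Panel-judged acceptable accuracy (proportion correct) for each cell.
acceptable_accuracy = [
    [0.95, 0.95, 0.95],  # easy
    [0.85, 0.88, 0.90],  # intermediate
    [0.76, 0.80, 0.85],  # hard
]

# Proportion of test radiographs falling into each cell (sums to 1).
cell_weights = [
    [0.20, 0.08, 0.06],
    [0.19, 0.08, 0.06],
    [0.19, 0.07, 0.07],
]

def competency_standard(accuracy, weights):
    """Weighted sum of per-cell acceptable accuracies."""
    return sum(
        a * w
        for acc_row, w_row in zip(accuracy, weights)
        for a, w in zip(acc_row, w_row)
    )

standard = competency_standard(acceptable_accuracy, cell_weights)
print(f"Overall competency standard: {standard:.1%}")
```

With equal cell weights, the calculation reduces to the simple mean of the nine panel-set accuracies, which is how the study's 85.5% overall standard was aggregated.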

Findings

To determine radiograph interpretation difficulty scores, 244 emergency physicians interpreted 1,835 pediatric musculoskeletal extremity radiographs. Analyses of these data demonstrated that the median interpretation difficulty rating of the radiographs was −1.8 logits (IQR −4.1, 3.2), with a significant difference in difficulty across body regions (p < 0.0001). Physician review classified 1,055 (57.8%) radiographs as low, 424 (23.1%) as medium, and 356 (19.1%) as high clinical significance. The multidisciplinary panel's acceptable scores across the cells of the three-by-three table ranged from 76% to 95%, and the sum of equal-weighted scores resulted in an overall performance-based competency score of 85.5% accuracy. Of the 14.5% diagnostic interpretation errors that may occur at the bedside if this competency standard were implemented, 9.8% would be in radiographs of low clinical significance, while 2.5% and 2.3% would be in radiographs of medium or high clinical significance, respectively.
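The logit difficulties reported above come from a one-parameter (Rasch) item response model, under which the probability of a correct interpretation depends only on the difference between physician ability and item difficulty. A minimal sketch, using the reported median difficulty of −1.8 logits and an illustrative (assumed) ability of 0 logits:

```python
import math

def p_correct(ability, difficulty):
    """Rasch (1PL) model: probability that a physician with the given
    ability (logits) correctly interprets a radiograph of the given
    difficulty (logits)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A radiograph at the reported median difficulty (-1.8 logits) is likely
# to be correctly interpreted by a physician of average ability (0 logits).
print(round(p_correct(0.0, -1.8), 3))

# When ability equals difficulty, the probability of success is exactly 0.5.
print(p_correct(-1.8, -1.8))
```

Negative logit difficulties therefore indicate radiographs that most physicians in the sample interpreted correctly, consistent with the median of −1.8 reported here.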

Conclusion(s)

This study’s novel integration of radiograph selection and a standard setting method can be used to empirically derive an evidence-based competency standard for radiograph interpretation, and can serve as a model for deriving competency thresholds for clinical tasks emphasizing visual diagnosis.

Acknowledgements

We would like to acknowledge the learner participants and program directors who enthusiastically engaged in the learning intervention. We would also like to thank the orthopedic surgeons, plastic surgeons and parents who participated in the panel discussions of this research. Finally, we would like to thank Dr. Martin Pecaric of Contrail Consulting Services Inc. for providing software support for this study.

Disclosure statement

The data for deriving the radiograph interpretation difficulties were obtained from diagnostic interpretations available on an education platform (ImageSim) that hosts the radiographs examined in this study. Dr. Boutis is the academic director of ImageSim but does not receive any financial compensation for her role. Dr. M. Pecaric, the lead consultant of Contrail Consulting Services Inc, provides software support to ImageSim and is married to Dr. Boutis. This relationship is formally managed by the Relationship Management team at the Hospital for Sick Children and University of Toronto. The other authors do not have any conflicts of interest to report.

Additional information

Funding

This work was funded by the Royal College of Surgeons of Canada Medical Education Grant. The funders had no role in the design, analysis, or manuscript preparation of this work.
