Research Article

Assessing clinical reasoning in airway related cases among anesthesiology fellow residents using Script Concordance Test (SCT)

Article: 2135421 | Received 19 May 2022, Accepted 10 Oct 2022, Published online: 18 Oct 2022

ABSTRACT

Introduction

Clinical reasoning is a core competency for physicians. In the field of anesthesia, many situations require residents to use their clinical reasoning to make quick and appropriate decisions, such as during emergency airway cases. The Script Concordance Test (SCT) is a recently developed and validated test that objectively assesses clinical reasoning ability. However, studies using the SCT to assess clinical reasoning in airway management are scarce.

Aim

To evaluate the SCT in assessing clinical reasoning for airway management in anesthesiology residents.

Method

A cross-sectional study was conducted in which residents and anesthesiology consultants from the Department of Anesthesiology and Intensive Care, Faculty of Medicine Universitas Indonesia, completed the SCT. A panel of five anesthesiology consultants with more than 15 years of work experience constructed 20 SCT vignettes based on prevalent airway cases in our center from the past 10 years. Each vignette has three nested questions, for a total of 60 questions, to be answered within 120 min.

Results

The SCT of 20 case vignettes, each with three nested questions, was administered to 99 junior, intermediate, and senior residents, and their answers were compared with those of an expert group of ten anesthesiology consultants with more than 5 years of experience. Mean SCT scores differed significantly between the junior, intermediate, senior, and expert groups: 59.3 (46.1–72.8), 64.7 (39.9–74.9), 67.5 (50.6–78.3), and 79.6 (78.4–84.8), respectively; p < 0.001. A Cronbach's alpha of 0.69 was obtained, indicating good reliability.

Conclusion

Our SCT was shown to be a valid and reliable instrument for assessing clinical reasoning in airway management among anesthesiology residents. The SCT was able to discriminate between groups of different clinical experience and should be included in the evaluation of airway competencies in anesthesiology residents.

Introduction

Clinical reasoning is a critical competency for physicians. It is a complex cognitive process that integrates previously acquired information with new data, leading to clinical decisions and the formulation of an effective management plan [Citation1,Citation2]. Clinical reasoning should be taught from the early years of medical school, as it has been found to increase students’ ability during their clinical placement years [Citation3]. Adequate clinical reasoning is an essential skill for daily practice, especially in emergency conditions, where it prevents inappropriate decision-making that may harm patients. In medical education, clinical reasoning skills can be trained and improved through case-based discussions, clinical case presentations, and clinical problem-solving exercises [Citation1,Citation2].

Educators need to assess the level of clinical reasoning skills before the start of training in order to formulate the right teaching strategy. Clinical reasoning can be assessed using various methods: multiple-choice questions (MCQ), open short answer questions (OSAQ), key feature tests, oral examinations, the script concordance test (SCT), long case examinations, the mini-clinical evaluation exercise (mini-CEX), and portfolios [Citation2]. Among these methods, the SCT, developed and validated in recent years, can objectively assess clinical reasoning ability. Other tests assess only the factual knowledge acquired during education, whereas the SCT can assess the ability to manage knowledge and apply clinical reasoning [Citation4].

In anesthesia and intensive care, many situations require the use of clinical reasoning skills to make quick and appropriate decisions in emergency conditions. Airway emergencies lead to morbidity and mortality if not managed promptly. Continuous evaluation of residents’ clinical reasoning ability helps plan interventions to improve that ability and provide the best clinical care for all patients [Citation1,Citation4]. Most residency education centers have applied multistep training for airway management. Our anesthesiology residency program at the Faculty of Medicine Universitas Indonesia, based in Cipto Mangunkusumo Hospital, provides education and training in airway management through lectures, workshops, examinations, and bedside teaching. Airway lectures were given in a month-long module, with discussions held every week. Each month, the Department of Anesthesiology and Intensive Care (Department) organized an airway day, on which residents applied airway management techniques and advanced devices to patients in the elective operating room. Airway workshops and theoretical examinations were consistently held every three months. Even though this multistep training proved to help hone the skills needed in airway management, assessing residents’ clinical reasoning remains a challenge. Thus, the SCT could provide the instrument needed to evaluate clinical reasoning.

To date, several fields of medicine have applied the SCT to assess clinical reasoning in residents [Citation5–12]. However, there is still a lack of studies assessing the SCT for clinical reasoning in anesthesiology residents. One study assessed clinical reasoning in anesthesiology residents in general, but not specifically for airway management [Citation5]. This study aims to evaluate the SCT’s performance in assessing clinical reasoning for airway management in anesthesiology residents.

Material and methods

Study design

This cross-sectional study was performed in November 2021 using an SCT distributed via Google Forms to participants recruited from the Faculty of Medicine Universitas Indonesia, Cipto Mangunkusumo Hospital, Jakarta, Indonesia.

Inclusion and exclusion criteria

The inclusion criteria were residents in at least their second semester and consultants with a minimum of five years of experience. The only exclusion criterion was refusal of an anesthesiology resident or consultant to participate in the study.

Script concordance test

Vignettes were written based on incidence reports and airway-related cases in Cipto Mangunkusumo Hospital over the past ten years. The cases had been analyzed comprehensively by experts through root cause analysis of their causes and solutions. Previous studies found that it is necessary to sample questions broadly [Citation13,Citation14], and that using fewer cases with three questions per case improves reliability [Citation13]. We therefore constructed an SCT consisting of 20 vignettes (20 clinical cases), each with 3 statements (a total of 60 statements). The questions were categorized as assessment, investigation, and management. A panel of five experienced anesthesiologists with more than 15 years of experience constructed the vignettes and the list of questions for each vignette using the focus group discussion method to achieve expert consensus, ensuring face and content validity. An example of an SCT question is shown in Table 1. Participants were anesthesiology residents from three different levels of clinical experience. Ten anesthesiology consultants with a minimum of five years of experience were selected as the ‘expert’ group. The SCT, containing 60 questions, was distributed using Google Forms, to be completed within 120 minutes. Responses from residents and experts were compared and analyzed.

Table 1. An example of an airway management vignette with Script Concordance Test (SCT) items.

Scoring system

The scoring involved comparing residents’ answers with those of the experts. A five-anchor Likert scale was constructed from the answers provided by the expert group. Initial SCT studies used seven-anchor Likert scales, but this was found not to be beneficial [Citation15]; thus, five-anchor Likert scales (−2, −1, 0, +1, +2) were used in this study. For each answer, the credit is the number of panel members who chose that answer, divided by the number who chose the modal answer for the question. The answer that received the greatest number of votes from the experts was rated 1, other chosen answers were rated as a fraction, and answers not chosen by any expert were rated 0 [Citation1,Citation15]. For example, if on one question 6 experts out of 10 had chosen +1, a resident choosing +1 would receive 1 point (6/6). If 4 experts had chosen +2, a resident choosing +2 would receive 0.67 points (4/6). Residents choosing −1, −2, or 0 would receive 0 points.
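The aggregate scoring rule above can be sketched as a small function (a minimal sketch; the function name and data layout are illustrative, not from the study):

```python
from collections import Counter

def sct_item_score(expert_answers, resident_answer):
    """Score one SCT item by the aggregate scoring method.

    expert_answers: Likert choices (-2..+2) given by the expert panel.
    resident_answer: the resident's choice for the same item.
    Credit = (number of experts who chose that answer) divided by the
    count of the modal expert answer, so the modal answer earns 1 and
    an answer no expert chose earns 0.
    """
    counts = Counter(expert_answers)
    modal_count = max(counts.values())
    return counts.get(resident_answer, 0) / modal_count

# Worked example from the text: 6 of 10 experts chose +1, 4 chose +2.
experts = [+1] * 6 + [+2] * 4
print(sct_item_score(experts, +1))            # 1.0
print(round(sct_item_score(experts, +2), 2))  # 0.67
print(sct_item_score(experts, -1))            # 0.0
```

A resident's total is then the sum of item credits over all 60 items, usually rescaled to a percentage.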

Participants

During their residency, residents were trained in airway management according to their level of training and had to qualify before proceeding to the next level. ‘Junior’ residents received basic airway management training and could perform basic airway management under the direct supervision of an anesthesiology consultant. After passing the basic airway management theory examination, they advanced to the intermediate level of residency. ‘Intermediate’ residents were allowed to perform basic airway management with minimal supervision, but could perform difficult airway management only under the direct supervision of an anesthesiology consultant. Intermediate residents had to pass the practical examination in basic airway management and the difficult airway theory examination before proceeding to the next level. ‘Senior’ residents were deemed capable of performing basic airway management without supervision but still needed to report to an anesthesiology consultant before and after the procedure. For difficult airway management, they were allowed to perform procedures only under supervision. Difficult airway management training in our residency program included the use of the video laryngoscope, fiberoptic intubation, and surgical airway management.

Expert

Prior studies have shown that 10 to 20 experts are needed to ensure a study’s validity, with little gain from recruiting more than 20 [Citation15–17]. Therefore, ten anesthesiologists with a minimum of 5 years of experience (‘experts’) were selected to participate from the same institution as the residents. The experts answered the questionnaire under the same conditions as the residents.

Ethics approval and consent to participate

The study protocol was approved by the Ethics and Research Committee of Universitas Indonesia (1181/UN2.F1/ETIK/PPM.00.02/2021; protocol no: 20–11-1215; approval date: December 6th, 2021). Written informed consent to participate was obtained from each participant.

Sample size

The required sample size for this cross-sectional study was 92. The sample size was calculated using the validity test formula as follows:

n = [(Zα + Zβ) / (0.5 ln[(1 + r)/(1 − r)])]² + 3

Legend:

n = required sample size

α = type 1 error, 5%

β = type 2 error, 10%

r = minimum correlation coefficient that is considered valid, 0.3
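Plugging the stated values into the formula reproduces the required sample size (a quick check; the use of a one-sided Zα = 1.645 for the 5% type 1 error is our assumption, since a two-sided value would give a larger n):

```python
import math

# Fisher z-based sample size for detecting a minimum correlation r.
z_alpha = 1.645   # one-sided, alpha = 5% (assumption; see lead-in)
z_beta = 1.282    # power 90%, beta = 10%
r = 0.3           # minimum correlation coefficient considered valid

c = 0.5 * math.log((1 + r) / (1 - r))   # Fisher z-transform of r
n = ((z_alpha + z_beta) / c) ** 2 + 3
print(round(n))  # 92
```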

Outcome assessment

The study assessed the SCT results of each participant group and performed a subgroup analysis based on the components of the SCT: assessment, investigation, and management. We also assessed the survey results across three categories: fidelity, reliability, and clarity of the test.

Statistical analysis

The data obtained were analyzed using the Statistical Package for the Social Sciences (SPSS), version 26. Categorical data were presented as numbers and percentages (n (%)). Numerical data were presented as mean ± standard deviation if normally distributed or as median (minimum–maximum) if skewed. Depending on the data distribution, Student’s t-test or the Mann-Whitney test was used to compare two groups, and ANOVA or the Kruskal-Wallis test to compare more than two groups. Results were considered significant if the p-value was < 0.05. Reliability was assessed with Cronbach’s alpha, with values above 0.6 deemed to indicate good reliability.
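The reliability coefficient used here, Cronbach's alpha, can be computed directly from an items × respondents score matrix (a minimal numpy sketch; the example scores are illustrative, not study data):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a matrix of shape (n_items, n_respondents).

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    where k is the number of items. In this study, values above 0.6
    were treated as good reliability.
    """
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[0]
    item_var_sum = x.var(axis=1, ddof=1).sum()   # per-item variances
    total_var = x.sum(axis=0).var(ddof=1)        # variance of totals
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

# Illustrative 3-item, 5-respondent matrix of SCT item credits:
scores = [[1.00, 0.67, 0.00, 1.00, 0.67],
          [1.00, 1.00, 0.00, 0.67, 0.33],
          [0.67, 1.00, 0.33, 1.00, 0.00]]
print(round(cronbach_alpha(scores), 2))  # 0.85
```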

Results

We enrolled 109 subjects: 99 anesthesiology residents and ten anesthesiology consultants. The demographics of the participants are presented in Table 2. Mean age was similar between the groups; however, most participants in the intermediate and senior resident groups were male. Regarding the reliability of the test, Cronbach’s alpha analysis yielded a value of 0.696 for our SCT, which was considered good reliability.

Table 2. Demographics of anesthesiology fellow residents.

SCT results of the junior residents, intermediate residents, senior residents, and the expert group were compared using the Kruskal-Wallis test, as shown in Table 3. The analysis showed statistically significant differences in SCT results between the four groups (H(3) = 44.49, p < 0.001). Mann-Whitney tests were conducted to determine whether SCT results differed between resident levels, defined by semester and supervision level; the results indicated statistically significant differences between each pair of groups (p < 0.05). We also performed a subgroup analysis of the SCT components using ANOVA. There were significant differences in mean scores between groups on the assessment component (F(3, 105) = 9.870, p < 0.001); Bonferroni post-hoc analysis found significantly different mean SCT scores between the junior–senior, junior–expert, and intermediate–expert groups. The investigation component also showed significant differences in mean scores between groups (F(3, 105) = 12.437, p < 0.001), with significantly different mean scores between the junior–senior, junior–expert, intermediate–expert, and senior–expert groups. Likewise, the management component showed significant differences between groups (F(3, 105) = 17.483, p < 0.001), with significantly different mean scores between the junior–senior, junior–expert, intermediate–expert, and senior–expert groups.

Table 3. SCT results and subgroup analysis.

Overall, 50.5% of participants agreed and 33.9% strongly agreed that the SCT was able to assess clinical reasoning and reflect their competency (Table 4). In addition, further analysis was performed on the fidelity, reliability, and clarity of the SCT (Table 5). Most participants agreed that the SCT portrayed real scenarios and had straightforward and precise questions.

Table 4. Survey of SCT on clinical reasoning assessment.

Table 5. Survey of fidelity, reliability and clarity of SCT.

Discussion

The SCT is primarily designed to evaluate clinical reasoning abilities [Citation18,Citation19]. It allows objective assessment of clinical reasoning in the context of uncertain situations, which other tests cannot [Citation4,Citation18].

In this study, the participants were divided into four groups: junior residents, intermediate residents, senior residents, and experts. Compared with the junior resident group, most participants in the intermediate and senior resident groups were male because there were more male residents at those levels in our residency program. Previous studies involving experts and different levels of residents in various specialties have found that the SCT is a valid tool to assess clinical reasoning abilities [Citation9–11,Citation20–23]. This study showed that our residency program training is directly proportional to the results of our SCT: the SCT was able to discriminate between groups of different clinical experience in anesthesiology residents, with the highest mean score found in the expert group. The ability of a test to differentiate levels of clinical reasoning between groups is a sign of satisfactory construct validity. The findings of this study also support the results of a previous study by Ducos et al. (2015) [Citation5], who found that SCT scores increase with level of experience and training. However, Ducos et al. designed their SCT to assess clinical reasoning in anesthesiology residents in general, while this study focused on evaluating airway management clinical reasoning in anesthesiology residents.

This is the first study to construct an SCT for airway management in anesthesiology residents in Indonesia. We assessed the reliability of the SCT using Cronbach’s alpha. The result, a Cronbach’s alpha of 0.696, indicates that the SCT is reliable for assessing clinical reasoning in airway management among anesthesiology residents. Previous studies have also shown good reliability of the SCT in various fields of medicine, with Cronbach’s alpha values of 0.63–0.80 [Citation12,Citation24,Citation25].

The test in our study was constructed by five anesthesiology consultants with more than fifteen years of experience using the focus group discussion method to ensure good face and content validity [Citation13]. Items were constructed from the prevalent cases in our hospital over the past 10 years. Scenarios built from high-intensity real-life cases offer residents authentic learning opportunities with no risk to actual patients [Citation26]. They also allow residents to more readily understand the situation when presented with the same case in real life. The key answers were provided by an expert group consisting of ten anesthesiology consultants with a minimum of five years of experience. At the end of the test, we asked the participants whether the SCT was suitable for assessing clinical reasoning skills. Most participants, from the junior to senior resident groups and the experts, agreed that the SCT was suitable for assessing clinical reasoning skills.

Given the multistep learning for airway case management in our institution, this study’s results show that the SCT allows assessment of clinical reasoning at the end of the learning curriculum. Objective measurement of clinical reasoning allows planning of further interventions for residents who need to improve their clinical reasoning abilities in airway management. Goldmann et al. (2005) [Citation27] mentioned that several approaches can be used to improve airway management clinical reasoning, such as workshops using manikins, human cadavers, animals, virtual reality airway simulators, and high-fidelity full-scale simulators. In our center, residents with low scores were given an additional course comprising case-based discussions and structured workshops, followed by further assessment. In the future, it would be interesting to extend this research by evaluating whether an increase in clinical reasoning helps the development of clinical skill, through studies that assess both clinical skill (using the OSCE or mini-CEX) and clinical reasoning (SCT).

Limitations

There were some limitations to this study. First, the questions were all in text form. Even though the questions in our SCT were easily understood by the participants, previous studies on the use of online SCTs found that integrating pictures and videos would enhance the SCT’s quality and display more realistic cases [Citation28–31]. Another limitation was the participants’ lack of familiarity with this type of question, especially since our center was the first to construct an SCT in Indonesia. Several studies have also mentioned the same problem [Citation8–10].

Conclusion

This study provides further evidence that the SCT is a valid and reliable tool to evaluate clinical reasoning abilities for airway management in anesthesiology residents. We suggest that the SCT could become a standard for medical institutions in Indonesia. The SCT could discriminate between groups of different clinical experience and can be used to identify residents in need of remediation in airway training based on their level of knowledge.

List of abbreviations

MCQ Multiple-Choice Questions

Mini-CEX Mini-Clinical Evaluation Exercise

OSAQ Open Short Answer Questions

SCT Script Concordance Test

SPSS Statistical Package for Social Sciences

OSCE Objective Structured Clinical Examination

Acknowledgments

We would like to thank all the staff at Cipto Mangunkusumo General Hospital and the Faculty of Medicine Universitas Indonesia for the continuous and unending support for this study.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The author(s) reported there is no funding associated with the work featured in this article.

References

  • Ducos G, Lejus C, Sztark F, et al. The Script Concordance Test in anesthesiology: validation of a new tool for assessing clinical reasoning. Anaesth Crit Care Pain Med. 2015;34(1):11–15.
  • Modi JN, Gupta P, Singh T. Teaching and assessing clinical reasoning skills. Indian Pediatr. 2015 Sep;52(9):787–794.
  • Amey L, Donald KJ, Teodorczuk A. Teaching clinical reasoning to medical students. Br J Hosp Med. 2017 Jul 2; 78(7):399–401.
  • Alkhadayel SA, AlSelaim NA, Magzoub ME, et al. Constructing a question bank based on script concordance approach as a novel assessment methodology in surgical education. BMC Med Educ. 2012;12:100–107.
  • Ducos G, Lejus C, Sztark F, et al. The script concordance test in anesthesiology: validation of a new tool for assessing clinical reasoning. Anaesth Crit Care Pain Med. 2015;34(1):11–15.
  • Gómez CI, Sequeros OG, Martínez DS. Clinical reasoning evaluation using script concordance test in primary care residents. Anales de Pediatría (English Edition). 2022 Aug 1;97(2):87–94.
  • Sibert L, Darmoni SJ, Dahamna B, et al. On line clinical reasoning assessment with Script Concordance test in urology: results of a French pilot study. BMC Med Educ. 2006 Dec;6(1):1–9.
  • Mathieu S, Couderc M, Glace B, et al. Construction and utilization of a script concordance test as an assessment tool for dcem3 (5th year) medical students in rheumatology. BMC Med Educ. 2013 Dec;13(1):1–7.
  • Lambert C, Gagnon R, Nguyen D, et al. The script concordance test in radiation oncology: validation study of a new tool to assess clinical reasoning. Radiat Oncol. 2009 Dec;4(1):1–6.
  • Iravani K, Amini M, Doostkam A, et al. The validity and reliability of script concordance test in otolaryngology residency training. J Adv Med Educ Prof. 2016 Apr;4(2):93.
  • Steinberg E, Cowan E, Lin MP, et al. Assessment of emergency medicine residents’ clinical reasoning: validation of a Script Concordance Test. West J Emerg Med. 2020 Jul;21(4):978.
  • Cooke S, Lemay JF, Beran T, et al. Development of a method to measure clinical reasoning in pediatric residents: the pediatric script concordance test. Creative Educ. 2016;7:814–823.
  • Fournier JP, Demeester A, Charlin B. Script concordance tests: guidelines for construction. BMC Med Inform Decis Mak. 2008;8:18.
  • Lubarsky S, Dory V, Duggan P, et al. Script concordance testing: from theory to practice: AMEE guide no. 75. Med Teach. 2013: 1–10.
  • Dory V, Gagnon R, Vanpee D, et al. How to construct and implement script concordance tests: insights from a systematic review. Med Educ. 2012 Jun;46(6):552–563.
  • Gagnon R, Charlin B, Coletti M, et al. Assessment in the context of uncertainty: how many members are needed on the panel of reference of a script concordance test? Med Educ. 2005 Mar;39(3):284–291.
  • Wan M, Tor E, Hudson JN. Improving the validity of script concordance testing by optimizing and balancing items. Med Educ. 2018;52:336–346.
  • Drolet P. Assessing clinical reasoning in anesthesiology: making the case for the Script Concordance Test. Anaesth Crit Care Pain Med. 2015 Feb 1; 34(1):5–7.
  • Nazim SM, Talati JJ, Pinjani S, et al. Assessing clinical reasoning skills using Script Concordance Test (SCT) and extended matching questions (EMQs): a pilot for urology trainees. J Adv Med Educ Prof. 2019 Jan;7(1):7.
  • Humbert AJ, Besinger B, Miech EJ. Assessing clinical reasoning skills in scenarios of uncertainty: convergent validity for a script concordance test in an emergency medicine clerkship and residency. Acad Emerg Med. 2011 Jun;18(6):627–634.
  • Nouh T, Boutros M, Gagnon R, et al. The script concordance test as a measure of clinical reasoning: a national validation study. Am J Surg. 2012 Apr 1; 203(4):530–534.
  • Piovezan RD, Custodio O, Cendoroglo MS, et al. Assessment of undergraduate clinical reasoning in geriatric medicine: application of a script concordance test. J Am Geriatr Soc. 2012 Oct;60(10):1946–1950.
  • Subra J, Chicoulaa B, Stillmunkès A, et al. Reliability and validity of the script concordance test for postgraduate students of general practice. Eur J Gen Pract. 2017 Oct 2;23(1):209–214.
  • Wan SH. Using the script concordance test to assess clinical reasoning skills in undergraduate and postgraduate medicine. Hong Kong Med J. 2015;21:455–461.
  • Roberti A, Roberti MRF, Pereira ERS, et al. Script concordance test in medical schools in Brazil: possibilities and limitations. Sao Paulo Med J. 2016;134(2):116–120.
  • Jenkins KD, Stroud JM, Bhandary SP, et al. High-fidelity anesthesia simulation in medical student education: three fundamental and effective teaching scenarios. Int J Acad Med. 2017;3(1):66.
  • Goldmann K, Ferson DZ. Education and training in airway management. Best Pract Res Clin Anaesth. 2005 Dec 1; 19(4):717–732.
  • Kania RE, Verillaud B, Tran H, et al. Online script concordance test for clinical reasoning assessment in otorhinolaryngology: the association between performance and clinical experience. Arch Otolaryngol Head Neck Surg. 2011 Aug 15;137(8):751–755.
  • Sibert L, Darmoni SJ, Dahamna B, et al. Online clinical reasoning assessment with the Script Concordance test: a feasibility study. BMC Med Inform Decis Mak. 2005 Dec;5(1):1.
  • Dubois JM, Michenaud C, Isidori P. A new way to assess medical competencies: the script concordance test (SCT) on line. Curr Dev Tech –Assis Edu. 2006: 1143–1147.
  • Hornos EH, Pleguezuelos EM, Brailovsky CA, et al. The practicum script concordance test: an online continuing professional development format to foster reflection on clinical practice. J Contin Educ Health Prof. 2013 Dec;33(1):59–66.