
Candidate perceptions of the UK Recorded Consultation Assessment: cross-sectional data linkage study

Pages 32-40 | Received 20 Mar 2021, Accepted 15 Aug 2021, Published online: 30 Aug 2021

ABSTRACT

The Recorded Consultation Assessment (RCA) was rapidly developed to replace the Clinical Skills Assessment (CSA) for UK general practice licensing during COVID-19. We aimed to evaluate candidate perceptions of the RCA and relationships with performance. We conducted a cross-sectional survey of RCA candidates with attitudinal, demographic, and free text response options, undertaking descriptive and factor analysis of quantitative data with qualitative thematic analysis of free text. Binomial regression was used to estimate associations between RCA pass, candidate characteristics and questionnaire responses.

645 of 1551 (41.6%) candidates completed a questionnaire; 364 (56.4%) of responders permitted linkage with performance and demographic data. Responders and non-responders were similar in exam performance, gender and declared disability, but responders were significantly more likely than non-responders to be UK graduates (UKG) rather than international medical graduates (IMG), and white rather than from an ethnic minority. Responders were positive about the digital platform and support resources. A small overall majority regarded the RCA as a fair assessment; a larger majority reported difficulty collecting, selecting, and submitting cases or felt rushed during recording.

Logistic regression showed that ethnicity (white vs minority ethnic: odds ratio [OR] 2.99, 95% confidence interval [CI] 1.23, 7.30, p = 0.016), training (UK vs IMG: OR 6.88, 95% CI 2.79, 16.95, p < 0.001), and English as first language (OR 5.11, 95% CI 2.08, 12.56, p < 0.001) were associated with exam success, but questionnaire subscales, consultation type submitted, and extent of trainer review were not. The RCA was broadly acceptable but experiences were variable. Candidates experienced challenges and suggested areas for improvement.

Introduction

COVID-19 has given rise to huge challenges for medical education and assessment at a time of considerable strain on the medical workforce [Citation1]. High-stakes licensing examination bodies have had to adapt objective structured clinical examinations (OSCEs) to reduce infection risk to candidates, examiners, patients or simulators [Citation2]. Licensing examinations continue to be urgently required to support the needs of doctors completing training and to ensure a safe, competent workforce.

Alternatives to traditional OSCEs include socially distanced [Citation3] or virtual OSCEs [Citation4,Citation5]. The Recorded Consultation Assessment (RCA) was introduced to replace the OSCE-style Clinical Skills Assessment (CSA), part of the UK Membership of the Royal College of General Practitioners (MRCGP) licensing examination for general practice. It was preferred because it built on experience of a previous video examination that preceded the CSA [Citation6], and because the alternatives were considered impractical to develop and implement within a short timescale. The RCA formed a component of the three-part MRCGP alongside the Applied Knowledge Test and Workplace-Based Assessment.

The RCA was developed in May 2020 by a specially convened team of educational, technical, assessment and psychometric experts to assess GP specialty trainees’ ability to integrate and apply clinical, professional, communication and practical skills for general practice [Citation7]. Following an initial technical pilot of the process and technology involved (phase 1, involving 13 candidates), a further pilot (phase 2) was carried out with 1,551 actual candidates in July-August 2020. GP trainees in Specialty Training Year 3 or later of full-time or flexible training programmes could, where appropriate, attempt the assessment.

Many trainees were due to take their CSA in April/May 2020 to complete training, receive a licence to practise, and enter the qualified GP workforce by August 2020. With the temporary suspension of the CSA because of the pandemic in March 2020, they would have been prevented from such progression without the development of the RCA.

The assessment involved each candidate providing 13 examples of patient consultations recorded on, or uploaded to, a specially customised online information technology platform (FourteenFish: https://www.fourteenfish.com/partners/rcgp) for viewing and assessment by examiners.

Unlike the standardised cases designed by examiners for the CSA, which used simulators (actors) at a test centre, the RCA used real patient consultations carried out and selected by candidates in their working environment during the pandemic. For each consultation, candidates could choose video (online or face-to-face contact) or audio (telephone contact) recordings. The requirement for 13 consultations was set to align with the CSA, which includes 13 face-to-face encounters, sometimes with one telephone consultation as an alternative. Because lower inter-case agreement than in the CSA was envisaged, each consultation was double marked, providing 26 separate evaluations by different examiners for each candidate (13 consultations, each marked twice).

In view of the lack of research into changes in licensing assessments due to the pandemic [Citation2], and the desirability of studying this major change to the assessment, we aimed to evaluate candidate experiences and perceptions of the phase 2 RCA pilot, and to link these where possible to candidates’ examination performance.

Methods

Research question and aim

Research question: What were the experiences and perceptions of candidates taking the RCA, how did they think the exam could be improved, and how did these relate to their exam performance?

Aim: To explore the experiences and perceptions of candidates taking the RCA and investigate the relationships between experiences and exam performance.

Design

We used a cross-sectional survey employing a specifically designed online questionnaire with quantitative and free text responses and linked this to performance data.

Theory

Our theoretical position was post-positivist, adopting a critical realist stance which acknowledges that observations are imperfectly and probabilistically understood in light of other factors such as experience, culture and social norms [Citation8]. This accords with the questionnaire variant of the mixed methods convergent design that we used to gather and analyse both quantitative and free text (qualitative) data from the survey [Citation9].

Questionnaire construction

An online questionnaire (Box 1) was developed for the evaluation by members of the team with experience in survey design (PC, AF, MD, AS), drawing on previous surveys of the exam. It consisted of a ‘balanced’ mix of 11 positively and negatively framed experience and attitudinal items, scored on a scale from 0 (strongly disagree) to 4 (strongly agree), which asked examinees their views on: the ease of collecting, recording, submitting and uploading consultations; whether consultations reflected the variety of GP work across the curriculum; the online platform; the helpfulness of the exam handbook and frequently asked questions (FAQs); and perceived test fairness. There were also general questions on whether consultations were mainly conducted remotely via audio, remotely via video, face-to-face, or a mixture of these; whether their trainer reviewed consultations before submission; and demographic questions on candidate gender and whether English was their first language. A free text option asked: ‘Do you have any additional comments?’

Participants

The survey was offered to all specialty trainees in general practice who undertook the RCA. Candidates completed the questionnaire on their perceptions of the RCA on a voluntary basis after they had submitted their recordings but before they received their results. Written consent was sought and obtained to link candidates’ questionnaire data with their test performance and demographic data.

Performance and demographic data

RCA performance data (pass/fail; mark scaled to the pass mark) and demographic data collected on application to the examination, including attributes such as gender, place of primary medical qualification, ethnicity and specific learning difficulties (e.g. dyslexia), were provided by the examination department. These were linked for those candidates agreeing to this, but were available for the remainder of the cohort for comparative purposes.

Statistical analysis

We treated ordinal (Likert scale) responses as interval (0–4) data to facilitate statistical analysis. Descriptive statistics were used to summarise responses (Table 1). Negatively framed items (Q2, 3, 5, 8a in Box 1) were recoded so that higher scores indicated more positive attitudes. Box plots were used to show medians and interquartile ranges (IQR) for each question and questionnaire subscale; these steps are sketched after Table 1.

Table 1. Responses to 11 experience and attitudinal questionnaire items
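To make the recoding concrete, here is a minimal sketch in Python (the analysis in the paper was done in Stata 15.1; the item names and toy data below are hypothetical, for illustration only):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Toy data: rows are candidates, columns are Likert items scored 0-4
# (0 = strongly disagree ... 4 = strongly agree). Names are illustrative.
df = pd.DataFrame({
    "q1": [4, 3, 2, 4], "q2": [0, 1, 2, 1], "q3": [1, 0, 3, 0],
    "q5": [0, 2, 1, 1], "q6": [3, 4, 2, 3], "q8a": [1, 1, 0, 2],
})

# Reverse-score the negatively framed items (Q2, 3, 5, 8a) so that
# higher scores consistently indicate more positive attitudes.
negative_items = ["q2", "q3", "q5", "q8a"]
df[negative_items] = 4 - df[negative_items]

# Per-item medians and interquartile ranges, as summarised in Table 1.
print(df.quantile([0.25, 0.5, 0.75]))

# Box plots of item scores, as in Figure 1.
ax = df.plot.box()
ax.set_ylabel("Score (0-4, higher = more positive)")
plt.show()
```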

Exploratory factor analysis using varimax rotation was conducted after checking that factor analysis was applicable [Citation10], retaining factors with eigenvalues greater than 1. Cronbach’s alpha was calculated for the identified factors (subscales). Subscale scores according to candidate gender, language and place of primary medical qualification were reported and compared using non-parametric Kruskal-Wallis tests.
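A sketch of this factor-analysis pipeline, under the same caveats (the Python factor_analyzer and scipy packages stand in for Stata, and the item matrix is random toy data, so the resulting loadings are meaningless):

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)
from scipy import stats

rng = np.random.default_rng(0)
# Toy item matrix: 100 candidates x 11 items scored 0-4 (illustrative only).
df = pd.DataFrame(rng.integers(0, 5, size=(100, 11)),
                  columns=[f"item{i}" for i in range(1, 12)])

# Applicability checks: Bartlett's test of sphericity and the KMO measure.
chi2, p_bartlett = calculate_bartlett_sphericity(df)
_, kmo_total = calculate_kmo(df)

# Unrotated extraction to count eigenvalues greater than 1.
fa0 = FactorAnalyzer(rotation=None)
fa0.fit(df)
eigenvalues, _ = fa0.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

# Re-fit with varimax rotation, retaining that number of factors.
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
fa.fit(df)
print(fa.loadings_)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the items forming one subscale."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

# Kruskal-Wallis comparison of a (hypothetical) subscale score between groups.
subscale = df.iloc[:, :4].sum(axis=1)
group = rng.integers(0, 2, size=len(df))  # e.g. 0 = IMG, 1 = UKG
h, p_kw = stats.kruskal(subscale[group == 0], subscale[group == 1])
```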

Frequencies and percentages of passing the exam across categories of gender, disability, ethnicity, and training (i.e. UK graduates versus International Medical Graduates, IMGs) were compared using the chi-squared test for exam outcome (pass/fail), and t-tests or one-way ANOVAs for the overall score obtained in the exam.
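These group comparisons could be sketched as follows (scipy again standing in for Stata; all variables are simulated placeholders):

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
# Toy data: pass/fail outcome, overall score, and a training indicator.
d = pd.DataFrame({
    "passed": rng.integers(0, 2, 200),
    "score": rng.normal(0, 10, 200),   # mark scaled to the pass mark
    "ukg": rng.integers(0, 2, 200),    # 1 = UK graduate, 0 = IMG
})

# Chi-squared test of pass/fail by training (a 2x2 contingency table).
table = pd.crosstab(d["ukg"], d["passed"])
chi2, p, dof, expected = stats.chi2_contingency(table)

# t-test of the overall score between the two training groups.
t, p_t = stats.ttest_ind(d.loc[d.ukg == 1, "score"],
                         d.loc[d.ukg == 0, "score"])

# One-way ANOVA generalises this to factors with more than two categories:
# f, p_f = stats.f_oneway(scores_group1, scores_group2, scores_group3)
```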

Binomial logistic regression was conducted to estimate the association between RCA outcome (pass or fail) and candidate gender, disability, ethnicity, training, number of attempts, questionnaire subscales, consultation type, and trainer review as predictors.
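A minimal sketch of this regression and the odds-ratio computation reported in the Results (statsmodels in Python rather than Stata; the column names and simulated data are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 364  # responders who agreed to data linkage
d = pd.DataFrame({
    "passed": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "disability": rng.integers(0, 2, n),
    "white": rng.integers(0, 2, n),
    "ukg": rng.integers(0, 2, n),
    "english_first": rng.integers(0, 2, n),
    "subscale_experience": rng.normal(0, 1, n),
})

# Binomial logistic regression of pass/fail on candidate characteristics
# and questionnaire subscales.
model = smf.logit(
    "passed ~ female + disability + white + ukg + english_first"
    " + subscale_experience",
    data=d,
).fit()

# Exponentiated coefficients give odds ratios; exponentiated confidence
# limits give their 95% CIs (e.g. OR 2.99, 95% CI 1.23, 7.30 in the paper).
ors = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
ors.columns = ["OR", "2.5%", "97.5%"]
print(ors)
```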

Data were analysed using the statistical software Stata 15.1.

Qualitative analysis of free text responses

We used thematic analysis for free text responses supported by NVivo 12. Responses were coded and organised into themes, initially based on the questionnaire domains, by three researchers experienced in qualitative research (DL, VP, AS) using a multistage approach: (1) line by line coding of free text responses independently by DL and VP (2) organisation of codes into categories through discussion (DL, VP, AS), and (3) grouping of categories into themes and consensus through discussion (AS, supported by DL and VP).

Results

Responders

Overall, 645 of 1551 (41.6%) RCA candidates in July and August 2020 completed a questionnaire: 350 (54.3%) were female and 276 (42.8%) male; 365 (56.6%) were UK Graduates (UKG) and 261 (40.5%) International Medical Graduates (IMG); 412 (63.9%) had English as a first language and 210 (32.6%) another first language (missing 23, 3.6%). Most responders reported that their trainers reviewed some (240, 37.2%) or all (376, 58.3%) of their cases, providing a brief (395, 61.2%) or detailed (219, 34.0%) review.

Responders were not significantly different from non-responders in their pass rate (77.9% vs 76.1%, p = 0.70), gender (female 55.4% vs 52.5%, p = 0.35), or declaration of disability (12.8% vs 12.9%, p = 0.96), but ethnic minority candidates (49.3% of responders vs 60.8% of non-responders, p < 0.001) and IMGs (34.1% vs 42.3%, p = 0.005) were significantly less likely to complete the questionnaire.

Greater proportions of responders overall reported agreement with difficulty collecting, selecting, and submitting consultations demonstrating their knowledge and skills (Table 1, Figure 1). Most responders agreed that they felt rushed during recorded consultations. Most were satisfied with the online platform, finding it easy to record on it or transfer data from other platforms, and most agreed that the Handbook and FAQ materials were helpful.

Figure 1. Comparison of experience and attitude item scores. Boxplots for each item show the median, interquartile range, minimum, maximum and outliers. Negative items 2, 3, 5 and 8a were reverse scored; the scale midpoint was 2, so values higher than 2 indicate positive ratings.

Three factors were identified (Table S1): Assessment Experience, Resources and Support, and Digital Platform. Males, International Medical Graduates, and candidates who did not report English as their first language were significantly more positive about assessment experience, but did not differ in their perceptions of resources or the digital platform (Table S2).

Predictors of exam performance

A binomial logistic regression was run to understand the effect of candidate characteristics and experience on performance in the subset of 364 responders who agreed to link their survey and performance data (Table 2). Ethnicity, training, and English as first language were all significant predictors of exam pass (p < 0.05). None of the three questionnaire subscales, the type of consultation submitted, or the extent of trainer review significantly predicted passing the exam.

Table 2. Multivariable logistic regression showing factors associated with RCA pass

Free text themes

The question, ‘Do you have any additional comments?’ elicited 198 responses (denoted candidate C1-198 in the example quotes), which enabled integration of quantitative and qualitative findings. Four themes were identified: the first three reflected the factors identified in the quantitative analysis and were used to seek deeper understanding of them (assessment experience; resources and support; digital platform); the fourth comprised suggestions for improvement (see supplementary data and Table S3).

Theme 1: Assessment experience

Perceptions of the RCA compared with the CSA

Some responders were positive about the RCA, whereas others were more qualified, ambivalent or negative, perceiving it as unstandardised and inconsistent.

‘I feel the RCA is a good alternative to the CSA during the current pandemic but more time at least 2–3 months is needed to collect appropriate recordings. I think there should be more discussion about the consultation models that are appropriate for remote consulting.’ C254.

Reflects work and skills vs unrepresentative

One responder felt that the RCA reflected day-to-day practice and skills and was a fairer reflection of competencies, but others felt cases were unrepresentative because of the pandemic.

‘I think the main issue with difficulty in demonstrating skill, curriculum coverage and complexity for RCA was shortage of time building up to this particular first sitting rather than the method of examination per se.’ C56.

Difficulties finding, consenting, consulting and selecting cases

Lack of time led to difficulties demonstrating skills, competence, curriculum coverage and complexity.

‘To collect 13 consultations with all criteria sets from RCGP was difficult.’ C11.

Candidates described problems with consent particularly for older patients and those with more sensitive (e.g. mental, sexual or child health) problems.

‘Mental health patients very rarely consented to recording - making getting this ‘domain’ was difficult for me.’ C190.

Fairness overall and compared to CSA vs unfair

Some felt the RCA was unfair because of variation in patients, practice and trainer support, time allowed, and practice characteristics, e.g. ethnicity, language barriers, deprivation, small practice size.

‘A very difficult exam to do when working in a practice with high deprivation and poor English, and during a pandemic where patient demand was so variable, at times we had only <30% appointment slot usage.’ C248.

Impact on individuals, training, work, and patients

Adverse effects of exam consultations on patient care and a negative impact on interpersonal skills were perceived due to time and exam constraints.

‘10 minutes is difficult if RCGP also wish the case to have adequate complexity and challenge- interpersonal skills often suffer if interrupting the patient and rushing to finish in 10 minutes.’ C215.

Theme 2: Resources and support

Organisational problems with exam developed at short notice

There were many complaints of insufficient time to gather and submit cases, worsened by last-minute information and deadlines.

‘The difficulty finding a variety of consultations to show my skills isn’t a reflection on the exam but the current pandemic and the different way in which we are working. My main issue with the exam was how last minute all the information and confirmed deadline date was (given that I work part time and had lots of annual leave booked).’ C40.

Logistic, equipment and exam barriers and costs

Examination and equipment costs were felt to be high, adding to trainee burden and stress, particularly when additional charges were made for consultations that exceeded a specified (10-minute) time period.

‘Cost is absolutely ridiculous given we are doing all the work.’ C165.

‘I don’t think it was fair to charge us for using extra minutes.’ C89.

Guidance and communication problems

Contradictory, unclear, changing or late guidance was considered problematic. Some educational supervisors (ESs) were unable to support candidates.

‘Our ESs also did not really have any additional info or support they could offer, as they had no information or experience.’ C152.

Theme 3: Digital platform

Many were happy with the online platform, but others had access or technical problems.

‘Fourteen fish platform surprisingly good but occasionally did weird things like delete my face off a video consult or drop audio in video consult.’ C122.

Theme 4: Suggestions for improvement

Responders requested more time to record and submit cases, more time per case and overall, and the option to submit additional information (e.g. photos).

‘12–15 minutes is more realistic for RCA consultations since we are dealing with real patients.’ C34.

Clearer guidance was needed on procedures, case suitability and marking. Fairness for IMGs and less-than-full-time trainees was considered important. More detailed feedback from the exam and trainers was requested.

‘It would have been nice to get an email confirmation when they were submitted. I know the FourteenFish site said they were, but I would have felt more reassured by an RCGP confirmation email. Perhaps I was just being paranoid, but I didn’t feel exactly sure it had gone through ok as I assumed I would get an email to confirm it!’ C83.

Discussion

Main findings

Responders were generally positive about the resources, support, and digital platform, but were less positive about their experience of the RCA. A small majority felt the RCA was a fair assessment of clinical skills. A large majority reported difficulty selecting and submitting cases or felt rushed during recording. Primary medical training in the UK, white ethnicity, and English as a first language were associated with passing the exam. Exam experience (questionnaire subscales), type of consultation submitted, and extent of trainer review did not significantly predict passing the exam, possibly due to the smaller dataset at this early stage in the RCA.

Analysis of free text responses allowed quantitative findings to be explored in greater depth and provided a more mixed picture of candidates’ experiences. Despite responders’ positive perceptions of the digital platform, there were reports of technical problems, inevitable with a new online system. Positive perceptions of resources and support for the RCA contrasted with reports of contradictory or late guidance, insufficient time to gather cases, and logistic, equipment and cost barriers. Responders, particularly in smaller practices or where there were more patients with socioeconomic deprivation or language barriers, expressed difficulty accessing and submitting cases. Negative impacts on trainees, training, work, and patients were described. Candidate suggestions for improvement included increased time to record and submit cases, greater time per case, and allowing submission of supporting information. Improvements in guidance, support, feedback and to the online platform were also advocated.

Strengths and limitations

Key strengths were a good response rate, the integration of quantitative and qualitative findings, and linkage with performance data. Ethnic minority candidates and International Medical Graduates were less likely to complete the questionnaire, but the reasons for this were not clear. The questionnaire was designed rapidly and administered without prior qualitative work, piloting or psychometric evaluation, but the survey showed good validity (e.g. good response rate, low levels of missing data) and reliability, and the qualitative responses were used to support, expand on and validate the quantitative findings [Citation9]. The final logistic regression model was limited by the lower number of responders agreeing to data linkage.

Comparison with existing literature

COVID-19 has profoundly affected medical training, as it has many other aspects of life, including application and interview processes, clinical practice, and high-stakes assessments [Citation1,Citation11]. The rapid changes required to clinical examinations, normally delivered as OSCEs, reflect the challenges of infection risk to candidates, examiners and (simulated) patients, the validity and reliability requirements of assessment, and acceptability to stakeholders.

Some programmes have suspended licensing clinical assessments [Citation12] which, in the case of the United States Medical Licensing Examination, has led to anxiety and burnout among trainees [Citation13]. The challenges of the pandemic have provided opportunities for disruptive innovation, with assessments adapted to the new situation [Citation1,Citation14]. Socially distanced [Citation3] or remote (tele-) OSCEs have been shown to perform as well as standard OSCEs [Citation5].

This study shows that, despite responders’ hope that the exam would be less prone to differential attainment, there were differences in attainment for minority ethnic candidates and International Medical Graduates, as found in the CSA [Citation15]. Although male gender and disability were associated with lower odds of passing, these differences were not significant, in contrast to differences found in the CSA; this may have been because of low numbers [Citation15,Citation16]. An important finding was the association of English as a first language with an increased chance of passing; language has been identified as a possible cause of differences in performance in licensing exams [Citation17].

Since the RCA was developed using experience of a previous video exam that preceded the CSA, it incorporated elements [Citation6] of the previous assessment [Citation18], while increasing case numbers from 7 in the previous video exam to 13 in the RCA to enhance reliability and precision. Neither positive perceptions of the exam, nor the consultation format submitted, nor reported trainer review of consultations was associated with passing, despite concerns about the possible differential effects of educator support.

Implications for policy, practice, and research

This study provides evidence that the RCA was a feasible alternative to the CSA during the COVID-19 pandemic, but also shows areas for improvement. The study focussed on the early sittings of the RCA, and the assessment has been modified subsequently. Guidance on case selection (including the introduction of mandatory case criteria linked to clinical topic areas), support, and feedback have been updated; the online submission platform has been improved; and the time allowed for each case has been increased from 10 to 12 minutes [Citation7].

A fundamental review of future options for performance assessments for the MRCGP is currently under way. A detailed assessment of differential attainment in the RCA has been submitted to the UK medical regulator (General Medical Council) and will be published separately (personal communication).

Further studies are needed to evaluate the validity, reliability, and precision of this assessment as it is revised and improved, from the perspective of candidates, examiners, and educators and from analyses of candidates’ performance.

Conclusions

The RCA was implemented as a practical alternative to the CSA but had shortcomings perceived by candidates and showed areas for potential improvement and further evaluation.

Box 1 Candidate survey

What data are we collecting?

The results from this survey will be used to evaluate the pilot of the Recorded Consultation Assessment (RCA) only. This evaluation will be conducted by the RCGP. This survey will not be used to evaluate or assess you as a trainee or the progress you are making in the training programme.

This survey is designed to be anonymous and as such, we ask that you do not provide any personal data or identifying information in any of the ‘free text’ answers.

The data you provide will only be used for the purpose outlined above and stored by the RCGP (https://www.rcgp.org.uk/terms-and-conditions/privacy-statement.aspx) in compliance with all relevant Data Protection laws. This survey is designed with, and hosted by, Online Surveys (run by Jisc). Online Surveys is fully compliant with all UK data protection laws and details on the way they store and process the data you provide can be found here: https://www.onlinesurveys.ac.uk/terms-and-conditions/

Survey

In the survey below you will see a series of statements. Please indicate the extent to which you agree or disagree with each statement. You will have the opportunity to provide free text comments at the bottom of this page.

1. I found it easy to submit consultations reflecting the variety of a GP’s work.

Strongly agree │ Agree │ Neutral (neither agree nor disagree) │ Disagree │ Strongly disagree

2. I found it difficult to collect 13 consultations which demonstrated my skills appropriately.

Strongly agree │ Agree │ Neutral (neither agree nor disagree) │ Disagree │ Strongly disagree

3. I found it difficult to submit consultations representing a good sample across the curriculum.

Strongly agree │ Agree │ Neutral (neither agree nor disagree) │ Disagree │ Strongly disagree

4. I found it easy to collect consultations with an appropriate level of challenge for the RCA.

Strongly agree │ Agree │ Neutral (neither agree nor disagree) │ Disagree │ Strongly disagree

5. With some consultations, I felt rushed and ran out of time.

Strongly agree │ Agree │ Neutral (neither agree nor disagree) │ Disagree │ Strongly disagree

6. I feel the RCA was a fair examination of my clinical skills.

Strongly agree │ Agree │ Neutral (neither agree nor disagree) │ Disagree │ Strongly disagree

7. I found it easy to record consultations using the FourteenFish platform.

Strongly agree │ Agree │ Neutral (neither agree nor disagree) │ Disagree │ Strongly disagree

8. Did you use any other platform/method other than the FourteenFish platform to record some/all of your consultations?

Yes – I used another platform │ No – I only used the FourteenFish platform

8.a. It was difficult to transfer my recordings onto the FourteenFish platform.

Strongly agree │ Agree │ Neutral (neither agree nor disagree) │ Disagree │ Strongly disagree

9. The consultations I submitted were:

Mainly remote (audio) │ Mainly remote (video) │ Mainly face to face │ A mixture of these

10. The Candidate Handbook was helpful in preparing me for the RCA examination.

Strongly agree │ Agree │ Neutral (neither agree nor disagree) │ Disagree │ Strongly disagree

11. The RCA Frequently Asked Questions (FAQ) were helpful in preparing me for the RCA examination.

Strongly agree │ Agree │ Neutral (neither agree nor disagree) │ Disagree │ Strongly disagree

12. Did your trainer review any of the consultations before submission?

All submissions │ Some submissions │ No submissions

12.a. For those submissions that your trainer reviewed, in general, what was the nature of the review?

Detailed discussion │ Brief review

13. Do you have any additional comments?

14. With which gender identity do you most identify?

Male │ Female │ Other

14.a. If you answered ‘other’, you can specify if you wish:

15. Were you at Medical School in the UK?

Yes │ No

16. Is English your first language?

Yes │ No

17. What is your GMC number?

Contributions

All authors contributed to the conception and design of the study, revision, and final approval of the paper. All authors agree to be accountable for all aspects of the accuracy and integrity of the study.

Ethical approval

Ethical approval was granted by the Research Ethics Committee of the University of Lincoln.


Acknowledgments

We are grateful to the MRCGP examinations department for supplying anonymised data. Our thanks to members of the Community and Health Research Unit for comments on the paper.

Disclosure statement

PC, AF, MD, and ANS have received funding from and are members of the panel of MRCGP examiners. RW is psychometrician to the RCA and has received funding from the RCGP in this role. DL, VP, VB and GL have declared no competing interests.

Supplementary material

Supplemental data for this article can be accessed here.

Additional information

Funding

Royal College of General Practitioners, UK.

References

  • Hauer KE, Lockspeiser TM, Chen HC. The COVID-19 pandemic as an imperative to advance medical student assessment: three areas for change. Acad Med. 2021;96(2):182–185.
  • Gordon M, Patricio M, Horne L, et al. Developments in medical education in response to the COVID-19 pandemic: a rapid BEME systematic review: BEME Guide No. 63. Med Teach. 2020;42(11):1202–1215.
  • Boursicot K, Kemp S, Ong T, et al. Conducting a high-stakes OSCE in a COVID-19 environment. MedEdPublish. 2020;9(1):54. DOI: 10.15694/mep.2020.000054.1.
  • Lawrence K, Hanley K, Adams J, et al. Building telemedicine capacity for trainees during the novel coronavirus outbreak: a case study and lessons learned. J Gen Intern Med. 2020;35(9):2675–2679.
  • Lara S, Foster CW, Hawks M, et al. Remote assessment of clinical skills during COVID-19: a virtual, high-stakes, summative pediatric Objective Structured Clinical Examination. Acad Pediatr. 2020;20(6):760–761.
  • Elfes C. Video component of the MRCGP. Practitioner. 2006;250(1687):66–68.
  • Recorded Consultation Assessment Candidate Handbook. [cited 2021 Jul 17]. Available from: https://www.rcgp.org.uk/training-exams/mrcgp-exam-overview/mrcgp-recorded-consultation-assessment.aspx
  • Lincoln YS, Lynham SA, Guba EG. Paradigmatic controversies, contradictions, and emerging confluences. In: Denzin NK, Lincoln YS, editors. The Sage handbook of qualitative research. 5th ed. London: Sage; 2017. p. 163–188.
  • Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. 3rd ed. Los Angeles; London: Sage; 2018.
  • Field AP. Discovering statistics using IBM SPSS statistics. 5th ed. Thousand Oaks, California: Sage Publications Inc.; 2018.
  • Pelletier-Bui A, Franzen D, Smith L, et al. COVID-19: a driver for disruptive innovation of the emergency medicine residency application process. West J Emerg Med. 2020;21(5):1105–1113.
  • Liesman DR, Pumiglia L, Kemp MT, et al. Perspectives from rising fourth year medical students regarding strategies to counteract the effects of COVID-19 on medical education. J Med Educ Curric Dev. 2020;7:2382120520940659.
  • Borsheim B, Ledford C, Zitelny E, et al. Preparation for the United States Medical Licensing Examinations in the face of COVID-19. Med Sci Educ. 2020;1–6. [Epub ahead of print]. DOI: 10.1007/s40670-020-01011-1.
  • Fuller R, Joynes V, Cooper J, et al. Could COVID-19 be our ‘There is no alternative’ (TINA) opportunity to enhance assessment? Med Teach. 2020;42(7):781–786.
  • Asghar Z, Williams N, Denney M, et al. Performance in candidates declaring versus those not declaring dyslexia in a licensing clinical examination. Med Educ. 2019;53(12):1243–1252.
  • Pope L, Hawkridge A, Simpson R. Performance in the MRCGP CSA by candidates’ gender: differences according to curriculum area. Educ Prim Care. 2014;25(4):186–193.
  • Pattinson J, Blow C, Sinha B, et al. Exploring reasons for differences in performance between UK and international medical graduates in the Membership of the Royal College of General Practitioners Applied Knowledge Test: a cognitive interview study. BMJ Open. 2019;9(5):e030341.
  • Siriwardena AN, Edwards AG, Campion P, et al. Involve the patient and pass the MRCGP: investigating shared decision making in a consulting skills examination using a validated instrument. Br J Gen Pract. 2006;56(532):857–862.