Research Article

A remedial intervention linked to a formative assessment is effective in terms of improving student performance in subsequent degree examinations

J. A. Cleland, R. K. Mackenzie, S. Ross, H. K. Sinclair & A. J. Lee
Pages e185-e190 | Published online: 30 Mar 2010

Abstract

Background: Intervention may help weaker medical students improve their performance. However, evidence for the effectiveness of remedial intervention is inconclusive due to the small sample sizes of previous studies. We asked: is remedial intervention linked to a formative assessment effective in terms of improving student performance in subsequent degree examinations?

Methods: This was a retrospective, observational study of anonymous databases of student assessment outcomes. Data were analysed for students due to graduate in the years 2005–2009 (n = 909). Exam performance was compared for students who received remediation versus those who did not. The main outcome measure was summative degree examination marks.

Results: After adjusting for cohort, gender, overseas versus home funding, previous degree and previous performance in the corresponding baseline third year summative exam, students receiving a remedial intervention (after poor performance on a formative objective structured clinical examination and written exams mid-fourth year) were significantly more likely to obtain an improved mark on end-of-fourth year summative written (p = 0.005) and OSCE (p = 0.001) exams compared to those students who did not receive remediation.

Conclusion: A remedial intervention linked to poor assessment performance predicted improved performance in later examinations. There is a need for prospective studies in order to identify the effective components of remedial interventions.

Introduction

A small proportion of medical students perform poorly on measures of clinical or academic performance. Several factors may contribute to poor performance, such as study skills and/or personal problems (Tooth et al. Citation1989; Cleland et al. Citation2005). The complex patterns of assessment in medicine mean that struggling students may continue with little guidance or support (Sayer et al. Citation2002), and supervising clinicians are often reluctant to fail under-performing students (Speer et al. Citation2000; Dudek et al. Citation2005; Cleland et al. Citation2008b). Thus, students’ learning problems remain unaddressed, leading to repeated failure and under-performance (Tooth et al. Citation1989; Cleland et al. Citation2005).

Weak students tend not to recognise their difficulties or seek support appropriately (Challis et al. Citation1999; Cleland et al. Citation2005; Langendyk Citation2006; Srinivasan et al. Citation2007; Sinclair & Cleland Citation2007), so the onus is on Faculty to intervene by identifying and addressing poor performance through remediation processes. Remediation can be defined as the act or process of correcting a deficiency. Remediation usually consists of three steps (diagnosis, remedial activities and re-testing), with different institutions placing different emphasis on, and taking a variety of approaches to, each step (Frellsen et al. Citation2008; Hauer et al. Citation2008). Most medical schools have a remediation process, usually one that has evolved over time on the basis of, for example, staff availability and interest and the nature of students’ difficulties, and that is flexible enough to be tailored to student needs. Whatever the specific approach, remediation processes place substantial time demands on Faculty (Sayer et al. Citation2002; Hauer et al. Citation2008), and this time can be difficult to find (Hauer et al. Citation2008).

Is remediation effective? Those studies that have evaluated performance after specific remedial input have usually found positive outcomes, but have had small sample sizes (Lavin & Pangaro Citation1998; Sayer et al. Citation2002; Denison et al. Citation2006). While providing some tentative evidence to support the effectiveness of remediation, these findings seem at odds with recent qualitative data showing that Faculty report uncertainty about the efficacy of remediation. Hauer et al.'s (Citation2008) participants admitted uncertainty about the effects of their schools’ remediation processes and raised concerns about the lack of rigorous outcome data, as they viewed re-tests post-remediation as easier than the original examinations. They also expressed uncertainty about how remedial students would perform in actual student–patient interactions. Moreover, different reasons for poor performance may not be equally amenable to change.

There is a clear need for studies focusing on the effectiveness of remedial intervention plans (Frellsen et al. Citation2008; Hauer et al. Citation2008). We approached this task via a retrospective, observational study of anonymous databases of student assessment outcomes. As per previous database studies from this group, ethics permission was considered unnecessary due to the anonymous nature of the assessment data (Cleland et al. Citation2008a). The aim of this article was to study the effects of remediation in medical education, and to promote discussion on the subject of improving quality in medical education research.

The intervention

Within our institution, fourth year medical students on a 5-year undergraduate medical degree undergo summative examinations at the end of each academic year, as well as a formative assessment mid-fourth year (December). The format of the summative and formative examinations is the same: written papers (Modified Essay Questions [MEQs], Extended Matching Questions [EMQs] and Multiple-Choice Questions [MCQs]) and an objective structured clinical examination (OSCE) (Harden & Gleeson Citation1979). These exams are standard-set using recognised methods and, in the case of the OSCE, have sufficient stations for reliability (Newble Citation2004).

Students who fail one or both of the formative exams are initially required to attend an individual Advisory Interview with two members of academic staff, who have access to details of the student's exam performance. At this interview, a standard pro forma is used to assess the aspects of the exam where the student has experienced difficulties and the reasons offered by the student for failing. The interviewers then consider what, if any, remedial action should be taken. In many cases, where students admit to insufficient preparation, the remediation may be merely an exhortation to work harder. However, staff have a range of resources and extra teaching which they can offer an individual student based on the problems identified. These include extra clinical skills and communication skills teaching, in simulated and/or ward environments. In cases where illness or personal difficulties are implicated, staff may suggest pastoral care or medical help, or give other practical advice. Students with disability issues may be referred to the university's student support services. Students considered at high risk of failing, or about whom significant concerns are raised (e.g., clinically significant levels of depression), are brought back for a 6-week follow-up, and their progress during the remainder of the academic year may be monitored.

This flexible approach to remediation, which includes diagnosis of the learner deficits, remedial activities and re-testing, is described in detail in Denison et al. (Citation2006), who also identified that it is deemed acceptable by staff and students. It reflects remedial education in other undergraduate and professional settings (e.g., Forrest et al. Citation1999; Hauer et al. Citation2008).

Methods

The study subjects were University of Aberdeen undergraduate medical students due to graduate in 2005–2009. Data on age at entry (mature students classed as over 20 years at entry), undergraduate or graduate entrance, gender, funding status (fees paid by UK or overseas sources), previous degree qualifications, intercalated degree status (traditionally, about 15% of Aberdeen medical students undertake an optional year additional to the basic 5-year undergraduate course to intercalate a further degree; Cleland et al. Citation2009) and summative examination results from third year and fourth year were routinely collected during the selection and degree assessment processes.

Marks are collected in the form of the Common Assessment Scale (CAS), a 21-point scale from 0 to 20 used for all assessments at the University of Aberdeen. Point 9 represents the minimum level of performance needed to pass and 20 indicates the best performance that can be expected from a student at the relevant level. The CAS is not a linear scale: in converting exam scores to CAS marks, there is no requirement that the same interval of raw marks should apply to each of the 21 CAS marks. CAS marks are grouped into bands, each with its own description: 18–20 outstanding, 15–17 very good, 12–14 good, 9–11 pass (borderline) and 0–8 fail. Poor performance was defined as failing (CAS 0–8) one or both of the formative exams (written and/or OSCE).
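
For illustration only, the band grouping can be expressed as a small lookup; this is a sketch in Python (not used by the authors), with the boundaries taken directly from the text:

```python
def cas_band(mark: int) -> str:
    """Map a 0-20 Common Assessment Scale mark to its descriptive band."""
    if not 0 <= mark <= 20:
        raise ValueError("CAS marks run from 0 to 20")
    if mark >= 18:
        return "outstanding"
    if mark >= 15:
        return "very good"
    if mark >= 12:
        return "good"
    if mark >= 9:
        return "pass (borderline)"
    return "fail"  # 0-8; failing either formative exam triggered remediation
```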

Data storage and statistical analysis were performed using SPSS version 16 (SPSS Inc., Chicago IL, USA). Chi-squared analysis was used to compare the demographic factors and performance of those students who did and did not receive a remedial intervention. Ordinal regression was then used to examine the influence of the intervention on the fourth year summative exam results after adjustment for potential confounders (baseline marks in the corresponding third year written or OSCE summative exam, cohort and any demographic factors found to be significantly associated with the intervention group on bivariate analysis). Overall, most students were in the higher CAS bands and a Cauchit link function was therefore deemed appropriate. Odds ratios were calculated for the dichotomous covariates by taking the exponential of minus one multiplied by the parameter estimate, with odds ratios above one indicating a higher CAS score in the summative exam for students in the category of interest compared to the respective base groups. The Nagelkerke pseudo R2 was also documented. This can take a value between 0 and 1 and is a marker of the improvement in goodness-of-fit of the current model over a model containing just the intercept term; if the independent variables in a model perfectly predicted the outcome, the Nagelkerke pseudo R2 would equal 1. Cross tabulations of CAS bands from the summative exams (stratified by whether or not individuals received an Advisory Interview) were calculated for the third year written exam by the fourth year written exam and the third year OSCE by the fourth year OSCE. The marginal homogeneity test was used to examine the agreement between the paired CAS bands. A p-value of ≤0.05 was used to denote statistical significance throughout all analyses.
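
To make the analysis pipeline concrete, the following is a minimal sketch of an analogous workflow in Python (statsmodels/scipy); the published analysis was run in SPSS 16, and every column name and the input file here are hypothetical stand-ins for the anonymised data:

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.miscmodels.ordinal_model import OrderedModel
from statsmodels.stats.contingency_tables import SquareTable

df = pd.read_csv("assessment_outcomes.csv")  # hypothetical anonymised extract

# Chi-squared comparison of a demographic factor across intervention groups.
chi2, p, dof, _ = stats.chi2_contingency(pd.crosstab(df["male"], df["intervention"]))

# Ordinal regression of the fourth year summative CAS band on the intervention
# flag, adjusted for cohort, gender, funding, previous degree and the
# corresponding third year baseline mark. Passing the Cauchy distribution
# yields the Cauchit link described above.
exog = pd.concat(
    [df[["intervention", "male", "overseas", "prev_degree", "y3_cas"]],  # 0/1 flags + baseline mark
     pd.get_dummies(df["cohort"], prefix="cohort", drop_first=True)],
    axis=1).astype(float)
y4 = df["y4_cas_band"].astype("category").cat.as_ordered()
res = OrderedModel(y4, exog, distr=stats.cauchy).fit(method="bfgs", disp=False)

# The paper derives odds ratios as exp(-1 x estimate), which depends on SPSS
# PLUM's coding of the covariates; with the 0/1 dummy coding above,
# exp(coefficient) is the analogous quantity, with values above one
# indicating a higher CAS band for the category of interest.
or_intervention = np.exp(res.params["intervention"])

# Nagelkerke pseudo R^2 from the fitted (ll1) and intercept-only (ll0)
# log-likelihoods for n students: Cox-Snell rescaled so a perfect model scores 1.
def nagelkerke_r2(ll1: float, ll0: float, n: int) -> float:
    cox_snell = 1.0 - np.exp(2.0 * (ll0 - ll1) / n)
    return cox_snell / (1.0 - np.exp(2.0 * ll0 / n))

# Marginal homogeneity (Stuart-Maxwell) test on the paired CAS bands;
# assumes the cross-tabulation is square (same bands observed on both axes).
tab = pd.crosstab(df["y3_cas_band"], df["y4_cas_band"])
print(SquareTable(tab.to_numpy()).homogeneity())
```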

Results

The study included 909 medical students, of whom 180 were due to graduate in 2005, 171 in 2006, 186 in 2007, 190 in 2008 and 182 in 2009. Most (82.0%) of the students were undergraduate school leavers (i.e., aged 17–20 years on entry) and 56.3% were female. Almost 8% were overseas funded students, 13.2% were graduates and 15.2% of students did an intercalated degree.

A total of 198 (21.8%) students received a remedial intervention due to poor performance, as defined above, on the mid-fourth year formative examination diet.

Table 1.  Demographic comparison of students by whether or not they had a remedial intervention

Table 1 shows the demographic breakdown for those students who did and did not have an intervention. The students who had an intervention were significantly more likely to be male, to be overseas funded and not to have a previous degree.

As expected, given the criteria for remedial intervention, there was a highly significant association between results from third year exams and receiving remediation, with students in the lower CAS bands being more likely to be selected for interview (written p < 0.001, clinical p < 0.001).

In most cases (176 out of 198: 89%), the reasons provided by the student for poor performance were noted. The most common reason was not studying enough/poor study technique (n = 133: 76%). Other reasons for poor performance included mental health problems (n = 9: 5%), health problems (n = 6: 3%), dyslexia (n = 1: 0.6%), family problems (n = 5: 3%), family death/terminal illness (n = 4: 2%) and financial hardship (n = 4: 2%). A follow-up interview was recommended for 67 of the original 198 students.

Data on whether extra communication or clinical skills teaching was required were noted for 165 of the 198 students interviewed (83%). Of these, 30 (18%) were recommended extra communication skills teaching, 38 (23%) extra clinical skills teaching, and 17 (10%) both. Twelve students (7%) were directed to the University's own Academic Learning Support Unit (ALSU), which provides advice and support on learning and study skills to all students, in the form of online resources, small-group teaching and individual sessions, on topics such as revision strategies and improving one's writing.

Table 2.  Value of a remedial intervention in predicting fourth year written summative exam results

Table 3.  Value of a remedial intervention in predicting fourth year OSCE summative exam results

Tables 2 and 3 show the results of the ordinal regression analysis predicting fourth year summative (end-of-year) written and OSCE exam results, respectively. In both models, cohort, gender, funding source (home or overseas) and previous degree were entered as covariates, as well as the corresponding (written or OSCE) baseline third year summative exam results. After adjustment for potential confounders, those who received a remedial intervention were significantly more likely to obtain a higher CAS band on the fourth year summative written exam than in the third year, compared to those students who did not have a remedial intervention (odds ratio 1.99, 95% confidence interval 1.23–3.22, p = 0.005; Table 2). In addition, males and those without a degree were more likely to obtain a higher CAS band on the fourth year summative written exam. Table 3 shows that having a remedial intervention was a significant independent predictor of a higher CAS band in the fourth year summative OSCE exam (odds ratio 2.28, 95% confidence interval 1.52–3.40, p < 0.001). Female students also did significantly better than males on the OSCE.

CAS bands for the fourth year formative and summative exams were then compared between intervention and non-intervention groups. Among those who received an intervention, 80.2% improved on their formative written exam CAS band by at least one band, 16.6% stayed the same and 2.6% moved to a lower CAS band in the fourth year summative written exam. Comparable figures for the students who had no intervention were 52.4%, 39.5% and 8.1%, respectively (p < 0.001). For the OSCE, among those who received an intervention, 46.4% improved their formative OSCE CAS band by at least one band, 25.5% stayed the same and 28.1% moved to a lower CAS band in the fourth year summative OSCE. Comparable figures for the students who did not have a remedial intervention were 24.4%, 44.5% and 31.1% (p < 0.001).
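
Continuing the hypothetical columns from the methods sketch above, these band-change proportions follow from classifying each student's move between the formative and summative CAS bands:

```python
# Sign of the band change: +1 improved, 0 same, -1 moved to a lower band.
change = np.sign(df["y4_cas_band"] - df["formative_cas_band"]).map(
    {1: "improved", 0: "same", -1: "lower"})

# Proportions within the intervention and non-intervention groups.
print(pd.crosstab(df["intervention"], change, normalize="index"))
```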

Discussion

After adjustment for cohort, gender, previous degree, funding source and third year exam result, a remedial intervention linked to poor performance on a formative assessment diet mid-fourth year was found to predict significantly improved performance in summative exams approximately 6 months later. Students who received the remedial intervention were significantly more likely to score a higher mark in the summative written exam and/or the OSCE than students who had no intervention. This is the first study of remediation in medical education with sufficient numbers to allow robust statistical analysis, taking into account confounders known to be relevant to assessment outcome (Ferguson et al. Citation2002; Lumb & Vail Citation2004; Wilkinson et al. Citation2004; Yates & James Citation2007). We used routine examination methods rather than introducing additional tools to identify weak students (Martin & Jolly Citation2002). Moreover, student feedback (course evaluation forms) indicates that the summative (re-)test was arguably more (rather than less) difficult than the original formative examination, addressing one of the concerns highlighted by Hauer et al. (Citation2008). Thus, the data indicate that a strategically placed remedial intervention can enable weaker students to perform better in later assessments. This is reassuring given the substantial time demands inherent in remediation processes (Sayer et al. Citation2002; Hauer et al. Citation2008) and Faculty concerns as to the efficacy of remediation (Hauer et al. Citation2008).

In agreement with previous work, being male and being funded from overseas predicted poor performance; cultural differences have been proposed as an underlying factor for this pattern of performance (Ferguson et al. Citation2002; Lumb & Vail Citation2004; Yates & James Citation2007; Woolf et al. Citation2009). Perhaps unsurprisingly, being a graduate on entry to the degree programme predicted better performance (Wilkinson et al. Citation2004): graduates are more mature and have acquired skills which may help them in the study of medicine (McManus et al. Citation1999; Wilkinson et al. Citation2004; Cleland et al. Citation2009). Long-term follow-up and different outcome measures would be required to identify whether this remedial intervention was also predictive of subsequent clinical performance (Hamdy et al. Citation2006; Hauer et al. Citation2008).

However, while we have identified that the intervention works, the retrospective, observational design of the study does not allow us to delineate precisely which components of the remedial process actually made a difference. Nor can we specify the effective components or outcomes of the intervention by reason for remediation. Is the motivator alarm due to poor performance, and nothing to do with the remedial intervention whatsoever? Is it because students do not study for formative exams? Is it related to the interview, for example, the clarification which might result from talking through issues with Faculty, or the advice provided? Is it the extra teaching and learning? If so, how much extra teaching and learning is critical? Did students who did badly on skills, and received remediation addressing these skills deficits, do better than those with personal difficulties who received pastoral care? Attitudinal aspects and professional attributes may well be encompassed implicitly within some of these assessments: these may also contribute to poor performance and may be less amenable to remediation. The list of possible active components and consequences is long, if not endless. Furthermore, when should impact be measured: immediately after the interview, after remediation (if required) or, as we did, after actual behaviour (exam performance)? Again, there are many variables to consider, and the relative contribution of each needs to be evaluated. Another difficulty in generalising these results is inherent in any flexible remediation system: where support is tailored to individual students, using the resources available to staff in a specific institution, the findings from one programme may not be applicable to different settings. Furthermore, we do not know the sensitivity, specificity or reliability of the exams entered into the analysis.

One may argue that an alternative explanation of our finding is the phenomenon of ‘regression to the mean’. We selected a non-random sample (i.e., those students deemed to need a remedial intervention based on poor performance on a formative OSCE and written exams mid-fourth year) from a population of students. We know that student scores are determined partly by underlying ability and partly by chance. It is therefore unlikely that the remedial group will attain exactly the same CAS marks in their end-of-fourth year exams as they did 6 months previously: even in the absence of any intervention, the remedial group's mean would be expected to lie closer to the population's average fourth year result than it did to the population average mid-fourth year. However, we did include the third year exam mark as a potential confounder in the regression models with the aim of adjusting the overall difference for ‘prior exam ability’. In addition, the observed difference in the magnitude of improvement in CAS bands between the intervention and non-intervention groups is unlikely to be explained solely by regression to the mean.
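
A minimal simulation illustrates the phenomenon; all parameter values below are invented purely for illustration, not fitted to our data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(10, 2, n)         # stable underlying ability (CAS-like scale)
exam1 = ability + rng.normal(0, 2, n)  # first sitting = ability + chance
exam2 = ability + rng.normal(0, 2, n)  # re-test with fresh chance, no intervention

low = exam1 < np.quantile(exam1, 0.2)  # 'remedial' group: bottom 20 percent at first sitting
print(exam1[low].mean())  # well below the population mean of ~10
print(exam2[low].mean())  # closer to 10, despite no intervention at all
```

Even so, this mechanism alone would apply equally to low scorers in both groups, which is why the between-group difference in improvement remains informative.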

We propose that evaluating a medical education intervention requires a systematic, and probably prospective, approach where possible, in order to tease out the components which contribute to change. Only by doing so can medical education research identify generalisabilities and progress knowledge (Eva Citation2009).

One way of doing this would be to learn from other areas of research, such as health services research. Many similarities can be drawn between the two fields. In both, there are problems relating to the difficulty of standardising the design and delivery of the intervention; sensitivity to features of the local context; the organisational and logistical difficulty of applying experimental methods to change; and the length and complexity of the causal processes linking intervention with outcome (Craig et al. Citation2008). While health services research has also been seen as the poor relation (Bligh & Brice Citation2008) to ‘hard’ medical science, it is now coming of age due to approaches such as the Medical Research Council (MRC) framework for complex interventions (Craig et al. Citation2008). The basis of the framework includes emphasis on theoretical understanding, process evaluation, careful consideration of research design, the need for a range of measures and adaptation to local settings. Key questions focus on practical effectiveness and how an intervention might work. These factors seem very pertinent to medical education research (Eva Citation2009).

In conclusion, many medical schools have remediation systems which have evolved without evidence to support their design. Although we have shown efficacy for our own system, a prospective approach to planning and evaluating remediation, perhaps utilising a method such as the MRC framework, is needed to determine how remediation is best undertaken. In other words, remediation may improve performance, but how can we best progress understanding of the effectiveness and efficiency of the remedial process?

Acknowledgements

JC, HS and SR had the original concept. JC carried out the literature search and wrote the first draft of the article. AJL performed multivariate analysis and supervised statistical analysis overall. RM undertook data collation, database management and preliminary statistical analysis. All authors contributed to the final manuscript draft.

Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the article.

Notes on contributors

Jennifer Cleland

J. A. CLELAND PhD, D Clin Psychol, is a lead for Medical Education Research. Main research interests are assessment and student support.

R. K. Mackenzie

R. K. MACKENZIE, MBChB is a clinical teaching fellow. Main research interests are assessment and student support.

S. Ross

S. ROSS, MBChB is a deputy coordinator, Year 4 MBChB. Main research interests are assessment and student support.

H. K. Sinclair

H. K. SINCLAIR, PhD main teaching and research interests focus on community-based, undergraduate medical education.

A. J. Lee

A. J. LEE, PhD, is a professor of Medical Statistics.

References

  • Bligh J, Brice J. What is the value of good medical education research? Med Educ 2008; 42: 652–653
  • Challis M, Fleet A, Batstone G. An accident waiting to happen? A case for medical education. Med Teach 1999; 21: 582–585
  • Cleland J, Arnold R, Chesser A. Failing finals is often a surprise for the student but not the teacher: Identifying difficulties and supporting students with academic difficulties. Med Teach 2005; 27: 504–508
  • Cleland JA, Milne A, Sinclair HK, Lee AJ. Predicting performance cohort study: Is performance on early MBChB assessments predictive of later undergraduate grades? Med Educ 2008a; 42: 676–683
  • Cleland JA, Knight L, Rees C, Tracey S, Bond CM. ‘Is it me or is it them?’ Factors influencing assessors’ failure to fail underperforming medical students. Med Educ 2008b; 42: 800–809
  • Cleland JA, Milne A, Sinclair HK, Lee AJ. An intercalated BSc degree is associated with higher marks on subsequent undergraduate medical degree exams. BMC Med Educ 2009; 9: 24
  • Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: The new Medical Research Council guidance. BMJ 2008; 337: a1655
  • Denison AR, Curry AE, Laing MR, Heys SD. Good for them or good for us? The role of academic guidance interviews. Med Educ 2006; 40: 1188–1191
  • Dudek NL, Marks MB, Regehr G. Failure to fail: The perspectives of clinical supervisors. Acad Med 2005; 80: S84–S87
  • Eva K. Broadening the debate about quality in medical education. Med Educ 2009; 43: 294–296
  • Ferguson E, James D, Madeley L. Factors associated with success in medical school: Systematic review of the literature. BMJ 2002; 324: 952–957
  • Forrest L, Elman N, Gizara S, Vacha-Haase T. Trainee impairment: A review of identification, remediation, dismissal, and legal issues. Couns Psychol 1999; 27: 627–686
  • Frellsen SL, Baker EA, Papp KK, Durning SJ. Medical school policies regarding struggling medical students during the internal medicine clerkships: Results of a national survey. Acad Med 2008; 83: 876–881
  • Hamdy H, Prasad K, Anderson MB, Scherpbier A, Williams R, Zwierstra R, Cuddihy H. BEME systematic review: Predictive values of measurements obtained in medical schools and future performance in medical practice. Med Teach 2006; 28: 103–116
  • Harden RM, Gleeson FA. Assessment of clinical competence using an objective structured clinical examination (OSCE). Med Educ 1979; 13: 41–54
  • Hauer KE, Teherani A, Irby DM, Kerr KM, O’Sullivan PS. Approaches to medical student remediation after a comprehensive clinical skills examination. Med Educ 2008; 42: 104–112
  • Langendyk V. Not knowing what they do not know: Self-assessment accuracy of third-year medical students. Med Educ 2006; 40: 173–179
  • Lavin B, Pangaro L. Internship ratings as a validity outcome measure for an evaluation system to identify inadequate clerkship performance. Acad Med 1998; 73: 998–1002
  • Lumb AB, Vail A. Comparison of academic, application form and social factors in predicting early performance on the medical course. Med Educ 2004; 38: 1002–1005
  • Martin IJ, Jolly B. Predictive validity and estimated cut score of an objective structured clinical examination (OSCE) used as an assessment of clinical skills at the end of the first clinical year. Med Educ 2002; 36: 418–425
  • McManus IC, Richards P, Winder BC. Intercalated degrees, learning styles, and career preferences: Prospective, longitudinal study of UK medical students. BMJ 1999; 319: 542–546
  • Newble D. Techniques for measuring clinical competence: Objective structured clinical examinations. Med Educ 2004; 38: 199–203
  • Sayer MM, De Saintonge M, Evans D, Wood D. Support for students with academic difficulties. Med Educ 2002; 36: 643–650
  • Sinclair HK, Cleland J. Medical undergraduate students – who seeks formative feedback? Med Educ 2007; 41: 580–582
  • Speer AJ, Solomon DJ, Fincher RM. Grade inflation in internal medicine clerkships: Results of a national survey. Teach Learn Med 2000; 12: 112–116
  • Srinivasan M, Hauer KE, Der-Martirosian C, Wilkes M, Gesundheit N. Does feedback matter? Practice-based learning for medical students after a multi-institutional clinical performance examination. Med Educ 2007; 41: 857–865
  • Tooth D, Tonge K, McManus IC. Anxiety and study methods in preclinical students: Causal relation to academic performance. Med Educ 1989; 23: 416–421
  • Wilkinson TJ, Wells JE, Bushnell JA. Are differences between graduates and undergraduates in a medical course due to age or prior degree? Med Educ 2004; 38: 1141–1146
  • Woolf K, McManus IC, Potts HWW, Dacre J. Ethnic differences on psychological and demographic factors – can they explain the academic underperformance of medical students from ethnic minorities? Conference presentation, the Association for Medical Education in Europe (AMEE), Malaga, Spain, 29 August–2 September 2009
  • Yates J, James D. Risk factors for poor performance on the undergraduate medical course: Cohort study at Nottingham University. Med Educ 2007; 41: 65–73
