
Is continuous assessment inclusive? An analysis of factors influencing student grades


Abstract

This paper reports a series of studies that assessed the performance of students on continuous assessment components from two courses in an undergraduate psychology programme. Data were collected from two consecutive cohorts of students (total N = 576) and the grades of students were compared based on additional learning needs (ALN versus no ALN), whether or not the students had requested an extension to a deadline, and whether or not students had missed any of the tests that made up the continuous assessment component. Results showed no significant differences in attainment between students with and without ALN, supporting the argument that continuous assessment does not differentially impact students who already require additional support. Students who were granted deadline extensions achieved significantly lower scores, but only on the course with content that built week on week. Students who missed one or more tests achieved significantly lower scores even if the grade was calculated ignoring the questions that a student had not attempted. The implications of these findings for assessment practice in higher education are discussed.

The role of assessment in higher education is complex, and it has been described as a central component of the student experience (Brown and Knight Citation1994). On the one hand, assessment is integral to students evidencing their knowledge and ability to apply what they know; on the other, assessment can provide opportunities for learning and practising skills as they develop across the degree scheme. This has led to a distinction between summative assessment (testing what has been learned) and formative assessment (testing to support learning). In practice, individual assessments often perform both formative and summative roles at once (Heywood Citation2000), and assessment can be ‘learning-oriented’ (Carless Citation2007) such that it presents tasks of a type and at a level that test students’ abilities in a way that allows them to develop and take ownership of their own learning.

The distinction between summative and formative assessment was much more pronounced in ‘traditional’ assessment structures, where the grade a student received for a course was predicated only on their performance on a final examination. In recent years, however, students have become increasingly likely to face continuous assessment, particularly as higher education courses have begun to incorporate the principles of online blended learning. Continuous assessment (also referred to as ‘frequent’ assessment: Vaessen et al. Citation2017) refers to including one or more assignments during the course in addition to (or instead of) the final examination (e.g. Day et al. Citation2018a). The continuous assessments could be coursework, essays or mid-term examinations: in this paper we are specifically referring to assessment via timed multiple-choice tests spread throughout the course. The continuous assessment structure has several pedagogical benefits: students are encouraged to study as they progress through the course (Day et al. Citation2018b), to remain engaged (Holmes Citation2015), and to reflect on their performance and refine their learning practices (Day et al. Citation2018a).

A number of studies have demonstrated that higher achievement is associated with continuous assessment versus the traditional single assessment point (Domenech et al. Citation2015; Tuunila and Pulkkinen Citation2015). Data also indicate that continuous assessment is evaluated positively by students (Isaksson Citation2008; Holmes Citation2015; Vaessen et al. Citation2017). For example, students noted that this form of assessment increased their engagement and knowledge of the content (Holmes Citation2015). Therefore, continuous assessment appears to be a positive step for instructors to take. However, to our knowledge, no previous work has examined whether continuous assessment is effective only for ‘typical’ students. Students with additional learning needs (ALN) make up a significant proportion of those embarking on university degrees (Bunbury Citation2020; Newman et al. Citation2021). It is not yet clear whether the continuous assessment approach is appropriate for these students, although there may be reason to expect that continuous assessment could present additional obstacles.

The first research question we addressed in the current studies is whether students with ALN may be disadvantaged by a continuous assessment approach. In this paper we use ALN to cover a variety of specific learning difficulties (e.g. dyslexia, dyscalculia), attention deficit hyperactivity disorder, autism spectrum disorder and mental health diagnoses (e.g. depression, anxiety) that may make it harder for individuals to learn. Previous research has demonstrated that students with ALN are more likely to withdraw from university or to have lower grades than their counterparts without ALN (DaDeppo Citation2009; Weyandt and DuPaul Citation2013; Cortiella and Horowitz Citation2014; Spence et al. Citation2022). It has been argued that drop-out and failure in these students are not merely attributable to the core symptoms of each difficulty or disorder, but that students with ALN tend also to struggle with time management (Smith, English, and Vasek Citation2002), academic stress (Heiman Citation2007) and other study skills (e.g. DuPaul et al. Citation2017). Given that continuous assessment is designed to encourage and reward consistent effort, it is possible that students who are less likely to be consistent will be disadvantaged by this assessment structure.

Students with ALN are much more likely to miss classes or to fail to submit assignments (Hysenbegasi, Hass, and Rowland Citation2005; Kent et al. Citation2011). Hysenbegasi, Hass, and Rowland (Citation2005) showed that students with depression had missed an average of roughly 15 classes, 5 assignments and an examination in the past year, compared with an average of 3 classes and 1 assignment in the control group. The tendency for students with ALN to miss classes is not solely attributable to the symptoms of their central diagnosis. For example, Kimball, Friedensen, and Silva (Citation2017) reported the experiences of a student who was required to attend regular off-campus therapy sessions and often found that scheduling classes around the commitments related to the treatment of their diagnosis was an obstacle. The important point here is that students with ALN can find it difficult to remain consistently engaged (Reschly and Christenson Citation2006; Kimball, Friedensen, and Silva Citation2017), to manage their time (Smith, English, and Vasek Citation2002), and to complete all of the academic assignments that they are set (Hysenbegasi, Hass, and Rowland Citation2005). Continuous assessment rewards consistent engagement and good time management, and increases the number of assessment points through the course. It is therefore possible that students with ALN would be at a disadvantage compared to other students in courses using continuous assessment.

Students with ALN are not the only section of the student body for whom continuous assessments may be inappropriate. There are also occasions when students’ studies are temporarily interrupted due to unforeseen circumstances such as bereavement, illness or injury. Students who experience these life events tend to have lower academic achievement levels than other students (Hojat et al. Citation2003). This could be exacerbated in courses where there is a continuous assessment approach because these students may miss tests, subsequently missing opportunities for feedback and development—an important factor in continuous assessment. In many universities, qualifying students may be offered the opportunity to submit these assignments later in the semester, effectively extending the deadline for their work. However, several of the purported benefits of continuous assessment rely on providing students interim feedback and opportunities to revise and refine their learning practices (Day et al. Citation2018a). Extending a deadline removes one of the opportunities to go through this reflective cycle before the final assessment, and could have an impact on the grade in larger pieces of coursework either because of the reduced number of learning opportunities or because of an accumulation of work to be completed later in the term. Thus, the second research question was whether needing to extend continuous assessment deadlines (or missing a deadline altogether) had a detrimental effect on student attainment at the end of the course.

The studies reported in this paper aimed to examine whether continuous assessment strategies might disadvantage: (a) students with ALN, or (b) students whose studies are temporarily interrupted, potentially as a result of experiencing significant life events. In this context, we are referring to weekly multiple-choice tests as the ‘continuous assessment’ component in courses which also have high-stakes written coursework assignments or a final examination. We have used this terminology for simplicity: most of the questions were indeed multiple choice, but others were fill-in-the-blank format, and roughly 25–30% of the questions required students to enter numerical answers to statistics problems set in class. We also acknowledge that written coursework, in some programmes, could be considered continuous assessment. In the programme considered in this paper, however, nearly all courses have a single piece of written coursework handed in at the end of teaching, so coursework does not constitute continuous assessment in this context. In both cases, we hypothesised that these groups of students would achieve significantly lower scores than their classmates on the continuous assessment component.

Study 1

Method

Course format

This study examined the performance of students on the continuous assessment component of a second-year course concerning research methods and statistics as part of an undergraduate psychology degree. The course is taught over one semester (11 teaching weeks) with an average of 4 hours of taught statistics content per week. The continuous assessment was made up of weekly multiple-choice tests, 87 questions in all, and the overall continuous assessment component was worth 30% of the course grade. Each week, the tests assessed knowledge and application of the principles of a specific statistical test. There are 10 multiple-choice questions per week, except in one week where there are only 7 (the application of the statistical test to a dataset is assessed in a different way for that week). As is common with statistics courses, the content is organised such that the material for each week builds upon the principles of previous weeks (e.g. correlation is taught before simple linear regression; the t-test is taught before the one-way analysis of variance), and difficulty is somewhat incremental across the course.

Participants

Data were gathered from student records held by the university. Students were eligible for the study if they had been enrolled on a particular second-year research methods and statistics course during the first semester of either the 20/21 or 21/22 academic years. Course content, teaching staff and the structure of the delivery were identical in both years. Any student who had: (a) repeated the year because of failure or suspension of studies, (b) suspended or withdrawn from the university midway through the research methods course, or (c) joined the university after beginning their degree elsewhere was excluded; that is, all participants enrolled on the course only once, and all had completed the same courses in the first year of their studies. Students also had to have completed at least one of the continuous assessment tests to be included. The final sample comprised 576 students. The mean age of the students at the beginning of the course was 20.02 years (SD = 2.32). According to university records, 445 of the students identified as female, 129 as male, and 2 as third gender.

Design and procedure

The study used a between-participants design. Three separate independent variables were considered in turn. The first independent variable was ALN status (ALN versus no ALN). Students with ALN were identified as those who were recorded as requiring reasonable adjustments to continuous assessment tests (e.g. 25% extra time to complete a test) and/or had one or more additional needs listed on their student record. The records accessible to the research team were not sufficiently detailed to allow for a precise breakdown of the diagnoses of the students with ALN, but the numbers of participants belonging to different overarching categories of need are presented in Table 1. In all cases, diagnoses and requirements to qualify for reasonable adjustments to assignments met the thresholds to qualify for Disabled Students’ Allowance in the UK.

Table 1. Percentage of students with Additional Learning Needs in the current sample according to diagnosis (as listed in university records).

The second independent variable separated those students who had been granted a deadline extension on one or more of the continuous assessment tests from those who had completed all tests on time. In both of the analyses the dependent variable was the overall percentage correct in the continuous assessment component.

The third independent variable of interest was missed tests, which contrasted those who had failed to complete one or more tests versus those who had completed all continuous assessment tests. In this case, using an overall percentage for the entire continuous assessment component would almost inevitably result in differences between groups because students who missed tests would forfeit marks by default. Instead, we calculated a relative percentage based on the number of questions attempted. As an example, consider two hypothetical students, both of whom got 40 correct answers across the course. If student A attempted 100 questions, they would score 40%; if student B attempted only 50 questions they would score 80%.
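To make the relative scoring concrete, the following minimal Python sketch (ours, not the authors’ analysis code) implements the calculation for the two hypothetical students above.

```python
# Minimal sketch of the relative percentage used in the missed-tests
# analysis: percentage correct out of the questions actually attempted.

def relative_percentage(correct: int, attempted: int) -> float:
    """Score a student only on attempted questions, so that missed tests
    do not forfeit marks by default."""
    return 100 * correct / attempted

# The worked example from the text: both students have 40 correct answers.
print(relative_percentage(correct=40, attempted=100))  # student A: 40.0
print(relative_percentage(correct=40, attempted=50))   # student B: 80.0
```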

All data were extracted from university student records following the ratification of student grades for the course in question; therefore, there was no direct involvement of the students in data collection, nor was there any impact on final course grades as a result of this study. The study was approved by the School of Psychology Ethics Committee at the university at which the study was conducted.

Results

The mean grades (with standard deviations), split by each independent variable in turn, are presented in Table 2. The size of the numerical difference between groups varies depending on the independent variable, ranging from 0.52 percentage points for the ALN variable to 14.57 percentage points for the missed tests variable.

Table 2. Descriptive statistics of student performance split according to each of the three independent variables.

To formally test for differences in performance, we employed independent-samples t-tests. The dependent variables were negatively skewed in all analyses, so square root transformations were applied to the data in order to meet parametric assumptions. The overall percentage score was not significantly different between students with and without ALN [t(574) = 0.45, p = .672, Cohen’s d = 0.04]. There was, however, a significant difference in continuous assessment scores between those students who had one or more extended deadlines and those who did not require extensions [t(574) = 2.40, p = .017, Cohen’s d = 0.43], such that students who did not require extensions achieved higher grades. Finally, there was a significant difference in the relative percentage of correct answers between those who missed one or more tests and those who did not [t(574) = 11.50, p < .001, Cohen’s d = 1.20]. Students who failed to submit one or more tests performed significantly worse on the continuous assessment component even if the questions that were not attempted were excluded from the grade calculation.
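For illustration, the sketch below shows how such an analysis could be run in Python. The precise transform is our assumption: a reflect-then-square-root transform is one common way of reducing negative skew before a t-test, and the group scores are invented for demonstration.

```python
# Illustrative sketch only: a reflect-then-square-root transform to reduce
# negative skew, followed by an independent-samples t-test and Cohen's d.
# The group scores below are invented; they are not the study data.
import numpy as np
from scipy import stats

def reflect_sqrt(scores: np.ndarray, overall_max: float) -> np.ndarray:
    """Reflect scores about the overall maximum, then take square roots.

    Reflection turns negative skew into positive skew, which the square
    root then compresses."""
    return np.sqrt(overall_max + 1 - scores)

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

no_extension = np.array([95.0, 88.0, 91.0, 76.0, 84.0, 90.0])  # hypothetical %
extension = np.array([82.0, 70.0, 77.0, 65.0, 74.0])           # hypothetical %

# Reflect both groups about the same maximum so scores stay comparable.
overall_max = max(no_extension.max(), extension.max())
transformed_a = reflect_sqrt(no_extension, overall_max)
transformed_b = reflect_sqrt(extension, overall_max)

t, p = stats.ttest_ind(transformed_a, transformed_b)
print(f"t = {t:.2f}, p = {p:.3f}, d = {cohens_d(transformed_a, transformed_b):.2f}")
```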

Study 1 interim discussion

The results of Study 1 indicate that students with ALN did not achieve significantly lower scores than the students without ALN. The purported benefits of incorporating continuous assessment into higher education courses seemingly apply to all students, not just to some. Unsurprisingly, students who fail to complete one or more tests perform worse overall; it is likely that missing tests is an indicator of disengagement from the course in some form.

The most intriguing finding is that students who required extensions to the deadlines for continuous assessment tests ended up performing worse overall, even though they ultimately completed the tests before the end of the course. Given the incremental nature of the content of the course considered (i.e. each week builds on the knowledge of the previous week), it is possible that this is an artefact of the course design, rather than of the assessment itself. Therefore, a second study was conducted which applied the same analyses to student performance on a second course within the undergraduate psychology degree: a course that also includes a continuous assessment component but which covers content that is more distinct from week to week (i.e. the content of each week does not build on the knowledge from the previous week). The aims of Study 2 were: (a) to replicate the findings of Study 1 with regard to ALN and missed tests, and (b) to further elucidate potential reasons for the findings related to students with extended deadlines.

Study 2

Method

Course format

This study assessed the performance of students on the continuous assessment component of a second-year undergraduate course concerning biological psychology. This course is taught in the same semester as the course considered in Study 1, and comprises 4 hours of taught content per week for 11 teaching weeks. The continuous assessment was made up of multiple-choice tests that were set roughly once every 2 weeks, 40 questions in all. Each of the tests comprised 10 questions. The overall continuous assessment component was worth 30% of the course grade. Continuous assessment tests covered topics that were related to lecture content but did not necessarily relate to the test before or after. Thus, the course was not designed in a way that required an understanding of week 2 in order to fully grasp the material covered in week 3.

Participants, design and procedure

Data were extracted from university records for the same 576 participants as in Study 1. The independent and dependent variables were also the same, although in this case they pertained to a different undergraduate course within the programme.

Results

The means and standard deviations of grades, split by each of the independent variables, are presented in Table 3. As in Study 1, there is variation in the size of the numerical difference between groups depending on the independent variable, ranging from 0.63 percentage points for the ALN variable to 6.13 percentage points for the missed tests variable.

Table 3. Descriptive statistics of student performance split according to each of the three independent variables.

To formally test for differences in performance, we employed independent-samples t-tests. The overall percentage dependent variable was square root transformed to reduce skew. As in Study 1, students with and without ALN did not score significantly differently on the continuous assessment component [t(574) = 0.32, p = .746, Cohen’s d = 0.03]. There was a significant difference in the relative percentage of correct answers between those who missed one or more tests and those who did not [t(574) = 3.22, p = .001, Cohen’s d = 0.53], also matching the findings of Study 1. Students who failed to submit one or more tests performed significantly worse on the continuous assessment component even when the questions that were not attempted were ignored when calculating grades. In contrast to Study 1, the students who required extensions to deadlines did not perform significantly worse on the continuous assessment component than students who met the original deadlines [t(574) = 0.71, p = .480, Cohen’s d = 0.14].

Study 2 interim discussion

The findings of Study 2 replicate those of Study 1 with respect to ALN and missed tests: students with ALN do not appear to be at a disadvantage under continuous assessment, and students who miss assessment points entirely tend to perform worse than those who submit all assignments, even if the questions that were not attempted are ignored in the grade calculation. Interestingly, there is no adverse effect of completing continuous assessment components out of the intended order (i.e. at an extended deadline) for this course design. This suggests that it is the incremental design of the course in Study 1 that adversely impacts students who require extensions, rather than the continuous assessment approach per se.

To further inform and constrain the conclusions from these studies, we conducted a third study to consider the possibility that ALN and the need for extended deadlines (or missing tests altogether) were confounded. In a fourth and final study, we also sought evidence for or against the suggestion that students who miss tests are likely to be those who achieve lower grades across their university career, either because they lacked the necessary study skills or because they were less engaged with their studies. These analyses draw on the same data that have already been described.

Study 3

Method and results

We considered the possibility that those students who were identified as having ALN might be more likely to require extensions to the deadlines or to miss tests altogether by generating contingency tables and applying chi-square tests to the data. To create Table 4, students were categorised as with or without ALN. We then further split the groups into those who required extensions to one or more continuous assessment deadlines versus those who did not, for each course described previously. A similar process generated Table 5, only this time the ALN groups were split according to whether they missed tests.


Table 4. Crosstabulation of ALN and extension requirements in each course.

Table 5. Crosstabulation of ALN and incomplete tests in each course.

According to chi-square analyses (with Yates’s continuity correction applied), there was a significant association between ALN and the need for an extension to one or more deadlines in the Research Methods and Statistics course (incremental continuous assessment) [χ2(1) = 23.90, p < .001, phi = .21] and in the Biological Psychology course (non-incremental continuous assessment) [χ2(1) = 8.79, p = .003, phi = .13]. In both cases, a greater proportion of students with ALN required extensions.

Further chi-square analyses (Yates’s correction applied) highlighted that there was not a significant association between ALN and missing tests altogether in the Research Methods and Statistics course [χ2(1) = 0.86, p = .353, phi = .04], nor in the Biological Psychology course [χ2(1) < 0.001, p = .991, phi = .01].
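For readers wishing to reproduce this kind of analysis, the sketch below (not the authors’ code) shows how a Yates-corrected chi-square test and the phi coefficient can be computed for a 2×2 crosstabulation in Python; the cell counts are invented for demonstration, as the actual counts are reported in Tables 4 and 5.

```python
# Illustrative sketch: chi-square test of association with Yates's
# continuity correction, plus the phi effect size, for a 2x2 table.
# The cell counts below are invented for demonstration.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: ALN vs. no ALN; columns: required an extension vs. did not.
observed = np.array([[34, 86],
                     [62, 394]])

chi2, p, dof, expected = chi2_contingency(observed, correction=True)
phi = np.sqrt(chi2 / observed.sum())  # phi = sqrt(chi2 / N) for a 2x2 table
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}, phi = {phi:.2f}")
```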

Study 4

Method

In both courses, the remaining 70% of the course grade was achieved via assessments other than the continuous assessment component. In the case of the Research Methods and Statistics course, students also had to complete two coursework reports, in the style of journal articles, based on newly collected data. We calculated an average grade on these reports for each student and compared it to both the continuous assessment component and the independent variables from Studies 1 and 2 (ALN, requiring extensions and missing tests). In a similar vein, we compared the continuous assessment scores from the Biological Psychology course to performance on the final examination for that course. We hypothesised that students who performed poorly on continuous assessment (using the relative score) would also perform poorly on the other parts of the assessment. It should be noted that students who had reasonable adjustments for the continuous assessment components also had reasonable adjustments for the assessments considered in this study. Written assignments were marked for the quality of the content; grammatical or structural errors in the work were ignored except where these errors were extensive enough to interfere with the meaning of the message that the student was attempting to convey.

Further, to determine whether missing tests was an indicator of disengagement from studies, or identified students who were likely to achieve lower grades across their degree programme, we conducted a series of analyses comparing continuous assessment performance across courses and against within-course assessment components that were not continuous. We also assessed whether students who had extensions to deadlines, or who missed tests, in one course performed worse in the other course. The prediction was that a disengaged or weaker student would perform badly in all courses: thus, missing tests in Research Methods would likely indicate poor performance in Biological Psychology and vice versa. In other words, poor performance on continuous assessments would be symptomatic of poor study skills more generally.

Results

Intra-course analyses

We first analysed whether requiring extensions on continuous assessments, or missing them entirely, had an impact on the other parts of the assessment for a given course. In the Research Methods and Statistics course, missing continuous assessment tests was related to significantly poorer written assignment performance [Welch’s t(574) = 7.71, p < .001, Cohen’s d = 0.92]. The same pattern was not observed in the Biological Psychology course: there was no difference in examination grade between those who missed at least one continuous assessment test and those who completed all tests [Welch’s t(574) = 0.86, p = .396, Cohen’s d = 0.17]. Thus, our hypothesis was only partially borne out; missing a test was related to written assignment performance but not to final examination performance. We again consider this to be a consequence of the structure of the coursework (or examination) rather than relating to the continuous assessment components. Requiring extended deadlines in either course was not associated with poorer performance in non-continuous assessments [Research Methods t(574) = 0.89, p = .374, Cohen’s d = 0.17; Biological Psychology t(574) = 0.39, p = .696, Cohen’s d = 0.08].
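As an illustration of the test used here, the following Python sketch (with invented grades) shows how Welch’s t-test is obtained by relaxing the equal-variances assumption of the standard t-test.

```python
# Illustrative sketch: Welch's t-test, which does not assume equal group
# variances, comparing hypothetical coursework grades for students who
# missed continuous assessment tests versus those who completed them all.
import numpy as np
from scipy import stats

completed_all = np.array([68.0, 74.0, 71.0, 80.0, 66.0, 77.0])  # hypothetical %
missed_tests = np.array([52.0, 61.0, 48.0, 70.0, 55.0])         # hypothetical %

# equal_var=False selects the Welch variant rather than Student's t-test.
t, p = stats.ttest_ind(completed_all, missed_tests, equal_var=False)
print(f"Welch's t = {t:.2f}, p = {p:.3f}")
```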

Cross-course comparison

We also explored whether the students who missed tests on one course were likely to achieve lower scores on the other course as well. In other words, we examined whether missing tests could be considered a general index of disengagement with studies. As expected, students who missed continuous assessment tests in Research Methods and Statistics did significantly worse in the continuous assessment in the Biological Psychology course [t(574) = 10.06, p < .001, Cohen’s d = 1.05], and vice versa [Welch’s t(574) = 8.11, p < .001, Cohen’s d = 1.33]. In contrast, students who required extensions in Research Methods and Statistics did not perform significantly worse in Biological Psychology [t(574) = 1.16, p = .245, Cohen’s d = 0.21], and students with extensions in Biological Psychology did not perform significantly worse in Research Methods and Statistics either [t(574) = 0.63, p = .526, Cohen’s d = 0.12].

General discussion

The studies described in this paper set out to examine whether students with ALN might be disadvantaged when continuous assessment tests are incorporated into courses. It is a welcome finding that there appears to be no such disadvantage. It also appears that although there is an association between ALN and the need for extended deadlines, this has not confounded the initial findings around continuous assessment grades. Therefore, the potential benefits to student attainment using continuous assessment are available to all of the students in these studies. Two open questions remain, though.

Firstly, we have not collected evidence as to the perceptions of students with (or without) ALN, or examined whether there might be a difference in the way that continuous assessments are received. Some of the existing literature suggests that continuous assessments may be less stressful than high-stakes end-of-semester assessments, but there is also evidence of individual differences in relation to test anxiety. For example, the literature has previously demonstrated a relationship between the importance attached to an assessment and the perceived stress among students (Cizek Citation2005; Bonaccio and Reeve Citation2010; Banks and Smyth Citation2015), such that low-stakes assessments provoke less anxiety. Each of the tests administered as part of the continuous assessment components of the courses discussed in the current paper was low-stakes and hence should not, on its own, induce high levels of stress or anxiety. Nevertheless, Vaessen et al. (Citation2017) reported that 15% of their participants found that continuous assessment induced additional stress, while 10% reported that stress was reduced by this assessment structure, in spite of the lower stakes per assessment point.

It is plausible that students who are generally anxious may find that continuous assessment means that they are stressed about assessments for the whole semester, so although they perform as well as other students academically, there may be a detrimental effect on their wellbeing. Indeed, the meta-analysis by von der Embse et al. (Citation2018) demonstrated that students with ALN are likely to experience higher levels of test anxiety. Therefore, it is possible that continuous assessment could prolong the test anxiety experienced by these students and increase the likelihood that students with ALN drop out of university altogether: maintaining high levels of stress, or investing the additional time required to complete frequent continuous assessments (e.g. for those with dyslexia, who spend more time reading and decoding assignments and working with specialist support tutors: Barga Citation1996; Kirby et al. Citation2008), could become unsustainable and result in student burnout (e.g. Esch et al. Citation2014; Rahmati Citation2015). This is an important consideration for future research in the area because of the potential impact on the broader student experience, on retention, and on the implications for student support.

The second open question concerns whether students with ALN actually benefit from continuous assessment to a greater extent than students without ALN. As we mentioned in the introduction, students with ALN can struggle with time management (Smith, English, and Vasek Citation2002). We expected that this could make continuous assessment difficult for these students and have an adverse impact on their overall grades. However, it is also possible that the regular assessment points throughout the semester in the courses described in this paper actually serve as external regulation. It has been shown that time management skills can be improved via explicit instructor-implemented interventions (Kelly, Cuccolo, and Clinton-Lisell Citation2022), such as training in scheduling. Continuous assessment might fulfil a similar role: not explicitly training scheduling, but implicitly performing the scheduling function on behalf of the students. This might be of particular benefit for those students who have less developed time management skills at the start of the course. There are no courses at the same level of the psychology programme from which the current data have been drawn that do not include similar low-stakes multiple-choice tests, so we were not able to examine this possibility as part of our study. We consider that this would be a useful avenue for future research.

Aside from the evidence concerning ALN, the findings of the studies can be summarised as indicating that: (a) requiring deadline extensions on continuous assessment tests only affects performance on components that specifically rely on incremental building of course material, and (b) it is the students who are generally achieving lower grades who miss continuous assessment deadlines altogether. These findings have potentially important implications for teaching practice, student support and course design.

We noted, in Study 4, that students who missed continuous assessment tests achieved significantly lower scores in the coursework assignments on the Research Methods course, whereas students who missed continuous assessment tests in the Biological Psychology course did not achieve lower grades in the final examination. We suspect that this is because the final examination afforded the opportunity to revise content and memorise information in the days before the examination, whereas the nature of the coursework assignments (in this case) required the application of statistical principles that the students who missed tests may not yet have learned. In this regard, missed tests in the Research Methods course are indicative of missed learning and missed practice of the skills required to achieve highly in the coursework, and this cannot be overcome with last-minute revision.

With regard to deadline extensions, there are three ways to limit the impact of delayed submission of continuous assessment tests: changing the design of the course, changing or removing continuous assessment, or changing the protocols around deadline extensions. We consider the last of these, adjusting extension protocols so that students can catch up on delayed tests in a timely manner, to be the most suitable option. Requiring deadline extensions was only detrimental to students on the Research Methods course, where the course material built week on week. A simple solution would be to make sure that courses were not arranged incrementally, so that delayed tests were no longer detrimental. However, constructivist approaches to learning argue that students are more successful when they build on existing knowledge (see Powell and Kalina Citation2009, for an overview). Furthermore, there are a number of courses for which there is no other option for the structuring of course material. It would therefore seem inappropriate to suggest that courses be redesigned to the detriment of the majority of students in order to accommodate a small number of students who require deadline adjustments, particularly when there are other possible solutions to the problem itself.

An alternative approach would be to remove continuous assessments altogether; if there is a single summative assessment at the end of the course there can be no disadvantage to students that stems from delaying their submission date. This is not without issue: there are sound pedagogical reasons for implementing a continuous assessment strategy and, as noted above, students perform better in courses with continuous assessment (Day et al. Citation2018a). We argue that the continuous assessment strategy simply highlights issues that some students might be facing in any case, but that would otherwise go undetected. In the case of the Research Methods course in this paper, for example, we were able to use a recorded deadline extension (or the missing of a test entirely) as a proxy for students’ engagement with the course, and as an indication that students had ‘fallen behind’ for one reason or another. It is unlikely that the circumstances that led students to miss or delay tests were a direct result of the continuous assessment strategy itself.

We argue that the policies around continuous assessment (or the scheduling of the deadlines themselves) should be such that students can complete the tests in order even if they are delayed compared to the rest of the cohort. Indeed, we suggest that continuous assessments could be used not only to split assessments into bitesize chunks for students, but also to allow faculty to catch potential engagement issues early and to intervene. As an illustrative example, our analyses have shown that those students who are achieving lower grades across their degree programme are likely to miss continuous assessment deadlines altogether, and that students who miss tests in one course tend to perform poorly in other courses too. Therefore, if a student misses a continuous assessment deadline in the first few weeks of a course, both academic and pastoral support could be offered in an effort to prevent a continuing pattern of low attainment and disengagement, and to maximise the potential outcomes of their degree scheme.

In conclusion, the studies described in this paper have demonstrated factors that relate to student performance in continuous assessments. We argue that continuous assessment techniques are likely to be useful in identifying students at risk of academic failure or disengagement, and that it is a suitable approach to assessment for students with or without ALN, but that further research is needed to examine the consequences of continuous assessment on student well-being.

Ethical standard statement

All data were extracted from university student records following the ratification of student grades for the course in question; therefore, there was no direct involvement of the students in data collection, nor was there any impact on final course grades as a result of this study. The study was approved by the School of Psychology Ethics Committee at the university at which the study was conducted (application reference: 2022-5489-4681).

Disclosure statement

The authors have no competing interests and no funding was received to support this work.

Notes on contributors

David Playfoot is a Senior Lecturer and Director for the Undergraduate Programmes in the School of Psychology, Swansea University, UK. His research interests include feedback provision and feedback literacy in higher education, blended learning techniques and the impact of testing on learning.

Laura Wilkinson is a Senior Lecturer and School Equality, Diversity and Inclusion (EDI) Lead in the School of Psychology, Swansea University, UK. Her primary research interest is eating behaviour with a current focus on encouraging sustainable eating practice.

Jessica Mead is a Tutor in the School of Psychology, Swansea University, UK. Her primary research interests are centred within positive psychology, wellbeing science, and post-traumatic growth, branching out into student wellbeing and general pedagogical practice.

References

  • Banks, J., and E. Smyth. 2015. “‘Your Whole Life Depends on It’: Academic Stress and High-Stakes Testing in Ireland.” Journal of Youth Studies 18 (5): 598–616. doi:10.1080/13676261.2014.992317
  • Barga, N. K. 1996. “Students with Learning Disabilities in Education: Managing a Disability.” Journal of Learning Disabilities 29 (4): 413–421. doi:10.1177/002221949602900409
  • Bonaccio, S., and C. L. Reeve. 2010. “The Nature and Relative Importance of Students’ Perceptions of the Sources of Test Anxiety.” Learning and Individual Differences 20 (6): 617–625. doi:10.1016/j.lindif.2010.09.007
  • Brown, S., and P. Knight. 1994. Assessing Learners in Higher Education. London: Kogan Page.
  • Bunbury, S. 2020. “Disability in Higher Education – Do Reasonable Adjustments Contribute to an Inclusive Curriculum?” International Journal of Inclusive Education 24 (9): 964–979. doi:10.1080/13603116.2018.1503347
  • Carless, D. 2007. “Learning-Oriented Assessment: Conceptual Bases and Practical Implications.” Innovations in Education and Teaching International 44 (1): 57–66. doi:10.1080/14703290601081332
  • Cizek, G. J. 2005. “More Unintended Consequences of High-Stakes Testing.” Educational Measurement: Issues and Practice 20 (4): 19–27. doi:10.1111/j.1745-3992.2001.tb00072.x
  • Cortiella, C., and S. H. Horowitz. 2014. The State of Learning Disabilities: Facts, Trends, and Emerging Issues. New York, NY: National Center for Learning Disabilities.
  • DaDeppo, L. M. 2009. “Integration Factors Related to the Academic Success and Intent to Persist of College Students with Learning Disabilities.” Learning Disabilities Research & Practice 24 (3): 122–131. doi:10.1111/j.1540-5826.2009.00286.x
  • Day, I. N. Z., F. M. P. van Blankenstein, P. M. Westenberg, and W. F. Admiraal. 2018a. “Explaining Individual Student Success Using Continuous Assessment Types and Student Characteristics.” Higher Education Research & Development 37 (5): 937–951. doi:10.1080/07294360.2018.1466868
  • Day, I. N. Z., F. M. van Blankenstein, P. M. Westenberg, and W. F. Admiraal. 2018b. “Teacher and Student Perceptions of Intermediate Assessment in Higher Education.” Educational Studies 44 (4): 449–467. doi:10.1080/03055698.2017.1382324
  • Domenech, Josep, Desamparados Blazquez, Elena de la Poza, and Ana Muñoz-Miquel. 2015. “Exploring the Impact of Cumulative Testing on Academic Performance of Undergraduate Students in Spain.” Educational Assessment, Evaluation and Accountability 27 (2): 153–169. doi:10.1007/s11092-014-9208-z
  • DuPaul, G. J., T. D. Pinho, B. L. Pollack, M. J. Gormley, and S. D. Laracy. 2017. “First-Year College Students with ADHD and/or LD: Differences in Engagement, Positive Core Self-Evaluation, School Preparation, and College Expectations.” Journal of Learning Disabilities 50 (3): 238–251. doi:10.1177/0022219415617164
  • Esch, Pascale, Valéry Bocquet, Charles Pull, Sophie Couffignal, Torsten Lehnert, Marc Graas, Laurence Fond-Harmant, and Marc Ansseau. 2014. “The Downward Spiral of Mental Disorders and Educational Attainment: A Systematic Review on Early School Leaving.” BMC Psychiatry 14: 237. doi:10.1186/s12888-014-0237-4
  • Heiman, T. 2007. “Social Support Network, Stress, Sense of Coherence and Academic Success of University Students with Learning Disabilities.” Social Psychology of Education 9 (4): 461–478. doi:10.1007/s11218-006-9007-6
  • Heywood, J. 2000. Assessment in Higher Education. London: Jessica Kingsley.
  • Hojat, M., J. S. Gonnella, J. B. Erdmann, and W. H. Vogel. 2003. “Medical Students’ Cognitive Appraisal of Stressful Life Events as Related to Personality, Physical Well-Being, and Academic Performance: A Longitudinal Study.” Personality and Individual Differences 35 (1): 219–235. doi:10.1016/S0191-8869(02)00186-1
  • Holmes, N. 2015. “Student Perceptions of Their Learning and Engagement in Response to the Use of a Continuous e-Assessment in an Undergraduate Module.” Assessment & Evaluation in Higher Education 40 (1): 1–14. doi:10.1080/02602938.2014.881978
  • Hysenbegasi, A., S. L. Hass, and C. R. Rowland. 2005. “The Impact of Depression on the Academic Productivity of University Students.” The Journal of Mental Health Policy and Economics 8 (3): 145–151.
  • Isaksson, S. 2008. “Assess as You Go: The Effect of Continuous Assessment on Student Learning during a Short Course in Archaeology.” Assessment & Evaluation in Higher Education 33 (1): 1–7. doi:10.1080/02602930601122498
  • Kelly, A. E., K. Cuccolo, and V. Clinton-Lisell. 2022. “Instructor-Implemented Interventions to Improve College-Student Time Management.” Journal of the Scholarship of Teaching and Learning 22 (3): 89–104. doi:10.14434/josotl.v22i3.32378
  • Kent, Kristine M., William E. Pelham, Brooke S. G. Molina, Margaret H. Sibley, Daniel A. Waschbusch, Jihnhee Yu, Elizabeth M. Gnagy, Aparajita Biswas, Dara E. Babinski, and Kathryn M. Karch. 2011. “The Academic Experience of Male High School Students with ADHD.” Journal of Abnormal Child Psychology 39 (3): 451–462. doi:10.1007/s10802-010-9472-4
  • Kimball, E., R. Friedensen, and E. Silva. 2017. “Engaging Disability: Trajectories of Involvement for College Students with Disabilities.” In Disability as Diversity in Higher Education: Policies and Practices to Enhance Student Success, edited by E. Kim and K. C. Aquino. New York: Routledge.
  • Kirby, J. R., R. Silvestri, B. H. Allingham, R. Parrila, and C. B. La Fave. 2008. “Learning Strategies and Study Approaches of Postsecondary Students with Dyslexia.” Journal of Learning Disabilities 41 (1): 85–96. doi:10.1177/0022219407311040
  • Newman, L. A., J. W. Madaus, A. R. Lalor, and H. S. Javitz. 2021. “Effect of Accessing Supports on Higher Education Persistence of Students with Disabilities.” Journal of Diversity in Higher Education 14 (3): 353–363. doi:10.1037/dhe0000170
  • Powell, K. C., and C. J. Kalina. 2009. “Cognitive and Social Constructivism: Developing Tools for an Effective Classroom.” Education 130 (2): 241–250.
  • Rahmati, Z. 2015. “The Study of Academic Burnout in Students with High and Low Level of Self-Efficacy.” Procedia - Social and Behavioral Sciences 171: 49–55. doi:10.1016/j.sbspro.2015.01.087
  • Reschly, A. L., and S. L. Christenson. 2006. “Prediction of Dropout among Students with Mild Disabilities: A Case for the Inclusion of Student Engagement Variables.” Remedial and Special Education 27 (5): 276–292. doi:10.1177/07419325060270050301
  • Smith, S. G., R. English, and D. Vasek. 2002. “Student and Parent Involvement in the Transition Process for College Freshmen with Learning Disabilities.” College Student Journal 36: 491–503.
  • Spence, R., L. Kagan, S. Nunn, D. Bailey-Rodriguez, H. L. Fisher, G. M. Hosang, and A. Bifulco. 2022. “Life Events, Depression and Supportive Relationships Affect Academic Achievement in University Students.” Journal of American College Health 70 (7): 1931–1935. doi:10.1080/07448481.2020.1841776
  • Tuunila, R., and M. Pulkkinen. 2015. “Effect of Continuous Assessment on Learning Outcomes on Two Chemical Engineering Courses: Case Study.” European Journal of Engineering Education 40 (6): 671–682. doi:10.1080/03043797.2014.100181
  • Vaessen, B. E., A. van den Beemt, G. van de Watering, L. W. van Meeuwen, L. Lemmens, and P. den Brok. 2017. “Students’ Perception of Frequent Assessments and Its Relation to Motivation and Grades in a Statistics Course: A Pilot Study.” Assessment & Evaluation in Higher Education 42 (6): 872–886. doi:10.1080/02602938.2016.1204532
  • von der Embse, N., D. Jester, D. Roy, and J. Post. 2018. “Test Anxiety Effects, Predictors, and Correlates: A 30-Year Meta-Analytic Review.” Journal of Affective Disorders 227: 483–493. doi:10.1016/j.jad.2017.11.048
  • Weyandt, L. L., and G. J. DuPaul. 2013. College Students with ADHD: Current Issues and Future Directions. New York, NY: Springer.