
Narrowing awarding gaps: the contributory role of policy and assessment type

Pages 1665-1677 | Received 17 Aug 2022, Accepted 27 Apr 2023, Published online: 18 May 2023

ABSTRACT

Awarding gaps between various groups of students persist across the Higher Education sector, yet the responses designed to address the contributors remain localised. The sudden spread of COVID-19 led to varied responses across the university sector, creating an unprecedented natural experiment and offering the opportunity to compare outcomes from these measures with prior cohorts. This study investigates the effects of two COVID-19 interventions on students’ performance in the Business and Management discipline at a UK university. The specific COVID-19 measures considered here are the move to online assessments and the new grade policy introduced to ensure the pandemic did not adversely affect students’ outcomes. We use Kernel Propensity Score matching and Quantile Difference-in-Differences models to estimate the treatment effect of the two COVID interventions on the treated group, namely the term two performance of students in the academic year 2019/20. Our results indicate that the effects of both COVID interventions supported the outcomes of international students, thereby narrowing the awarding gap. Findings suggest, firstly, that institutional policies adopted in crises should seek to address potential adverse effects on student outcomes for the period of disruption, indicating that significant care should be taken in their drafting. The policy, in this case, was found to have achieved its aim. Secondly, the move to new modes of assessment, combined with detailed briefings from faculty, may have served to uncover aspects of the hidden curriculum for this group, contributing to a narrowing of awarding gaps between different groups of students.

Introduction

Recent years have seen an increased policy focus on reducing and eliminating persistent undergraduate awarding gaps across the Higher Education sector. Differences in degree attainment between various ethnic groups were first drawn to the attention of the higher education community in the 1990s (Connor et al., Citation1996) and have subsequently been confirmed in a range of studies (e.g. Fielding et al. Citation2008; Richardson Citation2008). They are now an area of focus for sector policy (e.g. the Office for Students in the UK) and, consequently, institutional strategies. Whilst the international awarding gap (differences in degree attainment between students classified as ‘home’ and those classified as ‘international’ (non-EU), as per the Higher Education Statistical Agency) has received less attention from policymakers and researchers (with a few notable exceptions, e.g. Iannelli and Huang Citation2014), it is also now a focus of higher education institutions as they work to embed more equitable curricula and expand their student intakes. The literature is now starting to draw linkages between the social justice agenda of widening participation and internationalisation activity (Gayton Citation2020). In short, it is critical to better understand and address the drivers of differential student attainment in higher education.

To date, work on awarding gaps has focused on controlling for various characteristics which could affect attainment (Cotton et al. Citation2016), finding a strong association between entry grades and degree outcomes (Jones et al. Citation2017; Smith Citation2016). Studies show that, despite controlling for prior attainment, age, gender, and discipline, most ethnic minority groups experience awarding gaps, and that females outperform males (Cotton et al. Citation2016). Qualitative work associates student outcomes with feelings of belonging and successful transition to higher education (Jones Citation2018).

Whilst existing studies allude to assessment practices’ contribution to the observed gaps (Richardson Citation2015), empirical evidence is lacking. This research therefore seeks to contribute to the debate by illuminating the extent to which mode of assessment is a factor, responding to Richardson’s (Citation2015) observation that ‘we do not know what aspects of teaching and assessment practices are responsible for variations in the attainment gap’ (278).

The research focuses on students studying Business and Management subjects, selected because more qualifications were awarded in Business and Management than in any other subject in 2019/20 (HESA Citation2021), with non-EU students comprising 27% of the 2019/20 cohort. Whilst the study is conducted in one discipline at one UK institution, the findings are of broader interest, as awarding gaps are coming into focus in other countries, both within ‘home’ cohorts (Palacios and Alvarez Citation2016) and for ‘international’ cohorts (Crawford and Wang Citation2015). The advantage of adopting a case study approach is that it enables control of key variables, with students who have entered the discipline on similar entry criteria and who have had a comparable experience of teaching and assessment during their studies (Jones Citation2018).

This observational study seeks to uncover the extent to which the mode of assessment contributes to the reported awarding gaps by undertaking a statistical analysis at a modular level, comparing the pre-COVID assessment regime to the wholesale change in assessment in response to the COVID-19 pandemic. In the UK, many universities adopt a degree classification algorithm that includes student performance in the second and final years of the degree, albeit with variations in how they are weighted. The study offers new insights into the influence of assessment type on student outcomes in the second year (level 5 (L5)) and final year (level 6 (L6)), including first attempt, deferred or referred assessments (resits), and the use of the exceptional COVID-19 policy (the ‘No-Detriment policy’ (ND)) in progression and award decisions. The ND approach was used across UK universities to mitigate the potential adverse effects of the rapid adoption of online assessment during the first wave of lockdowns and the general pandemic disruption. However, institutional policy details varied significantly (Chan Citation2022). The changes implemented in response to the pandemic emergency led us to focus on the following two research questions (RQ):

RQ1: To what extent did the institutional policy approach adopted in response to COVID-19 create a backstop for student grades?

RQ2: To what extent did the change in assessment modes contribute to narrowing (or otherwise) of observed awarding gaps?

Awarding gaps

The persistence of awarding gaps of varying magnitude across different institutions and subjects that cannot be explained by entry qualification (Richardson, Mittelmeier, and Rienties Citation2020) implies some degree of influence arising from the structures of teaching and assessment (Richardson Citation2015). As a result, both the institutional context and the cohort composition are likely to be important considerations.

The literature related to awarding gaps is increasingly nuanced, taking into account the complexities of intersectionality and structural barriers, among other factors (Codiroli Mcmaster Citation2021). It now spans a range of studies focusing on the student experience and its influence on attainment, often finding multiple determinants ranging from the curricula and learning environment; staff-student relationships; social, cultural, and economic capitals; and psychosocial and identity factors (HEFCE Citation2015). To investigate these determinants, further studies focus on each aspect and its impact on retention and attainment. For example, Crawford and Wang (Citation2015) found that Chinese students’ final degree mark was not influenced by gender, prior academic performance, prior academic qualifications, or degree programme. The majority of studies adopt qualitative research methods, are typically small in scale, and are conducted at one institution. Recent studies have also started to engage with questions of intersectionality (Richardson, Mittelmeier, and Rienties Citation2020). To date, most studies tend to focus on how the student can change themselves rather than how university structures might change to accommodate a broader conception of students (Koutsouris, Mountford-Zimdars, and Dingwall Citation2021).

Prior studies indicate that international students can misinterpret the requirements of UK higher education and therefore overestimate their likelihood of being awarded a ‘good’ degree, reflecting a lack of understanding of the system (Cotton et al. Citation2016). This misunderstanding may be partially explained by the concept of the hidden curriculum, the difference between the curriculum as designed by the academic and the curriculum as experienced by the learner (Barnett and Coate Citation2005), which forms a helpful construct for explaining differential student outcomes. From its origins in secondary education, the hidden curriculum offers explanatory potential in higher education, where ‘less densely codified curricula provide a landscape of potential for multiple hidden curricula to exist’ (Cotton, Winter, and Bailey Citation2013, 193). The hidden curriculum stands in sharp contrast with the explicitly technical nature of contemporary higher education, with its clear learning outcomes, assessment briefings, and assumed transparency of the curriculum (Orón Semper and Blasco Citation2018). Four primary meanings have been attributed to the concept of the hidden curriculum: implicit expectations relayed by educators; unintended learning outcomes; implicit messages conveyed by the structures of higher education; and students’ interpretations of how to achieve reward and success (Portelli Citation1993). The hidden curriculum is experienced variably by students, as ‘Students are not simply responding to the given subject—they carry with them the totality of their experiences of learning and being assessed’ (Boud Citation2013, 39).

The hidden curriculum has been disaggregated into various components from the perspectives of both the educator and the student. From an educator’s perspective, three primary influences have been highlighted: firstly, what to include in or exclude from the curriculum; secondly, the approach to teaching and the messages this conveys; and finally, the physical or virtual learning environment (Cotton, Bailey, and Tosdevin Citation2020). Despite efforts to create transparency in assessment, the hidden curriculum remains likely to operate below the surface (Wicking Citation2020). Sambell and McDowell (Citation1998) observe that innovative assessments can disrupt the hidden curriculum, thereby reducing the gap between the formal curriculum and the hidden one. As such, the change in assessment dialogue prompted by the pandemic may have served to uncover aspects of the hidden curriculum. This comprised both a change in classroom behaviour, as educators worked to prepare students for the revised assessment modes, and the practical implementation of the revised assessments. To investigate the effect of assessment mode changes on student outcomes, our work evaluates data at a modular level before the application of the institutional no-detriment policy.

Data

Our data set is drawn from administrative data held by the institution in question, a mid-sized research-intensive UK University. It covers all UK-domiciled and overseas (non-EU) students in the HESA Business and Management subject areas. This study includes three years of data (2017/18, 2018/19, and 2019/20) on students’ biographical characteristics and the grades achieved on all individual modules. These characteristics are summarised in Table 1.

Table 1. Descriptive statistics: performance by year. Progression Award Board (*).

The academic data include information about students’ journeys through the university, including which academic year they joined the university, learning needs, and other adjustments such as extenuating circumstances claims, as well as information on the degree outcomes and grades awarded. Throughout, we use the terminology of the Framework for Higher Education Qualifications, which uses the ‘level’ codes L3, L4, L5, and L6 to indicate different stages of student progression, namely Foundation Year (L3), Year 1 (L4), Year 2 (L5), and the final year (L6) of a degree.

The information on module grades is very detailed. The data record each student's first attempt grade received in the module's exam session (as recorded at the Progression Award Board (PAB)) as well as the resit (referral) or sit (deferral) grades received in the August exam session, where these occur. These two sets of grades are referred to as ‘First attempt’ and ‘Final’ (after resit) grades, respectively. They coincide when students either succeed at the first attempt or are not granted any resits or sits.

A third grade is available for the modules delivered in the second term (T2) of 2019/20. This grade reflects the application of the ND or ‘safety net’ policy designed to respond to the COVID-19 emergency, and it is referred to as the ‘Final after ND’ grade. The key feature of the ND considered in this study is the use of each 2019/20 student's T1 average grade (i.e. the average of modules taught under non-pandemic circumstances) to identify whether their T2 grades (the COVID term) seemed unusually low, and to uplift the T2 module results where this was the case. Whenever the ND policy is not binding or not applicable, the ‘Final after ND’ grades coincide with the ‘Final’ (after resit) ones.
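To make the mechanics concrete, the uplift logic described above can be sketched as follows. This is a minimal illustration, assuming the policy simply floors each T2 module grade at the student's T1 average; the function name and data layout are hypothetical, and the institution's actual rules contained further conditions.

```python
def apply_no_detriment(t1_grades, t2_grades):
    """Sketch of the ND 'safety net': floor each T2 module grade at the
    student's T1 average (a hypothetical simplification of the policy)."""
    t1_average = sum(t1_grades) / len(t1_grades)
    return [max(grade, t1_average) for grade in t2_grades]

# A student averaging 62 in T1 whose T2 grades dipped during the pandemic:
print(apply_no_detriment([60, 64, 62], [55, 66, 48]))  # -> [62, 66, 62]
```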

In addition to the ND policy, the University responded to the physical restrictions imposed by the pandemic emergency by moving all on-campus activities and all assessments online. Traditional assessments were replaced by alternative instruments such as Take-Home Papers (TAP) or online quizzes (Multiple Choice Questions (MCQs)). The granularity of the students’ sets of grades for pre-COVID and COVID teaching terms enables us to exploit the natural-experiment character of the pandemic emergency and investigate the effects of both measures: the one-off effect of the ND policy and the change in assessment mode that remains in place. The findings concerning changes in modes of assessment will help inform future assessment policy.

Methodology

The data available can be explored either as a pooled set of repeated cross-sections or in a longitudinal format. We started with the repeated cross-section set and used a Difference-in-Differences (DiD) treatment effect approach to examine the average differences in grades between the cohort affected by COVID and the two cohorts graduating in the two years before. We then explored the differential impact of the interventions by looking at different points of the cohorts’ grade distributions, using quantile DiD estimations at the quartiles. The longitudinal data set was used to estimate a panel fixed effect model as a robustness check, to explore whether the within-individual effect, rather than the within-cohort effect, would confirm the findings of the original DiD estimates.

The DiD approach

All three years share similar conditions regarding fee levels and admission from an institutional policy perspective. To explore whether the COVID interventions used in T2 2019/20 produced any treatment effects on awarding gaps, we apply a DiD treatment approach, using the 2019/20 cohort as the treated group and their T2 performance as the treated outcome. We then compare their T1 and T2 outcomes with those of a control cohort (2018/19) not affected by COVID interventions. We make this comparison for L6 students as well as for L5 students.
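For reference, the underlying two-group, two-period design can be written in its canonical form (a textbook formulation rather than the authors' exact estimating equation):

\[ g_{it} = \beta_0 + \beta_1\,\mathrm{COVID}_i + \beta_2\,\mathrm{T2}_t + \delta\,(\mathrm{COVID}_i \times \mathrm{T2}_t) + \gamma' X_{it} + \varepsilon_{it} \]

where \(g_{it}\) is the module grade, \(\mathrm{COVID}_i\) indicates membership of the 2019/20 cohort, \(\mathrm{T2}_t\) indicates a term two observation, \(X_{it}\) collects the matching covariates, and \(\delta\) is the DiD treatment effect of interest.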

The repeated cross-section analysis exploits differences between the student cohorts and requires some additional assumptions about the overall comparability of those cohorts. Therefore, to check the robustness of our results, we also produce DiD estimates for the two cohorts not affected by COVID measures, comparing T1 and T2 outcomes of the 2017/18 cohort with those of the 2018/19 cohort.

The DiD models were estimated using robust standard errors. We took account of covariates and used the Kernel matching option to match comparison individuals on observable factors and to remove systematic differences in the evaluated outcome between treated and non-treated groups. The balancing properties of the DiD models were all tested.
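A compact sketch of this pipeline, run on synthetic data, is given below. It is illustrative only: the covariate names (entry_grade, female), the bandwidth, and the use of Python are assumptions, as the paper does not report its estimation software.

```python
# Minimal sketch of a kernel propensity-score DiD on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),       # 1 = 2019/20 (COVID) cohort
    "t2": rng.integers(0, 2, n),            # 1 = term two observation
    "entry_grade": rng.normal(120, 15, n),  # illustrative covariate
    "female": rng.integers(0, 2, n),        # illustrative covariate
})
df["grade"] = (55 + 0.05 * df["entry_grade"] + 2 * df["female"]
               + 3 * df["treated"] * df["t2"] + rng.normal(0, 8, n))

# 1. Propensity score: probability of belonging to the treated cohort.
ps_model = LogisticRegression().fit(df[["entry_grade", "female"]], df["treated"])
df["pscore"] = ps_model.predict_proba(df[["entry_grade", "female"]])[:, 1]

# 2. Epanechnikov kernel weights: control observations whose propensity
#    scores lie close to treated units receive more weight.
bw = 0.06
treated_ps = df.loc[df["treated"] == 1, "pscore"].to_numpy()

def kernel_weight(p):
    u = (p - treated_ps) / bw
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0).mean()

df["w"] = np.where(df["treated"] == 1, 1.0, df["pscore"].map(kernel_weight))
# Normalise the control weights so the two groups are comparably weighted.
df.loc[df["treated"] == 0, "w"] /= df.loc[df["treated"] == 0, "w"].mean()

# 3. Weighted DiD regression with heteroskedasticity-robust standard errors.
X = sm.add_constant(df[["treated", "t2"]].assign(did=df["treated"] * df["t2"]))
fit = sm.WLS(df["grade"], X, weights=df["w"]).fit(cov_type="HC1")
print(fit.summary().tables[1])  # the 'did' row is the estimated treatment effect
```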

We also apply the same DiD approach to investigate treatment effects at different quartiles of the distributions of cohorts’ performances.
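In its simplest form, the quantile analogue replaces group means with quantiles of the grade distribution (a standard construction; the implemented estimator may differ in detail):

\[ \hat{\delta}(\tau) = \bigl[Q^{\mathrm{treated}}_{\mathrm{T2}}(\tau) - Q^{\mathrm{treated}}_{\mathrm{T1}}(\tau)\bigr] - \bigl[Q^{\mathrm{control}}_{\mathrm{T2}}(\tau) - Q^{\mathrm{control}}_{\mathrm{T1}}(\tau)\bigr], \qquad \tau \in \{0.25, 0.5, 0.75\} \]

where \(Q^{g}_{t}(\tau)\) denotes the \(\tau\)-th quantile of grades for group \(g\) in term \(t\).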

Longitudinal approach

While the DiD approach allows us to compare the performances of different cohorts of students at the same level of progression, the longitudinal analysis enables us to control for individual differences, as the same person's performance is compared through time. International students from China represent around 70% of all international students in each cohort (see Table 1); they have an unusually balanced gender distribution, and almost all of them (80%) join the university in the second year of the degree. We therefore focus only on their T1 and T2 performances at L5 and L6, so that the same cohort of students in the panel is sampled four times.

To investigate how the COVID interventions could have affected awarding gaps, we apply the longitudinal fixed effect model to two cohorts of overseas students from China: those who graduated the year before COVID and those who graduated in the COVID year. The presence of module repeaters and students on placement or studying abroad creates attrition in the data that renders the panel unbalanced. The evolution of the cohort is set out in Table 2.

Table 2. Evolution of the Longitudinal Cohort. Evidence of an unbalanced cohort (International students from China).

To exploit the information on students’ experience over the two years of the degree, we will assume some degree of comparability between levels of the degree and across cohorts. We discuss these conditions in the results section.
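The implied specification can be sketched as follows (hypothetical notation; the estimated model may include further controls):

\[ g_{it} = \alpha_i + \beta_1\,\mathrm{T2}_t + \beta_2\,\mathrm{L6}_t + \beta_3\,(\mathrm{T2}_t \times \mathrm{L6}_t) + \varepsilon_{it} \]

where \(\alpha_i\) is a student fixed effect absorbing all time-invariant individual characteristics, and the term and level dummies trace each student's grades across the four sampled points (T1 and T2 at each of L5 and L6), estimated separately for the pre-COVID and COVID cohorts.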

Results

We present three main sets of results: the repeated cross-sectional DiD, the repeated cross-sectional DiD quantile regression, and the fixed effect panel regression.

Repeated cross-sectional DiD

For each of the DiD estimates presented below, we estimated a full DiD model. An example of the repeated cross-sectional DiD for L6 is presented in the Appendix (Table A1). Table 3 below presents the DiD estimates for the different sets of grades using the repeated cross-sectional structure.

Table 3. Summary DiD Results 2018/19 and 2019/20.

The first two rows of Table 3 highlight the effects of the ND policy and the switch to online assessments on UK domestic and international students. The changes in assessments did not produce any statistically significant impact on the UK domestic students’ grades, either in T2 or at the resit stage. However, their grades were affected by the application of the ND policy, which produced similar statistically significant gains at L5 and L6. These gains did not, however, change the grade classification, and the overall group average remained in the 60s interval.

The international students gained from the COVID interventions; as a result, their average final grade crossed the 60% threshold, increasing their probability of being awarded a higher degree classification. In addition, the grade increase at L5 was substantially bigger than at L6. The substantial proportion of international students entering the UK HE system directly at L5 may partly explain this result: the COVID measures helped these Direct Entry (DE) students by easing their transition into the system.

The third and fourth rows of Table 3 show the differential impact of the COVID interventions between the international students from China and the rest of the international students. The results show that, at L6, this latter group behaves similarly to the UK domestic group: only the ND intervention produced a significant effect on their grades, big enough to induce a change in grade classification. At L5, however, the same category of students tells a different story, benefitting from both COVID interventions (and crossing the 60% threshold in their final after-resit grades). By contrast, the international students from China benefitted from both COVID interventions (the change in assessments in T2 and in resits, and the ND policy) at every level of progression. However, only those at L6 crossed the 60% threshold with the application of the ND policy.

Given that the international Chinese students represent a significant proportion of overseas students, we focus on this group and report their results in the lower part of Table 3, splitting them by gender. Our findings suggest that female students from China benefitted from both COVID interventions, while male students did not.

To summarise, the combination of the changes in assessments and the ND policy increased the averages of all groups of students and enabled the international students, except the male Chinese group, to reach a group average grade that crossed the 60% threshold (an upper second classification) and catch up with the domestic students. However, for most groups, except Chinese females, the gains are mainly attributable to the ND policy itself rather than to the change in assessments or the possibility of resits. Overall, our findings present heterogeneous responses to, and effects of, the COVID policies across students’ gender, ethnicity, and country of domicile.

To check for robustness, we applied the same model to compare the previous two pre-COVID cohorts, who were not affected by the emergency interventions (i.e. 2017/18 and 2018/19).

These results show that, when COVID interventions were not in place, there were no differential effects between T1 and T2 grades across cohorts at L6. The absence of any statistically significant difference between the 2018/19 and 2017/18 performances indicates that, in the absence of intervention, any gap in performance was constant between term 1 and term 2 (neither closing nor increasing) for pre-COVID cohorts at L6 (this finding is summarised in the first two columns of Table 4). As a result, the presence of statistically significant differences between the 2018/19 and 2019/20 cohorts captures the treatment effect of the COVID policies.

Table 4. Summary DiD results 2017/2018 and 2018/19.

Level 5 shows some statistically significant differences between T2 and T1 in the pre-COVID period as well. As explained earlier, these are related to the effect of the direct entry (DE) cohorts and their impact in T1. Even taking account of this DE effect in the previous results, the female international students from China responded differently to the COVID changes from other groups of international and home students, benefitting from the COVID measures.

The repeated cross-sectional DiD quantile regression

The remaining results seek to better understand these differential responses by looking at the quartiles of the grade distribution instead of averages. Table 5 presents the DiD quantile regression results for L6 and L5 international students from China, by gender.

Table 5. DiD Quartile regressions. International Students from China ONLY. COVID cohorts.

We do not present the pre-COVID results at L6 because, as before, we did not find any significant differences in grades. Although this is not a formal test of the common trend assumption, which is not required in the Kernel-type DiD, it does suggest that in the absence of any intervention the gap in performance between the cohorts is constant from term 1 to term 2. From Table 5, we can observe three things. Firstly, the change to online assessments enabled students in the lower quartile to substantially increase their grades, moving from the 40s interval into the 50s range. This event is reported in Table 5 using the bold acronym CL, which denotes an average grade change from a third into a lower second class; the application of the ND then reinforced this move. These results hold for both L5 and L6 international Chinese students, but with some differences between genders (mainly because female students in the lower quartile were already in the 50s range).

Secondly, finalist students (L6) in the second quartile also benefitted from the online assessment change, and their average grade moved from the 50s into the 60s range. The bold acronym CU denotes an average grade change from a lower- to an upper-second classification, a move reinforced by the ND policy. Interestingly, for L5 students (most of whom are DE), this crossing of the classification boundary occurred only after the resit exam period and the application of the ND policy. Gender differences are also present in this second quartile: for example, the average female grade crossed the 60% boundary earlier than the male average because female students start from a higher average.

Thirdly, the COVID policies did not create additional first-class degrees in the upper quartile. However, both the assessment changes and the ND policy helped L6 male students from China move up to an average of 60, joining the female students already positioned in that grade range.

The DiD quantile regression across three different sets of grades (first attempt, final after resits, and final after ND policy) reveals the importance of stepping away from the average effects for all students and considering the distributional impact of a change in policies across student demographics and levels of student attainment.

The fixed effect panel regression

The final set of results is based on comparing the performances of the same students as they progress from L5 into L6, instead of comparing different cohorts. We consider each student's T1 and T2 grades at L5 and L6 and estimate a fixed effect model. We then compare the results of the students who graduated under the first COVID lockdown with those who graduated the year before, focusing only on the international students from China. The results of the estimates are reported in Table 6 and in the Appendix (Table A3).

Table 6. Fixed effect model. International students from China.

Looking at the students in the pre-COVID cohort and tracking them through the L5 and L6 stages, we notice an improvement from T1 to T2 for L5 students (most of them DE) as they adjust to university academic life, but a noticeable drop in their T1 and T2 grades at L6, perhaps due to more challenging modules.

In the COVID cohort, we also find a significant drop in grades in the first term of L6 (T1). This result is relevant because the ND took that term's grades as the minimum. However, the T2 average at L6 does not remain low: the COVID measures lift it, with some differential effects. For example, students who did not invoke the ND policy showed a substantial increase in grades relative to those who used it. This suggests that the change in assessment modes was very effective for some students (those who did not need the ND policy) but less effective, or ineffective, for those who needed to invoke it.

The combination of the two COVID measures has thus helped move students closer to or above the 60% threshold, and the ND (by using T1 grades as a minimum) has offered an effective lower bound for the students it was meant to help. We believe that the ND policy was judiciously designed. It did not inflate grades for subjects in the HESA Business and Management categorisation (it would have been more generous had it included the average of L5 grades). In addition, it helped students who invoked it to maintain the trajectory of their T1 grades, and it remained neutral for those who did not use it, whose grades improved due to the changes in assessments.

Conclusions

The study highlighted the contribution both of the changes to degree awarding policies adopted at the onset of the COVID-19 pandemic and of the changes to modes of assessment made to maintain educational continuity. It offers a rich insight into the detailed work that institutions can undertake to further understand the effect of assessment change on awarding gaps in the student body. Such changes, combined with the briefings offered by faculty on the revised assessment modes, may have served to uncover previously implicit aspects of assessment for international students, supporting their outcomes and leading to a reduction in the awarding gap. In addition, the study offers a commentary on the effect of the institutional policy adopted at the time and may inform the future construction of force majeure regulations should they be required.

Limitations include the generalisability beyond one institutional context and academic framework. Future research could undertake a comparative analysis of institutional force majeure policies and their effects on awarding gaps at the time of the pandemic. This may help to inform sector understanding of the relationship between assessment type, institutional policy, and grade inflation.

Acknowledgments

We thank the participants of the British Educational Research Association Annual Conference 2021 and the Developments in Economics Education Conference 2021 (the biennial Economics Network conference) for their valuable comments and feedback. We confirm that the University’s Ethical Review Policy has been applied and ethical approval has been obtained (ER/MGC25/2). The usual disclaimers apply.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Barnett, Ronald, and Kelly Coate. 2005. Engaging the Curriculum in Higher Education. Maidenhead: Open University Press.
  • Boud, David. 2013. Enhancing Learning Through Self-Assessment. Sydney: Routledge.
  • Chan, C. 2022. “A Review of the Changes in Higher Education Assessment and Grading Policy During COVID-19.” Assessment and Evaluation in Higher Education. doi:10.1080/02602938.2022.2140780.
  • Codiroli Mcmaster, Natasha. 2021. Ethnicity Awarding Gaps in UK Higher Education in 2019/20. York: Advance HE. https://www.advance-he.ac.uk/knowledge-hub/ethnicity-awarding-gaps-uk-higher-education-201920.
  • Connor, Helen, Ivana La Valle, Nii Djan Tackey, and Sarah Perryman. 1996. Ethnic Minority Graduates: Differences by Degrees. Report 309. Brighton: Institute for Employment Studies.
  • Cotton, Debby, John Bailey, and Matthew Tosdevin. 2020. “Higher Education and the Climate Emergency: Exploring the Hidden Curriculum of the Campus.” In The Hidden Curriculum of Higher Education, edited by Tim Hinchcliffe, 29–40. York: Advance HE.
  • Cotton, D. R. E., M. Joyner, R. George, and P. A. Cotton. 2016. “Understanding the Gender and Ethnicity Attainment Gap in UK Higher Education.” Innovations in Education and Teaching International 53 (5): 475–86. doi:10.1080/14703297.2015.1013145.
  • Cotton, Debby, Jennie Winter, and Ian Bailey. 2013. “Researching the Hidden Curriculum: Intentional and Unintended Messages.” Journal of Geography in Higher Education 37 (2): 192–203. doi:10.1080/03098265.2012.733684.
  • Crawford, Ian, and Zhiqi Wang. 2015. “The Impact of Individual Factors on the Academic Attainment of Chinese and UK Students in Higher Education.” Studies in Higher Education 40 (5): 902–20. doi:10.1080/03075079.2013.851182.
  • Fielding, A., C. M. J. Charlton, D. Z. Kounali, and G. B. Leckie. 2008. Degree Attainment, Ethnicity and Gender: Interactions and the Modification of Effects - A Quantitative Analysis.
  • Gayton, Angela M. 2020. “Exploring the Widening Participation-Internationalisation Nexus: Evidence from Current Theory and Practice.” Journal of Further and Higher Education 44 (9): 1275–88. doi:10.1080/0309877X.2019.1678014.
  • HEFCE. 2015. Causes of Differences in Student Outcomes.
  • HESA. 2021. “Where Students Come from and Go to Study (2019/20) (SB258).” 27 January 2021. https://www.hesa.ac.uk/news/27-01-2021/sb258-higher-education-student-statistics/location.
  • Iannelli, Cristina, and Jun Huang. 2014. “Trends in Participation and Attainment of Chinese Students in UK Higher Education.” Studies in Higher Education 39 (5): 805–22. doi:10.1080/03075079.2012.754863.
  • Jones, Steven. 2018. “Expectation vs Experience: Might Transition Gaps Predict Undergraduate Students’ Outcome Gaps?” Journal of Further and Higher Education 42 (7): 908–21. doi:10.1080/0309877X.2017.1323195.
  • Jones, Steven, Maria Pampaka, Daniel Swain, and Julian Skyrme. 2017. “Contextualising Degree-Level Achievement: An Exploration of Interactions Between Gender, Ethnicity, Socio-Economic Status and School Type at One Large UK University.” Research in Post-Compulsory Education 22 (4): 455–76. doi:10.1080/13596748.2017.1381287.
  • Koutsouris, George, Anna Mountford-Zimdars, and Kristi Dingwall. 2021. “The ‘Ideal’ Higher Education Student: Understanding the Hidden Curriculum to Enable Institutional Change.” Research in Post-Compulsory Education 26 (2): 131–47. doi:10.1080/13596748.2021.1909921.
  • Palacios, Angélica M. G., and Rafael D. Alvarez. 2016. “An Analysis of Nonfirst-Generation Community College Men of Color: Comparing GPA, Noncognitive, and Campus Ethos Differences Across Race.” Community College Journal of Research and Practice 40 (3): 180–7. doi:10.1080/10668926.2015.1112319.
  • Portelli, John P. 1993. “Exposing the Hidden Curriculum.” Journal of Curriculum Studies 25 (4): 343–58. doi:10.1080/0022027930250404.
  • Richardson, John T.E. 2008. “The Attainment of Ethnic Minority Students in UK Higher Education.” Studies in Higher Education 33 (1): 33–48. doi:10.1080/03075070701794783.
  • Richardson, John T.E. 2015. “The Under-Attainment of Ethnic Minority Students in UK Higher Education: What We Know and What We Don’t Know.” Journal of Further and Higher Education 39 (2): 278–91. doi:10.1080/0309877X.2013.858680.
  • Richardson, John T. E., Jenna Mittelmeier, and Bart Rienties. 2020. “The Role of Gender, Social Class and Ethnicity in Participation and Academic Attainment in UK Higher Education: An Update.” Oxford Review of Education 46 (3): 346–62. doi:10.1080/03054985.2019.1702012.
  • Sambell, K., and L. McDowell. 1998. “The Construction of the Hidden Curriculum: Messages and Meanings in the Assessment of Student Learning.” Assessment and Evaluation in Higher Education 23 (4): 391–402. doi:10.1080/0260293980230406.
  • Orón Semper, José Víctor, and Maribel Blasco. 2018. “Revealing the Hidden Curriculum in Higher Education.” Studies in Philosophy and Education 37 (5): 481–98. doi:10.1007/s11217-018-9608-5.
  • Smith, Emma. 2016. “Can Higher Education Compensate for Society? Modelling the Determinants of Academic Success at University.” British Journal of Sociology of Education 37 (7): 970–92. doi:10.1080/01425692.2014.987728.
  • Wicking, Paul. 2020. “Formative Assessment of Students from a Confucian Heritage Culture: Insights from Japan.” Assessment & Evaluation in Higher Education 45 (2): 180–92. doi:10.1080/02602938.2019.1616672.

Appendix

Table A1. DiD Estimates. Comparing COVID and Pre-COVID cohorts of L6 students.

Table A2. DiD Estimates. Comparing COVID and Pre-COVID cohorts of L6 International students ONLY.

Table A3. Fixed Effect Model.