
Democratising assessment rubrics for international students

Abstract

Despite their widespread adoption and recognised benefits, rubrics have been critiqued for their potential misalignment with student needs. The voices of international students, who constitute a substantial portion of the higher education population, remain underrepresented. This study examines the perspectives of international undergraduate students on assessment rubrics in a UK business school. Employing a participatory research approach and focus groups, the study reveals challenges students face related to rubric design, rubric use during and post-assessment, and prior experience with rubrics. It concludes that enhancing the accessibility of rubrics for international students – via thoughtful design, timely introduction, focused discussions, pertinent activities and seamless integration throughout the assessment cycle – is paramount. The article advocates for more inclusive and effective rubric practices tailored to the diverse needs of the international student body.

Students struggle to understand assessment expectations when they are presented in the form of assessment rubrics (Rust, Price, and O’Donovan 2003; O’Donovan, Price, and Rust 2004). Recent discussions emphasise the need for explicit criteria (Allen and Tanner 2006; Balan and Jönsson 2018; Balloo et al. 2018), outline characteristics of effective rubric design (Brookhart 2013, 2018; Z. Chan and Ho 2019) and test interventions to support student engagement with them (Graham, Harner, and Marsham 2022; Mountain et al. 2023). However, students still struggle to meaningfully engage with rubrics without explicit guidance from staff (Orsmond, Merry, and Reiling 1996). They perceive rubrics to be much less helpful in making assessment transparent than do the academics who produce them, who often take their effectiveness for granted (Z. Chan and Ho 2019). The gap between students’ engagement with assessment rubrics and faculty expectations for their use raises questions about the effectiveness of written rubrics and the features that could impact academic success.

Assessment rubrics serve as both a cornerstone and a point of contention in the landscape of higher education evaluation. As systemic frameworks, rubrics strive for clarity and standardisation, aiming to ensure equity, consistency and clear delineation in the evaluation process (Jönsson and Svingby 2007). Their utility in communicating assessment expectations has fuelled adoption (Balan and Jönsson 2018). Nevertheless, debate persists concerning their design and their aptitude to deliver quality assessment and meaningful feedback to students (Dawson 2017; Z. Chan and Ho 2019). In the UK, the National Student Survey chronicles discontent among undergraduates regarding assessment and feedback (Davies 2023). While rubrics excel in providing structure, expediting grading, facilitating self-assessment and promoting evaluative judgement (Reddy and Andrade 2010), they are simultaneously subject to critical inquiries regarding their real-world effectiveness in educational appraisal.

A growing interest exists in examining student perspectives on what constitutes an effective rubric (Leader and Clinton 2018; Z. Chan and Ho 2019; Kilgour et al. 2020). However, notable gaps in the body of research on rubrics are the scant representation of international student viewpoints (Sun et al. 2023) and the lack of distinction between domestic and international student status in rubric studies, despite the significance of this demographic. To illustrate, between 2020 and 2021, international students constituted 22% of the UK university student populace, approximately 584,100 entrants (Bolton and Lewis 2022). Wolf and Stevens (2007) highlight that rubrics are particularly helpful for international students and even recommend their use, yet variations in prior experience with assessment rubrics and challenges in understanding assessment expectations might negate the effects of assessment rubrics and inadvertently impact students’ assessment outcomes. Given the increasing diversity of higher education, further investigation which includes international students’ voices is critical (Boyle et al. 2020) if we as educators ‘are to be inclusive in our practice and give all learners maximum opportunities to reach their potential’ (McLean 2018, 1327). In an era where inclusivity and equitable assessment are of paramount importance, embracing a myriad of perspectives to better understand the challenges students face is vital to create rubrics that are universally effective.

This study investigates undergraduate international students’ perspectives on assessment rubrics in the business school of a UK higher education provider. To do so, it combines a participatory research approach (Bergold and Thomas 2012) with focus groups to amplify the student voice. The aim of the paper is to investigate the challenges first-year undergraduate international students face regarding rubrics. The primary objectives are to uncover the difficulties these students encounter with rubrics, and to propose strategies for educators and institutions to enhance the accessibility, engagement and support surrounding these assessment instruments, ultimately democratising rubrics and improving students’ assessment experiences. This study adds to our understanding by providing an authentic picture of student engagement with these tools, shedding light on students’ perceptions of their value to learning.

Literature review

Assessment rubric research in higher education has a deep-rooted history, experiencing a pronounced surge in recent times. Rubrics are one of the many assessment artefacts that have emerged since the 1990s (Hudson et al. 2017) alongside quality assurance processes in UK higher education to communicate, transfer and share knowledge of standards and expectations with stakeholders in the assessment process (Sadler 2014). In the context of higher education, diverse assessment instruments are provided to students to guide and evaluate their work. As a central tool in this evaluative suite, the assessment rubric is hallmarked by its evaluative criteria, quality definitions for those criteria at particular levels, and a scoring strategy (Popham 1997; Dawson 2017). Considering the inherent ambiguity surrounding the rubric’s conceptual architecture, Dawson (2017) sought to refine rubric delineation, presenting a compendium of 14 design parameters that typify both the essence of rubrics and the contexts in which they are applied. These elements – specificity, secrecy, exemplars, scoring strategy, evaluative criteria, quality levels, quality definitions, judgement complexity, users and uses, creators, quality processes, accompanying feedback information, presentation, and explanation – offer an enriched scaffold for rubric discussions and scholarly investigation.

Positioned within Dawson’s theoretical framework, analytic rubrics dissect criteria into evaluative dimensions and describe performance across a range of levels, which allows for more granular feedback and use with complex assignments (Jönsson and Svingby 2007). In contrast, holistic rubrics are characterised by their uni-dimensional scoring approach, making them optimal for simpler tasks needing only broad evaluation (Brookhart 2018). The use of points-based marking schemes is also notable, denoting the allocation of marks for the depth of insight in a student’s response and typically employed in essays, short answers and timed assessments (Santrock 2018).

It is widely agreed that rubrics enhance the transparency of assessment (Jönsson 2014; Jönsson and Prins 2019). Although transparency is used as a metaphor to suggest that rubrics help to make assessment criteria and expectations more visible to stakeholders (Bearman and Ajjawi 2021), rubrics have become a critical tool to facilitate shared communication of these aspects between stakeholders, providing explicit statements that identify how learning is evidenced in an assessment (Sadler 2012). By reading rubrics, students are empowered to identify desired performance levels, establish learning goals, and improve their work via self-assessment (Panadero and Romero 2014). Rubrics also enhance student learning by increasing the value and insightfulness of feedback (Cockett and Jackson 2018).

Despite their utility, challenges persist for students using assessment rubrics. For example, the very structure of some analytic rubrics could overwhelm students with an abundance of details (Panadero and Jönsson 2013), thus diluting their focus on the most critical aspects of the assessment. Some studies find that students perceive rubrics to contain components that instructors value but omit design and criteria facets that the students consider pertinent to them (Bell, Mladenovic, and Price 2013; Panadero and Romero 2014; C. K. Y. Chan and Luo 2022). To address these issues, scholars and practitioners are urged to explore the intricacies of rubric design and effectiveness (Brookhart and Chen 2015; Dawson 2017; Brookhart 2018) while considering their primary users’ needs (Andrade 2005).

The complex and specialised terminology used within rubrics may be well understood by academics but not by their students (Lea and Street 1998; Andrade 2005). These challenges may be especially pronounced for international students due to linguistic barriers and diverse educational backgrounds. Jargon-heavy rubrics can create a cognitive barrier, making them difficult for students to decipher and stymieing their attempts to align their work with the intended learning outcomes. Limited linguistic proficiency and dexterity has been found to magnify international students’ assessment difficulties and increase their workload whilst reducing satisfaction with assessment and feedback (Wearring et al. 2015). Ammigan (2019) found that fairness and transparency of assessment, and explanation of grading and assessment criteria, correlated with international students’ satisfaction with their institutional experience. These findings suggest rubrics designed to support comprehension and engagement may contribute to various dimensions of international students’ satisfaction.

International students frequently enter new education systems with pre-existing mental models of academic quality that are incongruent with those promulgated by their host institutions (Boyle et al. 2020). Learning to navigate these new academic landscapes requires adaptation to different pedagogical and assessment practices. Adaptation also requires them to build new schemas for categorising assessment criteria, and to develop a ‘working memory’ of assessment standards to calibrate their performance and fine-tune their interpretation of feedback according to the nuances of the quality benchmarks (Sweller 1988; Paas, Renkl, and Sweller 2003). Ghalayini (2014) examined the experiences of Indian international postgraduate students, finding that they were not familiar with assessment rubrics and struggled to understand how to meet the requirements of specific types of assignment. Unfamiliarity with rubrics and their application can increase cognitive load, especially when instructor-led orientation and guidance are absent (Reddy and Andrade 2010). Without coherent interpretative guidance, students may misconstrue expectations and perceive dissonance between what the assignment prompts and what the rubric assesses, further confounding their efforts to meet assessment expectations (Jönsson and Svingby 2007). Such assessment-related challenges have even been linked to decisions by some first-year undergraduates to withdraw from their courses (Krause 2005; Jones and Fleischer 2012).

This study responds to the call for additional empirical studies to better understand the challenges international students face with assessment rubrics (Macias and Dolan 2009; McLean 2018; Boyle et al. 2020). The overarching research question of the paper was: how does the design and use of assessment rubrics impact international students’ engagement with and comprehension of them during the assessment process?

Method

A participatory research model (Bergold and Thomas 2012) was employed, wherein international students collaborated in all phases of the research, from conceptualisation of the focus group protocol to leading on data collection and contributing nuanced analysis through a student-centric lens. The research team comprised one education-focused scholar who teaches at the study institution and two research assistants who were also international students and employed through the university. The research assistants received formal training in focus group methodology and thematic analysis, provided by an independent research methods consultant. According to Bergold and Thomas (2012), participatory research aims to conduct studies in collaboration with those whose experiences are the subject of the research to democratise the research process and foster cooperation. Welikala and Atkin (2014) posit that this collaborative approach is especially relevant in educational research, in which researchers’ interpretations of student experiences dominate.

Data collection

The study was conducted within a UK business school that does not enforce a mandatory assessment rubric policy. While module leaders are encouraged to create tailored assessment rubrics, they are permitted to use the university’s generic assessment criteria on the Virtual Learning Environment (VLE), which outline standards for respective study levels.

Anticipating diverse experiences and varied opinions on assessment rubrics, focus groups were chosen for the safe and supportive environment they can create for participants, which fosters sharing (Bertrand, Brown, and Ward 1992). Relative to surveys and interviews, focus groups also allow for the capture of responses to moderators’ questions as well as spontaneous reactions to, and elaborations on, the conversation that emerges among participants (Massey 2011).

The university’s Student Experience team recruited international student focus group participants via email. Four focus groups lasting between 60 and 90 min each were conducted. Each group consisted of three to five students and was led by two facilitators. The study aims were explained to all participants. Participants signed consent forms giving permission to audio and video record the discussion. Recordings were anonymised upon transcription. Students received a £15 voucher for their participation.

The focus groups included 16 undergraduate international students: nine first years, four second years and three third years. For inclusivity of design, participants were drawn from various degrees within the business school, reflecting diverse assessment practices and tools: Business and Management (n = 5), Economics and Management (n = 4), Accounting and Management (n = 5), and Joint Honours degrees (n = 2). The business school context was selected as participants took between four (n = 2) and seven (n = 14) business school modules in the first year, with four modules (introduction to management, introduction to accounting, principles of economics and principles of marketing) common to all participants’ courses, providing a shared basis for reflection. All participants were non-UK citizens who had completed primary and secondary education outside of the UK. Two participants had completed preparatory (foundation or pre-sessional) education in the UK, at the study institution.

During the focus groups, students referred to a sample of assessment tools including the generic undergraduate assessment criteria, an analytic rubric, a holistic rubric, and a marking scheme used on modules in the business school. Sample focus group questions probed approaches to assessment criteria communication in post-secondary versus higher education, engagement with rubrics, perceived utility of rubrics, timing of introduction, accessibility, and ways instructors could enhance students’ understanding. Collectively, the focus groups provided rich data for analysis.

Data analysis

This study employed the DEPICT model to support a partnership approach to qualitative data analysis (Flicker and Nixon 2015). Transcribed focus group data were analysed in six steps: (1) independent reading of a subset of the transcripts by each member of the team, (2) independent coding, (3) engaged collaborative codebook development, (4) inclusive reviewing and summarising of categories, (5) collaborative analysis of the data and (6) translation. Note that steps 2 and 3 are reversed relative to Flicker and Nixon’s (2015) description of the method, as this study employed inductive thematic analysis. Thematic analysis is a versatile method consisting of a range of sub-methods that requires careful planning in the research design (Braun and Clarke 2006). An inductive approach was used, as the research team did not start with an existing theory or framework. Rather, the lead researcher’s background in assessment research provided a theoretical understanding of assessment rubrics (Dawson 2017).

The codes formulated were more semantic than latent (Braun and Clarke 2006, 2023), as the goal of the research was to accurately portray what the study participants expressed, instead of uncovering deeper, hidden meanings. Open codes were generated based on words and phrases used by focus group participants (e.g. hard to read, vague, never mentioned, use in class, etc.). Each member of the team coded and authored an analysis of the data. The process enhanced the fidelity of interpretation and brought validity to the process, whilst the independent readings of the transcript required by the DEPICT method allowed for creative interpretations of the data (Flicker and Nixon 2015). Open codes were then compared and grouped in discussion to create axial codes, which established relationships between the open codes. Examples of axial codes included ‘presentation’, ‘customisation’, ‘engagement with the rubric’, etc. Extracts of data identified as perplexing, open codes and axial codes were discussed by the three members of the research team until text categorisation and appropriate code labels were agreed. It is acknowledged that these interpretations were significantly influenced by the research team’s perspective, as opposed to ‘emerging’ organically from the data (Varpio et al. 2017). The research team’s ontological stance of realism (Maxwell 2012) led to an understanding that the assessment rubric experiences of the participants are a part of their own reality, which may vary from those of other students.

Results

Data analysis yielded four key themes: rubric design, assessment preparation, post-assessment, and prior experience with rubrics. Rubric design included sub-themes such as specificity, presentation and quality of descriptors. Assessment preparation included sub-themes such as engagement with the rubric and availability of past papers and model answers. The post-assessment theme centred on linking the rubric to feedback. While the degree of importance given to each of these themes in each focus group discussion varied, the themes were evident in all focus groups. The reliability of these findings was enhanced through independent coding by each member of the research team followed by collective discussion (Maxwell 2012).

Rubric design: interplay of rubric specificity, visual presentation and language

A recurring theme in this study relates to the concerns raised by participants about the specificity of assessment rubrics. Across the focus groups, there was a notable tension between use of the university-wide generic descriptors and tailored, task- or assessment-specific rubrics (Dawson 2017). In all focus groups, students shared the experience of being directed to generic undergraduate criteria on the VLE. One student remarked that each time they ‘opened those marking criteria [they were] exactly the same as before’. Repeated exposure to what many perceived as repetitive and redundant marking criteria considerably reduced students’ motivation to fully engage with the rubric. There was a general lack of desire to scrutinise a document they had seen multiple times. One student explained they ‘don’t really read the marking criteria. I usually just check and scan it’, while another had stopped reading it altogether because they ‘didn’t fancy reading the same document nine times’. This sentiment is juxtaposed against an awareness of escalating academic standards as they progress through their studies. Despite this awareness, the monotony of generic criteria led many to believe that these generic university-level rubrics do not provide ‘useful information’ to improve performance.

Students showed a clear preference for assessment-specific rubrics, which they perceived to be more conducive to communicating individual instructors’ preferences, the weight they place on theoretical versus applied knowledge, and assessment expectations. The generic criteria, in contrast, left students grappling to understand the requirements of different assessment types, especially for assessments that were theoretical versus applied in nature. Consequently, many students shifted their focus to other course materials, which they felt offered clearer, more actionable guidance. This is exemplified by a student who turned to the assessment brief and intuition, stating that the brief ‘actually says what specifically to do and what they expect’, illustrating a shift in strategy due to the limitations of the generic criteria. Despite their frustrations with generic criteria, students adapted by using other resources when what was provided did not meet their needs.

The visual presentation of assessment rubrics has significant implications for student engagement and interpretive ease. In the study institution and in the focus group stimuli, analytic rubrics are presented in landscape orientation to accommodate the full range of standards on a 0–100 marking scale. Students unanimously agreed that a vertically oriented rubric would not only appear ‘less overwhelming’ but also facilitate reading and scrolling, especially on digital devices like tablets. The discussions also brought forth a noteworthy observation regarding the organisation of grades, which participants felt was more intuitive when higher grades are presented on the left of the page and descend to the right. Rhetorically, one student asked,

Why would I waste time scanning columns first which describe what to do for the grades lower than the one I want to achieve?

They argued that such a layout aligns better with their instinctual search pattern, facilitating quicker access to both guidance and relevant feedback.

While participants underscored the necessity of detailed grading standards within analytic rubrics, they also flagged concerns regarding the typical grouping of grades between 70 and 100 into a single column. Although they recognised this might stem from A4 page limitations, such an aggregation impeded their ability to discern areas of improvement within that grade range. The need for specificity, however, posed a paradox, as excessive textual density was identified as a barrier to engagement. One student stated that the text is:

super small and it’s not easy to digest… you don’t want to sit and read the assessment rubric.

The visual appeal, or lack thereof, of rubrics also surfaced as a determinant of student engagement. The monochromatic presentation was critiqued for its lack of inspiration. In contrast, students suggested colour as a potential mechanism to bolster attention and comprehension. Students proposed colour-coding criteria in the rubric and using corresponding colours in feedback and on marked exemplars. When asked for clarification, participants expressed disapproval of any ‘traffic light’ red, amber and green colour coding which they felt was simple but could be demotivating. Instead, they suggested rainbow colours would provide a good range of options to correspond with the five or more criteria they typically see in a rubric. This approach, they reasoned, could provide clearer insight into what assessors prioritise and help them discern what constitutes a high-quality submission by showing ‘what it’s doing to get the marks’.

Rubric descriptors were a third design factor that posed challenges. The overarching sentiment from the participants was that these descriptors often leaned towards verbosity, resulting in ambiguity rather than clarity. Participants made observations about the ‘wordiness’ of the rubric which, paradoxically, obfuscated the intended meaning. One student stated:

For a better grade, they just use more words, but they don’t really mean anything. I don’t really know exactly what they’re looking for.

Participants emphasised the need for clarity and making the rubrics ‘simpler to understand’. One participant suggested the descriptors could be presented more concisely or even as bullet points to make the criteria more digestible, which received support from the other participants.

More general discussion about the difficulties participants faced with the language of rubric descriptors was coded in all focus groups. A common issue emerged: students who found the language of the university’s generic rubric inaccessible experienced the same problem on every module where it was used. One student stated:

Having the same rubric for every module is not clear, and using those fancy words doesn’t help me as a foreigner.

Discussions revealed that efforts to decipher unclear descriptors were sometimes unsuccessful, as exemplified by one student:

English is my second language. When [rubrics] use those very formal advanced words, I go to a translator to [translate them into] my mother language. It’s not helpful at all because I can’t really tell the difference between those adjectives or adverbs.

Students in all focus groups articulated a need for additional support to understand performance descriptors. This led to constructive discussions around potential solutions. One popular suggestion was the creation of a supplementary guidance sheet. This sheet could accompany the rubric and delve into the intricacies of the marking criteria, helping students navigate its complexities. Highlighting a practical application, one participant referenced a past module where a glossary was provided, clarifying terms in the rubric:

I like what we did for one of my modules, where the module leader had definitions for some of the words in the rubric… She was trying to basically make it easier to understand what the rubric is trying to say.

Such practical approaches were well received, emphasising the effectiveness of contextual clarifications. However, a glossary alone was seen as a limited solution. While it might clarify certain terms like ‘critical analysis’, it did little to demystify vaguer descriptors like ‘contains minor errors’. Participants expressed a desire for tangible benchmarks, seeking practical examples or numerical clarifications. They sought an explicit breakdown of what certain terms meant, calling for clarity or ‘actual explanations’ on the distinction between terms like ‘effective’ and ‘mostly effective’ or ‘minor errors’ versus ‘some errors’. More broadly, participants highlighted a gap between the intended communication of assessment rubrics and its reception.

Assessment preparation: engagement, timely introduction and complementary tools

Engagement with the assessment rubric emerged as a central theme, highlighting its critical role in students’ understanding of assessment criteria. Participants revealed that rubrics’ complexity became less problematic when they were given opportunities to actively engage with them during the assessment preparation period. Participants indicated a range of experiences concerning how rubrics were introduced. While digital access was deemed convenient, a mere online posting without further discussion proved insufficient. As one participant pointed out, presentation of the rubric often occurred during the initial lecture or tutorial, but in other cases ‘you just need to look at it on [the VLE] on your own’. Other students encountered it by chance when preparing their assignments.

Across focus groups, participants agreed that rubrics were shown, ‘but not really in depth’. While the instructor’s direction to a digital rubric was recognised as helpful, providing printed copies resonated as a standout practice and a more effective introduction to the rubric:

The only module that really showed us the marking rubric was [module name redacted] because our module leader printed it out for us to have a look. Other tutorial leaders will tell us to ‘have a look at the rubric if you want to see the distinction between the different grades’. But it was only explicitly shown to us in one module.

The timing of the introduction of rubrics had a significant impact on engagement. One student stated:

Sometimes they talk about it in the beginning of the first lecture or the first tutorial. But sometimes they don’t talk about the coursework and rubric until like the fourth or fifth tutorial.

Participants preferred an early introduction to assessment rubrics. However, several participants reported a late introduction, often just before assessment deadlines, hindering their alignment with the criteria and lessening their motivation for adjustment, as illustrated in the following comment:

If you look at the rubric at the beginning and your tutorial leader makes it clear this is what you should use… then you won’t reach the problem whereby you look at the criteria at the end and realize you actually haven’t really followed it, but you can’t be bothered to change your work.

As another student explained, late engagement meant they were ‘too tired’ to modify their work to meet the rubric standards, rather ‘accepting the work is fine’ and submitting it ‘as it is’.

Participants stressed the value of early and frequent reinforcement of the criteria throughout the course, as well as its integration with teaching content. Students universally emphasised the importance of in-depth discussions about the rubric during tutorials. Dialogue that breaks down the criteria and facilitates a clear understanding of expectations was seen as beneficial.

While the primary focus remained on rubrics, the availability of past papers and model answers emerged as a sub-theme. Participants were critical when instructors did not provide past papers or model answers and expressed confusion about why some instructors withheld them. Although participants acknowledged reasons such as the reuse of questions or changes in assessment format that invalidated past papers and model answers, students recounted instances where they could only guess at the causes of this absence through discussions with peers in the year above who had sat different assessments. The overarching feeling was that such resources greatly complement the rubrics, and that explanations should be provided in their absence.

While the preference for rubrics over the absence of any guidance was clear, participants pointed out situations – particularly in examinations covering diverse topics – where rubrics might be unwieldy. In these instances, a simpler marking scheme was considered more practical. As one student noted, extensive rubrics could become overwhelming when faced with a multitude of topics. With a rubric, unlike a marking scheme, another student concluded, ‘you wouldn’t know about what exactly’ to write.

Post-assessment: rubric-rooted feedback

Feedback linked to the rubric allowed students to understand where they succeeded or erred in meeting the assessment criteria. Conversely, feedback provided without use of the rubric diminished the students’ perception of its utility. One student argued that sharing a rubric on a module without providing it to the students alongside the marks ‘is almost as useful as not providing one, because even if I met the criteria or not, I wouldn’t know’.

Some of the data coded under feedback linked to the rubric was also coded under assessment preparation as participants wanted to see what an exemplar was doing to achieve marks against the rubric. Across focus groups, there was a unanimous appreciation for access to high-quality work samples with a marked-up rubric for each. This highlights the significance of deploying the rubric throughout the assessment process to bolster students’ grasp of the assessment criteria and expectations.

Prior experience with assessment rubrics

Participants highlighted the disparity between their prior experience with assessment rubrics and their use in higher education. Despite varied educational backgrounds among focus group participants, a consensus emerged regarding the differing roles and purposes of rubrics in post-secondary and higher education settings. A participant from the International Baccalaureate programme noted that, in their prior education, rubrics primarily acted as marking schemes:

It wasn’t explicitly told to us… but everyone knew, if it’s a 6-mark question or an 8-mark question, you need to write down six or eight points to gain those marks.

Beyond the different functions of assessment rubrics, the active role teachers played in elucidating the marking criteria and guiding students on how to achieve higher scores added a layer to the sub-theme of prior experience with rubrics. Although the assessment rubrics were available, the key distinction lay in the support provided by teachers, as compared to lecturers, as one student explained:

We were given the marking criteria… But the difference was in how teachers were always present to assist us, provide examples and guide us on how to attain each band, or what you could do to go higher.

This underlines a broader perspective that while rubrics are informative, their efficacy in the eyes of students can be largely influenced by the supportive and explanatory roles played by educators. The transition from one educational setting to another, with differences in rubric use and guidance, creates a distinctive set of challenges and adaptations for international students in higher education.

Discussion and implications for practice

This study asked: ‘How does the design and use of assessment rubrics impact international students’ engagement with and comprehension of them during the assessment process?’ The data analysis unveiled four salient themes – rubric design, assessment preparation, use post-assessment and prior experience with rubrics – that shed light on the complexities these students encounter.

Participants in this study consistently favoured assessment rubrics over their absence and perceived them as helpful (Reddy and Andrade 2010). By revealing multiple challenges posed by the use of generic compared to specific rubrics, this study adds to the literature on the crucial role of rubric design in international students’ engagement with them (Dawson 2017; Kilgour et al. 2020). Students’ preference for task-specific rubrics reflects extant findings that generic ones lack detailed guidance (Panadero and Jönsson 2013). The finding that repeated exposure to generic criteria led some students to disengage with rubrics highlights a potential pedagogical gap that may result from widespread use of generic rubrics. International students may struggle to develop or see the progression of knowledge and skills when faced with interpretations and applications of generic rubrics that differ significantly from instructor to instructor and module to module, and do not adapt to the changing demands at increasing levels of their degree course. Diminished motivation to engage with assessment criteria potentially undermines the rubric’s primary self-assessment and performance improvement purposes. Balancing task-specific rubrics with more generic criteria may offer a compromise (Cockett and Jackson 2018).

Students’ desire for detailed grading criteria juxtaposed against their aversion to textual density showcases a paradox highlighted by Gezie et al. (2012): rubrics need to be both more detailed and clearer. This underscores a need for precise communication, in which details are conveyed without overwhelming the reader, pointing towards a potential gap in current rubric designs. Students found lengthy rubrics counterproductive, particularly when descriptors for higher grading levels are longer. Instructors should offer detailed criteria for top grades and simpler descriptions for lower bands. These adverse effects of textual density, potentially inducing extraneous cognitive load (Paas, Renkl, and Sweller 2003), underscore the importance of a balanced rubric design that economises student cognitive resources. This finding also suggests that students do not just cognitively engage with tools like rubrics; they have emotional reactions to them. This aligns with Carless’ (2006) assertion that assessment is an emotional experience for students. As design aspects can evoke feelings of being overwhelmed, impacting overall engagement, extra care should be taken in rubric development.

Predictably, and in line with prior research, ambiguous academic language impeded understanding of rubrics, especially for non-native English speakers (Jönsson 2014). However, students in this study expressed a desire for supplementary tools, like guidance sheets and glossaries, to clarify terminologies. Beyond advocating for instructors to improve the clarity of rubric language, these findings urge institutions to prioritise the development of rubrics and ancillary support materials within their instructional design strategies (Wolf and Stevens 2007; Gonsalves 2023).

As identified in prior research, the timing of rubric dissemination is crucial (Gezie et al. 2012) and the mere provision of a rubric is not sufficient (Panadero and Jönsson 2020). This study shows that early and intermittent rubric discussion, and active classroom engagement, are vital for a deeper understanding. Students’ uncertainty regarding the availability of past papers and model answers may be due to instructors’ concerns that such familiarisation could hinder assessment variation and increase examination predictability (Elwood, Hopfenbeck, and Baird 2017). However, instructors might communicate transparently, to the benefit of students, when such resources are not provided. Although rubrics are known to aid feedback (Reddy and Andrade 2010), students perceive their value to be reduced if they are not used post-assessment. Rubric use throughout the assessment process is vital for better student understanding and an improved assessment experience.

Finally, the distinction between some international students’ prior rubric experience – geared towards point accumulation – and rubrics’ interpretive use in higher education is crucial to understanding how prior experience impacts students’ approaches to learning and assessment preparation (Gezie et al. 2012). Institutional support should explicitly guide international students in transitioning to the sophisticated use of rubrics in higher education.

Conclusion

This study examined international undergraduate students’ challenges with assessment rubrics, addressing rubric design, use during and post-assessment, and prior experiences. While existing research targets general student populations (Z. Chan and Ho 2019; Panadero and Jönsson 2020), this work centres the unique experiences of international students (Boyle et al. 2020). Overall, this study suggests that making rubrics accessible to international students through their design, timely introduction, discussion, activities and integration throughout the assessment cycle can democratise rubrics by facilitating engagement with them. Many of these suggestions could be implemented easily to the benefit of all students. Beyond how rubrics are introduced, this study’s findings suggest that careful consideration must be given to the evolution of pedagogy and support to develop deeper understanding and assessment literacy in years 2 and 3 of an undergraduate degree. Institutions without an established rubric policy should: (1) encourage instructors to adopt these practices and (2) devise guidance to support international students’ engagement with rubrics, ultimately fostering their academic success.

This study suggests benefits from combining colour-coded rubrics with feedback. Though this approach has been applied (e.g. Mitchell and Pereira-Edwards 2022), empirical research into its effects is needed. While this study is limited by its sample size, its single business school focus, and focus group findings that may not be generalisable to the broader population, it offers valuable insights to a broader audience of instructors and institutions. As Cook-Sather (2002) suggests, if an education system is supposedly designed to serve students, we should heed their perspectives and reconsider whose views we seek as we enhance and refine the education systems we desire.

Ethical approval

Approval to conduct this non-interventional study was gained from the King’s College London Research Ethics Committee (MRA-23/24-39740).

Acknowledgements

The author would like to thank Daniel Drumm and Lucy Delobel for their contributions to the project and all research participants, whose views on rubrics were greatly appreciated. The author would also like to thank Professor Phillip Dawson and Professor Sally Everett for advice and feedback.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was funded by the UK Council for International Student Affairs (UKCISA) and the King’s College London Innovative Education Fund.

References

  • Allen, D., and K. Tanner. 2006. “Rubrics: Tools for Making Learning Goals and Evaluation Criteria Explicit for Both Teachers and Learners.” CBE Life Sciences Education 5 (3): 197–203. doi:10.1187/cbe.06-06-0168.
  • Ammigan, R. 2019. “Institutional Satisfaction and Recommendation: What Really Matters to International Students.” Journal of International Students 9 (1): 262–281. doi:10.32674/jis.v9i1.260.
  • Andrade, H.G. 2005. “Teaching with Rubrics: The Good, the Bad, and the Ugly.” College Teaching 53 (1): 27–31. doi:10.3200/CTCH.53.1.27-31.
  • Balan, A., and A. Jönsson. 2018. “Increased Explicitness of Assessment Criteria: Effects on Student Motivation and Performance.” Frontiers in Education 3 (81). doi:10.3389/feduc.2018.00081.
  • Balloo, K., C. Evans, A. Hughes, X. Zhu, and N. Winstone. 2018. “Transparency Isn’t Spoon-feeding: How a Transformative Approach to the Use of Explicit Assessment Criteria Can Support Student Self-regulation.” Frontiers in Education 3 (69). doi:10.3389/feduc.2018.00069.
  • Bearman, M., and R. Ajjawi. 2021. “Can a Rubric Do More than Be Transparent? Invitation as a New Metaphor for Assessment Criteria.” Studies in Higher Education 46 (2): 359–368. doi:10.1080/03075079.2019.1637842.
  • Bell, A., R. Mladenovic, and M. Price. 2013. “Students’ Perceptions of the Usefulness of Marking Guides, Grade Descriptors and Annotated Exemplars.” Assessment & Evaluation in Higher Education 38 (7): 769–788. doi:10.1080/02602938.2012.714738.
  • Bergold, J., and S. Thomas. 2012. “Participatory Research Methods: A Methodological Approach in Motion.” Historical Social Research/Historische Sozialforschung 38 (4): 191–222. https://www.jstor.org/stable/41756482.
  • Bertrand, J.T., J.E. Brown, and V.M. Ward. 1992. “Techniques for Analyzing Focus Group Data.” Evaluation Review 16 (2): 198–209. doi:10.1177/0193841X9201600206.
  • Bolton, P., and J. Lewis. 2022. “International Students in UK Higher Education: FAQs.” The House of Commons Library, UK Parliament. Accessed 11 December 2022. https://researchbriefings.files.parliament.uk/documents/CBP-7976/CBP-7976.pdf
  • Boyle, B., R. Mitchell, A. McDonnell, N. Sharma, K. Biswas, and S. Nicholas. 2020. “Overcoming the Challenge of “Fuzzy” Assessment and Feedback.” Education + Training 62 (5): 505–519. doi:10.1108/ET-08-2019-0183.
  • Braun, V., and V. Clarke. 2006. “Using Thematic Analysis in Psychology.” Qualitative Research in Psychology 3 (2): 77–101. doi:10.1191/1478088706qp063oa.
  • Braun, V., and V. Clarke. 2023. “Toward Good Practice in Thematic Analysis: Avoiding Common Problems and Be(Com)Ing a Knowing Researcher.” International Journal of Transgender Health 24 (1): 1–6. doi:10.1080/26895269.2022.2129597.
  • Brookhart, S.M. 2013. How to Create and Use Rubrics for Formative Assessment and Grading. Arlington: ASCD.
  • Brookhart, S.M. 2018. “Appropriate Criteria: Key to Effective Rubrics.” Frontiers in Education 3 (22): 1–12. doi:10.3389/feduc.2018.00022.
  • Brookhart, S.M., and F. Chen. 2015. “The Quality and Effectiveness of Descriptive Rubrics.” Educational Review 67 (3): 343–368. doi:10.1080/00131911.2014.929565.
  • Carless, D. 2006. “Differing Perceptions in the Feedback Process.” Studies in Higher Education 31 (2): 219–233. doi:10.1080/03075070600572132.
  • Chan, C.K.Y., and J. Luo. 2022. “Exploring Teacher Perceptions of Different Types of ‘Feedback Practices’ in Higher Education: Implications for Teacher Feedback Literacy.” Assessment & Evaluation in Higher Education 47 (1): 61–76. doi:10.1080/02602938.2021.1888074.
  • Chan, Z., and S. Ho. 2019. “Good and Bad Practices in Rubrics: The Perspectives of Students and Educators.” Assessment & Evaluation in Higher Education 44 (4): 533–545. doi:10.1080/02602938.2018.1522528.
  • Cockett, A., and C. Jackson. 2018. “The Use of Assessment Rubrics to Enhance Feedback in Higher Education: An Integrative Literature Review.” Nurse Education Today 69: 8–13. doi:10.1016/j.nedt.2018.06.022.
  • Cook-Sather, A. 2002. “Authorizing Students’ Perspectives: Toward Trust, Dialogue, and Change in Education.” Educational Researcher 31 (4): 3–14. doi:10.3102/0013189X03100400.
  • Davies, J.A. 2023. “In Search of Learning-focused Feedback Practices: A Linguistic Analysis of Higher Education Feedback Policy.” Assessment & Evaluation in Higher Education 1–15. doi:10.1080/02602938.2023.2180617.
  • Dawson, P. 2017. “Assessment Rubrics: Towards Clearer and More Replicable Design, Research and Practice.” Assessment & Evaluation in Higher Education 42 (3): 347–360. doi:10.1080/02602938.2015.1111294.
  • Elwood, J., T. Hopfenbeck, and J.-A. Baird. 2017. “Predictability in High-stakes Examinations: Students’ Perspectives on a Perennial Assessment Dilemma.” Research Papers in Education 32 (1): 1–17. doi:10.1080/02671522.2015.1086015.
  • Flicker, S., and S.A. Nixon. 2015. “The DEPICT Model for Participatory Qualitative Health Promotion Research Analysis Piloted in Canada, Zambia and South Africa.” Health Promotion International 30 (3): 616–624. doi:10.1093/heapro/dat093.
  • Gezie, A., K. Khaja, V.N. Chang, M.E. Adamek, and M.B. Johnsen. 2012. “Rubrics as a Tool for Learning and Assessment: What Do Baccalaureate Students Think?” Journal of Teaching in Social Work 32 (4): 421–437. doi:10.1080/08841233.2012.705240.
  • Ghalayini, M. 2014. Academic and Social Integration: A Narrative Study of Indian International Students’ Experience and Persistence in Post-Graduate Studies in Ontario. PhD diss. Northeastern University. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.664.3924&rep=rep1&type=pdf
  • Gonsalves, C. 2023. “Knowledge of Language in Rubric Design: A Systemic Functional Linguistics Perspective.” In Improving Learning through Assessment Rubrics: Student Awareness of What and How They Learn, edited by Chahna Gonsalves and Jayne Pearson, 190–211. IGI Global. doi:10.4018/978-1-6684-6086-3.
  • Graham, A.I., C. Harner, and S. Marsham. 2022. “Can Assessment-specific Marking Criteria and Electronic Comment Libraries Increase Student Engagement with Assessment and Feedback?” Assessment & Evaluation in Higher Education 47 (7): 1071–1086. doi:10.1080/02602938.2021.1986468.
  • Hudson, J., S. Bloxham, B. den Outer, and M. Price. 2017. “Conceptual Acrobatics: Talking about Assessment Standards in the Transparency Era.” Studies in Higher Education 42 (7): 1309–1323. doi:10.1080/03075079.2015.1092130.
  • Jones, J., and S. Fleischer. 2012. “Staying on Course: Factors Affecting First Year International Students’ Decisions to Persist or Withdraw from Degrees in a Post 1992 UK University.” Practice and Evidence of the Scholarship of Teaching and Learning in Higher Education 7 (1): 21–46.
  • Jönsson, A. 2014. “Rubrics as a Way of Providing Transparency in Assessment.” Assessment & Evaluation in Higher Education 39 (7): 840–852. doi:10.1080/02602938.2013.875117.
  • Jönsson, A., and F. Prins. 2019. “Editorial: Transparency in Assessment—Exploring the Influence of Explicit Assessment Criteria.” Frontiers in Education 3 (January): 2018–2020. doi:10.3389/feduc.2018.00119.
  • Jönsson, A., and G. Svingby. 2007. “The Use of Scoring Rubrics: Reliability, Validity and Educational Consequences.” Educational Research Review 2 (2): 130–144. doi:10.1016/j.edurev.2007.05.002.
  • Kilgour, P., M. Northcote, A. Williams, and A. Kilgour. 2020. “A Plan for the Co-construction and Collaborative Use of Rubrics for Student Learning.” Assessment & Evaluation in Higher Education 45 (1): 140–153. doi:10.1080/02602938.2019.1614523.
  • Krause, K.-L. 2005. “Serious Thoughts about Dropping out in First Year: Trends, Patterns and Implications for Higher Education.” Studies in Learning, Evaluation, Innovation and Development 2 (3): 55–68. http://handle.uws.edu.au:8081/1959.7/539737.
  • Lea, M.R., and B.V. Street. 1998. “Student Writing in Higher Education: An Academic Literacies Approach.” Studies in Higher Education 23 (2): 157–172. doi:10.1080/03075079812331380364.
  • Leader, D.C., and M.S. Clinton. 2018. “Student Perceptions of the Effectiveness of Rubrics.” Journal of Business & Educational Leadership 8 (1): 86–99.
  • Macias, I., and M. Dolan. 2009. “Motivating International Students. A Practical Guide to Aspects of Learning and Teaching.” In The Handbook for Economics Lecturers, edited by Peter Davies, 1–34. Bristol: University of Bristol, Higher Education Academy Economics Network.
  • Massey, O.T. 2011. “A Proposed Model for the Analysis and Interpretation of Focus Groups in Evaluation Research.” Evaluation and Program Planning 34 (1): 21–28. doi:10.1016/j.evalprogplan.2010.06.003.
  • Maxwell, J.A. 2012. Qualitative Research Design: An Interactive Approach. 3rd ed. Thousand Oaks: Sage Publications.
  • McLean, H. 2018. “This is the Way to Teach: Insights from Academics and Students about Assessment That Supports Learning.” Assessment & Evaluation in Higher Education 43 (8): 1228–1240. doi:10.1080/02602938.2018.1446508.
  • Mitchell, K.M., and M. Pereira-Edwards. 2022. “Exploring Integrated Threshold Concept Knowledge as a Route to Understanding the Epistemic Nature of the Evidence-based Practice Mindset.” Journal of Professional Nursing: Official Journal of the American Association of Colleges of Nursing 42: 34–45. doi:10.1016/j.profnurs.2022.05.011.
  • Mountain, K., W. Teviotdale, J. Duxbury, and J. Oldroyd. 2023. “Are They Taking Action? Accounting Undergraduates’ Engagement with Assessment Criteria and Self-regulation Development.” Accounting Education 32 (1): 34–60. doi:10.1080/09639284.2022.2030240.
  • O’Donovan, B., M. Price, and C. Rust. 2004. “Know What I Mean? Enhancing Student Understanding of Assessment Standards and Criteria.” Teaching in Higher Education 9 (3): 325–335. doi:10.1080/1356251042000216642.
  • Orsmond, P., S. Merry, and K. Reiling. 1996. “The Importance of Marking Criteria in the Use of Peer Assessment.” Assessment & Evaluation in Higher Education 21 (3): 239–250. doi:10.1080/0260293960210304.
  • Paas, F., A. Renkl, and J. Sweller. 2003. “Cognitive Load Theory and Instructional Design: Recent Developments.” Educational Psychologist 38 (1): 1–4. doi:10.1207/S15326985EP3801_1.
  • Panadero, E., and A. Jönsson. 2013. “The Use of Scoring Rubrics for Formative Assessment Purposes Revisited: A Review.” Educational Research Review 9: 129–144. doi:10.1016/j.edurev.2013.01.002.
  • Panadero, E., and A. Jönsson. 2020. “A Critical Review of the Arguments against the Use of Rubrics.” Educational Research Review 30 (100329): 100329. doi:10.1016/j.edurev.2020.100329.
  • Panadero, E., and M. Romero. 2014. “To Rubric or Not to Rubric? The Effects of Self-assessment on Self-regulation, Performance and Self-efficacy.” Assessment in Education: Principles, Policy & Practice 21 (2): 133–148. doi:10.1080/0969594X.2013.877872.
  • Popham, W.J. 1997. “What’s Wrong–and What’s Right–with Rubrics.” Educational Leadership 55 (2): 72–75.
  • Reddy, Y.M., and H. Andrade. 2010. “A Review of Rubric Use in Higher Education.” Assessment & Evaluation in Higher Education 35 (4): 435–448. doi:10.1080/02602930902862859.
  • Rust, C., M. Price, and B. O’Donovan. 2003. “Improving Students’ Learning by Developing Their Understanding of Assessment Criteria and Processes.” Assessment & Evaluation in Higher Education 28 (2): 147–164. doi:10.1080/02602930301671.
  • Sadler, D.R. 2012. “Assessment, Evaluation and Quality Assurance: Implications for Integrity in Reporting Academic Achievement in Higher Education.” Education Inquiry 3 (2): 201–216. doi:10.3402/edui.v3i2.22028.
  • Sadler, D.R. 2014. “The Futility of Attempting to Codify Academic Achievement Standards.” Higher Education 67 (3): 273–288. doi:10.1007/s10734-013-9649-1.
  • Santrock, J.W. 2018. Educational Psychology. 2nd ed. New York: McGraw-Hill Education.
  • Sun, S., X. Gao, B.D. Rahmani, P. Bose, and C. Davison. 2023. “Student Voice in Assessment and Feedback (2011–2022): A Systematic Review.” Assessment & Evaluation in Higher Education 48 (7): 1009–1024. doi:10.1080/02602938.2022.2156478.
  • Sweller, J. 1988. “Cognitive Load during Problem Solving: Effects on Learning.” Cognitive Science 12 (2): 257–285. doi:10.1016/0364-0213(88)90023-7.
  • Varpio, L., R. Ajjawi, L.V. Monrouxe, B.C. O’Brien, and C.E. Rees. 2017. “Shedding the Cobra Effect: Problematising Thematic Emergence, Triangulation, Saturation and Member Checking.” Medical Education 51 (1): 40–50. doi:10.1111/medu.13124.
  • Wearring, A., H. Le, R. Wilson, and R. Arambewela. 2015. “The International Student’s Experience: An Exploratory Study of Students from Vietnam.” International Education Journal: Comparative Perspectives 14 (1): 71–89.
  • Welikala, T., and C. Atkin. 2014. “Student Co-inquirers: The Challenges and Benefits of Inclusive Research.” International Journal of Research & Method in Education 37 (4): 390–406. doi:10.1080/1743727X.2014.909402.
  • Wolf, K., and E. Stevens. 2007. “The Role of Rubrics in Advancing and Assessing Student Learning.” Journal of Effective Teaching 7 (1): 3–14.