
How is theory used in assessment and feedback research? A critical review


Abstract

Assessment and feedback research constitutes its own ‘silo’ amidst the higher education research field. Theory has been cast as an important but absent aspect of higher education research. This may be a particular issue in empirical assessment research, which often builds on the conceptualisation of assessment as objective measurement. So, how does theory feature in assessment and feedback research? We conduct a critical review of recent empirical articles (2020, N = 56) to understand how theory is engaged with in this field. We analyse the repertoire of theories and the mechanisms for putting these theories into practice. Twenty-one studies drew explicitly on educational theory. Theories were most commonly used to explain and frame assessment. Critical theories were notably absent, and quantitative studies engaged with theory in a largely instrumental manner. We discuss the findings through the concept of reflexivity, conceptualising engagement with theory as a practice with both benefits and pitfalls. We therefore call for further reflexivity in the field of assessment and feedback research through deeper and interdisciplinary engagement with theories to avoid further siloing of the field.

Introduction

How, when and why to engage with theory is fiercely debated in educational research, but rarely discussed in the assessment and feedback literature in higher education. The role of theory warrants investigation, since theory facilitates how a study contributes to a broader understanding of research phenomena. Higher education research has been labelled as atheoretical (Tight 2004, 2014, 2015; Tummons 2012), despite repeated exhortations for educational research to include stronger theoretical foundations (Biesta, Allan, and Edwards 2011, 2014). In 2012, Ashwin reviewed empirical research in higher education for theory usage and identified that studies rarely developed theories further. He called for stronger use of theory, arguing that without a more complex engagement, readers are prone to see key conceptualisations as “a matter of fashion” (Ashwin 2012, 941). This may be particularly true of assessment research, which can be seen as overshadowed by the ‘measurement tradition’ (Boud et al. 2018). That is, assessment is conceptualised as psychometric testing of individual students’ learning outcomes, with research aiming to produce and enhance valid assessment tools.

Assessment and feedback research in higher education operates as an academic community with distinctive characteristics, such as the previously mentioned measurement tradition. It can be considered as a ‘field’ in the Bourdieusian sense: a “structured social space in which an individual undertakes practice” (Malthouse, Roffey-Barentsen, and Watts 2014, 599). However, Daenekindt and Huisman (2020) noted that much of higher education research - including assessment research - consists of isolated ‘islands’. This means the field of assessment and feedback in higher education (hereafter called ‘the field’ or ‘our field’) may be overly insular. For example, assessment and feedback research rarely intersects with equity-oriented research on racial and ethnic minorities (Daenekindt and Huisman 2020). While the dominant measurement paradigm has been challenged through conceptual investigations, drawing from social theories to emphasise assessment as a social, contextual and political practice (e.g. Boud et al. 2018; Chong 2021; see also Shay 2008), it is less clear how such work influences subsequent empirical studies. It is therefore timely to interrogate the role of theory in empirical research in our field.

In this study, we review how empirical studies have engaged with theories within assessment and feedback research in higher education, to inform both the field itself and the practices that result from these research endeavours. The usage of theory - or the lack thereof - can have concrete effects on teaching and learning through research-based assessment practices. This is why engagement with theories in educational research has an ethical aspect (Gallagher 2014). Analysing how the field engages with theory may reveal strengths, weaknesses and opportunities for researchers in the field and, ultimately, for teachers. Our overall aim is to describe and analyse the repertoire of theories within recent empirical studies investigating assessment and feedback.

We pursue our research objective by conducting a critical literature review, building upon earlier reviews of theory use in higher education (Ashwin 2012; Tight 2004, 2015). Before we describe our methods and findings, we outline our own theoretical framing. In the next section, we explore different conceptualisations of theory, then define theory for the purposes of this review and suggest some useful categorisations. We follow this by introducing the notion of reflexivity to explain our own positioning toward understanding engagement with theory as a practice (Hamann and Kosmützky 2021), before proceeding to the review itself.

Conceptualising theory within educational research

‘Theory’ is a complex, slippery concept (Biesta, Allan, and Edwards 2014). Various forms of theory have been identified in educational research: ‘grand theories’ that aim to understand the human experience in its totality (Skinner 1985); ‘learning theories’ that categorise various understandings of learning (Thurlings et al. 2013); middle-range theories that explain particular research phenomena (Merton 1949); and personal theories connected to our own epistemic beliefs (Tan and Tan 2020). A simplistic rhetorical dichotomy of theory and practice is often constructed (Hindmarsh 1993); we do not regard this as helpful. The complexities of practice are continually intertwined with theories: practice itself is never acontextual or ‘free’ of politics, beliefs, ideology, and so forth.

There have been many ways to theorise theory in the higher education literature. Often, theory is loosely defined. Tight (2004) defined theory as something that seeks to explain and understand a certain phenomenon, while noting that ‘theory’ is often used interchangeably with concepts, models and paradigms. Similarly, Ashwin (2012) theorised the usage of theory in empirical higher education research, defining it ‘simply as a way of seeing or characterising a research object’ (943). In contrast, Trowler (2012, 274) presents a more complex articulation: theory consists of interconnected concepts that examine the components of a certain system and their interconnection. In addition, these concepts are systematically and causally connected so as to provide predictions, and these predictions are contestable. Trowler notes that theory locates social processes within wider structures, including guiding research problems, designs and methods.

For the purposes of this review, our definition of theory takes a middle ground, as we seek to broadly include many conceptualisations of theory but also to distinguish theory from the notion of models or conceptual frameworks. Therefore, drawing from Nestel and Bearman (2015), we define theory as a framework of ideas, which is distinguished by complexity, contestability and applicability beyond the local. Complexity requires a network of relations within the framework rather than a simple idea such as “self-assessment supports learning”. Contestability means that the ideas can be constructively criticised or falsified (Trowler 2012; Flyvbjerg 2006). Applicability beyond the local means that, in our understanding, theories reach beyond personal theories (Tan and Tan 2020) and instead aim at explaining larger social phenomena (Ashwin 2009; Trowler 2012).

Alongside this definition, we suggest that what theories are is often less significant than what is done with them: we focus our analytical gaze on using theory (Hamann and Kosmützky 2021). We distinguish three broad purposes for theory, drawing from Biesta, Allan, and Edwards (2014). The first purpose is for theory to provide a causal or correlational explanation. This purpose might be recognised in a study that measures whether self-regulated learning interventions can reduce cheating in assessment. The second purpose for theory is to help with understanding a research object; this is often seen in both ‘how’ and ‘why’ questions. For example, the same theory, self-regulation, might be used to understand why cheating takes place in a given higher education context. The final purpose for theory is emancipatory: to transform social structures that operate in unjust or unproductive ways. For example, a critical theory might consider the social construction of cheating by examining the associated systemic injustices that might lead to cheating.

We note that the association of theory with better research is not universal: educational theory might distort our view and lead us astray from what is actually important (Thomas 2007). Yet there have been multiple calls in higher education research to use theory to challenge naive empiricism (Hamann and Kosmützky 2021). Though some hold that a research field automatically benefits from educational theory because deep engagement with theory enhances a field’s maturity (e.g. Tight 2004, 2015), this is not always the case. Trowler (2012) warned about the dangers of theory, including the legitimising of certain approaches and not others. It is possible that theory “stabiliz[es] the status quo” (Thomas 2007, 20); prevents research fields from moving forward (Ball 1995); and over-determines one’s research outcomes (Ashwin 2012). Trowler (2012, 278) also notes that theory can be misused superficially through being “simply used, not engaged with”.

Many have suggested the value of reflection in order to promote this engagement (e.g. Malthouse, Roffey-Barentsen, and Watts 2014). We take this one step further and call for reflexivity.

Reflexivity as a means to engage with theory

Reflexivity in research can be defined as an “ongoing self-awareness during the research process which aids in making visible the practice and construction of knowledge within research” (Pillow 2003, 178). Reflexivity differs from ‘reflection’ as it involves a communal aspect: the ‘recognition of the other’ (Pillow 2003, 184). Thus, a reflexive approach goes beyond an individual researcher making sense of theory (reflecting) to consider the broader social context and how this might be influenced by both the theoretical approach and the research. Reflexivity can prevent an overemphasis on certain perspectives and methods, which might otherwise lead to “reinventing the wheel” (Daenekindt and Huisman 2020, 572).

A core aspect of reflexivity is articulating how educational theory integrates with what are often called ‘worldviews’ or ‘paradigms’: the ways knowledge and reality are premised within a given research initiative. Following Lincoln and Guba (2000), we describe three common worldviews that are relevant to our field. Firstly, a post-positivist worldview means that the researcher holds that reality, and hence knowledge, is constituted of objective truths. In assessment research, this stance is associated with measurement concepts such as reliability and validity (Boud et al. 2018). Secondly, an interpretivist worldview considers reality as subjective and constructed through social interactions. Therefore, interpretivist assessment research might emphasise the importance of socially situated assessment practices rather than objective measurements. Finally, a critical-transformative worldview is focussed on how knowledge and society can be re-constructed through overturning unfair social structures. For example, research taking a critical-transformative stance might be interested in the mechanisms of racism that are interwoven into assessment practices. An additional stance, the practical perspective, inspired by the pragmatic mantra of doing ‘what works’ (Biesta 2010), is particularly pertinent to our field. A practical perspective is not overly concerned with how knowledge and reality are constituted; instead, it is focused on making a difference in assessment practice. In summary, these four worldviews refer to the holistic understanding of knowledge and scientific inquiry, and are frequently aligned with the purposes of theory.

A key part of reflexivity is describing one’s own stance both to inform self (reflectivity) and to inform others (reflexivity). We collectively describe ourselves now, to illustrate reflexivity at work. All three of us share relatively similar backgrounds: we have come from core training in the sciences (mathematics, computer science and medicine) with its associated belief in objective truths. We have all then moved into more interpretivist and critical understandings of knowledge, education and society. We all see ourselves as eclectic, swimming in murky theoretical waters as we grapple with complex ideas of theory, stemming from multiple disciplines. We are also all committed to our research having a substantial impact on real world problems.

Methods

Constructing the dataset

To explore how higher education assessment and feedback research has engaged with theory, we conducted a critical review using systematic methods. Critical reviews provide a synthesis of a research field, often by analysing gaps in the existing literature. Such an approach reaches beyond describing or summarising a set of research articles, aiming instead for a critical synthesis (Grant and Booth 2009). A critical review does not seek to find answers but to pinpoint uncharted ground, contradictions and controversies. Critical reviews are not simply for identifying ‘weaknesses’ in earlier studies; instead, they provide a “launch pad for a new phase of conceptual development” (Grant and Booth 2009, 93).

We chose to restrict our search to 2020 to enable a close analysis of theory use in our field, as is common in critical literature reviews of this nature (e.g. Ashwin 2009; Tight 2004). Thus, rather than offering an overview of the development of theory use over time, we interrogate the field in its current state. We followed Ashwin’s (2012) selection of journals, but also added Assessment and Evaluation in Higher Education. For this journal, we considered all articles with a 2020 issue date, a total of 90 articles. Across the remaining journals (Higher Education, Higher Education Research and Development, Journal of Higher Education, Research in Higher Education, Review of Higher Education, Studies in Higher Education and Teaching in Higher Education) we included articles which contained ‘assessment’ or ‘feedback’ in the title, abstract or keywords with a 2020 issue date, resulting in 47 articles and thus a total of 137 studies. The search was conducted via the Scopus database, using journal ISSNs. No articles on assessment or feedback were returned from Review of Higher Education.
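As an illustration of this strategy for the keyword-filtered journals, a Scopus advanced search of roughly the following form would capture the criteria (a sketch offered for transparency, with a placeholder ISSN; the exact query strings used are not reproduced here):

  TITLE-ABS-KEY(assessment OR feedback) AND ISSN(xxxx-xxxx) AND PUBYEAR IS 2020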

The following inclusion criteria were applied:

  • Articles in English, full text available.

  • Articles that concerned educational assessment of students’ work, artefacts, and performances in higher education (including, e.g., self- and peer-assessment and feedback practices).

  • Articles with assessment (both summative and formative) as the primary phenomenon. We excluded articles that were not about assessment of students (e.g., psychological metrics, student evaluation of teaching, learning analytics without a direct connection to summative and formative assessment).

  • We only included empirical articles to enable an analysis of how theories are used while interpreting data. By empirical, we mean that data is collected in some way.

  • Articles about higher education, including undergraduate and postgraduate students, were included. Studies about professional contexts, workplace and doctoral supervision were excluded.

Consensus on included and excluded articles was established through iterative research meetings. To begin, each author independently screened a portion of the 137 article abstracts; 59 articles were excluded at this stage. At the full text stage, the remaining 78 articles were examined in detail: 10 jointly at first, through discussion, and the remaining 68 independently. After excluding 22 studies in the full text review, a dataset of 56 articles was constituted for data extraction and analysis.

Analysis

Overall, our analysis follows a narrative synthesis as described by Petticrew and Roberts (2006). This approach resembles that of a meta-study and aligns with the critical review tradition. For this purpose, we constructed a coding scheme based on Ashwin’s (2012) analysis of the usage of theory in higher education and Hew and colleagues’ (2019) analysis in the context of educational technology research. First, we coded the basic information about the articles (e.g. country, number of students, study design, type of assessment). An adapted version of Hew et al.’s (2019) coding scheme (which builds on Ashwin’s work) was used to extract the following categories:

  • Which theories were used? If theories were not used, which ideas/concepts were used to ‘do the work of theory’?

  • Theory advancement (explicit, vague, no evidence, no theory advancement)

Next, Ashwin’s (2012) coding scheme was used to examine how theory was used in data analysis:

  • Conceptualisation of research object (implicit theory, multiple theories, competing theories, positioning of research object, no theories used)

  • Analysing the data (no account, unclear account, explicit account - original conceptualisation, explicit account - different conceptualisation, no theory used)

  • Discussing the research outcomes (no use of theory, conceptualisation of the research object is used to explore the meaning of the outcomes, interrogating the conceptualisation of the research object, developing a conceptualisation of the research object)

To supplement these earlier frameworks, we also coded the purpose of theory (causal explanation, explanation/understanding, emancipation) (Biesta, Allan, and Edwards 2014) and interpreted the authors’ worldviews (Lincoln and Guba 2000) as represented in the study. For example, we took investigations of the validity and reliability of assessment measurements to infer post-positivism; explicit notions of assessment information being relational and situational to infer interpretivism; critical theory to infer a critical-transformative perspective; and multiple pages of text concerning teaching implications to infer a practical perspective. We also sought explicit accounts of theoretical boundary crossing (e.g. the article explicitly builds a connection between assessment and philosophical theory).

As our main research objective was understanding our field rather than individual studies, the process was kept inclusive and inductive. Any study could be placed under multiple categories, and open text fields were used to enable a multifaceted understanding where the predetermined categories did not provide sufficient information.

Once the dataset was established, the analysis was based on a dialogic, reflexive process amongst the team. The three authors discussed the process continually in research meetings. The coding scheme was tested and developed with a set of studies before the full text review to ensure that all coders understood the categories consistently. All three authors participated in all stages of the data analysis, and unclear cases were multiple-coded and resolved through collaborative discussion.

Findings

We present our findings in four parts. First, we provide an overview of the dataset of 56 studies. Second, we introduce the worldviews adopted by the studies to provide the context in which theories were used. Third, we introduce the repertoire of theories used by the 21 articles which did adopt a theoretical stance. We also describe additional ideas (e.g. concepts, models, frameworks) drawn upon to ‘do the work of theory’. Finally, we portray how theories were used, drawing on the works of Ashwin (2012) and Hew and colleagues (2019).

Overview of the dataset

Table 1 introduces an overview of the dataset. The studies represented a large variety of assessment and feedback practices as well as disciplines.

Table 1. Overview of the dataset (N = 56). The frequencies of studies are in brackets.

In Table 2, the research methods as described in the articles are introduced. We provide a general overview of the methods by reporting the main analysis methods, drawing on the authors’ own words. In summary, interviews and surveys/questionnaires were the main data collection methods. Thematic analyses and simplistic quantitative methods (t-tests, correlational analyses) were overrepresented among the analysis methods.

Table 2. The main data collection and analysis methods (frequencies of studies in brackets, only studies with frequencies over one are reported).

Worldviews

The findings concerning worldviews are reported in Table 3. Twelve studies were categorised under two worldviews (eight of them under post-positivism and the practical perspective). No studies were analysed as representing the critical-transformative worldview. In 10 studies, worldviews were explicitly discussed; nine of these represented interpretivism (two of them also took a practical perspective), and one the practical perspective. As an example of a study in which the worldview was explicitly stated, Macleod et al. (2020) discussed how cultural-historical activity theory (CHAT) enabled the researchers to take an interpretivist stance by understanding “individuals’ experience of assessment and feedback in the social and cultural context of their programmes of study rather than as a relatively separate linear process” (p. 963).

Table 3. The worldviews and whether they were explicitly discussed.

Which theories were used - and what other things were used to ‘do the work of theory’?

Altogether, 21 studies explicitly used educational theory (37.5% of the studies). We introduce each of the theories that fulfilled the ideals of complexity, contestability and working beyond the local (Nestel and Bearman 2015) in Table 4.

Table 4. The theories as described in the studies (N = 21).

While 21 studies explicitly drew on theories, we identified 29 as utilising something like theories. First, multiple studies mentioned broader learning theories such as socio-cultural perspectives (Adalberon 2020) or social constructivism (Carless 2020; Kilgour et al. 2020), albeit treating such ideas lightly. As such broad approaches to how learning is conceptualised lack contestability (i.e. some learning theory is always present in a learning situation, although one could debate when it is more appropriate to draw on a particular one), we did not treat them as theories. Often, the studies were guided by concepts that lacked sufficient complexity to be theories. Examples of this were feedback literacy (Han and Xu 2020; Molloy, Boud, and Henderson 2020), academic self-concept (Simonsmeier et al. 2020) and heuristics and biases (McQuade et al. 2020). Many studies used some kind of framework in their analysis, such as Biggs’ SOLO taxonomy (Matshedisho 2020) or a categorisation of cheating behaviours (Chirumamilla et al. 2020); yet these lacked the complexity to be treated as theories. The same held true for causal relations and hypotheses, as in, for example, Golan and colleagues’ (2020) study of the interaction hypothesis concerning assessment adjustments, or Bygren’s (2020) “theorization” (295) of bias in grading, which drew on only two presented causal scenarios.

We found that some ideas were used in a theoretically driven way in one paper and in a more straightforward, atheoretical way in another. This occurred with ‘authentic assessment’. Hains-Wesson et al. (2020), in their practice-driven study that also transparently drew on interpretivism, critically discussed the concept of authenticity: this we coded as theory, as their theorisation of authenticity was complex, contestable and worked beyond the local. However, other studies about authentic assessment (e.g. Jopp 2020) did not take part in such theoretical discussion but focused on practice. For example, Ellis and colleagues (2020) used a framework to determine whether assessment tasks were authentic; what was lacking, theory-wise, was complexity. Another tricky distinction needed to be drawn between models and theories. We coded the dialogic feed-forward assessment cycle by Beaumont, O’Doherty, and Lee (2011), as used by Hill and West (2020), as a model, as it lacked theoretical complexity by focusing on designing practice (or on “process and principles”, as put in the paper (11)). On the other hand, Yan’s (2020) cyclical self-assessment model was coded as a theory; the model was used to theorise the psychological process of student self-assessment in a complex, contestable and non-local way.

How were theories used?

In Table 5, we present our findings on the usage of theory in the 21 studies that explicitly drew on theory.

Table 5. How theory was used in the 21 studies.

In the analysis of the purpose of theory, four studies used theory for causal explanation (e.g. Adams et al. 2020); all of these studies were either quantitative or mixed-method in design. For example, Hoo et al. (2020) used ‘reflective practices’ as a theory to examine reflective journals through multiple quantitative analyses. Sixteen studies used theory for explanation (e.g. Merry and Orsmond 2020; Zhang and Yin 2020): these studies used theory to explain certain social phenomena regarding assessment. For example, Dai et al. (2020) explored Chinese students’ assessment experiences by explaining them through the theoretical lens of sustainable assessment. Two studies used theory for emancipatory purposes (Van Heerden 2020; Nieminen and Tuohilampi 2020). As an example, Nieminen and Tuohilampi (2020) used the theory of ecological agency not only to explain students’ experiences of self-assessment but to explicitly strive for higher quality assessment practices in higher education: “If student agency is to be the main feature of the new generation assessment environments… there is a need to rethink teacher-led assessment practices” (13).

In our inductive analysis of theory advancement, only one study explicitly developed theory. Zhou et al. (2020) extended Lea and Street’s (1998) notion of academic literacies by noting that, by becoming ‘literate’, students do not only become legitimate members of academic communities of practice (CoP) but also vocational community participants. Extending the notion with the contemporary idea of employability thus made a contribution that went beyond the empirical. Four studies developed the original theory vaguely (Hew et al. 2019). For example, Yan (2020) examined the cyclical self-assessment model in relation to self-regulation. While the original model was not developed further, a novel theoretical understanding was built of its relation to the neighbouring concept of self-regulation. Moreover, Hains-Wesson et al. (2020) discussed authenticity in the novel context of STEM. This kind of boundary crossing was interpreted as a vague development of theory, as the theory of authenticity was not developed further per se but was brought into a new context to understand its disciplinary nuances. In a similar way, Wicking (2020) did not develop the theory of Confucian heritage culture but linked it with peer-assessment in a novel way.

We present a few key examples from the data to illustrate different ways of engaging with theory. First, Van Heerden (2020) illustrates a study in which theory was used in an aligned way. The study builds on an interpretivist worldview by understanding feedback as a socio-cultural phenomenon and meaning as mutually constructed. Legitimation code theory was introduced as a theory that aligns with this worldview and offers tools for conceptualising feedback as a social research object. In the study, the analytical process was described in detail, and the theory was used explicitly to make sense of data. Finally, feedback is reframed as a disciplinary, epistemic practice through legitimation code theory, enabling a developed understanding of the research object. The purpose of using theory is emancipatory, and we see this study as striving for better feedback practice (indeed, by reframing what is meant by ‘better’).

Other studies positioned theory less centrally, such as Gaynor (2020), which introduced the self-determination theory of motivation at the beginning of the study but then drew on another conceptualisation (a categorisation of feedback) in its analysis and discussion. This paper represents studies that used theory as an extra tool rather than as an aligned overall lens for understanding the research object. On occasion, theory was positioned even more peripherally. For instance, Ajjawi et al. (2020) used the theory of CoP in a rather light way, only discussing this concept after presenting their findings; the theory was not used in the data analysis. In this study, theory was used to cross boundaries between various fields of research (authentic assessment, work-integrated learning, constructive alignment), and we argue that CoP, as a well-known theory in higher education research, indeed offers fruitful tools for this (see Tight 2015). Perhaps a lesser-known yet deeper theory would have been less successful in creating discussion between various research fields.

We investigated the remaining 36 studies that used ‘something like theories’, identifying a few patterns. First, some studies flirted with theory without deeply engaging with it in the research design. An example is the study by Carless (2020), in which a longitudinal interview design was used to examine students’ experiences of feedback. While the study drew on a social constructivist approach, no theories were explicitly used in the inductive analysis, nor were theories described as guiding the study design. However, the study referred to theoretical ideas such as ‘agency’ and ‘power’ multiple times.

In addition, we identified implicit use of the ‘measurement paradigm’ in many studies that conceptualised assessment through the post-positivist worldview (e.g. Mercader et al. 2020; Wang and Zhang 2020). In such studies, validity and reliability were framed as unquestionable virtues in assessment. However, these ideas were not stated upfront: they were interwoven into the overall conceptualisation of assessment.

Finally, seven studies were identified as completely atheoretical: theory, or anything similar, was not used at all (e.g. Bong and Park 2020; Pui, Yuen and Goh 2020). For example, Suri and Krishnan (2020) examined assessment hurdles (threshold or minimum pass requirements) in core first-year courses. They aptly note that literature on the use of hurdles is scarce and then analyse hurdle requirements in eight Australian universities. This paper is an example of a practical study: no theory is used at all, and the study operates on a very practical level (e.g. only descriptive statistics, namely percentages, are presented). However, the study succeeds in its goal of responding to a real and urgent gap in the literature: “There remains an inconsistency and arbitrariness in the practice of implementing hurdle requirements in Australian universities” (Suri and Krishnan 2020, 264). Elsewhere, theory might have strengthened the argumentation in atheoretical studies. For instance, Carroll (2020) studied how students’ self-assessment accuracy developed during two criteria-based self-assessment tasks. Educational theory might have opened novel avenues for conceptualising the very notions of ‘accuracy’ and ‘criteria’, enabling the important findings to offer a greater contribution to a research field that has long been criticised for unreflexive perspectives on ‘self-assessment accuracy’ (see Boud and Falchikov 1989).

Discussion

We have examined how our field has engaged with educational theory by reviewing empirical studies published in 2020. Overall, 21 of the 56 reviewed studies drew on theories; critical-transformative theories were absent; and theory was most commonly used for explanatory purposes and for positioning the research object (Table 5). In this final section, we offer two main implications for researchers in the field, calling for overall deeper engagement with theory and for the need for critical theories.

Deeper engagement with theory

Overall, our critical review has shown that a minority of recent assessment and feedback research, published in 2020, engaged with educational theory (37.5%), while 29 out of 56 studies drew on ‘something like theories’. When theory was used, it was used mostly for explanatory purposes. Table 5 shows that when theory was used, the quality of theory work was rather high, which implies that our field is not at risk of misusing theory (Trowler 2012) but of not using it: a further indication of the ‘isolation’ of our field (Daenekindt and Huisman 2020). When it comes to reflexivity, only 10 studies explicitly discussed their worldview. Post-positivism (N = 30) and practical perspectives (N = 23) characterised our dataset, which set the premise for engagement with theory. Thus, on some level, we confirm Boud and colleagues’ (2018) notion about the dominance of the measurement paradigm in our field. We invite the field to engage in critical, reflexive ponderings about the absences and over-representations of theories and worldviews as identified in our analysis.

We call for a broader engagement with theory in our field, and a wider repertoire of theories in this work. This offers affordances for ‘un-siloing’ our field by connecting it theoretically with the broader sphere of social science research. Considering theory may protect researchers from reinventing the wheel. For example, while our field often focuses on student perceptions of assessment and feedback, overreliance on atheoretical and inductive approaches might end up finding the same ‘emerging themes’ again and again (see Van der Kleij and Lipnevich 2021). New insights and possibilities can be found in using theories rather than relying on induction alone. This notion encourages assessment and feedback researchers to search for theory from outside their immediate contexts: relevant literature might be found in, say, early education, biology education or developmental psychology. Stronger use of theory might enable our field to develop deeper methodologies, especially when it comes to the traditional approaches to data collection (e.g. interviews, surveys) and analysis (e.g. coding, thematic analysis) that were overrepresented in our dataset (Tables 1 and 2). This requires reflexive alignment between worldview, purposes, theories and methodologies.

As part of a broadening engagement with a repertoire of theories, we call for robust conversations about what theory might be and how it could advance the field. Only one study in the dataset developed theory further; we thus call for bolder moves in our field in theory advancement. While theory advancement may not always be possible or desirable, depending on worldview and methodology, it is a notable absence.

We note that things that do the work of theory (e.g. concepts, models) and flirtations with theory may offer a valuable starting point for further theory work. These may represent evolving concepts that hold the promise of developing into complex, contestable and universal tools. For example, ‘feedback literacy’ seems to be becoming an increasingly theoretically oriented approach to feedback in higher education (e.g. Chong 2021).

There is also a need to consider what kinds of theories might work best for our field. For example, recent socio-material approaches have produced assessment-specific knowledge about the processes of assessment (e.g. Tai et al. 2021). Strengthening our field, both in scientific contribution and practical impact, through considering and drawing on theory is a worthy challenge and something that requires careful consideration of what is appropriate within a particular research endeavour.

Non-use of theory might also be a reflexive practice. For example, a practically oriented study about self-assessment might perfectly fulfil its purpose without having to frame self-assessment as a pedagogical device à la Bernstein or as a Foucauldian technology for subjectification. Some theories might even have harmful effects, such as the neuromyths embedded in ‘learning styles’ theory. But these are difficult decisions, and moreover not neutral ones.

As the post-positivist worldview was dominant in our dataset (30 of the 56 studies), and atheoretical approaches were more common than theoretical ones, our critical review somewhat aligns with Boud and colleagues’ (2018) characterisation of the measurement paradigm in our field. Perhaps this reflects the practical nature of assessment in higher education: no matter how social assessment is considered to be through interpretivist theories, assessment is, ultimately, about the learning and certification of individuals. Yet we think that even the measurement paradigm could benefit from reflexivity when it comes to engagement with theory. Whilst studies with strong post-positivist groundings were common in our findings, they generally referred only instrumentally to their underlying theoretical foundations. In particular, we note an instrumental approach to validity: the foundational concept underpinning various kinds of measurement theory. Without careful consideration of theories of validity, perhaps drawing on Messick’s (2005) seminal work, claims appeared less robust than they might otherwise be.

A need for critical theories

Critical theories were absent from our dataset, and only two of the interpretivist studies engaged with theory in an emancipatory way: critical-transformative approaches were thus marginal. This is a worrying finding given that our field has been claimed to be unable to promote change in practice, and unwilling to take part in broader socio-political discussions about assessment, feedback and grading (Tannock 2017). As Boud and colleagues (2018) note, it is not easy to change assessment; but it is sometimes necessary. Since assessment is conducted in an imperfect social world, a lack of critical perspectives might reflect, renew and even strengthen societal forms of injustice. The lack of emancipatory approaches restricts our field from having an impact on practice for social good.

We thus urge more engagement with critical theories, such as those concerning racism and ableism, in empirical research. For example, Shay’s (2008) article investigated how quantitative and positivist approaches to assessment policy hindered the understanding of structural racism in the context of South Africa. Shay offered an ‘alternative reading’ of previously analysed research findings on anonymised marking, using critical theories to reveal what atheoretical approaches could not see - and might even have hidden. By employing critical theories in empirical research, we might ‘see otherwise’ in a profoundly impactful way.

Critical theory is useful beyond the exploration of oppression and exclusion: it can also enable empirical assessment research to connect with its broader social, cultural and political contexts. By considering critical theory, researchers can meaningfully explore why research-based assessments and interventions might fall short in practice, in a given context. For example, it could be used to develop novel explanations for the variability in the way that students respond to feedback.

Limitations

Several limitations should be noted. As our study only concerned articles published in 2020, the findings should be read not as a traditional review but as a snapshot of contemporary assessment research. Our definition of theory differs from many others, as ‘theory’ is understood differently in various contexts and disciplines (Hamann and Kosmützky 2021). Others might disagree with our definition. For example, Zhou, Zheng and Tai (2020) do not explicitly call ‘respect’ a theory, yet we have coded it as such based on the work of Nestel and Bearman (2015). Importantly, we have conducted this study as insiders in our field. Our dialogic approach to the research process has aimed for a high-quality analysis, but it has also been conducted through our own research positionality, outlined within the section on reflexivity. This is particularly important to mention as our review has included several studies written by our colleagues, and by ourselves. Our drive to have an impact on real world matters means we have focussed on what has been done, in order to establish future directions. While we have identified areas where theory could be employed more reflexively, we note that this is an open field: other researchers are likely to identify other gaps and opportunities where they can contribute to the field.

Given our own positionality as scholars who have ‘shifted’ our worldviews from natural sciences toward critical and theoretical approaches, our conclusions might also reflect our own tendency towards collaborative and interprofessional research. We note however that many assessment researchers in the field of higher education are eclectic, bringing their own disciplinary stances to the research. As authors and reviewers, we should consider this positively, rather than ‘protecting’ the field from contributions beyond measurement theories. Further conceptual work might also consider how theories not commonly used within our field could be drawn upon to enrich the field.

Final words

In our critical review we have shown that recent assessment and feedback research has engaged with educational theory rather rarely. We call for wider reflexive discussion in our field about where it wants to go, what the role of theory is in such an endeavour, and which theories might contribute to its future directions. Through a reflexive stance, we can open up assessment and feedback research in higher education to other theories, approaches and communities.

Disclosure statement

No potential conflict of interest was reported by the authors.

References

  • The 56 studies that were included in the review are marked with an asterisk.
  • * Adalberon, Erik. 2021. “Providing Assessment Feedback to Pre-Service Teachers: A Study of Examiners’ Comments.” Assessment & Evaluation in Higher Education 46 (4): 601–614. doi:10.1080/02602938.2020.1795081.
  • * Adams, Anne-Marie, Hannah Wilson, Julie Money, Susan Palmer-Conn, and Jamie Fearn. 2020. “Student Engagement with Feedback and Attainment: The Role of Academic Self-Efficacy.” Assessment & Evaluation in Higher Education 45 (2): 317–329. doi:10.1080/02602938.2019.1640184.
  • * Ajjawi, Rola, Joanna Tai, Tran Le Huu Nghia, David Boud, Liz Johnson, and Carol-Joy Patrick. 2020. “Aligning Assessment with the Needs of Work-Integrated Learning: The Challenges of Authentic Assessment in a Complex Context.” Assessment & Evaluation in Higher Education 45 (2): 304–316. doi:10.1080/02602938.2019.1639613.
  • * Akimov, Alexandr, and Mirela Malin. 2020. “When Old Becomes New: A Case Study of Oral Examination as an Online Assessment Tool.” Assessment & Evaluation in Higher Education 45 (8): 1205–1221. doi:10.1080/02602938.2020.1730301.
  • * Amigud, Alexander, and Thomas Lancaster. 2020. “I Will Pay Someone to Do My Assignment: An Analysis of Market Demand for Contract Cheating Services on Twitter.” Assessment & Evaluation in Higher Education 45 (4): 541–553. doi:10.1080/02602938.2019.1670780.
  • Ashwin, P. 2009. Analysing Teaching-Learning Interactions in Higher Education: Accounting for Structure and Agency. London: Continuum.
  • Ashwin, P. 2012. “How Often Are Theories Developed through Empirical Research into Higher Education?” Studies in Higher Education 37 (8): 941–955. doi:10.1080/03075079.2011.557426.
  • Ball, S. J. 1995. “Intellectuals or Technicians? The Urgent Role of Theory in Educational Studies.” British Journal of Educational Studies 43 (3): 255–271. doi:10.2307/3121983.
  • Beaumont, C., M. O’Doherty, and S. Lee. 2011. “Reconceptualising Assessment Feedback: A Key to Improving Student Learning?” Studies in Higher Education 36 (6): 671–687. doi:10.1080/03075071003731135.
  • Biesta, G. 2010. “Why “What Works” Still Won’t Work: From Evidence-Based Education to Value-Based Education.” Studies in Philosophy and Education 29 (5): 491–503. doi:10.1007/s11217-010-9191-x.
  • Biesta, G., J. Allan, and R. Edwards. 2011. “The Theory Question in Research Capacity Building in Education: Towards an Agenda for Research and Practice.” British Journal of Educational Studies 59 (3): 225–239. doi:10.1080/00071005.2011.599793.
  • Biesta, G., J. Allan, and R. Edwards. 2014. “Introduction: The Theory Question in Education and the Education Question in Theory.” In Making a Difference in Theory, edited by G. Biesta, J. Allan, and R. Edwards, 1–9. New York: Routledge.
  • * Bong, Jiyae, and Min Sook Park. 2020. “Peer Assessment of Contributions and Learning Processes in Group Projects: An Analysis of Information Technology Undergraduate Students’ Performance.” Assessment & Evaluation in Higher Education 45 (8): 1155–1168. doi:10.1080/02602938.2020.1727413.
  • Boud, D., and N. Falchikov. 1989. “Quantitative Studies of Student Self-Assessment in Higher Education: A Critical Analysis of Findings.” Higher Education 18 (5): 529–549. doi:10.1007/BF00138746.
  • Boud, D., P. Dawson, M. Bearman, S. Bennett, G. Joughin, and E. Molloy. 2018. “Reframing Assessment Research: Through a Practice Perspective.” Studies in Higher Education 43 (7): 1107–1118. doi:10.1080/03075079.2016.1202913.
  • * Bygren, Magnus. 2020. “Biased Grades? Changes in Grading after a Blinding of Examinations Reform.” Assessment & Evaluation in Higher Education 45 (2): 292–303. doi:10.1080/02602938.2019.1638885.
  • * Carless, David. 2020. “Longitudinal Perspectives on Students’ Experiences of Feedback: A Need for Teacher–student Partnerships.” Higher Education Research & Development 39 (3): 425–438. doi:10.1080/07294360.2019.1684455.
  • * Carroll, Danny. 2020. “Observations of Student Accuracy in Criteria-Based Self-Assessment.” Assessment & Evaluation in Higher Education 45 (8): 1088–1105. doi:10.1080/02602938.2020.1727411.
  • * Chirumamilla, Aparna, Guttorm Sindre, and Anh Nguyen-Duc. 2020. “Cheating in e-Exams and Paper Exams: The Perceptions of Engineering Students and Teachers in Norway.” Assessment & Evaluation in Higher Education 45 (7): 940–957. doi:10.1080/02602938.2020.1719975.
  • Chong, S. W. 2021. “Reconsidering Student Feedback Literacy from an Ecological Perspective.” Assessment & Evaluation in Higher Education 46 (1): 92–104. doi:10.1080/02602938.2020.1730765.
  • Daenekindt, S., and J. Huisman. 2020. “Mapping the Scattered Field of Research on Higher Education. A Correlated Topic Model of 17,000 Articles, 1991–2018.” Higher Education: 1–17. doi:10.1007/s10734-020-00500-x.
  • * Dai, Kun, Kelly E. Matthews, and Vicente Reyes. 2020. “Chinese Students’ Assessment and Learning Experiences in a Transnational Higher Education Programme.” Assessment & Evaluation in Higher Education 45 (1): 70–81. doi:10.1080/02602938.2019.1608907.
  • * Dawson, Phillip, Wendy Sutherland-Smith, and Mark Ricksen. 2020. “Can Software Improve Marker Accuracy at Detecting Contract Cheating? A Pilot Study of the Turnitin Authorship Investigate Alpha.” Assessment & Evaluation in Higher Education 45 (4): 473–482. doi:10.1080/02602938.2019.1662884.
  • * Ellis, Cath, Karen Van Haeringen, Rowena Harper, Tracey Bretag, Ian Zucker, Scott McBride, Pearl Rozenberg, Phil Newton, and Sonia Saddiqui. 2020. “Does Authentic Assessment Assure Academic Integrity? Evidence from Contract Cheating Data.” Higher Education Research & Development 39 (3): 454–469. doi:10.1080/07294360.2019.1680956.
  • * Flores, Maria Assunção, Gavin Brown, Diana Pereira, Clara Coutinho, Patrícia Santos, and Cláudia Pinheiro. 2020. “Portuguese University Students’ Conceptions of Assessment: Taking Responsibility for Achievement.” Higher Education 79 (3): 377–394. doi:10.1007/s10734-019-00415-2.
  • Flyvbjerg, B. 2006. “Five Misunderstandings About Case-Study Research.” Qualitative Inquiry 12 (2): 219–245. doi:10.1177/1077800405284363.
  • * Fortun, Jenny, and Helen Tempest. 2020. “A Case for Written Examinations in Undergraduate Medical Education: Experiences with Modified Essay Examinations.” Assessment & Evaluation in Higher Education 45 (7): 926–939. doi:10.1080/02602938.2020.1714543.
  • * Fraser, Stuart T. 2020. “A New Frontier: Developing an Undergraduate Assessment Task Aimed at Improving the Representation of Biomedical Scientific Information on Wikipedia.” Studies in Higher Education 45 (5): 972–983. doi:10.1080/03075079.2020.1749794.
  • Gallagher, D. 2014. “Theories Have Consequences, Don’t They? On the Moral Nature of Educational Theory and Research.” In Making a Difference in Theory: The Theory Question in Education and the Education Question in Theory, edited by G. Biesta, J. Allan, and R. Edwards, 85–99. New York: Routledge.
  • * Gaynor, J. W. 2020. “Peer Review in the Classroom: Student Perceptions, Peer Feedback Quality and the Role of Assessment.” Assessment & Evaluation in Higher Education 45 (5): 758–775. doi:10.1080/02602938.2019.1697424.
  • * Golan, Maya, Gonen Singer, Neta Rabin, and Dvir Kleper. 2020. “Integrating Actual Time Usage into the Assessment of Examination Time Extensions Provided to Disabled College Engineering Students.” Assessment & Evaluation in Higher Education 45 (7): 988–1000. doi:10.1080/02602938.2020.1717434.
  • * Grainger, Peter. 2020. “How Do Pre-Service Teacher Education Students Respond to Assessment Feedback?” Assessment & Evaluation in Higher Education 45 (7): 913–925. doi:10.1080/02602938.2015.1096322.
  • Grant, M. J., and A. Booth. 2009. “A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies.” Health Information and Libraries Journal 26 (2): 91–108. doi:10.1111/j.1471-1842.2009.00848.x.
  • * Hains-Wesson, Rachael, Vikki Pollard, Friederika Kaider, and Karen Young. 2020. “STEM Academic Teachers’ Experiences of Undertaking Authentic Assessment-Led Reform: A Mixed Method Approach.” Studies in Higher Education 45 (9): 1797–1808. doi:10.1080/03075079.2019.1593350.
  • Hamann, J., and A. Kosmützky. 2021. “Does Higher Education Research Have a Theory Deficit? Explorations on Theory Work.” European Journal of Higher Education 11 (sup1): 421–468. doi:10.1080/21568235.2021.2003715.
  • * Han, Ye, and Yueting Xu. 2020. “The Development of Student Feedback Literacy: The Influences of Teacher Feedback on Peer Feedback.” Assessment & Evaluation in Higher Education 45 (5): 680–696. doi:10.1080/02602938.2019.1689545.
  • Hew, K. F., M. Lan, Y. Tang, C. Jia, and C. K. Lo. 2019. “Where is the ‘Theory’ Within the Field of Educational Technology Research?” British Journal of Educational Technology 50 (3): 956–971. doi:10.1111/bjet.12770.
  • * Hill, Jennifer, and Harry West. 2020. “Improving the Student Learning Experience through Dialogic Feed-Forward Assessment.” Assessment & Evaluation in Higher Education 45 (1): 82–97. doi:10.1080/02602938.2019.1608908.
  • Hindmarsh, J. H. 1993. “Tensions and Dichotomies between Theory and Practice: A Study of Alternative Formulations.” International Journal of Lifelong Education 12 (2): 101–115. doi:10.1080/0260137930120203.
  • * Hoo, Hui-Teng, Kelvin Tan, and Christopher Deneen. 2020. “Negotiating Self- and Peer-Feedback with the Use of Reflective Journals: An Analysis of Undergraduates’ Engagement with Feedback.” Assessment & Evaluation in Higher Education 45 (3): 431–446. doi:10.1080/02602938.2019.1665166.
  • * Ibarra-Sáiz, María Soledad, Gregorio Rodríguez-Gómez, and David Boud. 2020. “Developing Student Competence through Peer Assessment: The Role of Feedback, Self-Regulation and Evaluative Judgement.” Higher Education 80 (1): 137–156. doi:10.1007/s10734-019-00469-2.
  • * Jopp, Ryan. 2020. “A Case Study of a Technology Enhanced Learning Initiative That Supports Authentic Assessment.” Teaching in Higher Education 25 (8): 942–958. doi:10.1080/13562517.2019.1613637.
  • * Kilgour, Peter, Maria Northcote, Anthony Williams, and Andrew Kilgour. 2020. “A Plan for the Co-Construction and Collaborative Use of Rubrics for Student Learning.” Assessment & Evaluation in Higher Education 45 (1): 140–153. doi:10.1080/02602938.2019.1614523.
  • Lea, M. R., and B. V. Street. 1998. “Student Writing in Higher Education: An Academic Literacies Approach.” Studies in Higher Education 23 (2): 157–172. doi:10.1080/03075079812331380364.
  • * Lim, Lisa-Angelique, Shane Dawson, Dragan Gašević, Srecko Joksimović, Abelardo Pardo, Anthea Fudge, and Sheridan Gentili. 2021. “Students’ Perceptions of, and Emotional Responses to, Personalised Learning Analytics-Based Feedback: An Exploratory Study of Four Courses.” Assessment & Evaluation in Higher Education 46 (3): 339–359. doi:10.1080/02602938.2020.1782831.
  • Lincoln, Y., and E. Guba. 2000. “Paradigmatic Controversies, Contradictions, and Emerging Confluences.” In Handbook of Qualitative Research. 2nd ed., edited by N. Denzin and Y. Lincoln, 163–188. Thousand Oaks: Sage.
  • * Macleod, Gale, Neil Lent, Xiaomeng Tian, Yunying Liang, Meredith Moore, and Shrawani Sen. 2020. “Balancing Supportive Relationships and Developing Independence: An Activity Theory Approach to Understanding Feedback in Context for Master’s Students.” Assessment & Evaluation in Higher Education 45 (7): 958–972. doi:10.1080/02602938.2020.1719976.
  • Malthouse, R., J. Roffey-Barentsen, and M. Watts. 2014. “Reflectivity, Reflexivity and Situated Reflective Practice.” Professional Development in Education 40 (4): 597–609. doi:10.1080/19415257.2014.907195.
  • * Matshedisho, Knowledge Rajohane. 2020. “Straddling Rows and Columns: Students’ (Mis)Conceptions of an Assessment Rubric.” Assessment & Evaluation in Higher Education 45 (2): 169–179. doi:10.1080/02602938.2019.1616671.
  • * McQuade, Richard, Simon Kometa, Jeremy Brown, Debra Bevitt, and Judith Hall. 2020. “Research Project Assessments and Supervisor Marking: Maintaining Academic Rigour through Robust Reconciliation Processes.” Assessment & Evaluation in Higher Education 45 (8): 1181–1191. doi:10.1080/02602938.2020.1726284.
  • * Mercader, Cristina, Georgeta Ion, and Anna Díaz-Vicario. 2020. “Factors Influencing Students’ Peer Feedback Uptake: Instructional Design Matters.” Assessment & Evaluation in Higher Education 45 (8): 1169–1180. doi:10.1080/02602938.2020.1726283.
  • * Merry, Stephen, and Paul Orsmond. 2020. “Peer Assessment: The Role of Relational Learning through Communities of Practice.” Studies in Higher Education 45 (7): 1312–1322. doi:10.1080/03075079.2018.1544236.
  • Merton, R. 1949. Social Theory and Social Structure. New York: Free Press.
  • Messick, S. 2005. “Standards of Validity and the Validity of Standards in Performance Assessment.” Educational Measurement: Issues and Practice 14 (4): 5–8. doi:10.1111/j.1745-3992.1995.tb00881.x.
  • * Molloy, Elizabeth, David Boud, and Michael Henderson. 2020. “Developing a Learning-Centred Framework for Feedback Literacy.” Assessment & Evaluation in Higher Education 45 (4): 527–540. doi:10.1080/02602938.2019.1667955.
  • * Myyry, Liisa, Terhi Karaharju-Suvanto, Marjo Vesalainen, Anna-Maija Virtala, Marja Raekallio, Outi Salminen, Katariina Vuorensola, and Anne Nevgi. 2020. “Experienced Academics’ Emotions Related to Assessment.” Assessment & Evaluation in Higher Education 45 (1): 1–13. doi:10.1080/02602938.2019.1601158.
  • Nestel, D., and M. Bearman. 2015. “Theory and Simulation-Based Education: Definitions, Worldviews and Applications.” Clinical Simulation in Nursing 11 (8): 349–354. doi:10.1016/j.ecns.2015.05.013.
  • * Nieminen, Juuso Henrik, and Laura Tuohilampi. 2020. “‘Finally Studying for Myself’ – Examining Student Agency in Summative and Formative Self-Assessment Models.” Assessment & Evaluation in Higher Education 45 (7): 1031–1045. doi:10.1080/02602938.2020.1720595.
  • * Olave-Encina, Karen, Karen Moni, and Peter Renshaw. 2021. “Exploring the Emotions of International Students about Their Feedback Experiences.” Higher Education Research & Development 40 (4): 810–824. doi:10.1080/07294360.2020.1786020.
  • Petticrew, M., and H. Roberts. 2006. Systematic Reviews in the Social Sciences: A Practical Guide. Oxford: Blackwell.
  • Pillow, W. 2003. “Confession, Catharsis, or Cure? Rethinking the Uses of Reflexivity as Methodological Power in Qualitative Research.” International Journal of Qualitative Studies in Education 16 (2): 175–196. doi:10.1080/0951839032000060635.
  • * Preston, Robyn, Monica Gratani, Kimberley Owens, Poornima Roche, Monika Zimanyi, and Bunmi Malau-Aduli. 2020. “Exploring the Impact of Assessment on Medical Students’ Learning.” Assessment & Evaluation in Higher Education 45 (1): 109–124. doi:10.1080/02602938.2019.1614145.
  • * Pui, Priscillia, Brenda Yuen, and Happy Goh. 2021. “Using a Criterion-Referenced Rubric to Enhance Student Learning: A Case Study in a Critical Thinking and Writing Module.” Higher Education Research & Development 40 (5): 1056–1069. doi:10.1080/07294360.2020.1795811.
  • Shay, S. 2008. “Researching Assessment as Social Practice: Implications for Research Methodology.” International Journal of Educational Research 47 (3): 159–164. doi:10.1016/j.ijer.2008.01.003.
  • * Simonsmeier, Bianca A., Henrike Peiffer, Maja Flaig, and Michael Schneider. 2020. “Peer Feedback Improves Students’ Academic Self-Concept in Higher Education.” Research in Higher Education 61 (6): 706–724. doi:10.1007/s11162-020-09591-y.
  • * Simper, Natalie. 2020. “Assessment Thresholds for Academic Staff: Constructive Alignment and Differentiation of Standards.” Assessment & Evaluation in Higher Education 45 (7): 1016–1030. doi:10.1080/02602938.2020.1718600.
  • Skinner, Q. 1985. The Return of Grand Theory in the Human Sciences. Cambridge: Cambridge University Press.
  • * Sotardi, Valerie A., and Erik Brogt. 2020. “Influences of Learning Strategies on Assessment Experiences and Outcomes during the Transition to University.” Studies in Higher Education 45 (9): 1973–1985. doi:10.1080/03075079.2019.1647411.
  • * Suri, Harsh, and Siva Krishnan. 2020. “Assessment Hurdles in Core First Year Courses in Australian Universities: Are We Trying to Catch Out Students?” Assessment & Evaluation in Higher Education 45 (2): 251–265. doi:10.1080/02602938.2019.1632795.
  • Tai, J., M. Bearman, K. Gravett, and E. Molloy. 2021. “Exploring the Notion of Teacher Feedback Literacies through the Theory of Practice Architectures.” Assessment & Evaluation in Higher Education: 1–13. doi:10.1080/02602938.2021.1948967.
  • Tan, Y. H., and S. C. Tan. 2020. “Understanding Personal Epistemology.” In Conceptions of Knowledge Creation, Knowledge and Knowing, edited by Y. H. Tan and S. C. Tan, 35–45. Singapore: Springer.
  • Tannock, S. 2017. “No Grades in Higher Education Now! Revisiting the Place of Graded Assessment in the Reimagination of the Public University.” Studies in Higher Education 42 (8): 1345–1357. doi:10.1080/03075079.2015.1092131.
  • * Tempelaar, Dirk. 2020. “Supporting the Less-Adaptive Student: The Role of Learning Analytics, Formative Assessment and Blended Learning.” Assessment & Evaluation in Higher Education 45 (4): 579–593. doi:10.1080/02602938.2019.1677855.
  • Thomas, G. 2007. Education and Theory: Strangers in Paradigms. Maidenhead: Open University Press.
  • Thurlings, M., M. Vermeulen, T. Bastiaens, and S. Stijnen. 2013. “Understanding Feedback: A Learning Theory Perspective.” Educational Research Review 9: 1–15. doi:10.1016/j.edurev.2012.11.004.
  • Tight, M. 2004. “Research into Higher Education: An a‐Theoretical Community of Practice?” Higher Education Research & Development 23 (4): 395–411.
  • Tight, M. 2014. “Discipline and Theory in Higher Education Research.” Research Papers in Education 29 (1): 93–110. doi:10.1080/02671522.2012.729080.
  • Tight, M. 2015. “Theory Application in Higher Education Research: The Case of Communities of Practice.” European Journal of Higher Education 5 (2): 111–126. doi:10.1080/21568235.2014.997266.
  • Trowler, P. 2012. “Wicked Issues in Situating Theory in Close-Up Research.” Higher Education Research & Development 31 (3): 273–284. doi:10.1080/07294360.2011.631515.
  • Tummons, J. 2012. “Theoretical Trajectories Within Communities of Practice in Higher Education Research.” Higher Education Research & Development 31 (3): 299–310. doi:10.1080/07294360.2011.631516.
  • Van der Kleij, F. M., and A. A. Lipnevich. 2021. “Student Perceptions of Assessment Feedback: A Critical Scoping Review and Call for Research.” Educational Assessment, Evaluation and Accountability 33 (2): 345–373. doi:10.1007/s11092-020-09331-x.
  • * Van Heerden, Martina. 2020. “‘It Has a Purpose beyond Justifying a Mark’: Examining the Alignment between the Purpose and Practice of Feedback.” Assessment & Evaluation in Higher Education 45 (3): 359–371. doi:10.1080/02602938.2019.1644602.
  • * Van Woezik, Tamara, Jur Koksma, Rob Reuzel, Debbie Jaarsma, and Gert Jan Van Der Wilt. 2020. “How to Encourage a Lifelong Learner? The Complex Relation between Learning Strategies and Assessment in a Medical Curriculum.” Assessment & Evaluation in Higher Education 45 (4): 513–526. doi:10.1080/02602938.2019.1667954.
  • * Wang, Shutao, and Demei Zhang. 2020. “Perceived Teacher Feedback and Academic Performance: The Mediating Effect of Learning Engagement and Moderating Effect of Assessment Characteristics.” Assessment & Evaluation in Higher Education 45 (7): 973–987. doi:10.1080/02602938.2020.1718599.
  • * Wang, Jiandong, Ruiqin Gao, Xiuyan Guo, and Jin Liu. 2020. “Factors Associated with Students’ Attitude Change in Online Peer Assessment – A Mixed Methods Study in a Graduate-Level Course.” Assessment & Evaluation in Higher Education 45 (5): 714–727. doi:10.1080/02602938.2019.1693493.
  • * Weis, Robert, and Esther L. Beauchemin. 2020. “Are Separate Room Test Accommodations Effective for College Students with Disabilities?” Assessment & Evaluation in Higher Education 45 (5): 794–809. doi:10.1080/02602938.2019.1702922.
  • * Wicking, Paul. 2020. “Formative Assessment of Students from a Confucian Heritage Culture: Insights from Japan.” Assessment & Evaluation in Higher Education 45 (2): 180–192. doi:10.1080/02602938.2019.1616672.
  • * Yan, Zi. 2020. “Self-Assessment in the Process of Self-Regulated Learning and Its Relationship with Academic Achievement.” Assessment & Evaluation in Higher Education 45 (2): 224–238. doi:10.1080/02602938.2019.1629390.
  • * Youde, Andrew. 2020. “‘I Don’t Need Peer Support’: Effective Tutoring in Blended Learning Environments for Part-Time, Adult Learners.” Higher Education Research & Development 39 (5): 1040–1054. doi:10.1080/07294360.2019.1704692.
  • * Zhan, Ying. 2020. “Motivated or Informed? Chinese Undergraduates’ Beliefs about the Functions of Continuous Assessment in Their College English Course.” Higher Education Research & Development 39 (5): 1055–1069. doi:10.1080/07294360.2019.1699029.
  • * Zhang, Fuhui, Christian Schunn, Wentao Li, and Miyin Long. 2020. “Changes in the Reliability and Validity of Peer Assessment across the College Years.” Assessment & Evaluation in Higher Education 45 (8): 1073–1087. doi:10.1080/02602938.2020.1724260.
  • * Zhang, Yinxia, and Hongbiao Yin. 2020. “Collaborative Cheating among Chinese College Students: The Effects of Peer Influence and Individualism-Collectivism Orientations.” Assessment & Evaluation in Higher Education 45 (1): 54–69. doi:10.1080/02602938.2019.1608504.
  • * Zhou, Jiming, Ke Zhao, and Phillip Dawson. 2020. “How First-Year Students Perceive and Experience Assessment of Academic Literacies.” Assessment & Evaluation in Higher Education 45 (2): 266–278. doi:10.1080/02602938.2019.1637513.
  • * Zhou, Jiming, Yongyan Zheng, and Joanna Hong-Meng Tai. 2020. “Grudges and Gratitude: The Social-Affective Impacts of Peer Assessment.” Assessment & Evaluation in Higher Education 45 (3): 345–358. doi:10.1080/02602938.2019.1643449.