Original Article

Students’ experiences of assessment and feedback engagement in digital contexts: a mixed-methods case study in upper secondary school

Kim-Daniel Vattøy, Siv M. Gamlem, Lina Rebekka Kobberstad & Wenke Mork Rogne

ABSTRACT

This study examined students’ experiences of assessment and feedback engagement in digital contexts in upper secondary school through an explanatory sequential mixed-methods case study. The data material consisted of 435 survey responses and 16 individual interviews. The results indicated that the use of digital feedback was crucial for students’ experiences of digital feedback engagement. The final model of the path analyses suggested that students’ experiences of digital feedback engagement depended on several predictive and mediating variables. Multiple regression analyses suggested gender differences in the variables predicting digital feedback engagement: whereas a deep approach and learning from examinations were important for male students, clear goals and standards were more important for female students’ digital feedback engagement. Thematic analyses of the interview data identified three themes: Grades and digital feedback; dialogic feedback interactions; and a performance-oriented assessment culture. Grades tended to reduce the relevance of digital feedback both when the two were provided at the same time and when they were posted in separate learning management systems (LMSs). Opportunity for dialogic feedback interactions was considered essential to students’ feedback engagement and criteria orientation, but was rarely offered in digital contexts. A performance-oriented assessment culture risked crowding out the focus on learning in digital contexts for some students.

1. Introduction

Whilst there is strong evidence for feedback as a central driver of students’ learning (Black & Wiliam, Citation1998; Hattie & Timperley, Citation2007; Sadler, Citation2010), teachers’ facilitation of assessment in digital contexts appears to change the conditions for students’ engagement with feedback (van der Kleij, Feskens, & Eggen, Citation2015; Winstone, Bourne, Medland, Niculescu, & Rees, Citation2021). Digital contexts have been found to cause both engagement and disengagement in students’ learning processes (Bergdahl, Nouri, Fors, & Knutsson, Citation2020). A major challenge for effective feedback in digital contexts has been the physical distance between students, peers, and teachers, causing solitude and isolation (Bardach et al., Citation2021; Jensen, Bearman, & Boud, Citation2021; Yuan & Kim, Citation2015). In the aftermath of the outbreak of the COVID-19 pandemic, distance learning and home schooling have become common features of students’ engagement with assessment and feedback (Blikstad-Balas, Roe, Dalland, & Klette, Citation2022). In the early phase of the pandemic, students reported a lack of student involvement and solitude in distance learning settings (Sandvik et al., Citation2021). Even though there is a growing body of self-report studies documenting students’ experiences of assessment, a need has been identified to study the relationship between feedback and other aspects of students’ learning (Harks, Rakoczy, Hattie, Besser, & Klieme, Citation2014), which is the aim of the present study.

There has been a persistent paradox that new digital technology has merely reinforced obsolete assessment practices (Selwyn, Citation2016), resulting in new layouts for old ways of accounting for students’ knowledge (Winstone et al., Citation2021). Although assessment has been resistant to change, learning is being transformed in digital contexts (Bearman, Boud, & Ajjawi, Citation2020). There is a risk that feedback in digital contexts contributes to widening the gap between assessment and learning. Thus, an important task for teachers is to teach students how to self-regulate their learning in digital contexts and prepare them to learn on their own (Zimmerman, Citation2002). To promote students’ active engagement in digital contexts, emotionally and motivationally supportive assessment contexts have been identified as particularly important (Schrader & Grassinger, Citation2021; Silvola, Näykki, Kaveri, & Muukkonen, Citation2021).

Previous research has examined different assessment designs for the advancement of students’ digital learning (Bearman et al., Citation2020; Yuan & Kim, Citation2015). Focusing on students’ digital feedback engagement is important as students who disengage in earlier schooling risk initiating a downward spiral (Bergdahl et al., Citation2020; Oinas, Vainikainen, & Hotulainen, Citation2017). Internationally, students have also tended to perceive teachers’ assessment practices in upper secondary school as primarily summative (e.g. Jónsson, Smith, & Geirsdóttir, Citation2018; Mäkipää & Hildén, Citation2021).

Feedback in digital contexts has often been perceived as generic and less supportive by students (e.g. Oinas et al., Citation2017; Schrader & Grassinger, Citation2021; Winstone et al., Citation2021), and there is a need to better understand the complexity of digital feedback and assessment in upper secondary school. Awareness of learning goals and criteria has also been underlined as fundamental to students’ engagement with and activation of feedback (e.g. Balloo, Evans, Hughes, Zhu, & Winstone, Citation2018; Wyatt-Smith & Adie, Citation2019), as have self-monitoring and self-regulated learning (Sadler, Citation1998; Zimmerman, Citation2002). However, less research has examined the interplay of goals and standards with other variables related to digital feedback.

Digital contexts have sometimes created a spatial barrier and increased the physical distance between teachers and students when organised with remote characteristics (Bardach et al., Citation2021; Jensen et al., Citation2021). Whilst previous research has identified the digital contextual feature of spatial separation between learning management systems (LMSs) in a higher education setting (Winstone et al., Citation2021), less is known about how digital feedback affects students’ learning processes at the upper secondary level. Further, feedback to female students has often been more concerned with ability as a fixed trait, whilst feedback to male students has been more concerned with student effort and hard work (Dweck, Citation1986; Hattie & Timperley, Citation2007). Consequently, there is a need to examine gender differences in feedback in digital contexts in upper secondary school.

The purpose of the study is to examine students’ experiences of assessment and feedback in a broad sense, as well as to study the relationship between variables related to students’ experiences of digital feedback engagement. The research question is: “What are students' experiences of assessment and feedback engagement in digital contexts?”.

1.1. Feedback in digital contexts and student learning

There has been a move towards conceptualising feedback as a process (Carless & Winstone, Citation2020), which has highlighted the relevance of students’ active engagement in feedback processes (Gamlem & Smith, Citation2013; van der Kleij, Adie, & Cumming, Citation2019). The present article adopts a social constructivist view of feedback as a process and dialogue in which the primary focus is the student role (Carless & Winstone, Citation2020; van der Kleij et al., Citation2019). The emergent concept of students’ feedback literacy, which involves students seeking, generating, and using feedback, highlights the joint responsibility-sharing in feedback processes (Carless & Boud, Citation2018; Carless & Winstone, Citation2020). Recent research on feedback literacy has stressed the role of students’ feedback histories in shaping engagement with feedback in new contexts (Malecka, Boud, Tai, & Ajjawi, Citation2022), and the need to shift the conceptualisation of feedback literacy from universal to context specific (Rovagnati, Pitt, & Winstone, Citation2022).

Feedback in digital contexts has been notoriously difficult to define due to differing and sometimes contradictory conceptualisations (Jensen et al., Citation2021). Digital feedback can be conceptualised within a process of subsequent feedback loops in which digital technology is used at some point in the process (Whittle & Campbell, Citation2019). Further, an important conceptual characteristic related to digital feedback relates to feedback coming from teachers or artificial intelligence (Hwang, Xie, Wah, & Gašević, Citation2020). The present study focuses on teachers’ facilitation and provision of digital feedback to support student learning in teacher-student interactions without artificial intelligence.

Digital feedback can be framed in a theoretical framework of self-regulated learning (Zimmerman, Citation2002). Students’ agency in feedback processes lies at the heart of a self-regulated learning perspective on feedback engagement (Andrade & Brookhart, Citation2020; Bandura, Citation1986; Gamlem, Kvinge, Smith, & Engelsen, Citation2019; Vattøy, Gamlem, & Rogne, Citation2021). In cases where digital contexts increase the distance between human beings, self-regulatory mechanisms within the learner become even more important, along with alternative co-regulators, such as artificial intelligence or peer feedback. Due to self-regulatory mechanisms, students accept or reject feedback as a form of assessment compliance or resistance (Gamlem & Smith, Citation2013; Harris, Brown, & Dargusch, Citation2018). Consequently, the most detailed and personalised feedback fails when students are not able to actively receive, digest, and act upon feedback information (Carless & Winstone, Citation2020; Sadler, Citation2010).

1.2. Assessment in upper secondary school in Norway

Norway has been unique in its explicit focus on statutorily grounding Assessment for Learning (AfL) in primary and secondary education (Hopfenbeck, Flórez Petour, & Tolo, Citation2015). However, the role of final grading has shifted away from a formative tradition, owing to a decline in process orientation whereby teachers’ grading practices to a lesser extent encompass students’ effort, participation, and involvement (Prøitz, Citation2013). In upper secondary school in Norway, students receive grades (1–6) in randomly assigned subjects in central, end-of-year examinations (European Commission, Citation2021). The grade 6 signals outstanding competence in the subject, whereas the grade 2 is the lowest passing grade. In addition, students are awarded a final grade for their overall achievement in all subjects by their subject teachers. As such, subject teachers have considerable responsibility and influence over student grades (Norwegian Directorate for Education and Training, Citation2019; Prøitz, Citation2013). Thus, a potential conflict arises from national regulations which require assessments to be documented for accountability purposes, whilst at the same time promoting learning. As such, two contesting assessment paradigms have been identified in Norway – one explicit paradigm of AfL and one hidden paradigm of increasing testing and accountability (Birenbaum et al., Citation2015).

The presence of two competing paradigms is particularly important in upper secondary school since subject teachers must give a final grade, and students thus receive grades in high-stakes assessments for certification and entry to higher education (European Commission, Citation2021). With the explicit focus on AfL in Norway, the education system has influenced teachers’ feedback practices and students’ feedback engagement through the practice of assessing students’ performance according to predefined goals and standards (Hopfenbeck et al., Citation2015; Prøitz, Citation2013).

2. Method

The present study was conducted in an upper secondary school situated in a suburb with 507 students. The school offered pre-study programmes for entry to higher education, and class sizes averaged 24 students. The upper secondary school specialised in academic subjects that prepare students for university, with assessments for certification and diplomas at the end of the programme. The school used two LMSs for grades and digital feedback in the present study: “LMS 1” (primarily for accountability purposes) and “LMS 2” (primarily for learning purposes). The use of two LMSs was a practice imposed by the district county for documentation purposes.

A mixed methods case study was performed to examine the unique context of a purposive sample of students from the upper secondary school. A case study design is useful when the purpose is to understand a contemporary phenomenon within its real-life context (Yin, Citation2018). An important characteristic of digital contexts for this study is that all students in Norwegian upper secondary schools have access to a personal computer in classroom teaching. The school was selected as a case based on the following criteria: a) All students in the school had access to a personal computer during teaching hours; b) Student grades at the school corresponded to a national mean score.

Since a recommendation for case studies is to use multiple sources of evidence (Yin, Citation2018), an explanatory sequential mixed-methods research design was employed so that the qualitative data could expand upon the initial quantitative results (Teddlie & Tashakkori, Citation2008). The study consisted of an initial quantitative phase focused on student survey responses, followed by a qualitative phase with individual student interviews. The interview guide questions corresponded with the survey items, which allowed for triangulation of the data and findings. Multiple regression analyses and path analyses were carried out to examine how the variables predicted or mediated digital feedback engagement.

The study was approved by the Norwegian Centre for Research Data. Students received information about the study and signed written consent forms prior to the study. Students were informed of their rights to confidentiality, anonymity, and withdrawal at any time without any consequences.

2.1. The quantitative phase

2.1.1. Quantitative samples and procedure

The survey sample consisted of 435 students (Mage = 16.90; SD = 1.13; range: 15–24) in one upper secondary school in Norway. The response rate for the survey was 86%. The survey was administered as a paper-and-pencil questionnaire in the middle of the autumn term to make sure that students and teachers had been engaged in a variety of assessment practices. All classes participated, comprising seven classes from each year level (n = 21 classes) with a total of 162 first-, 142 second-, and 131 final-year students. A total of 430 students reported being enrolled in Specialisation in General Studies (98.8%), whereas five students were enrolled in the International Baccalaureate Programme (1.1%). The students reported a generally high mean score for grades at the most recent Norwegian examination (Mgrade = 4.78; SD = 1.06; range: 2–6) and English as a Foreign Language examination (Mgrade = 4.82; SD = 1.07; range: 2–6). The gender distribution was 174 male (40%) and 261 female (60%) students. Of the students, 88.5% reported that they had been born in Norway, whilst 87.2% reported having lived in Norway all their life.

2.1.2. Quantitative measure

An adapted version of the AEQ version 3.3 was used to measure students’ experiences of assessment and feedback, rated on a five-point scale (1 = strongly disagree, 2 = disagree, 3 = unsure, 4 = agree, and 5 = strongly agree). The AEQ measures various conditions of students’ experiences with assessment and learning in a programme of study (Gibbs & Dunbar‐Goddet, Citation2007). Validated items for the adapted Norwegian version of the AEQ (i.e. N-AEQ) were used as a basis for the adaptations to the AEQ 3.3 (Vattøy et al., Citation2021). The selected AEQ 3.3 questionnaire items comprised 18 items across six subscales – Use of Digital Feedback; Quantity and Quality of Digital Feedback; Appropriate Assessment; Clear Goals and Standards; Deep Approach; and Learning from the Examination – with one overall satisfaction item, Satisfaction with Digital Teaching. Three of the scales were adapted to fit the focus on digital feedback in the present study: a) Quantity and Quality of Digital Feedback; b) Use of Digital Feedback; and c) Satisfaction with Digital Teaching. Additionally, the Digital Feedback Engagement Scale, consisting of five items, was adapted from items examining responsive pedagogy (Gamlem et al., Citation2019). One additional subscale, Digital Resources in Teaching, was added (Daus, Aamodt, & Tømte, Citation2019).
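
To illustrate how questionnaire data of this kind could be scored, the sketch below computes one mean score per subscale from the five-point Likert responses. It is a minimal illustration in Python using pandas; the item names (e.g. "udf_1") and the grouping shown here are hypothetical stand-ins for the actual items listed in Appendix A.

```python
# Minimal sketch: scoring AEQ-style subscales as item means (1-5 Likert scale).
# Item/column names are hypothetical; the real items follow Appendix A.
import pandas as pd

SUBSCALES = {
    "use_of_digital_feedback": ["udf_1", "udf_2", "udf_3"],
    "qq_digital_feedback": ["qq_1", "qq_2", "qq_3"],
    "digital_feedback_engagement": ["dfe_1", "dfe_2", "dfe_3", "dfe_4", "dfe_5"],
}

def score_subscales(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one mean score per subscale per student (values stay on the 1-5 scale)."""
    return pd.DataFrame(
        {name: responses[items].mean(axis=1) for name, items in SUBSCALES.items()}
    )
```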

2.1.3. Quantitative analysis

The quantitative data analyses consisted of descriptive analyses, confirmatory factor analyses, multiple regression analyses, and structural equation modelling (SEM) through path analyses. SPSS was used to carry out the descriptive and confirmatory factor analyses, whilst SPSS Amos was used to perform the path analyses. Skewness and kurtosis values were examined against an acceptable range of ±1.96 (Field, Citation2009). For the student survey data, confirmatory factor analyses confirmed the validity of the subscales. In terms of inter-item reliability, Cronbach’s alpha (α) was calculated for each scale (see Appendix A). The estimated models’ goodness of fit was evaluated using two absolute goodness-of-fit indices: the chi-square test (χ2) and root mean square error of approximation (RMSEA). In addition, two comparative goodness-of-fit indices were used to evaluate model fit: the comparative fit index (CFI) and the Tucker-Lewis Index (TLI).
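
As a rough illustration of two of these checks, the sketch below computes Cronbach’s alpha for a set of item columns and screens a scale score against the ±1.96 bound for skewness and kurtosis. The authors worked in SPSS and SPSS Amos; this pandas/scipy version only reproduces the calculations, the column names are hypothetical, and it applies the ±1.96 bound directly to the skewness and kurtosis statistics (the text does not specify whether the bound was applied to the raw or standardised values).

```python
# Illustrative reliability and normality checks (not the authors' SPSS syntax).
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the item columns of one scale (rows = respondents)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def normality_screen(scale_scores: pd.Series, bound: float = 1.96) -> dict:
    """Skewness and (excess) kurtosis of a scale score, flagged against +/- bound."""
    values = scale_scores.dropna()
    skew = stats.skew(values)
    kurt = stats.kurtosis(values)
    return {"skewness": skew, "kurtosis": kurt,
            "within_bound": abs(skew) <= bound and abs(kurt) <= bound}
```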

The number of missing values in the survey was extremely small (students: M across all items = 1%). To deal with the missing data, maximum likelihood estimation was used to estimate the parameters of the measurement models in the path analyses, which is an optimal way of dealing with missing data (Allison, Citation2003; Little & Rubin, Citation2019).

2.2. The qualitative phase

2.2.1. Qualitative samples

For the 16 student interviews (Mage = 17.44; SD = .96; range: 16–19), the gender distribution was 50% female students, and the participants were first- (n = 4), second- (n = 8), and third-year students (n = 4) from the group of 435 students. A contact person at the school was asked to invite at least four students from each of the three year levels. The students were invited based on levels of performance, subjects, and gender. The chosen participants therefore reflected high-, mid-, and low-performing students who participated in a variety of subjects. An aim was to achieve an even gender balance, as shown in the distribution above.

2.2.2. Qualitative measures and procedure

A semi-structured interview guide, The Digital Feedback and Assessment Experience Interview Guide, was used for the individual student interviews (see Appendix B). The interview guide is an adapted version of the Assessment Experience Interview Guide (Gibbs & Dunbar‐Goddet, Citation2007; Vattøy et al., Citation2021) and consisted of seven overarching themes across fifteen questions. As such, the interview guide was designed and adapted to expand on the survey results, with overarching themes mirroring the subscales used in the AEQ. In addition, the interview guide drew on literature on feedback barriers (Winstone, Nash, Parker, & Rowntree, Citation2017) and maladaptive assessment agency (Harris et al., Citation2018). The interviews lasted on average 22.32 minutes (SD = 8.25; range: 11–49). The student interviews were conducted individually and audio-recorded before they were transcribed. Follow-up questions based on the survey were asked to support students in elaborating upon their experiences.

2.2.3. Qualitative data analyses

Verbatim transcriptions were performed to capture every word of the audio recordings in text. The transcriptions were re-read carefully by the interviewers while listening to the audio recordings to check for accuracy and clarity. Thematic analyses were carried out to analyse all interview data as a whole and in relation to the research question (Braun & Clarke, Citation2006). Subthemes arose inductively through vertical and horizontal analyses using matrices. First, subthemes for each interview were extracted through careful condensation and vertical analyses. Second, subthemes were analysed horizontally for cross-comparisons. Finally, overarching themes were identified after gathering, condensing, and cross-comparing the subthemes. To ensure a common understanding of the qualitative data, the researchers in the project discussed the data and the coding.

3. Results

3.1. Descriptive statistics

The descriptive statistics for all subscales with items are presented in Appendix A. The mean scores for Digital Feedback Engagement indicated that the students to some extent agreed that digital feedback helped them learn better and improve in their learning work. However, the descriptive statistics for the items in Digital Feedback Engagement also showed that students were divided as to the usefulness of teachers’ digital feedback, which suggested variation in the student group. The same tendencies applied to students’ experiences of Use of Digital Feedback. For Quantity and Quality of Digital Feedback, the mean scores tended to fall towards the lower end of usefulness. These tendencies suggested both that digital feedback sometimes came late and that some students were not able to understand the digital feedback from teachers.

Pearson’s r product-moment correlations for the student survey data are presented in Table 1. The strongest significant correlations were between: a) Digital Feedback Engagement and Use of Digital Feedback (r = .50, p < .01); b) Digital Feedback Engagement and Clear Goals and Standards (r = .43, p < .01); and c) Digital Feedback Engagement and Satisfaction with Digital Teaching (r = .43, p < .01).

Table 1. Pearson’s r correlations between subscales.
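
The correlations in Table 1 can, in principle, be reproduced from the subscale scores. The sketch below is a minimal pandas/scipy illustration, assuming a data frame with one column per subscale mean score (the column names are hypothetical, not the authors’ variable names).

```python
# Pairwise Pearson correlations between subscale scores (cf. Table 1).
import pandas as pd
from scipy import stats

def correlation_table(scores: pd.DataFrame) -> pd.DataFrame:
    """Pearson r for every pair of subscale columns (pairwise-complete cases)."""
    return scores.corr(method="pearson")

def correlation_with_p(scores: pd.DataFrame, x: str, y: str):
    """r and p-value for a single pair, e.g. engagement vs. use of digital feedback."""
    pair = scores[[x, y]].dropna()
    return stats.pearsonr(pair[x], pair[y])
```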

3.2. Multiple regression analysis

A multiple linear regression was calculated to predict Digital Feedback Engagement based on seven variables for all students (Satisfaction with Digital Teaching was excluded because it consisted of only one item; see Appendix C). A significant regression equation was found: F(7, 389) = 42.68, p < .001, with an R2 of .43.

A second multiple linear regression was calculated for male students using the same seven independent variables, and a significant regression equation was found: F(7, 150) = 23.50, p < .001, with an R2 of .52. In the model for male students, however, Appropriate Assessment, Clear Goals and Standards, and Digital Resources in Teaching were not significant predictors.

A third multiple linear regression was calculated for female students using the same seven independent variables, and a significant regression equation was found: F(7, 231) = 22.78, p < .001, with an R2 of .41. In the model for female students, however, Quantity and Quality of Digital Feedback, Deep Approach, and Learning from the Examination were not significant predictors.

The results from the three multiple regression analyses indicated differences in the significant variables for students’ experiences of Digital Feedback Engagement. The first model showed that Use of Digital Feedback (β = .29, p < .001) was the strongest predictor of Digital Feedback Engagement for all students. This aligned with the model for male students, in which Use of Digital Feedback was also the strongest predictor (β = .40, p < .001). Deep Approach (β = .20, p < .01) and Learning from the Examination (β = .23, p < .001) were also important predictors for male students. For female students, Clear Goals and Standards was the most important predictor (β = .24, p < .001). The results from the multiple regression analyses provided a basis for the subsequent path analyses.
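
A hedged sketch of these analyses is given below: Digital Feedback Engagement is regressed on the seven predictor subscales for the full sample and then for each gender separately. The authors used SPSS; this statsmodels version standardises all variables so that the coefficients can be read as beta weights, and the column names (including "gender") are hypothetical.

```python
# Illustrative OLS regressions of Digital Feedback Engagement on seven predictors.
import pandas as pd
import statsmodels.api as sm

PREDICTORS = ["use_digital_feedback", "qq_digital_feedback", "appropriate_assessment",
              "clear_goals_standards", "deep_approach", "learning_from_exam",
              "digital_resources"]

def fit_engagement_model(data: pd.DataFrame):
    """Fit an OLS model on z-standardised variables (coefficients ~ standardised betas)."""
    d = data[PREDICTORS + ["digital_feedback_engagement"]].dropna()
    z = (d - d.mean()) / d.std(ddof=1)
    X = sm.add_constant(z[PREDICTORS])
    return sm.OLS(z["digital_feedback_engagement"], X).fit()

# Usage (hypothetical data frame `survey` with a "gender" column):
# model_all = fit_engagement_model(survey)
# model_male = fit_engagement_model(survey[survey["gender"] == "male"])
# model_female = fit_engagement_model(survey[survey["gender"] == "female"])
# print(model_all.fvalue, model_all.rsquared, model_all.params)
```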

3.3. Path analysis

Several path analyses were performed to examine the relationships between the eight latent variables examined in the multiple regression analyses, with Digital Feedback Engagement as the dependent variable. The final model was calculated by performing a path analysis with four predictor variables (i.e. Clear Goals and Standards, Appropriate Assessment, Deep Approach, and Digital Resources in Teaching), mediated by three variables (i.e. Use of Digital Feedback, Quantity and Quality of Digital Feedback, and Learning from the Examination), with Digital Feedback Engagement as the dependent variable. The results showed a good fit with the empirical data: χ2 (12) = 25.56, p = .01, CFI = .98, TLI = .93, and RMSEA = .05. Model fit was improved by correlating the error terms of Use of Digital Feedback and Digital Feedback Engagement. Figure 1 shows the relationships between the eight latent variables. The strongest standardised beta coefficient was the mediating effect of Use of Digital Feedback on Digital Feedback Engagement (β = .78, p < .001).

Figure 1. Path analysis of variables predicting and mediating digital feedback engagement. Standardised beta coefficients are provided. ***p < .001
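
The path model itself was estimated in SPSS Amos. As an open-source analogue, roughly the same structure could be specified in the Python package semopy using its lavaan-style model syntax. The sketch below encodes only the paths explicitly reported in the text (including the correlated error terms of Use of Digital Feedback and Digital Feedback Engagement); it is not the authors’ exact specification, and the variable names are hypothetical.

```python
# Illustrative path-model specification in semopy (substitute for SPSS Amos).
import semopy

# Mediators regressed on the four predictors, the outcome regressed on the
# mediators, and a residual covariance for the reported correlated error terms.
MODEL_DESC = """
use_digital_feedback ~ deep_approach + digital_resources + qq_digital_feedback
qq_digital_feedback ~ appropriate_assessment
learning_from_exam ~ clear_goals_standards
digital_feedback_engagement ~ use_digital_feedback + learning_from_exam
use_digital_feedback ~~ digital_feedback_engagement
"""

def fit_path_model(data):
    """Fit the path model with maximum likelihood; return estimates and fit indices."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)  # maximum likelihood estimation by default
    return model.inspect(), semopy.calc_stats(model)  # chi-square, CFI, TLI, RMSEA, etc.
```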

The final model suggests that students’ experiences of digital feedback engagement improve when they have a clear understanding of goals and standards in their learning processes and find themselves in an assessment culture that focuses on higher-order thinking (rather than rote learning). The model further supports the value of students understanding deeply the meaning of what they learn and of purposeful use of digital resources in teaching. Clear Goals and Standards has an important predictive effect on Learning from the Examination (β = .33, p < .001), whilst Appropriate Assessment is particularly relevant for Quantity and Quality of Digital Feedback (β = .29, p < .001). Both Deep Approach (β = .29, p < .001) and Digital Resources in Teaching (β = .27, p < .001) are important predictors of Use of Digital Feedback. The final model shows the important double mediating effect of Quantity and Quality of Digital Feedback on Use of Digital Feedback (β = .26, p < .001). Whereas most paths go through Use of Digital Feedback in the final model, there is one single path from Clear Goals and Standards via Learning from the Examination to Digital Feedback Engagement, which suggests that students need a thorough understanding of goals and standards to benefit from their examination work when engaging with digital feedback.
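
For a concrete reading of the mediated paths, a standardised indirect effect can be approximated as the product of the coefficients along the path. Using the coefficients reported above, and assuming that Deep Approach reaches Digital Feedback Engagement only through Use of Digital Feedback (as the model description suggests), the implied indirect effect would be roughly:

\[
\beta_{\text{indirect}} = \beta_{\text{Deep Approach} \to \text{Use of Digital Feedback}} \times \beta_{\text{Use of Digital Feedback} \to \text{Engagement}} = .29 \times .78 \approx .23
\]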

3.4. Results from the interviews

The results from the thematic analyses centred around the following over-arching themes: Grades and digital feedback; dialogic feedback interactions; and a performance-oriented assessment culture.

3.4.1. Grades and digital feedback

A central issue with grades and digital feedback provided simultaneously was that the grades drew too much of the students’ attention. Further, students did not feel that they could improve their grades based on the digital feedback. When students had to correct their work with no opportunity to improve the grade, and without necessarily understanding how to improve, their self-efficacy suffered. Students preferred the feedback practice of teachers who provided feedback and the opportunity for dialogue before giving the grade:

When we receive feedback before we receive the grade, we have the opportunity of asking follow-up questions. This is useful because then the teacher can see whether there is something that needs to be adjusted or discussed more with the student. (Student 16, male)

Yet, students mentioned that it was difficult for teachers to postpone the awarding of grades in a grade-focused culture when there was no clear purpose for doing so. Students reported that an assessment practice used by some teachers was to withhold grades until the end of a lesson. In such cases, students felt unable to concentrate and work because of emotions, such as excitement, nervousness, or anticipation, before receiving the grade.

Several students emphasised that the formative potential of feedback comments was reduced when grades and feedback were posted in separate LMSs. The spatial separation made the feedback more difficult to access with grades as the centre of students’ attention. Students received grades in one LMS and feedback, if any, in another. In the quote below, LMS 1 refers to the LMS used for documentation, whereas LMS 2 refers to the LMS used for learning activities.

It varies between teachers. Some provide a short feedback comment in [LMS 1] about the test or assignment, while other teachers provide [feedback] in [LMS 2]. There is not one single place where I receive it, so it is quite difficult to keep track of the process when they do not notify us. (Student 13, female)

When students received feedback in the LMS used for grades, it was predominantly comments meant for justifying the grades: “When I receive feedback on a test, it’s very much of a ‘use-and-discard mentality’. You log in and check the grade and don’t care too much after that” (Student 2, male).

3.4.2. Dialogic feedback interactions

One of the characteristics of students’ engagement with digital feedback was the perceived lack of dialogic feedback interactions. The lack of opportunity to have dialogue with the teacher created a barrier for students and was detrimental to the emotional connection between students and the teacher: “When I talk to my teacher, I feel that the relationship improves and that the teacher sees my work and cares about my improvement” (Student 8, female). Consequently, students often experienced being passive recipients of feedback rather than engaging in feedback dialogues:

The biggest problem with digital feedback and the reason why I am not a fan of it is that you are unable to have a dialogue. Of course, you can send a text to the teacher about the feedback you have received, but it is not as simple as it would be if you could do it face-to-face then and there. (Student 14, male)

Students emphasised that dialogic interaction varied among teachers with some teachers engaging in feedback dialogues with their students. When students had the opportunity of engaging in dialogic feedback interactions with teachers, they felt more comfortable asking follow-up questions and seeking feedback.

Another issue concerned teachers’ awareness-raising and re-orientation of assessment criteria in dialogic feedback interactions. Students typically reported that they were made aware of goals and criteria at the start of a new topic or period. However, students tended to struggle to figure out the correspondence between the criteria and what was really expected of them in assessment situations.

We often receive a criteria sheet before a test related to what is being weighted and assessed. But often the teachers put weight on what they say in lessons. And then it is not what it says on the criteria sheet that is important, but rather what the teacher feels is most important in class. (Student 9, male)

The interconnection between assessment criteria and feedback dialogues pointed to a potential for goals and criteria to be better integrated in teachers’ dialogic feedback interactions in classroom settings.

3.4.3. A performance-oriented assessment culture

Students described a performance-oriented assessment culture focused on high-stakes testing and examinations. Cramming for tests made it difficult for students to work for learning and comprehension. For many students, cramming the material meant not really understanding it:

I notice that there is quite a focus on receiving good grades, which sometimes comes at the cost of the learning aspect. At least when we get feedback with the grade, it is so easy to just focus on the grade and become disappointed if you get a bad grade. (Student 11, female)

A common consequence of a performance-oriented assessment culture for students was stress-related behaviour: “We have to constantly stay vigilant for new times and deadlines for tests, which is stressful” (Student 8, female). A high frequency of tests reduced the perceived timeliness of digital feedback, as students had less use for feedback on an assessment when they were already preparing for a new test. Some students were content with the grading culture, emphasising that they were ambitious and hardworking. However, the frequency of testing situations and the constant need to perform often emphasised the relevance of short-term performance. When some students worked hard without improving their grades, they experienced negative emotions, such as frustration and resignation.

Further, students pointed to a double communication arising from the conflicting paradigms of grades and learning, with a heavy emphasis on grades and short-term performance at exams:

I become very stressed when there is an oral situation, and teachers say: ‘Life is more than school and grades’. They say it, but I know that they do not mean it. They are good at adapting to each student, but if you belong to a skilled class, it is so easy to ignore the low-achieving students. And there is so much focus on exams, like: ‘This is important for the exam’. (Student 13, female)

Students experienced that teachers emphasised learning in their rhetoric, but in practice often stressed teaching points that were relevant for exams. In this manner, the pursuit of grades threatened to crowd out attention to students’ learning processes and emotional well-being.

4. Discussion

The aim of this study was to examine the relationships between variables related to students’ experiences of digital feedback engagement and assessment in upper secondary school. The most prominent result was the influence of Use of Digital Feedback in predicting Digital Feedback Engagement, which showed that the perceived usefulness of digital feedback was critical for students’ engagement with feedback and learning. The descriptive statistics indicated that students to some extent agreed that they found teachers’ digital feedback useful and relevant for their digital feedback engagement. However, the variation in the student responses indicated that students were divided in their perceptions, with some students having less use for digital feedback than others. The relevance of use of digital feedback was strengthened by the student interviews, in which digital feedback was perceived as more useful when it served a formative purpose. Although a formative goal for students’ use of digital feedback has been suggested by previous research (e.g. Batten, Jessop, & Birch, Citation2019; Oinas et al., Citation2017; Schrader & Grassinger, Citation2021; Winstone et al., Citation2021), the present study found that students’ experiences of digital feedback tended to reflect a practice in which formative assessment purposes were undermined. A lack of personal and relational feedback was evident when teachers copied and pasted the same generic information to students, thereby reducing students’ motivation to engage with feedback. As such, the results suggested that students’ experiences of assessment were dependent on teachers’ digital feedback practices.

This study identified gender differences regarding students’ experiences of digital feedback engagement. For male students, Use of Digital Feedback was the most important predictor of their experiences of Digital Feedback Engagement. Some of the male students elaborated on this result in the interviews, underlining that digital feedback was less useful when they had moved on to new tasks. Two other predictors were important for male students: Deep Approach and Learning from the Examination. First, the relevance of adopting a deep approach to learning for digital feedback engagement was more central to male students. Some of the male interview participants stressed forgetfulness and laziness as barriers to using digital feedback, which pointed to a lack of self-regulation. This suggests an extra need for supporting male students’ self-regulated learning, which could be addressed by eliciting male students’ internal feedback processes through metacognitive dialogues (Andrade & Brookhart, Citation2020; Sadler, Citation1998; Zimmerman, Citation2002). Second, the result that male students experienced Learning from the Examination as important to their Digital Feedback Engagement might signify that male students were more competitive and prone to engage with digital feedback when they found it useful for examination preparations. For female students, Clear Goals and Standards was the most important predictor of experiences of Digital Feedback Engagement. Some of the female interview participants elaborated on this result by emphasising the usefulness of the increased clarity of explicit assessment criteria sheets in terms of where to invest energy and effort. Focusing on clear goals and standards for female students might be a way to counter the tendency for feedback to female students to be more concerned with ability as a fixed trait (Dweck, Citation1986; Hattie & Timperley, Citation2007). Male interview students valued criteria sheets to a lesser extent, highlighting, for example, that teachers’ subjective criteria were equally important.

The thematic analyses of the student interviews indicated that grades tended to dominate the focus of attention. Students often neglected digital feedback provided by teachers when grades were awarded simultaneously, consistent with previous studies (e.g. Mäkipää & Hildén, Citation2021; Winstone et al. Citation2021). There were several reasons why digital feedback was disregarded. First, when grades and digital feedback were provided in the same LMS, the digital feedback was more of a justification of the grade, rather than formative feedback, which coincided with results from Mäkipää and Hildén (Citation2021) who uncovered the unanticipated finding that upper secondary school students did not perceive feedback to be an essential part of their teachers’ assessment practices. In the present study, the students often received grades only. Second, this study identified the assessment practice of providing grades and digital feedback in separate LMSs, which accorded with findings from Winstone et al. (Citation2021) who identified a spatial separation in which students had to open a separate file to access the feedback comment. For the present study, a similar spatial separation was often evident when students received grades and feedback comments in different LMSs. Whereas previous research has highlighted solitude due to physical distance in students’ digital learning (e.g. Bardach et al., Citation2021; Jensen et al., Citation2021), the present study found that this solitude was embedded in students’ and teachers’ uses of LMSs. This was particularly prominent when teachers provided grades or feedback in the LMS without notifying students. A review by Jonsson (Citation2013) found that grades were problematic because they led to students’ feedback compliance without questioning teachers’ comments.

A central characteristic of feedback in digital contexts in upper secondary school was students’ wish for more opportunity for dialogic feedback interactions, which accorded with studies in which students have appreciated dialogue with their teacher but rarely experienced it (Jónsson et al., Citation2018; Mäkipää & Hildén, Citation2021). Digital assessments tended to create a spatial barrier and increased the perceived physical distance between teachers and students in the present study, which has also been identified in previous research (e.g. Bardach et al., Citation2021; Yuan & Kim, Citation2015). In this study, students felt more able to ask follow-up questions and seek feedback from the teacher when they were closer to the teacher. As such, perceived closeness enhanced the emotional connection and relationship with the teacher. This result agreed with studies that have stressed the importance of feedback that emotionally supports students (Harley et al., Citation2019; Oinas et al., Citation2017; Schrader & Grassinger, Citation2021; Silvola et al., Citation2021).

This study found that students’ experiences of teachers’ assessment and feedback in digital contexts were embedded in an assessment culture focused on testing and examinations, which suggested potential washback effects of assessment. The findings regarding high-frequency testing indicated how testing might influence teaching and wash back on learning. However, high-frequency testing might be understood within a broader notion of washback effects of assessment as a socially situated and negotiated construct within intricate webs of agents and contexts (Tsang & Isaacs, Citation2022). The perceived mismatch between assessment criteria and what was weighted in classroom teaching was another potential washback effect in the present study. Students reported stress-related behaviour when checking the LMSs for new tests and updates. Sometimes deadlines and test dates would be changed at short notice, causing stress and negative emotions. Although some of the students were extrinsically motivated by the performance culture, other students experienced that testing outweighed learning. This accorded with previous studies that have found that summative assessment cultures tend to dominate formative feedback intentions (e.g. Hopfenbeck et al., Citation2015; Jónsson et al., Citation2018; Mäkipää & Hildén, Citation2021). However, the present study further indicated that such assessment cultures might cause digital feedback barriers and maladaptive feedback agency in students’ learning.

The final model of the path analyses indicated important conditions for assessment to support learning, with emphasis on students’ experiences of engagement with digital feedback. The path analyses amplified the results from the multiple regression analyses by showing the important mediating influence of Use of Digital Feedback on Digital Feedback Engagement. Clear Goals and Standards, Appropriate Assessment, Deep Approach, and Digital Resources in Teaching were important predictors of Digital Feedback Engagement, whilst Use of Digital Feedback, Quantity and Quality of Digital Feedback, and Learning from the Examination were important mediators. As such, the path model expanded on previously identified interconnections between feedback quality, clarity of purpose and criteria, and a deep approach to learning (Gibbs, Citation2019). Therefore, students’ engagement with digital feedback should not be considered an isolated event, but understood in light of the different learning conditions that apply to teachers’ digital feedback practices. This implies facilitating feedback interactions related to assessment criteria during learning activities. The path model also supported activating students in a deep approach to learning through digital feedback.

5. Limitations

Limitations should be addressed regarding the interpretations of the present study. The sample of this mixed-methods case study comes from one upper secondary school. As a single case, the study should not be generalised to larger populations but understood within its context (Yin, Citation2018). However, the high response rate of the student survey and the selection of student interviewees from all year levels contribute to a detailed and deep understanding of what assessment looks like in a specific upper secondary school context.

A limitation of low Cronbach’s alpha values for some of the subscales in the AEQ 3.3 should also be addressed. Our study found that Quantity and Quality of Feedback had a relatively low Cronbach’s alpha value. This coincided with the criticism of Batten et al. (Citation2019) that Quantity and Quality of Feedback causes confusion with its double agenda of addressing both quantity and quality in one single subscale. The Quantity and Quality of Feedback scale should therefore be interpreted with caution.

6. Implications

There are several important practical implications related to the results of the present study. First, the perceived usefulness of digital feedback is crucial for students’ engagement with digital feedback. Teachers should provide feedback in a context in which students perceive the feedback as useful, as this experience supports students in adopting an active role in their digital learning. This is in alignment with a review by Jonsson (Citation2013), which also noted the recurrent conflict between the feedback students prefer and the feedback that is likely to foster productive learning. Therefore, digital feedback should be personal and formative to support students’ learning, within a feedback design that provides students with the opportunity to use the feedback they receive. Second, teachers need to be aware of gender differences in assessment and feedback. This study found that male students had a particular need to feel that they could make use of the digital feedback they received. Male students sometimes need help with their self-regulated learning, such as help with time management and study strategies. This could be something as simple as remembering to access digital feedback in LMSs. At the upper secondary level, teachers might expect more autonomy from students, but there is a potential pitfall in neglecting to support students in accessing feedback. Female students’ preference for criteria sheets and awareness of clear goals and standards is another reminder for upper secondary school teachers. Awareness of gender differences can support teachers in exercising sensitivity in learning situations in upper secondary school.

A structural condition for teachers’ digital feedback practices in this study was the use of LMSs. Students tended to check their grades and neglected engaging with feedback, which highlights an inherent accountability system in the design of the LMS and the accompanying assessment practices. The present study supports previous research that has emphasised the need for an emotionally and motivationally supportive assessment environment for students’ digital learning (e.g. Schrader & Grassinger, Citation2021; Silvola et al., Citation2021). The emotional connection between teachers and students might be reduced in digital feedback, as an emotional filter is created in digital feedback interactions when students perceive an increased physical distance. A consequence of assessment practices in LMSs is that schools appear to spend time and energy on systems in which the potential for feedback engagement for learning and emotional support is lost. Another consequence of increased perceived physical distance in students’ feedback engagement relates to negative student emotions, such as despair and frustration.

An assessment culture characterised by performance-orientation in upper secondary school might be legitimised in relation to the high-stakes nature of assessments and certification process to higher education. An element of competition might further support motivation when all students have opportunities for success. However, a highly competitive assessment culture also risks reinforcing negative emotions associated with students’ feedback engagement and assessment. For example, students who are unable to improve their results despite their very best efforts experience a loss in self-esteem and self-efficacy when failing repeatedly. As such, washback effects of assessment might perpetuate and reinforce social inequality (Tsang & Isaacs, Citation2022). A performance-oriented assessment culture might also lead to a strong emphasis on grades in terms of students’ feedback preferences, which has been identified as an impediment for using feedback productively (Jonsson, Citation2013). To counteract washback effects of assessment, the present study suggests reducing the frequency of testing to allow for more focus on students’ learning and well-being. In digital contexts, performance-orientation might put an additional strain on students’ learning processes as the physical distance risks causing alienation and solitude.

Despite national and international initiatives to support teachers’ assessment practices in a formative direction, teachers are often left to figure out for themselves how to develop professionally. More specific and systematic training programmes might be beneficial to support teachers in the complexities of providing and facilitating formative assessment. However, if regulations for accountability are imposed, they might stifle the formative potential of teachers’ digital feedback practices for students’ learning.

Acknowledgments

We are grateful to Volda University College and DeKomp (Møre and Romsdal County) for providing financial support to our research.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by Volda University College.

Notes on contributors

Kim-Daniel Vattøy

Kim-Daniel Vattøy PhD, is an Associate Professor at Volda University College, Norway. His research focuses on feedback and assessment as related to students’ learning in a variety of educational contexts. Vattøy has professional experience working in schools and teacher education.

Siv M. Gamlem

Siv M. Gamlem PhD, is a Professor at Volda University College, Norway. Her research focuses on feedback, assessment, and learning processes. Gamlem has professional experience working in schools and teacher education.

Lina Rebekka Kobberstad

Lina Rebekka Kobberstad is a Lecturer at Volda University College, Norway. Her research focuses on feedback, learning processes, and digital learning environments. Kobberstad has professional experience working in teacher education.

Wenke Mork Rogne

Wenke Mork Rogne PhD, is an Associate Professor at Volda University College, Norway. Her research focuses on literacy, learning processes, feedback, and assessment. Rogne has professional experience working in schools and teacher education.

References

  • Allison, P. D. (2003). Missing data techniques for structural equation modeling. Journal of Abnormal Psychology, 112(4), 545–557. doi:10.1037/0021-843X.112.4.545.
  • Andrade, H. L., & Brookhart, S. M. (2020). Classroom assessment as the co-regulation of learning. Assessment in Education: Principles, Policy & Practice, 27(4), 350–372. doi:10.1080/0969594X.2019.1571992.
  • Balloo, K., Evans, C., Hughes, A., Zhu, X., & Winstone, N. (2018). Transparency isn’t spoon-feeding: How a transformative approach to the use of explicit assessment criteria can support student self-regulation. Frontiers in Education, 3(69). doi:10.3389/feduc.2018.00069
  • Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall.
  • Bardach, L., Klassen, R. M., Durksen, T. L., Rushby, J. V., Bostwick, K. C. P., & Sheridan, L. (2021). The power of feedback and reflection: Testing an online scenario-based learning intervention for student teachers. Computers & Education, 104194. doi:10.1016/j.compedu.2021.104194
  • Batten, J., Jessop, T., & Birch, P. (2019). Doing what it says on the tin? A psychometric evaluation of the assessment experience questionnaire. Assessment & Evaluation in Higher Education, 44(2), 309–320. doi:10.1080/02602938.2018.1499867.
  • Bearman, M., Boud, D., & Ajjawi, R. (2020). New directions for assessment in a digital world. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining university assessment in a digital world (pp. 7–18). Cham: Springer.
  • Bergdahl, N., Nouri, J., Fors, U., & Knutsson, O. (2020). Engagement, disengagement and performance when learning with technologies in upper secondary school. Computers & Education, 149, 103783. doi:10.1016/j.compedu.2019.103783.
  • Birenbaum, M., DeLuca, C., Earl, L. M., Heritage, M., Klenowski, V., Looney, A., and Wyatt-Smith, C. (2015). International trends in the implementation of assessment for learning: Implications for policy and practice. Policy Futures in Education, 13(1), 117–140. doi:10.1177/1478210314566733.
  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74. doi:10.1080/0969595980050102.
  • Blikstad-Balas, M., Roe, A., Dalland, C. P., & Klette, K. (2022). Homeschooling in Norway during the pandemic-digital learning with unequal access to qualified help at home and unequal learning opportunities provided by the school. In F. M. Reimers (Ed.), Primary and secondary education during Covid-19: Disruptions to educational opportunity during a pandemic (pp. 177–201). Cham: Springer. doi:10.1007/978-3-030-81500-4_7
  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. doi:10.1191/1478088706qp063oa.
  • Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325. doi:10.1080/02602938.2018.1463354.
  • Carless, D., & Winstone, N. (2020). Teacher feedback literacy and its interplay with student feedback literacy. Teaching in Higher Education, 81(1), 1–14. doi:10.1080/13562517.2020.1782372.
  • Daus, S., Aamodt, P. O., & Tømte, C. E. (2019). Professional digital competence in teacher education: Examination of condition, attitudes and skills in five teacher education programmes (NIFU Report 3/2019). http://hdl.handle.net/11250/2602702
  • Dweck, C. S. (1986). Motivational processes affecting learning. American Psychologist, 41(10), 1040–1048. doi:10.1037/0003-066X.41.10.1040.
  • European Commission. (2021). 6.3 Assessment in general upper secondary education. National Education Systems: Norway, https://eacea.ec.europa.eu/national-policies/eurydice/content/norway_en
  • Field, A. P. (2009). Discovering statistics using SPSS. London: Sage.
  • Gamlem, S. M., Kvinge, L. M., Smith, K., & Engelsen, K. S. (2019). Developing teachers’ responsive pedagogy in mathematics, does it lead to short-term effects on student learning? Cogent Education, 6(1), 1676568. doi:10.1080/2331186X.2019.1676568.
  • Gamlem, S. M., & Smith, K. (2013). Student perceptions of classroom feedback. Assessment in Education: Principles, Policy & Practice, 20(2), 150–169. doi:10.1080/0969594X.2012.749212.
  • Gibbs, G. (2019). How assessment frames student learning. In C. Bryan & K. Clegg (Eds.), Innovative assessment in higher education: A handbook for academic practitioners (2nd ed., pp. 22–35). London: Routledge.
  • Gibbs, G., & Dunbar‐Goddet, H. (2007). The effects of programme assessment environments on student learning. York: Higher Education Academy.
  • Harks, B., Rakoczy, K., Hattie, J., Besser, M., & Klieme, E. (2014). The effects of feedback on achievement, interest and self-evaluation: The role of feedback’s perceived usefulness. Educational Psychology, 34(3), 269–290 doi:10.1080/01443410.2013.785384.
  • Harley, J. M., Pekrun, R., Taxer, J. L., & Gross, J. J. (2019). Emotion regulation in achievement situations: An integrated model. Educational Psychologist, 54(2), 106–126. doi:10.1080/00461520.2019.1587297.
  • Harris, L. R., Brown, G. T. L., & Dargusch, J. (2018). Not playing the game: Student assessment resistance as a form of agency. The Australian Educational Researcher, 45(1), 125–140. doi:10.1007/s13384-018-0264-0.
  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. doi:10.3102/003465430298487.
  • Hopfenbeck, T. N., Flórez Petour, M. T., & Tolo, A. (2015). Balancing tensions in educational policy reforms: Large-scale implementation of assessment for learning in Norway. Assessment in Education: Principles, Policy & Practice, 22(1), 44–60. doi:10.1080/0969594X.2014.996524.
  • Hwang, G.-J., Xie, H., Wah, B. W., & Gašević, D. (2020). Vision, challenges, roles and research issues of artificial intelligence in education. Computers and Education: Artificial Intelligence, 1, 100001. doi:10.1016/j.caeai.2020.100001.
  • Jensen, L. X., Bearman, M., & Boud, D. (2021). Understanding feedback in online learning – A critical review and metaphor analysis. Computers & Education, 173, 104271. doi:10.1016/j.compedu.2021.104271.
  • Jonsson, A. (2013). Facilitating productive use of feedback in higher education. Active Learning in Higher Education, 14(1), 63–76. doi:10.1177/1469787412467125.
  • Jónsson, Í. R., Smith, K., & Geirsdóttir, G. (2018). Shared language of feedback and assessment. Perception of teachers and students in three Icelandic secondary schools. Studies in Educational Evaluation, 56, 52–58. doi:10.1016/j.stueduc.2017.11.003.
  • Little, R. J. A., & Rubin, D. B. (2019). Statistical analysis with missing data (3rd ed.). Hoboken, NJ: Wiley.
  • Mäkipää, T., & Hildén, R. (2021). What kind of feedback is perceived as encouraging by Finnish general upper secondary school students? Education Sciences, 11(1), 12. doi:10.3390/educsci11010012.
  • Malecka, B., Boud, D., Tai, J., & Ajjawi, R. (2022). Navigating feedback practices across learning contexts: Implications for feedback literacy. Assessment & Evaluation in Higher Education, 1–15. doi:10.1080/02602938.2022.2041544
  • Norwegian Directorate for Education and Training. (2019). Knowledge base for evaluation of the examination system. Ministry of Education and Research. https://www.udir.no/tall-og-forskning/finn-forskning/rapporter/Kunnskapsgrunnlag-for-evaluering-av-eksamensordningen/
  • Oinas, S., Vainikainen, M.-P., & Hotulainen, R. (2017). Technology-enhanced feedback for pupils and parents in Finnish basic education. Computers & Education, 108, 59–70. doi:10.1016/j.compedu.2017.01.012.
  • Prøitz, T. S. (2013). Variations in grading practice – Subjects matter. Education Inquiry, 4(3), 22629. doi:10.3402/edui.v4i3.22629.
  • Rovagnati, V., Pitt, E., & Winstone, N. (2022). Feedback cultures, histories and literacies: International postgraduate students’ experiences. Assessment & Evaluation in Higher Education, 47(3), 347–359. doi:10.1080/02602938.2021.1916431.
  • Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education: Principles, Policy & Practice, 5(1), 77–84. doi:10.1080/0969595980050104.
  • Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535–550. doi:10.1080/02602930903541015.
  • Sandvik, L. V., Smith, K., Strømme, A., Svendsen, B., Aasmundstad Sommervold, O., & Aarønes Angvik, S. (2021). Students’ perceptions of assessment practices in upper secondary school during COVID-19. Teachers and Teaching, 1–14. doi:10.1080/13540602.2021.1982692
  • Schrader, C., & Grassinger, R. (2021). Tell me that I can do it better. The effect of attributional feedback from a learning technology on achievement emotions and performance and the moderating role of individual adaptive reactions to errors. Computers & Education, 161, 104028. doi:10.1016/j.compedu.2020.104028.
  • Selwyn, N. (2016). Is technology good for education? Cambridge: Polity.
  • Silvola, A., Näykki, P., Kaveri, A., & Muukkonen, H. (2021). Expectations for supporting student engagement with learning analytics: An academic path perspective. Computers & Education, 168, 104192. doi:10.1016/j.compedu.2021.104192.
  • Teddlie, C., & Tashakkori, A. (2008). Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences (1st ed.). Thousand Oaks, CA: Sage.
  • Tsang, C. L., & Isaacs, T. (2022). Hong Kong secondary students’ perspectives on selecting test difficulty level and learner washback: Effects of a graded approach to assessment. Language Testing, 39(2), 212–238. doi:10.1177/02655322211050600.
  • van der Kleij, F. M., Adie, L. E., & Cumming, J. J. (2019). A meta-review of the student role in feedback. International Journal of Educational Research, 98, 303–323. doi:10.1016/j.ijer.2019.09.005.
  • van der Kleij, F. M., Feskens, R. C. W., & Eggen, T. J. H. M. (2015). Effects of feedback in a computer-based learning environment on students’ learning outcomes: A meta-analysis. Review of Educational Research, 85(4), 475–511. doi:10.3102/0034654314564881.
  • Vattøy, K.-D., Gamlem, S. M., & Rogne, W. M. (2021). Examining students’ feedback engagement and assessment experiences: A mixed study. Studies in Higher Education, 46(11), 2325–2337. doi:10.1080/03075079.2020.1723523.
  • Whittle, D., & Campbell, M. (2019). A guide to digital feedback loops: An approach to strengthening program outcomes through data for decision making. USAID. Accessed 12 August 2022. https://www.usaid.gov/digital-development/digital-feedback-loops
  • Winstone, N., Bourne, J., Medland, E., Niculescu, I., & Rees, R. (2021). “Check the grade, log out”: Students’ engagement with feedback in learning management systems. Assessment & Evaluation in Higher Education, 46(4), 631–643. doi:10.1080/02602938.2020.1787331.
  • Winstone, N., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting learners’ agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist, 52(1), 17–37. doi:10.1080/00461520.2016.1207538.
  • Wyatt-Smith, C., & Adie, L. (2019). The development of students’ evaluative expertise: Enabling conditions for integrating criteria into pedagogic practice. Journal of Curriculum Studies, 1–21. doi:10.1080/00220272.2019.1624831
  • Yin, R. K. (2018). Case study research and applications: Design and methods (6th ed.). Thousand Oaks, CA: Sage.
  • Yuan, J., & Kim, C. (2015). Effective feedback design using free technologies. Journal of Educational Computing Research, 52(3), 408–434. doi:10.1177/0735633115571929.
  • Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64–70. doi:10.1207/s15430421tip4102_2.

Appendix A

Table A1. Descriptive statistics, factor loadings, and reliability.

Appendix B

Table B1. The digital feedback and assessment experience interview guide.

Appendix C

Table C1. Multiple linear regression with digital feedback engagement as the dependent variable.