Research Article

Students’ perceptions of the assessment programme’s impact on self-regulated learning: a multiple-case study

Lonneke H. Schellekens, Marieke F. Van der Schaaf, Liesbeth K. J. Baartman, Cees P. M. Van der Vleuten, Wim D. J. Kremer & Harold G. J. Bok

Abstract

It is assumed that a programmatic approach to assessment supports students’ self-regulated learning (SRL). This study investigated students’ perceptions of this assumed support. Using a multiple-case study design, this study examined students’ perceptions in two distinct study programmes. In each case, first-year students were enrolled in an assessment programme that was designed in accordance with the principles of programmatic assessment, which integrate both assessment purposes: assessment for learning (AfL) and assessment of learning (AoL). The second-year students in each case were enrolled in an assessment programme with a traditional AoL assessment approach. The findings suggest that, in both cases, first-year students perceived their assessment programme as more positive for their learning than second-year students did. Through a cross-case analysis of students’ perceptions of the two assessment programmes, three themes were identified that are essential to support students’ SRL: 1) assessment design; 2) assessment as a dialogue; and 3) assessment as an information source. Key aspects of the themes that support or hinder SRL are discussed. The findings of this study highlight the importance of programme-level assessment design to support students in their SRL.

Introduction

The process of self-regulation has been found to be a central feature of learning (Panadero et al. 2018). Self-regulated learning (SRL) emphasises the role of students as active participants in their learning and highlights their ability to regulate their motivation, cognition, and behaviour. To be effective self-regulated learners, students must be aware of their abilities, monitor their progress, and decide on their subsequent learning activities (Zimmerman 2002). SRL can be affected by the design and administration of the assessment (Gibbs and Simpson 2005; Baird et al. 2017) and can be stimulated when students actively participate in the assessment (Boud and Molloy 2013). Both classroom assessments (i.e. single assessment tasks within a module) and the assessment programme as a whole can impact students’ SRL by influencing the nature of students’ regulation activities (Cilliers et al. 2012; Andrade and Brookhart 2016). Much research on assessment and SRL has been focused on classroom assessment (Andrade and Brookhart 2016; Brandmo et al. 2020; Chen and Bonner 2020). Empirical research on how the assessment programme can support students’ SRL is scarce. By exploring the impact of the assessment programme on students’ ability to self-regulate their learning, the current study seeks to contribute to our understanding of the relationship between assessment and learning on a programme level.

In a study on classroom assessment and SRL, Hawe and Dixon (2017) concluded that, while various assessment tasks contributed to students’ SRL, the full impact of assessment on SRL was realised in the cumulative and recursive effect these tasks had on students’ learning. This can be realised by designing assessment and feedback practices at the programme level, as this allows a comprehensive view of students’ learning trajectories and how best to support them (Boud and Falchikov 2006; Jessop and Tomas 2017). Several scholars highlight the importance of achieving a programme-level perspective on assessment to optimise the supportive role of assessment in student learning (Jessop 2019). Planning assessment tasks at a programme level can help to ensure that students experience a cohesive programme of learning that builds towards programme-level outcomes (Charlton et al. 2022).

An example of planning assessment holistically at the programme level has been referred to in the literature as ‘programmatic assessment’ (Van der Vleuten et al. 2012, p. 205). In programmatic assessment (PA), the assessment programme is regarded as a combination of various assessment tasks that are purposefully designed and tailored to a curriculum’s aims, content and structure. In PA, emphasis is placed on the student’s learning process by collecting information and feedback about the student’s longitudinal learning development. The decision on whether a student meets or fails to meet the programme outcomes is based on the integration of assessment information gathered from multiple observations over a longer period of time (Van der Vleuten et al. 2012; Bok et al. 2013; Schut et al. 2021). The assessment programme in PA integrates the two purposes of assessment: 1) assessment for learning (AfL) to inform and support learning, and 2) assessment of learning (AoL) to inform decision-making and report on student learning. PA contrasts with the more traditional modular assessment programme that underpins most assessment practices (Van der Vleuten et al. 2019). In the modular assessment programme structure, each module is taught over a brief semester (e.g. 8–10 weeks) and contains, on average, two graded assessments that result in a pass/fail decision for module attainment. The focus is on AoL because the emphasis is on grading and certifying whether students have acquired sufficient knowledge and skills in a particular module, and less on opportunities built throughout the curriculum to learn from and adjust experiences (Jessop et al. 2014).

Although a programmatic approach to assessment sounds promising for supporting students’ learning, the impact of an assessment on learning depends on how students perceive it (Heeneman et al. 2015). Students’ responses to assessments are shaped by inferences they make about the perceived value placed on the assessment (Watling and Lingard 2012). For instance, Gerritsen-van Leeuwenkamp et al. (2019) examined students’ perceptions of the relationship between the effect of assessment on learning and their learning approach. The authors found that when students perceive the effects of assessment on learning as positive, students may adopt a deep learning approach, and that more negative perceptions of the effect of assessment on learning relate to a more superficial approach to learning. This study seeks to examine how students in various assessment programmes perceive the effect of assessment on learning. Furthermore, the study aims to understand more thoroughly how the assessment programme influences students’ SRL. The following research questions were central: 1) To what extent do students enrolled in an assessment programme with a programmatic assessment approach (PA assessment programme) and in an assessment programme with a focus on AoL (AoL assessment programme) differ in how they perceive the effect of the assessment programme on learning? and 2) How does the assessment programme support or hinder SRL, according to students?

Conceptual framework SRL and assessment

Multiple models of SRL have been described in the literature (Panadero 2017). These models provide insights into the fundamental cognitive and metacognitive processes that occur during the learning process. Based on Zimmerman’s model of self-regulation (2002), models of SRL generally consist of three cyclical phases in which students are actively involved during their learning: 1) the preparatory phase; 2) the performance phase; and 3) the appraisal phase (Puustinen and Pulkkinen 2001; Panadero 2017). In the preparatory phase, students engage in analysing the task, planning, and setting goals. In the performance phase, students engage in monitoring their goals and their progress of performance while executing the task. In the appraisal phase, students engage in reflecting on how they have performed the task and in formulating actions for future performance (Puustinen and Pulkkinen 2001; Zimmerman 2002). The cyclical model of self-regulation offers a comprehensive perspective on the learning process, while also providing educators with a framework for implementing specific strategies aimed at enhancing students’ SRL.

Several aspects of the assessment programme have been found to hold promise to support students in the phases of SRL (Heeneman et al. 2015; Andrade and Brookhart 2016; Zhang 2017; Braund and DeLuca 2018). These include the structure of the assessment programme, assessment tasks, feedback, and teacher guidance. The following sections will describe how these aspects contribute to SRL.

Structure of the assessment programme

A well-designed assessment programme structure can facilitate SRL by making learning explicit throughout the curriculum (Nicol and Macfarlane‐Dick 2006; Boud and Molloy 2013). By providing students with insight into the intended learning goals and how they can be achieved (preparation phase), students gain an understanding of what is expected of them. When assessments are planned at the programme level, students can make connections between their current performance and tasks in subsequent modules (appraisal phase) (Winstone et al. 2017). In a review on PA conducted by Schut et al. (2021), it was found that a PA structure that facilitated the continuous flow of assessment information encouraged students to track their progress (performance phase) and improved identification of their strengths and weaknesses (appraisal phase). In contrast, assessments that are designed only at the module level might focus students’ attention on overcoming the hurdle to pass the modular assessment without awareness of their learning beyond that period (Tan 2013).

Assessment tasks

Single assessment tasks are part of the assessment programme—that is, assessment tasks within a module or integrated and aligned throughout the programme. The design and format of assessment tasks affect what students value (preparation phase), how they comprehend and engage with tasks (performance phase), and how they apply these insights to future learning (appraisal phase) (Gibbs and Simpson 2005; Boud and Molloy 2013). SRL can be stimulated when the design of assessment tasks relates to students’ interests and needs (Zhang 2017). Then, students are more likely to feel motivated to set goals and get started (preparation phase). This interest can be supported, for instance, by including authentic assessments that connect theoretical assignments to practice (Babayigit and Guven 2020), or by allowing students to select tasks that match their abilities so they feel a sense of ownership (Alkharusi 2009). Students’ SRL can be hindered when assessment tasks are controlled too strongly by teachers and students have no choices (Alkharusi et al. 2013).

Assessment formats that are thought to increase students’ SRL are self- and peer assessment (Clark 2012; Braund and DeLuca 2018). Through self- and peer assessment, students gain a better understanding of the assessment criteria that can be used to set learning goals (preparation phase) and monitor their own learning based on these goals (performance phase) (Nicol and Macfarlane‐Dick 2006). Self- and peer assessment also enable students to take ownership of their learning (Zhang 2017; Schut et al. 2018). A study by Zhang (2017) found that while self-assessment increased SRL most effectively, peer assessment reduced SRL. An explanation for this finding was that the students did not engage in an authentic peer assessment task because the design of the task was more mechanical (i.e. checking each other’s answers) and did not promote ownership.

Feedback

SRL depends in part on information gathered from assessments about students’ learning and achievement (Andrade and Brookhart 2016). Teacher and peer feedback are sources for students to inform their learning and are essential for the three phases of SRL (Butler and Winne 1995). The appraisal phase is affected by opportunities teachers give students to use feedback and decisions students make based on that feedback (Andrade and Brookhart 2016; Panadero et al. 2018). In addition, some students use all forms of feedback, including grades, to enhance their learning (Andrade and Brookhart 2016). Fischer et al. (2023) found, for instance, that graded AoL tasks played a significant role in prompting students on how to engage in learning activities (preparation phase) and the amount of effort students put into a task to prepare for an assessment (performance phase). If students are to use grades to evaluate their performance (appraisal phase), then grades must reflect meaningful learning standards and students’ progress towards them (Andrade and Brookhart 2016).

Teacher guidance

Teachers have an important role in developing self-regulation skills among their students (Schut et al. 2018; Greene 2020). Teachers monitor and reflect on their students’ progress and understanding and provide the students with opportunities to discuss the feedback (performance and appraisal phase). To engage students in SRL, teachers and students require frequent interaction and time spent together (Clark 2012). Interaction can be facilitated in classroom discussions and questioning, and by specific assessment formats such as oral assessments. Oral assessments, for instance, can have a direct learning effect on students’ SRL due to the immediate feedback students receive on their performance and the sense of ownership students experience during the assessment (Heeneman et al. 2015). In higher education, a specific teacher guidance role that may support students in their SRL is that of a mentor, whose role is to support students in their personal development (Tise et al. 2023).

Methods

Research design

Using a multiple-case study design (Yin 2018), we examined the perceptions of students in two Bachelor’s degree study programmes. A case study permits an in-depth analysis of a case, and including multiple cases enables cross-case analysis to identify patterns across the cases (Yin 2018). Each study programme constituted one case. In each case, a new curriculum was implemented in study year 1, which was designed following the principles of PA and emphasised both AfL and AoL purposes of assessment (PA assessment programme). The revised curriculum for study year 1 differed from the curriculum used in study year 2 and the previous year 1. It included more substantial modules and a reduced number of high-stakes decisions. It also prioritised the provision of ongoing feedback and reflective activities, for instance by maintaining a portfolio and through an intensive mentoring programme. Study year 2 of each case constituted the old curriculum, in which the assessment programme was designed according to a conventional modular structure that emphasised AoL purposes of assessment (AoL assessment programme). An overview of the assessment programme’s structure is displayed in Table 1. Comprehensive information regarding the selection of study programmes and the assessment programmes’ features can be accessed in the supplemental materials available online. A mixed-methods approach was employed by integrating quantitative data (questionnaires) and qualitative data (semi-structured interviews).

Table 1. Overview of the assessment programme’s structure for study years 1 and 2 in study programmes A and B.

Data collection

Data were collected at the completion of the 2021–2022 academic year, from the end of June to the start of September 2022. Students were asked to reflect on the assessments in their programme that were offered in the academic year 2021–2022 when answering the questions. For each Bachelor programme, a coordinator was involved who provided documents with information about the assessment programmes. The coordinator helped with both quantitative and qualitative data collection by informing students and teachers about the research and providing students’ email addresses. The research was approved by the Ethics Committee of the Ethics Review Board of the Faculty of Social and Behavioural Sciences at Utrecht University (study approval number 22-0106). Informed consent was obtained from all participants prior to data collection.

Quantitative data collection

The study programme’s coordinator scheduled a time for students to fill in the questionnaire during an in-person meeting, where an online link to the questionnaire was shared. Students could complete the questionnaire on their laptop or mobile device. Students could provide informed consent by checking a box. Before the questionnaire was distributed, all students received an information letter by email from the study coordinator to inform them of the purposes of the study.

Instruments

To measure students’ perceptions of the effect of the assessment programme on their learning, we used the scale ‘Effects of Assessment on Learning’ from the Students’ Perceptions of Assessment Quality Questionnaire (SPAQQ) (Gerritsen-van Leeuwenkamp et al. 2018). The scale originally contained 11 items that relate to feedback, motivation, and SRL. Because we wanted to measure SRL as the dependent variable with a distinct questionnaire, we performed a principal components analysis (PCA) to examine the underlying structure of the scale’s items. The PCA revealed the presence of two components. Items that loaded highly on component 1 related to perceptions of a positive assessment learning environment (7 items). Items that loaded highly on component 2 related to aspects of self-regulation (4 items) and were left out of further analysis, as SRL was measured by the Self-Regulated Online Learning Questionnaire (see below). The pattern and structure matrices for the PCA are presented in the online supplementary resources. The items were scored on a seven-point Likert scale, ranging from 1 (completely disagree) to 7 (completely agree).
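As an illustration only, the sketch below shows how a two-component PCA of this kind could be run in Python. The file name, item column names and the use of scikit-learn are assumptions for illustration; the authors report their own pattern and structure matrices in the supplementary resources.

```python
# Illustrative sketch (not the authors' actual analysis): a two-component PCA
# over 11 Likert-scored questionnaire items, one column per item.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data file and column names
responses = pd.read_csv("spaqq_items.csv")
items = responses[[f"item_{i:02d}" for i in range(1, 12)]]

# Standardise items so loadings can be read as item-component correlations
z = StandardScaler().fit_transform(items)

pca = PCA(n_components=2)
pca.fit(z)

# Unrotated loadings: component weights rescaled by the square root of
# each component's explained variance
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(pd.DataFrame(loadings, index=items.columns, columns=["comp_1", "comp_2"]))
print("Explained variance ratio:", pca.explained_variance_ratio_)
```

In practice, items loading highly on one component and weakly on the other would be grouped into a subscale, as was done here for the assessment-environment and self-regulation items.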

To measure students’ perceptions of SRL, we used the Self-Regulated Online Learning Questionnaire (SOL-Q-R) (Jansen et al. 2017). For the purposes of this study, we used the scales that measured: a) activities in the preparation phase (n = 7); b) activities in the performance phase (n = 7); and c) activities in the appraisal phase (n = 6). Items are scored on a seven-point Likert-type scale ranging from 1 (not at all true of me) to 7 (very true of me). The items were adapted from an online (‘in this online course’) to a physical learning environment (‘in the course’). Both instruments were created and validated for a Dutch context and were provided by the authors for use in the current study (Gerritsen-van Leeuwenkamp et al. 2018; Jansen et al. 2017). The scale items of the questionnaire are presented in the online supplementary resources.

Qualitative data collection

The coordinators of the Bachelor programmes provided the first author (LS) with students’ names and email addresses, from which participants were randomly sampled by selecting every tenth student from the list. The first author (LS) approached participants through email with information about the study and an invitation to participate. Students received a voucher of eight euros when they participated in the interview. We intended to conduct at least ten interviews per cohort per case. Students were invited in four stages, with 15 students per cohort approached in each round. From the students who were approached, six to nine students per cohort were willing to participate. All interviews were conducted by the first author (LS) with individual students. Before conducting an interview, informed consent was obtained from the participants. The interviews were held via video call and were audio recorded. Interviews lasted between 15 and 40 min and were subsequently transcribed verbatim. An interview guide was developed to direct the semi-structured interviews. The interview guide is presented in the online supplementary materials.

Data analysis

Quantitative data analysis

Firstly, data were checked for informed consent, outliers and missing data. For all scales, sum scores were calculated. Missing data within a scale were removed via listwise deletion. After excluding twelve participants who did not provide informed consent or did not fill in the questionnaire, analyses were performed with a total sample of 276 students (n = 93 cohort A1; n = 53 cohort A2; n = 58 cohort B1; n = 72 cohort B2). The study’s research design and the number of participants for each cohort are provided in Table 2. Secondly, we tested the normality of the data by examining Kolmogorov–Smirnov tests of normality, plots, and skewness and kurtosis statistics (Field 2013). Across both cases, some variables were not normally distributed. Therefore, non-parametric tests were performed for the quantitative analyses (Pallant 2011).
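As a rough illustration of this screening step (not the authors’ SPSS procedure), the sketch below checks a scale’s sum scores for normality; the data file, column name and the Lilliefors variant of the Kolmogorov–Smirnov test are assumptions for illustration.

```python
# Illustrative sketch: screening a scale's sum scores for normality before
# choosing between parametric and non-parametric tests. Variable names are
# hypothetical; the original analysis was not necessarily run this way.
import pandas as pd
from scipy.stats import skew, kurtosis
from statsmodels.stats.diagnostic import lilliefors

scores = pd.read_csv("scale_sum_scores.csv")          # hypothetical file
x = scores["effects_of_assessment_on_learning"].dropna()

ks_stat, ks_p = lilliefors(x, dist="norm")            # KS test with estimated parameters
print(f"Kolmogorov-Smirnov (Lilliefors): D = {ks_stat:.3f}, p = {ks_p:.3f}")
print(f"skewness = {skew(x):.2f}, kurtosis = {kurtosis(x):.2f}")

# A significant KS test combined with marked skew or kurtosis would point
# towards non-parametric tests (e.g. Mann-Whitney U, Spearman's rho).
```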

Table 2. Design of the research study and number of participants for the questionnaire and the interviews.

Thirdly, we examined the estimated reliability of the scales for the entire sample. For all scales used in this study, we found good estimated internal consistency, with Cronbach’s alpha above 0.7 (Pallant 2011). The Cronbach’s alpha coefficients for the whole sample and the two cases are displayed in Table 3. Next, to examine whether students enrolled in a PA or AoL assessment programme differed in their perceptions of their assessment programme (research question 1), we used the sum scores of the ‘Effects of Assessment on Learning’ scale to conduct a Mann-Whitney U test (Pallant 2011). Finally, to examine how students perceived that their assessment programme supported their SRL (research question 2), Spearman’s rank order correlation (rho) was calculated (Pallant 2011).
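For readers who want to reproduce this style of analysis, a minimal Python sketch is given below. It assumes hypothetical data frames and column names, computes Cronbach’s alpha directly from the item-variance formula, and uses SciPy for the Mann-Whitney U test and Spearman’s rho; the normal-approximation effect size r = |Z|/√N is a common convention, not a value reported in this form by the authors.

```python
# Illustrative sketch (hypothetical data): scale reliability and the
# non-parametric tests used for research questions 1 and 2.
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu, spearmanr

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of sum)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

df = pd.read_csv("cohort_scores.csv")                  # hypothetical file
alpha = cronbach_alpha(df[[c for c in df.columns if c.startswith("eal_item_")]])
print(f"Cronbach's alpha: {alpha:.2f}")

# RQ1: compare perceptions of year-1 (PA) and year-2 (AoL) students
pa = df.loc[df["year"] == 1, "eal_sum"]
aol = df.loc[df["year"] == 2, "eal_sum"]
u, p = mannwhitneyu(pa, aol, alternative="two-sided")

# Effect size via the normal approximation: r = |Z| / sqrt(N)
n1, n2 = len(pa), len(aol)
z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
print(f"U = {u:.0f}, p = {p:.3f}, r = {abs(z) / np.sqrt(n1 + n2):.2f}")

# RQ2: relationship between perceptions and one SRL phase (per phase in the study)
rho, p_rho = spearmanr(df["eal_sum"], df["srl_preparation"])
print(f"Spearman's rho = {rho:.2f}, p = {p_rho:.3f}")
```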

Table 3. Reliability analyses of the scales in the questionnaire.

Qualitative data analysis

The analyses of the interview data were performed in NVivo (Version 12). We analysed the data using a template analysis that distinguished a priori themes (Brooks et al. 2015). The conceptual framework of assessment and SRL as presented in the introduction served as a basis for creating the a priori themes. We distinguished independent variables and dependent variables in the coding. Aspects related to students’ perceptions of the assessment programme were coded as independent variables (i.e. structure of the assessment programme, assessment task, feedback, and teacher guidance). Aspects related to students’ perceptions of SRL were coded as dependent variables (i.e. preparatory phase, performance phase, and appraisal phase).

To ensure the trustworthiness of the coding, three authors (LS, MvdS and HB) first analysed two transcripts independently to get acquainted with the data and to suggest input for the initial template (Brooks et al. 2015). To retain the context of participants’ experiences, it was decided that a participant’s answer to a question was considered as one coding segment. When variables relating both to the assessment programme and to SRL were coded within a segment, an inductive open coding line was created that described the relationship between the variables (i.e. whether aspects of the assessment programme supported or hindered SRL). An example of an open coding line is provided in the online supplementary materials. Secondly, all three authors coded two transcripts using the adjusted template and assigned open coding. Again, the findings were discussed, and the template was adjusted and refined in more detail. For example, because of the cyclical nature of SRL, we decided that the SRL phase of preparation was coded when a student talked about setting goals for a new module, while the SRL phase of appraisal was coded when the student was looking back at what was learned and set new goals from there. Thirdly, the first author (LS) coded all interviews and selected twelve coded segments from all subcases to discuss and agree on the coding. Fourthly, to ensure consistency and coverage of the coding, all interviews were re-read and re-coded by the first author (LS). For each research question, a case report was written based on the relationships found between the variables. Authors LS, MvdS and HB compared the individual case reports to draw conclusions, using the analytical technique of pattern matching across cases (Yin 2018).

Results

Firstly, descriptive statistics of students’ perceptions of the effect of their assessment programmes on learning and the phases of SRL are reported in Table 4.

Table 4. Descriptive statistics of students’ perceptions of the assessment programme on learning and self-regulated learning.

RQ1: students’ perceptions of the effect of the assessment programme on learning

In the quantitative analyses, we examined for each case whether first-year students (PA assessment programme) and second-year students (AoL assessment programme) differed in how they perceived the effect of the assessment programme on learning. In both study programmes, first-year students enrolled in a PA assessment programme perceived the assessment programme as significantly more positive for learning than second-year students enrolled in an AoL assessment programme (Mann-Whitney U: study programme A: U = 983, p < 0.001, r = 0.18 (small effect); study programme B: U = 1057, p = 0.002, r = 0.08 (small effect)).

In the semi-structured interviews, students were asked how they perceived the influence of the assessment programme with regard to their learning. For study programme A, first-year students (PA assessment programme) felt that their learning was at the centre of the assessment programme:

It is really about the learning process and the student’s development. (A1-5)

Students experienced continuous engagement with assignments, feedback and the processing of feedback, which reduced the anxiety associated with a particular assessment moment.

Second-year students from study programme A (AoL assessment programme) experienced their assessment programme in a variety of ways:

With one test, I thought, ‘This is what I’m really doing it for’, and with another test, I thought, ‘Anyone could have done this’. (A2-9)

Assessments focusing on basic understanding, such as multiple-choice knowledge tests, were considered ineffective because no insight or practice-related topics were assessed. Students further indicated that they received numerous assessment tasks:

Sometimes you have five, six, or seven assessments in a week, … therefore, you must decide what to study for. (A2-3)

The assessment programme included many one-credit courses, and students reported that failing such a course induced a great deal of anxiety. According to second-year students, the emphasis in the assessment programme was on assessment of learning:

I’m not doing this to master the theory, but to pass the test. (A2-5)

For study programme B, most of the first-year students (PA assessment programme) perceived the assessment programme as supportive for their learning:

They [teachers] pay close attention to your process and your progress within it …. It does not matter if you make numerous errors in the beginning. What matters are the actions you take and the growth you experience. (B1-8)

According to first-year students, the assessment programme emphasises the application of knowledge in projects:

Normally, I learn something from a book and then forget it, but now that you have to apply it, it sticks much better, so it’s a pleasant way of testing. (B1-6)

All interviewed second-year students from study programme B (AoL assessment programme) perceived the assessments as appropriate for the material covered. However, the majority of students also stated that the assessments were (too) easy and lacked depth:

The assessment tasks could be a bit more difficult, a bit more professional. (B2-4)

Like cohort A2 students, cohort B2 students asserted that their assessment programme emphasised assessment of learning.

RQ2: students’ perceptions of how the assessment programme supported or hindered SRL

In the quantitative analysis, we examined for each case whether there was a relationship between how students perceived their assessment programme and the phases of SRL. Table 5 displays the correlations between students’ perceptions of the assessment programme and SRL. For study programme A, first-year students (PA assessment programme) showed a moderate to strong positive correlation between their perception of the assessment programme and all three phases of SRL. This suggests that when students perceive the effect of the assessment programme as positive for learning, they perceive that they are more likely to self-regulate their learning. For second-year students (AoL assessment programme), only a moderately positive correlation was found between the assessment programme and the appraisal phase of SRL.

Table 5. Correlation between students’ perceptions of the assessment programme on learning and perceived SRL in the preparation, performance and appraisal phases.

For study programme B, a comparable pattern was found. First-year students (PA assessment programme) exhibited a moderate to strong positive correlation between the assessment programme and the three phases of SRL. However, for second-year students (AoL assessment programme), no significant correlations were found.

In the semi-structured interviews, we asked students what elements of the assessment programme supported or hindered their SRL. Using a cross-case comparison of the various assessment programmes, three themes emerged that are essential for SRL: 1) assessment design; 2) assessment as a dialogue; and 3) assessment as an information source.

Theme 1) assessment design

This theme includes the following subthemes: authentic tasks, sense of ownership, transparency of standards, and alignment of the assessment programme.

Authentic tasks

Students from both the PA and modular AoL assessment programmes perceived that authentic, practice-related assessments supported their SRL—for instance, assignments in which students apply theory to practice, projects, and internships. These assessment tasks felt relevant for students’ future professions and motivated them to set learning goals (preparation phase). Authentic tasks also helped students to gain insight into their progress (performance phase):

My internship has helped me identify areas in which I can develop. It is the most important assessment for me, given my aspiration to become a teacher. (A2-4)

Students’ SRL was hindered when the content of the assessment programme did not match their interests or was perceived as too easy—for example, because basic knowledge was tested.

Sense of ownership

For students in both the PA and AoL assessment programmes, an assessment design that included a certain degree of ownership promoted SRL. Students were motivated to get started (preparation phase) when there was a balance between the teacher’s instructions and the opportunity for students to contribute their own ideas. This allowed students to direct their own development:

Then you start choosing for yourself, what am I going to learn more about, what do I find interesting? (B2-2)

In both PA assessment programmes, ownership was promoted through the use of specific assessment formats, such as compiling a portfolio and self-assessment. For students in cohort A1, the portfolio was aimed at collecting and reflecting on feedback and other evidence to demonstrate their competence development. This encouraged students to actively seek feedback, provided them with insight into their development (performance phase) and helped them determine which areas required improvement (appraisal phase). In cohort B1, the portfolio was an instrument to document and demonstrate performance, with a greater emphasis on the student’s contribution to the final product and less emphasis on individual growth throughout the project. As a result, the portfolio assisted students more in the SRL phase of performance than in the phase of appraisal.

Transparency of standards

In both the PA and AoL assessment programmes, clear assessment criteria helped students to create goals and decide what to focus their attention on (preparation phase). However, in both PA assessment programmes, the majority of students reported that the assessment criteria were initially unclear and that they did not understand what was expected of them. Students struggled to comprehend how the newly implemented assessment programme worked and how they could meet the requirements:

I found the assessment of my development and level of competence somewhat more difficult … I had difficulty seeing the big picture of what you had to be able to do and should demonstrate. (A1-4)

These ambiguities prevented students from adequately preparing for assessment tasks (preparation phase); consequently, parts of the portfolio were missing (performance phase).

Alignment of the assessment programme

Students from both PA assessment programmes perceived that the alignment of the assessment programme, in which subjects build on one another and certain aspects reappear in subsequent learning tasks, enabled them to reflect on feedback received and to apply it to subsequent learning tasks (appraisal phase):

Previous assignments are reviewed in the next assignment. So, often you can do something with the feedback to improve … so, for every assignment, I use the feedback I received as a benchmark. (A1-2)

Students from both AoL assessment programmes generally perceived that their assessments were less aligned throughout the programme:

It [the assessment programme] is not very structured. All the courses that are scheduled at the end of the study year could also have been arranged earlier. … It is very random. (A2-6)

Students reported that they only acted upon feedback when a modifiable (midterm) report accompanied it. The feedback students received at the end of a module was not taken to the subsequent module (appraisal phase).

Theme 2) assessment as a dialogue

Students’ interactions with teachers supported their SRL. For instance, in classroom conversations, the teacher encouraged students to actively reflect on what was learned (appraisal phase). In both PA assessment programmes, students’ performance on the authentic assessment tasks was discussed with a mentor weekly. The mentor assisted students in setting goals (preparation phase) and identifying their strengths and weaknesses (performance phase). These interactions prompted students to consciously examine their actions and their consequences (appraisal phase):

Sometimes you want to learn something but don’t know how; discussing with a mentor makes the goal more manageable. (B1-5)

Furthermore, cohort B1 students’ SRL was supported through engagement in oral assessments of the portfolio’s content to evaluate students’ performance in the project. These interactive assessments provided the students with direct insight into what had been mastered (performance phase) and prompted students to reflect more thoroughly on their learning (appraisal phase):

During an oral assessment, I really started to reflect more deeply … You engage in a more critical thinking process during a dialogue rather than simply turning it [portfolio] in. (B1-6)

For students from both AoL assessment programmes, a mentoring programme was included in the curriculum to support reflection on their development during the study year. Conversations with a mentor were scheduled two or three times per year. Cohort A2 students perceived the conversations with a mentor as beneficial for their SRL in supporting them to reflect on what they had accomplished and learned (appraisal phase). However, cohort B2 students did not mention the mentor’s role in relation to their SRL. A possible explanation is that the mentoring programme was not integrated into the curriculum design and that its content did not challenge students to reflect on their learning. One student, for instance, stated:

It’s a course you put together quickly because you’ll get a sufficient grade anyway. (B2-2)

Theme 3) assessment as an information source

Students from both the PA and AoL assessment programmes perceived constructive teacher and peer feedback as essential sources to inform their learning. Feedback provided the student with insight into what went well, what needed improvement (performance phase), and what the student could work on in the future (appraisal phase). In addition to feedback, cohort A1 students (PA assessment programme) perceived the progress test—a non-graded multiple-choice knowledge test—as a useful assessment task to gain insight into which subjects they had mastered (performance phase) and what to focus on (appraisal phase):

If you only answer a few questions correctly, you know you need to work on that. (A1-1)

Cohort B1 (PA assessment programme) and A2 and B2 (AoL assessment programme) students reported that their mastery of the material (performance phase) was informed by graded assessments. However, these students also reported that they only reviewed the assessment results when they failed an assessment:

If you receive a low grade … the assignment must be turned in again. You do learn from that, because then you are going to examine what you did incorrectly. If you receive a sufficient grade, you do not consider how you could have performed better. (B1-3)

This may hinder students’ SRL:

With a grade that is just sufficient … there are still many areas for improvement, but you don’t know which ones. (A2-6)

Conclusion and discussion

This study sought to explore students’ perceptions of how their assessment programme impacted their SRL (i.e. preparation, performance, and appraisal phases) through a multiple case study design. We examined students’ perceptions within two distinct Bachelor’s degree programmes, each considered as a case. In each case, study year 1 constituted a PA assessment programme in which AfL and AoL were integrated. In contrast, study year 2 of each case constituted a traditional modular structure, emphasising AoL and grading. Two research questions were central: 1) Do students enrolled in a PA assessment programme and in an AoL assessment programme differ in how they perceive the effect of the assessment programme on learning? and 2) How does the assessment programme support or hinder self-regulated learning, according to students?

Regarding research question 1, this study found that first-year students from a PA assessment programme perceived their assessment programme as more positive for their learning than second-year students from an AoL assessment programme. In general, second-year students in both AoL assessment programmes perceived that the emphasis in assessment was on assessment of learning; students encountered many graded assessments which they needed to pass to earn credits. In both PA assessment programmes, students perceived that their learning was central in the assessment programme. However, many students initially struggled with the new assessment approach, which differed from their previous education. In a review study on PA, Schut et al. (2021) concluded that students may have difficulty adapting to a new assessment approach. Consequently, the authors emphasise the importance of developing a shared understanding of (programmatic) assessment purposes and practices.

Regarding research question 2, three themes emerged that supported students’ SRL. Theme 1, assessment design, refers to design elements of individual assessment tasks and the assessment programme. Overall, authentic assessment tasks, a sense of ownership in assessment task performance, transparent assessment criteria, and an aligned assessment programme were regarded as conducive to students’ SRL. Theme 2, assessment as a dialogue, refers to assessments in which students interact with their teacher, assessor or peers. These dialogues prompted students to self-regulate their learning. In the literature, these interactions are also referred to as ‘dialogic assessment’ (Braund and DeLuca 2018), i.e. ‘opportunities for students to exchange ideas about where they are in their learning and where they need to go including strategies to get there’ (p. 77). Theme 3, assessment as an information source, refers to the information richness of an assessment. In all assessment programmes, students perceived feedback as a valuable information source to support their SRL. This is consistent with the literature on classroom assessment, in which AfL practices such as feedback and self- and peer assessment are regarded as catalysts for SRL (Hawe and Dixon 2017). In the current study, graded assessments were also perceived as sources of information that can inform students’ learning. We found, however, that students only used this information when they failed an assessment, whereas a pass grade prevented them from examining the feedback. As a result, assigning grades might encourage a focus on outcomes rather than on continuous improvement (Rust 2002; Harrison et al. 2016).

The comparison of assessment programmes in two different study programmes suggests that a PA assessment programme may promote students’ self-regulation more effectively than a traditional modular AoL assessment programme. In both PA assessment programmes, the quantitative analyses showed significant relationships between students’ perceptions of the effect of the assessment programme on learning and all three phases of SRL; these relationships were not found in either AoL assessment programme. A possible explanation may be the role of the mentoring programme, which was a regularly scheduled component in both PA assessment programmes. The ongoing dialogues with their mentor supported students in the preparation, performance and appraisal phases of SRL. Although both second-year AoL assessment programmes included a mentorship component, it was perceived as less time- and content-intensive. Furthermore, the design of the PA assessment programmes enabled students to act on feedback received in subsequent tasks, thus supporting the appraisal phase of SRL.

With this in mind, we would like to emphasise the importance of a programme-level approach to assessment, of which PA is a well-explained example that has been implemented in several study programmes (Schut et al. 2021). Another example of a programme-level approach to assessment is programme-focused assessment (Hartley and Whitfield 2012; Whitfield and Hartley 2019), which seeks to concentrate on programme learning outcomes to encourage an integrative assessment approach to support knowledge and skill acquisition in line with the goals of the programme. Nevertheless, designing assessment on a programme level not only requires structural changes to the curriculum but also a cultural shift (Roberts et al. 2022), as it necessitates a change in thinking about assessment at all organisational levels for all stakeholders involved (Whitfield and Hartley 2019). This is a long-term structural change that should be guided and supported by management and ongoing development programmes (Torre et al. 2021; Charlton et al. 2022).

This study has a number of limitations. Firstly, while the design of both PA assessment programmes was based on PA principles, the programmes made diverse design choices to implement PA in practice. Programme B1, for example, comprised a greater number of small modules featuring graded assessments compared to programme A1. Given that a single assessment (i.e. data point) should not be used to inform high-stakes decisions (Van der Vleuten et al. 2012), this may indicate a deficiency in programme B1’s PA design. Comparable design deficiencies were found in the research of Baartman et al. (2022), and these may affect students’ perceptions by impeding learning opportunities (Baartman et al. 2022; Schut et al. 2021). Secondly, in the quantitative analysis, we made comparisons between various academic years without considering possible discrepancies in the demographics and assessment experiences of the students. However, we maintain the view that these analyses served as a helpful basis for attributing significance to the qualitative findings. Thirdly, we assessed students’ perceptions of SRL at a single point in time, at the end of the academic year. Research shows that self-regulation develops over time with practice and feedback (Zimmerman and Kitsantas 2005). Therefore, we recommend that future research adopt a longitudinal approach by measuring the self-regulation of the same group of students at different times. This would provide insight into how the design of the assessment programme influences the progressive development of SRL.

In conclusion, by comparing multiple cases and assessment programmes, this study has provided insight into how students perceive that their assessment programme supports their SRL. Several principles in the design of an assessment programme influence SRL, including sequencing assessment tasks throughout the programme, offering assessment tasks that align with students’ interests, providing students with opportunities for autonomy and dialogic assessments, and viewing both AfL and AoL assessments as sources that can inform students’ learning. By examining how students perceive their assessment programme, this study contributes to current knowledge about assessment and SRL. Educators can use this information when (re)designing assessment programmes to create a coherent programme centred on students’ SRL.

Supplemental material

Supplemental material for this article is available online.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Lonneke H. Schellekens

Lonneke H. Schellekens is a PhD student at the Faculty of Veterinary Medicine at Utrecht University. The central themes of her research are assessment for learning and assessment quality.

Marieke F. Van der Schaaf

Marieke F. Van der Schaaf is a professor of Research and Development of Health Professions Education and director of the Utrecht Center for Research and Development of Health Professions Education at the Education Center at University Medical Center Utrecht. Her area of specialisation is in educational innovations, performance assessments and professionals’ expertise development.

Liesbeth K. J. Baartman

Liesbeth K. J. Baartman is a professor working at the research group Vocational Education at Utrecht University of Applied Sciences. Her area of specialisation is assessment in vocational and professional education, with a focus on programmatic and formative assessment.

Cees P. M. Van der Vleuten

Cees P. M. Van der Vleuten is a professor of Education and chair of the Department of Educational Development and Research in the Faculty of Health, Medicine and Life Sciences at Maastricht University. His area of expertise is in evaluation, assessment, and programmatic assessment.

Wim D. J. Kremer

Wim D. J. Kremer is a professor of Agricultural Animal Health in education at the Faculty of Veterinary Medicine of Utrecht University. He is also the initiator of the interdisciplinary Bachelor’s programme Care, Health and Society. In his research, the focus is on programmatic assessment in the field of veterinary education.

Harold G. J. Bok

Harold G. J. Bok is a professor at the Faculty of Veterinary Medicine at Utrecht University. He chairs the research group ‘Educational Scholarship in Veterinary Medicine’. Since June 2022, he has been Vice Dean for Education at the Faculty of Veterinary Medicine. In his research, the focus is on academic research and innovation in the field of veterinary education.

References

  • Alkharusi, H. 2009. “Classroom Assessment Environment, Self-efficacy, and Mastery Goal Orientation: A Causal Model.” INTI Journal: Special Issue on Teaching and Learning, 104–116. http://eprints.intimal.edu.my/id/eprint/409
  • Alkharusi, H., S. Aldhafri, H. Alnabhani, and M. Alkalbani. 2013. “The Impact of Students’ Perceptions of Assessment Tasks on Self-Efficacy and Perception of Task Value: A Path Analysis.” Social Behavior and Personality: An International Journal 41 (10): 1681–1692. doi:10.2224/sbp.2013.41.10.1681.
  • Andrade, H., and S. M. Brookhart. 2016. “The Role of Classroom Assessment in Supporting Self-Regulated Learning.” In Assessment for Learning: Meeting the Challenge of Implementation, edited by D. Laveault & L. Allal, 293–309. Heidelberg: Springer International Publishing. doi:10.1007/978-3-319-39211-0_17.
  • Baartman, L. K. J., H. Baukema, and F. Prins. 2022. “Exploring Students’ Feedback Seeking Behavior in the Context of Programmatic Assessment.” Assessment & Evaluation in Higher Education 48 (5): 598–612. doi:10.1080/02602938.2022.2100875.
  • Babayigit, B. B., and M. Guven. 2020. “Self-Regulated Learning Skills of Undergraduate Students and the Role of Higher Education in Promoting Self-Regulation.” Eurasian Journal of Educational Research 20 (89): 1–24. doi:10.14689/ejer.2020.89.3.
  • Baird, J., D. Andrich, T. N. Hopfenbeck, and G. Stobart. 2017. “Assessment and Learning: Fields apart?” Assessment in Education: Principles, Policy & Practice 24 (3): 317–350. doi:10.1080/0969594X.2017.1319337.
  • Bok, H. G., P. W. Teunissen, R. P. Favier, N. J. Rietbroek, L. F. Theyse, H. Brommer, J. C. Haarhuis, P. van Beukelen, C. P. van der Vleuten, and D. A. Jaarsma. 2013. “Programmatic Assessment of Competency-Based Workplace Learning: When Theory Meets Practice.” BMC Medical Education 13 (1): 123. doi:10.1186/1472-6920-13-123.
  • Boud, D., and E. Molloy. 2013. “Rethinking Models of Feedback for Learning: The Challenge of Design.” Assessment & Evaluation in Higher Education 38 (6): 698–712. doi:10.1080/02602938.2012.691462.
  • Boud, D., and N. Falchikov. 2006. “Aligning Assessment with Long‐Term Learning.” Assessment & Evaluation in Higher Education 31 (4): 399–413. doi:10.1080/02602930600679050.
  • Brandmo, C., E. Panadero, and T. N. Hopfenbeck. 2020. “Bridging Classroom Assessment and Self-Regulated Learning.” Assessment in Education: Principles, Policy & Practice 27 (4): 319–331. doi:10.1080/0969594X.2020.1803589.
  • Braund, H., and C. DeLuca. 2018. “Elementary Students as Active Agents in Their Learning: An Empirical Study of the Connections between Assessment Practices and Student Metacognition.” The Australian Educational Researcher 45 (1): 65–85. doi:10.1007/s13384-018-0265-z.
  • Brooks, J., S. McCluskey, E. Turley, and N. King. 2015. “The Utility of Template Analysis in Qualitative Psychology Research.” Qualitative Research in Psychology 12 (2): 202–222. doi:10.1080/14780887.2014.955224.
  • Butler, D. L., and P. H. Winne. 1995. “Feedback and Self-Regulated Learning: A Theoretical Synthesis.” Review of Educational Research 65 (3): 245–281. https://www.jstor.org/stable/1170684. doi:10.2307/1170684.
  • Charlton, N., K. Weir, and R. Newsham-West. 2022. “Assessment Planning at the Program-Level: A Higher Education Policy Review in Australia.” Assessment & Evaluation in Higher Education 47 (8): 1475–1488. doi:10.1080/02602938.2022.2061911.
  • Chen, P. P., and S. M. Bonner. 2020. “A Framework for Classroom Assessment, Learning, and Self-Regulation.” Assessment in Education: Principles, Policy & Practice 27 (4): 373–393. doi:10.1080/0969594X.2019.1619515.
  • Cilliers, F. J., L. W. Schuwirth, N. Herman, H. J. Adendorff, and C. P. van der Vleuten. 2012. “A Model of the Pre-Assessment Learning Effects of Summative Assessment in Medical Education.” Advances in Health Sciences Education: Theory and Practice 17 (1): 39–53. doi:10.1007/s10459-011-9292-5.
  • Clark, I. 2012. “Formative Assessment: Assessment is for Self-Regulated Learning.” Educational Psychology Review 24 (2): 205–249. doi:10.1007/s10648-011-9191-6.
  • Field, A. 2013. Discovering Statistics Using IBM SPSS Statistics. 4th ed. London: Sage Publications.
  • Fischer, J., M. Bearman, D. Boud, and J. Tai. 2023. “How Does Assessment Drive Learning? A Focus on Students’ Development of Evaluative Judgement.” Assessment & Evaluation in Higher Education 49 (2): 233–245. doi:10.1080/02602938.2023.2206986.
  • Gerritsen-van Leeuwenkamp, K. J., D. Joosten-ten Brinke, and L. Kester. 2018. “Developing Questionnaires to Measure Students’ Expectations and Perceptions of Assessment Quality.” Cogent Education 5 (1): 1464425. doi:10.1080/2331186X.2018.1464425.
  • Gerritsen-van Leeuwenkamp, K. J., D. Joosten-Ten Brinke, and L. Kester. 2019. “Students’ Perceptions of Assessment Quality Related to Their Learning Approaches and Learning Outcomes.” Studies in Educational Evaluation 63: 72–82. doi:10.1016/j.stueduc.2019.07.005.
  • Gibbs, G., and C. Simpson. 2005. “Conditions under Which Assessment Supports Students’ Learning.” Learning and Teaching in Higher Education 1: 3–31. http://eprints.glos.ac.uk/id/eprint/3609.
  • Greene, J. A. 2020. “Building upon Synergies among Self-Regulated Learning and Formative Assessment Research and Practice.” Assessment in Education: Principles, Policy & Practice 27 (4): 463–476. doi:10.1080/0969594X.2020.1802225.
  • Harrison, C. J., K. D. Könings, E. F. Dannefer, L. W. Schuwirth, V. Wass, and C. P. van der Vleuten. 2016. “Factors Influencing Students’ Receptivity to Formative Feedback Emerging from Different Assessment Cultures.” Perspectives on Medical Education 5 (5): 276–284. doi:10.1007/s40037-016-0297-x.
  • Hartley, P., and R. Whitfield. 2012. “Programme Assessment Strategies (PASS) Final Report.” https://www.brad.ac.uk/pass/about/PASS_evaluation_final_report.pdf
  • Hawe, E., and H. Dixon. 2017. “Assessment for Learning: A Catalyst for Student Self-Regulation.” Assessment & Evaluation in Higher Education 42 (8): 1181–1192. doi:10.1080/02602938.2016.1236360.
  • Heeneman, S., A. Oudkerk Pool, L. W. Schuwirth, C. P. Vleuten, and E. W. Driessen. 2015. “The Impact of Programmatic Assessment on Student Learning: Theory versus Practice.” Medical Education 49 (5): 487–498. doi:10.1111/medu.12645.
  • Jansen, R. S., A. Van Leeuwen, J. Janssen, L. Kester, and M. Kalz. 2017. “Validation of the Revised Self-Regulated Online Learning Questionnaire.” Journal of Computing in Higher Education 29 (1): 6–27. doi:10.1007/s12528-016-9125-x.
  • Jessop, T. 2019. “Changing the Narrative: A Programme Approach to Assessment through TESTA.” In Innovative Assessment in Higher Education, edited by C. Bryan & K. Clegg. 2nd ed., 36–49. London: Routledge.
  • Jessop, T., and C. Tomas. 2017. “The Implications of Programme Assessment Patterns for Student Learning.” Assessment & Evaluation in Higher Education 42 (6): 990–999. doi:10.1080/02602938.2016.1217501.
  • Jessop, T., Y. El Hakim, and G. Gibbs. 2014. “The Whole is Greater than the Sum of Its Parts: A Large-Scale Study of Students’ Learning in Response to Different Programme Assessment Patterns.” Assessment & Evaluation in Higher Education 39 (1): 73–88. doi:10.1080/02602938.2013.792108.
  • Nicol, D. J., and D. Macfarlane‐Dick. 2006. “Formative Assessment and Self‐Regulated Learning: A Model and Seven Principles of Good Feedback Practice.” Studies in Higher Education 31 (2): 199–218. doi:10.1080/03075070600572090.
  • Pallant, J. 2011. SPSS Survival Manual. 4th ed. Berkshire: McGraw-Hill House.
  • Panadero, E. 2017. “A Review of Self-Regulated Learning: Six Models and Four Directions for Research.” Frontiers in Psychology 8: 422. doi:10.3389/fpsyg.2017.00422.
  • Panadero, E., H. Andrade, and S. Brookhart. 2018. “Fusing Self-Regulated Learning and Formative Assessment: A Roadmap of Where We Are, How We Got Here, and Where We Are Going.” The Australian Educational Researcher 45 (1): 13–31. doi:10.1007/s13384-018-0258-y.
  • Puustinen, M., and L. Pulkkinen. 2001. “Models of Self-Regulated Learning: A Review.” Scandinavian Journal of Educational Research 45 (3): 269–286. doi:10.1080/00313830120074206.
  • Roberts, C., P. Khanna, J. Bleasel, S. Lane, A. Burgess, K. Charles, R. Howard, D. O’Mara, I. Haq, and T. Rutzou. 2022. “Student Perspectives on Programmatic Assessment in a Large Medical Programme: A Critical Realist Analysis.” Medical Education 56 (9): 901–914. doi:10.1111/medu.14807.
  • Rust, C. 2002. “The Impact of Assessment on Student Learning.” Active Learning in Higher Education 3 (2): 145–158. doi:10.1177/1469787402003002004.
  • Schut, S., E. Driessen, J. Van Tartwijk, C. P. M. Van der Vleuten, and S. Heeneman. 2018. “Stakes in the Eye of the Beholder: An International Study of Learners’ Perceptions within Programmatic Assessment.” Medical Education 52 (6): 654–663. doi:10.1111/medu.13532.
  • Schut, S., L. A. Maggio, S. Heeneman, J. van Tartwijk, C. van der Vleuten, and E. Driessen. 2021. “Where the Rubber Meets the Road: An Integrative Review of Programmatic Assessment in Health Care Professions Education.” Perspectives on Medical Education 10 (1): 6–13. doi:10.1007/s40037-020-00625-w.
  • Tan, K. 2013. “A Framework for Assessment for Learning: Implications for Feedback Practices within and beyond the Gap.” ISRN Education 2013: 1–6. doi:10.1155/2013/640609.
  • Tise, J. C., P. R. Hernandez, and P. W. Schultz. 2023. “Mentoring Underrepresented Students for Success: Self-Regulated Learning Strategies as a Critical Link between Mentor Support and Educational Attainment.” Contemporary Educational Psychology 75: 102233. doi:10.1016/j.cedpsych.2023.102233.
  • Torre, Dario, Neil E. Rice, Anna Ryan, Harold Bok, Luke J. Dawson, Beth Bierer, Tim J. Wilkinson, et al. 2021. “Ottawa 2020 Consensus Statements for Programmatic Assessment–2. Implementation and Practice.” Medical Teacher 43 (10): 1149–1160. doi:10.1080/0142159X.2021.1956681.
  • Van der Vleuten, C. P., L. W. Schuwirth, E. W. Driessen, J. Dijkstra, D. Tigelaar, L. K. Baartman, and J. van Tartwijk. 2012. “A Model for Programmatic Assessment Fit for Purpose.” Medical Teacher 34 (3): 205–214. doi:10.3109/0142159X.2012.652239.
  • Van der Vleuten, C., S. Heeneman, and S. Schut. 2019. “Programmatic Assessment: An Avenue to a Different Assessment Culture.” In Assessment in Health Professions Education, edited by R. Yudkowsky, Y. Soo Park, & S. Downing. New York: Routledge.
  • Watling, C. J., and L. Lingard. 2012. “Toward Meaningful Evaluation of Medical Trainees: The Influence of Participants’ Perceptions of the Process.” Advances in Health Sciences Education: Theory and Practice 17 (2): 183–194. doi:10.1007/s10459-010-9223-x.
  • Whitfield, R., and P. Hartley. 2019. “Assessment Strategy: Enhancement of Student Learning through a Programme Focus.” In Employability via Higher Education: Sustainability as Scholarship, edited by A. Diver, 237–253. Cham: Springer. doi:10.1007/978-3-030-26342-3.
  • Winstone, N. E., R. A. Nash, M. Parker, and J. Rowntree. 2017. “Supporting Learners’ Agentic Engagement with Feedback: A Systematic Review and a Taxonomy of Recipience Processes.” Educational Psychologist 52 (1): 17–37. doi:10.1080/00461520.2016.1207538.
  • Yin, R. K. 2018. Case Study Research and Applications: Design and Methods. 6th ed. Thousand Oaks, CA: SAGE.
  • Zhang, W. 2017. “Using Classroom Assessment to Promote Self-Regulated Learning and the Factors Influencing Its (in) Effectiveness.” Frontiers of Education in China 12 (2): 261–295. doi:10.1007/s11516-017-0019-0.
  • Zimmerman, B. J. 2002. “Becoming a Self-Regulated Learner: An Overview.” Theory Into Practice 41 (2): 64–70. doi:10.1207/s15430421tip4102_2.
  • Zimmerman, B. J., and A. Kitsantas. 2005. “The Hidden Dimension of Personal Competence: Self-Regulated Learning and Practice.” In Handbook of Competence and Motivation, edited by A. J. Elliot & C. S. Dweck, 509–526. New York: Guilford Press.