
Assessment policies and academic performance within a single course: the role of motivation and self-regulation


Abstract

Despite the frequently reported association of characteristics of assessment policies with academic performance, the mechanisms through which these policies affect performance are largely unknown. Therefore, the current research investigated performance, motivation and self-regulation for two groups of students following the same statistics course, but under two assessment policies: education and child studies (ECS) students studied under an assessment policy with relatively higher stakes, a higher performance standard and a lower resit standard, compared with psychology students. Results show similar initial performance, but more use of resits and higher final performance (post-resit) under the ECS policy compared with the psychology policy. In terms of motivation and self-regulation, under the ECS policy significantly higher minimum grade goals, performance self-efficacy, task value, time and study environment management, and test anxiety were observed, but there were no significant differences in aimed grade goals, academic self-efficacy and effort regulation. The relations of motivational and self-regulatory factors with academic performance were similar between both assessment policies. Thus, educators should be keenly aware of how characteristics of assessment policies are related to students’ motivation, self-regulation and academic performance.

Introduction

When trying to encourage people to jump higher, a sensible option is to raise the bar. Analogously, the educational literature has consistently shown that assessment policies with higher standards are associated with better academic performance (Cole and Osterlind 2008; Elikai and Schuhmann 2010; Kickert et al. 2018). For instance, students perform better on knowledge assessments when a higher percentage of correct answers is required to obtain the same grade (Johnson and Beck 1988; Elikai and Schuhmann 2010). However, little is known about the mechanisms underlying the association between assessment policies and academic performance.

In exploring the association between assessment policies and academic performance, we used motivation and self-regulation as a conceptual framework. Motivational and self-regulatory factors are among the most important correlates of academic performance (Richardson, Abraham, and Bond 2012; Schneider and Preckel 2017). In addition, motivation and self-regulation have the advantage of being relatively alterable, compared to more stable student factors such as conscientiousness (Poropat 2009), high school grade point average (Sawyer 2013) and socioeconomic status (Sirin 2005). For instance, the motivational factor self-efficacy (Bandura 1982) is ‘deemed to be modifiable at a relatively low cost’ (Richardson, Abraham, and Bond 2012, 375). As such, motivational and self-regulatory factors are likely candidates to be affected by assessment policies.

However, earlier research on assessment policies (Cole and Osterlind 2008; Elikai and Schuhmann 2010) failed to include some of the most important motivational and self-regulatory factors that are associated with academic performance (Richardson, Abraham, and Bond 2012), such as performance self-efficacy and effort regulation. Moreover, our recent study, which did take several of these factors into consideration, involved only medical students (Kickert et al. 2018). Therefore, a first aim of this study was to replicate earlier findings (Johnson and Beck 1988; Elikai and Schuhmann 2010; Kickert et al. 2018) on the association of assessment policies with academic performance, in a real-life setting with higher education social science students. Secondly, we extended earlier research by incorporating the most important motivational and self-regulatory factors (Richardson, Abraham, and Bond 2012; Schneider and Preckel 2017) in our investigation of the relationship between assessment policies and academic performance.

Assessment policies

In this study, we compared two assessment policies that differed in three respects: (i) the stakes, (ii) the performance standard and (iii) the resit standard. The stakes are the consequences of failing one or more assessments. Higher stakes have repeatedly been associated with higher performance (Wolf and Smith 1995; Sundre and Kitsantas 2004; Cole and Osterlind 2008).

The performance standard is determined by the minimum grade required on the assessment of a course in order to obtain the course credits. Higher performance standards have been associated with higher academic performance in diverse course programmes such as accounting (Elikai and Schuhmann 2010), psychology (Johnson and Beck 1988), and medicine (Kickert et al. 2018).

The resit standard refers to the number of permitted resit opportunities. There are several reasons for limiting the number of resits that a student is allowed to take. Firstly, providing more resit opportunities has been associated with lower performance on the initial assessment, although more resit opportunities were not associated with differences in final grades (Grabe 1994). Secondly, a resit is an extra opportunity to pass an assessment by chance (Yocarini et al. 2018). Thirdly, resits may offer an unfair advantage to resit students, for instance due to additional practice opportunities (Pell, Boursicot, and Roberts 2009). However, promoting additional practice can also be viewed as a purpose of resits (Proud 2015). Fourthly, there are concerns about the negative effects resits may have on student learning, such as a reliance on second chances (Scott 2012), or lower investment of study time (Nijenkamp et al. 2016).

Factors associated with academic performance

In a meta-analysis, Richardson, Abraham, and Bond (2012) identified the motivational and self-regulatory factors most strongly associated with academic performance. We firstly examined the relationship between assessment policies and academic performance in terms of differences in these factors (e.g. students’ motivation may be boosted by higher performance standards). Additionally, we examined differences in the relations between motivational and self-regulatory factors and performance (e.g. the association between students’ motivation and performance may be moderated by the performance standards). We will first describe the four most important motivational factors that are associated with performance, and then turn to self-regulatory factors of academic performance.

Motivational factors

The four motivational factors that show the strongest association with academic performance are academic self-efficacy, performance self-efficacy, grade goals and task value (Richardson, Abraham, and Bond 2012). The first factor, academic self-efficacy, refers to students’ general perceptions of their academic capability (Richardson, Abraham, and Bond 2012). Differences in academic self-efficacy have been associated with differences in stakes and in performance standards, but there is empirical evidence that the relation between academic self-efficacy and performance is similar under different assessment policies (Kickert et al. 2018).

The second motivational factor, performance self-efficacy, also referred to as grade expectation (Maskey 2012), is the specific grade students expect to obtain (Vancouver and Kendall 2006). Hence, whereas academic self-efficacy is a relatively general measure of expectations concerning successful learning and performance, performance self-efficacy is more specific, focusing on the expected grade. Although performance self-efficacy is the strongest predictor of academic performance (Richardson, Abraham, and Bond 2012), to the best of our knowledge there is no research on performance self-efficacy under different assessment policies.

A similar gap in the literature exists concerning the third motivational factor, students’ grade goals under different assessment policies. A grade goal is the grade to which a student aspires (Locke and Bryan 1968). Good grades are a primary focus for most students (Gaultney and Cann 2001). As assessment policies determine which grades are sufficient to pass a course, these policies also partially determine what students consider to be a good grade. Therefore, students’ grade goals are likely to be related to the assessment policies.

The fourth motivational factor is task value, which refers to a student’s self-motivation for and enjoyment of academic learning and tasks (Richardson, Abraham, and Bond 2012). Previous research has shown higher task value under higher stakes and performance standards, and similar relations between task value and academic performance under different assessment policies (Kickert et al. 2018). A possible explanation for these results is that setting specific, difficult goals can be motivating, as long as these goals are deemed attainable (Locke and Latham 2002). However, there have been concerns about the impact of external motivators, such as assessment, on students’ intrinsic motivation (Deci, Koestner, and Ryan 1999; Harlen and Crick 2003). Therefore, a replication of earlier findings concerning task value under different assessment policies would be useful.

In terms of the magnitude of the associations (Cohen 1992), performance self-efficacy showed a large correlation with academic performance; the correlation with academic performance was medium-sized for grade goals and academic self-efficacy, and small for task value (Richardson, Abraham, and Bond 2012). Performance self-efficacy and grade goals were not included in previous investigations of the consequences of differences in assessment policies. These two motivational factors are important predictors of academic performance and are intuitively likely to be influenced by assessment policies. Therefore, next to academic self-efficacy and task value, performance self-efficacy and grade goals are important factors to take into account in order to understand the relationship between assessment policies and academic performance.

Self-regulatory factors

In addition to motivational factors, self-regulatory factors are important to consider when investigating academic performance (Richardson, Abraham, and Bond 2012). Self-regulation entails that students are “metacognitively, motivationally, and behaviorally active participants in their own learning process” (Zimmerman 1986, 308). A first self-regulatory factor, effort regulation, can be defined as persistence and effort when faced with academic challenges (Richardson, Abraham, and Bond 2012). Given that most students will at some point in their academic career encounter subjects that they deem less interesting (Uttl, White, and Morin 2013) or even anxiety-provoking (Onwuegbuzie and Wilson 2003), the ability to sustain attention and effort in the face of distractions or uninteresting tasks seems to be a key factor in achieving academic success (Komarraju and Nadler 2013).

A second important self-regulatory factor is time and study environment management, which refers to the capacity to plan study time and activities (Richardson, Abraham, and Bond 2012). Time and study environment management has been found to be associated with academic performance, independent of intellectual correlates of performance such as Scholastic Aptitude Test scores (Britton and Tesser 1991). Effort regulation and time and study environment management have been shown to be higher under higher stakes and performance standards, although the association of both factors with academic performance is similar under different assessment policies (Kickert et al. 2018).

A third self-regulatory factor is test anxiety, which is considered to be the affective component of self-regulated learning (Pintrich 2004). Test anxiety is the experience of negative emotions in test-taking situations, and is negatively related to intrinsic motivation, effort regulation and academic performance (Pekrun et al. 2011). Test anxiety is especially salient during statistics courses (Onwuegbuzie and Wilson 2003). As the current research took place during a statistics course, we included test anxiety in this study.

The correlation between effort regulation and academic performance is medium-sized, whereas time and study environment management and test anxiety show small-sized associations with performance (Richardson, Abraham, and Bond 2012). To the best of our knowledge, test anxiety was not taken into account in previous research into the consequences of altered assessment policies.

Research questions and hypotheses

The first research question (RQ1) was whether we could replicate the earlier reported finding that academic performance is superior under more difficult assessment policies (Cole and Osterlind 2008; Elikai and Schuhmann 2010; Kickert et al. 2018). In the current research, we hypothesized this difference in performance to be present as well (H1).

Furthermore, we extended prior research by investigating the relationship between assessment policies and academic performance (RQ2). We therefore compared the most important motivational and self-regulatory constructs (Richardson, Abraham, and Bond 2012) under two assessment policies that differed in terms of the stakes, the performance standard and the resit standard (RQ2a). On the basis of earlier research (Kickert et al. 2018), our hypothesis was that academic self-efficacy, task value, effort regulation, and time and study environment management are higher under more difficult assessment policies (H2a). The current study extended previous research by including performance self-efficacy, grade goals and test anxiety.

Finally, we investigated whether the associations of these motivational and self-regulatory factors with academic performance are different under different assessment policies (RQ2b). On the basis of earlier findings (Kickert et al. 2018), we hypothesized that the associations of motivation and self-regulation with academic performance are similar under different assessment policies (H2b).

Methods

Educational context

The current study was performed in the Bachelor’s (BA) programmes of Psychology as well as Education and Child Studies (ECS) at a large urban university in the Netherlands. The first two years of both three-year BA programmes consist of eight consecutive five-week courses; the third year consists of three (ECS) or four (psychology) five-week courses, a minor and a thesis and/or internship. At the end of each course, there is a written knowledge assessment that is graded on a 10-point scale (1 = poor, to 10 = perfect).

In February and March 2017, students from both course programmes took the same statistics course ‘Psychometrics, an introduction’. The course consisted of nine mandatory small-group meetings, six optional large-group lectures, and was concluded with a multiple-choice knowledge assessment. Since students from both course programmes followed the same course, they received identical instructional activities, course materials and assessments. However, for psychology students this statistics course is part of BA-2, whereas the same statistics course is a BA-3 course for ECS students. Since the BA-2 assessment policy differs from the BA-3 policy for both programmes, the same course is covered by different assessment policies for the two BA programmes.

Assessment policies

Psychology

In the psychology curriculum, students are allowed to enter BA-3 without passing BA-2 entirely, including the statistics course currently under study. Therefore, the stakes of this BA-2 assessment are relatively low. Nevertheless, psychology students do need to pass their entire BA programme in order to start the Master’s programme. The BA-2 psychology assessment policy is compensatory, in that students need to obtain a grade point average (GPA) of 6.0 for the eight assessments. Grades below 4.0 are considered invalid and cannot be compensated by higher grades. Thus, the performance standard is 4.0 for individual five-week courses, as long as the overall BA-2 GPA is at least 6.0. BA-2 psychology students are allowed a maximum of two resits for the eight BA-2 knowledge assessments. All resits take place in July, after the academic year has ended; there is a maximum of one resit per course, and the highest attained grade counts. As the number of resits is limited for psychology students, the resit standard is relatively strict.

Education and child studies

BA-3 ECS students are required to have passed BA-2, and need to pass the entire BA programme in order to progress to the Master’s programme. This means that if students fail at least one BA-3 course after the resit, this failure will result in one year of academic delay. Therefore, the stakes of the BA-3 ECS assessment are relatively high, compared to the stakes for the BA-2 psychology assessment. The BA-3 ECS curriculum has a conjunctive assessment policy, which entails that students need to pass each separate assessment with a minimum grade of 5.5. Thus, for ECS students the performance standard is 5.5 for individual courses. ECS students are allowed to retake all three third-year assessments once in July after the academic year has ended, and the highest attained grade counts. Therefore, the resit standard is relatively lenient. In sum, compared with the psychology assessment policy, the ECS policy has higher stakes and a higher performance standard, but a more lenient resit standard. Hence, two out of three characteristics of the assessment policy were more difficult in the ECS policy. Therefore, we considered the ECS policy to be more difficult than the psychology policy.
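To make the contrast between the two pass rules concrete, they can be sketched as follows. This is a deliberately minimal illustration: the function names and grade lists are hypothetical, and any rounding rules in the real policies are ignored.

```python
# Toy sketch of the two pass rules described above; names and data are
# hypothetical, and rounding rules of the real policies are ignored.

def passes_psychology_ba2(grades: list[float]) -> bool:
    """Compensatory policy: BA-2 GPA of at least 6.0 and no grade below 4.0."""
    return sum(grades) / len(grades) >= 6.0 and min(grades) >= 4.0

def passes_ecs_ba3(grades: list[float]) -> bool:
    """Conjunctive policy: every single assessment passed with at least 5.5."""
    return all(grade >= 5.5 for grade in grades)

# A 4.0 can be compensated under the psychology policy ...
print(passes_psychology_ba2([7.0, 6.5, 5.0, 4.0, 6.5, 7.0, 6.0, 6.0]))  # True
# ... whereas a single 5.4 fails the conjunctive ECS policy.
print(passes_ecs_ba3([5.4, 6.0, 7.0]))  # False
```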

Procedure

Students who followed the five-week course ‘Psychometrics, an introduction’, received a paper questionnaire at the start of the ninth and final small-group meeting of the course in March 2017, on the Tuesday of the fifth week. Completion of the questionnaire took 5–10 min and was completely voluntary. All students were informed about the study and active informed consent was given by all respondents. The course knowledge assessment took place on Thursday in week 5 and the resit took place approximately four months later, in July 2017.

Participants

Participants for this study were BA-2 psychology students and BA-3 ECS students. In order to compare academic performance between the psychology and ECS assessment policies (RQ1), we compared the grades of the entire cohorts (N = 219 for psychology; N = 85 for ECS). To investigate the relationship between assessment policies and academic performance (RQ2), we used a subsample of students who completed the questionnaire. The sample of psychology students consisted of 150 students, a 68% response rate (mean age = 20.86 years, SD = 2.31; 20% male). The sample for ECS consisted of 51 students, a 60% response rate (mean age = 21.65 years, SD = 1.72; 8% male, 2% gender missing). Both the initial and final grades of the psychology and ECS samples were representative of the respective cohorts.

Materials

Motivational factors

Participants completed two motivational subscales of a Dutch version of the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich et al. 1991; Blom and Severiens 2008): Task Value (e.g. ‘I am very interested in the content area of this course.’; alpha = 0.85) and Self-Efficacy for Learning and Performance (e.g. ‘I expect to do well in this class.’; alpha = 0.90). Items were scored on a 7-point Likert scale (1 = not at all true of me; 7 = very true of me). Subscale scores were computed by averaging the scores for the subscale items, provided no more than one item per subscale was missing. Some items were minimally adapted to the specific educational context, for instance by changing the word ‘class’ to ‘course’.
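As an illustration of this scoring rule, the following sketch computes a subscale score that is set to missing when more than one item is unanswered. The column names (tv_1 to tv_3) and the data are hypothetical; this is not the scoring script used in the study.

```python
import numpy as np
import pandas as pd

def subscale_score(df: pd.DataFrame, items: list[str], max_missing: int = 1) -> pd.Series:
    """Mean of the subscale items; NaN when more than max_missing items are missing."""
    answered = df[items]
    score = answered.mean(axis=1, skipna=True)
    return score.mask(answered.isna().sum(axis=1) > max_missing)

# Hypothetical responses: the second respondent skipped two of three items,
# so no subscale score is computed for that respondent.
data = pd.DataFrame({"tv_1": [6, 5], "tv_2": [7, np.nan], "tv_3": [6, np.nan]})
print(subscale_score(data, ["tv_1", "tv_2", "tv_3"]))
```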

In addition to the MSLQ subscales, we posed two grade goal items and a performance self-efficacy item. These three items were each scored on a multiple-choice scale ranging from 1 to 10, in 0.5-point increments. Grade goals were measured through two items based on Locke and Bryan’s (1968) original measurement of grade goals: (i) ‘Which grade are you aiming for on the course exam of this course?’, and (ii) ‘What is the lowest grade you would be satisfied with for the course exam of this course?’. We termed the first item the aimed grade goal, and the second item the minimum grade goal. Performance self-efficacy was measured by asking ‘Which grade do you expect to earn on the course exam of this course?’

Self-regulatory factors

Participants also completed three self-regulatory subscales of the Dutch version of the MSLQ: Effort Regulation (e.g. ‘I work hard to do well in this class even if I don't like what we are doing’; alpha = 0.73), Time and Study Environment Management (e.g. ‘I make good use of my study time for this course’; alpha = 0.78) and Test Anxiety (e.g. ‘When I take a test I think about the consequences of failing’; alpha = 0.83). The scoring, subscale computation and adaptation of items were as described for the motivational MSLQ subscales.

Other variables

At the end of the questionnaire, students reported their age (in years) and gender (male/female).

Grades

Student grades were obtained through the course coordinator, who is one of the authors of the current study (GKG). Since the psychology and ECS students were subjected to different resit standards, we used the grades after the initial assessment as well as after the resit. These grades were respectively termed initial grades and final grades (1 = poor, to 10 = perfect).

Statistical analyses

Data screening and validity checks

Before performing the analyses, we screened variables for missing values and normality, and checked relevant assumptions. One respondent answered only about half of the questionnaire and was removed from the sample. All MSLQ subscales, as well as course grades, were normally distributed. However, the two grade goal items were non-normally distributed, as many students indicated that their grade goals matched the performance standard.

Next, we performed two checks to strengthen the validity of our conclusions. These checks served to ensure that psychology and ECS students were comparable in terms of performance and motivation in other courses. Firstly, we performed an independent t-test on our respondents’ grades for a BA-1 statistics course. This BA-1 course was identical for psychology and ECS students, including an identical assessment policy. In this BA-1 assessment policy, all 60 BA-1 credits needed to be obtained after one year to prevent academic dismissal (i.e. high stakes); the performance standard and resit standard were identical to the BA-2 psychology assessment policy for both groups of students. Final grades for psychology (n = 140; M = 5.97; SD = 1.18) and ECS respondents (n = 50; M = 6.27; SD = 1.49) were not statistically significantly different, t(72.13) = −1.30, p = 0.199.

Secondly, we checked whether grade goals and performance self-efficacy were similar for psychology and ECS students in an earlier basic statistics course with the same assessment policy for both course programmes. This course was taken by the psychology students of the current study, but a later cohort of ECS students. The students of these two course programmes did not differ significantly on any of the items (p > 0.05).

Main analyses

In order to investigate possible differences in performance under different assessment policies (RQ1), we performed a t-test on the initial grades, and a t-test on the final grades. Additionally, we performed a chi-square test to assess whether the proportion of students taking the resit differed between the two policies.
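A minimal sketch of these comparisons with SciPy is shown below. The grade arrays and resit counts are placeholders generated for illustration, not the study data, and the original analyses were not necessarily run in Python.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
psy_grades = rng.normal(5.6, 1.4, 219)  # placeholder initial grades, psychology
ecs_grades = rng.normal(5.7, 1.4, 85)   # placeholder initial grades, ECS

# Independent-samples t-test on initial grades (analogous for final grades).
t, p = stats.ttest_ind(psy_grades, ecs_grades)
print(f"t({len(psy_grades) + len(ecs_grades) - 2}) = {t:.2f}, p = {p:.3f}")

# Chi-square test on resit uptake: rows = policy, columns = resit yes/no.
resit_counts = np.array([[11, 208],   # psychology (placeholder counts)
                         [31, 54]])   # ECS (placeholder counts)
chi2, p_chi, dof, _ = stats.chi2_contingency(resit_counts, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p_chi:.3f}")
```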

To compare psychology and ECS students’ motivation and self-regulation (RQ2a), we performed a MANOVA with the two different assessment policies as the independent variable, and the five motivational (i.e. aimed grade goal, minimum grade goal, performance self-efficacy, academic self-efficacy and task value) and three self-regulatory factors (i.e. effort regulation, time and study environment management, and test anxiety) as the dependent variables. We calculated Pillai’s Trace for the overall model and, in case of multivariate significance, performed univariate ANOVAs for the separate dependent variables. We also calculated Cohen’s d (0.20/0.50/0.80 = small/medium/large effect size; Cohen 1992) for the significant dependent variables.
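The sketch below shows how such a MANOVA with univariate follow-ups and Cohen's d could be run with statsmodels. The DataFrame df, the policy factor and the shortened variable names (aim_goal, min_goal, perf_se, acad_se, task_value, effort_reg, time_env, test_anx) are hypothetical stand-ins for the actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.multivariate.manova import MANOVA
from statsmodels.stats.anova import anova_lm

DVS = ["aim_goal", "min_goal", "perf_se", "acad_se", "task_value",
       "effort_reg", "time_env", "test_anx"]  # hypothetical column names

def cohens_d(a: pd.Series, b: pd.Series) -> float:
    """Cohen's d based on the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var() + (nb - 1) * b.var()) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

def rq2a_analysis(df: pd.DataFrame) -> None:
    # Omnibus MANOVA; mv_test() reports Pillai's trace among other statistics.
    print(MANOVA.from_formula(" + ".join(DVS) + " ~ policy", data=df).mv_test())
    # Univariate follow-up ANOVAs plus an effect size per dependent variable.
    ecs, psy = df[df["policy"] == "ECS"], df[df["policy"] == "psychology"]
    for dv in DVS:
        aov = anova_lm(smf.ols(f"{dv} ~ policy", data=df).fit())
        print(dv, "F =", round(aov.loc["policy", "F"], 2),
              "d =", round(cohens_d(ecs[dv], psy[dv]), 2))
```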

We also investigated whether the association of the motivational and self-regulatory factors with academic performance was different under different assessment policies (RQ2b). To this end, we performed a five-step hierarchical forced-entry multiple regression with initial grades as the dependent variable. We regressed on initial grades instead of final grades, to minimise the interval between the measurement of the independent variables and the dependent variable. We included the motivational variables in the model before the self-regulatory variables, because motivation precedes self-regulation (Covington 2000). In the first step we only included assessment policy. In the following steps we cumulatively added: (i) the five motivational variables, (ii) the interactions between the assessment policy and the five motivational variables, (iii) the three self-regulatory variables, and (iv) the interactions between the assessment policy and the three self-regulatory variables. For each of the five steps, we assessed whether the R²-change was significant. The interaction variables added in steps three and five are needed to answer RQ2b: significant interactions denote differences between assessment policies in the associations of the motivational and self-regulatory predictors with academic performance.
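Under the same hypothetical variable names as above, the five forced-entry steps and the R²-change tests could be sketched as follows; anova_lm on consecutive nested models gives the F-test for each R²-change.

```python
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

MOTIV = ["aim_goal", "min_goal", "perf_se", "acad_se", "task_value"]
SELFREG = ["effort_reg", "time_env", "test_anx"]
MOTIV_X = [f"policy:{m}" for m in MOTIV]      # policy x motivation interactions
SELFREG_X = [f"policy:{s}" for s in SELFREG]  # policy x self-regulation interactions

# Cumulative predictor sets for the five forced-entry steps.
STEPS = [
    ["policy"],                                          # step 1: policy only
    ["policy"] + MOTIV,                                  # step 2: + motivation
    ["policy"] + MOTIV + MOTIV_X,                        # step 3: + interactions
    ["policy"] + MOTIV + MOTIV_X + SELFREG,              # step 4: + self-regulation
    ["policy"] + MOTIV + MOTIV_X + SELFREG + SELFREG_X,  # step 5: + interactions
]

def hierarchical_regression(df):
    models = [smf.ols("initial_grade ~ " + " + ".join(t), data=df).fit() for t in STEPS]
    for i, (prev, cur) in enumerate(zip(models, models[1:]), start=2):
        # anova_lm on nested models yields the F-test for the R^2-change.
        print(f"step {i}: R2 change = {cur.rsquared - prev.rsquared:.3f}")
        print(anova_lm(prev, cur))
```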

Results

Descriptive statistics

Descriptive statistics, Cronbach’s alphas and correlations for the study variables under both assessment policies are shown in Table 1. All study variables except test anxiety are significantly correlated with either initial or final grades, in both psychology and ECS. Correlations between the study variables seem similar under both assessment policies. However, the correlations between the study variables and final grades are lower in ECS than in psychology. None of the psychology and ECS students reported a minimum grade goal below the respective performance standards (4.0 for psychology, 5.5 for ECS).

Table 1. Descriptives, Cronbach’s alphas (on the diagonal, for both assessment policies combined) and Pearson correlations for the study variables (psychology respondents [n = 150] above diagonal, education and child studies respondents [n = 51] below diagonal).

Differences in performance (RQ1)

Concerning possible differences in academic performance between the ECS assessment policy (i.e. the combination of higher stakes, a higher performance standard, and a more lenient resit standard) and the psychology assessment policy (RQ1), hypothesis 1 was partly confirmed: the initial grades of psychology (M = 5.63, SD = 1.40) and ECS students (M = 5.69, SD = 1.36) did not differ significantly, t(302) = −0.32, p = 0.751; however, the final grades were significantly higher for ECS students (M = 6.28, SD = 1.22) than for psychology students (M = 5.72, SD = 1.34), t(302) = −3.32, p = 0.001, d = 0.42. ECS students took significantly more resits (36%) than psychology students (5%), χ²(1) = 50.86, p < 0.001.

Differences in motivation and self-regulation (RQ2a)

To assess possible differences in motivation and self-regulation between both assessment policies (RQ2a), we performed a MANOVA with the five motivational (i.e. aimed grade goal, minimum grade goal, performance self-efficacy, academic self-efficacy and task value) and the three self-regulatory factors (i.e. effort regulation, time and study environment management, and test anxiety) as dependent variables. Although Box’s M, as well as Levene’s tests for minimum grade goals and performance self-efficacy, were significant, the largest variance was observed in the largest sample, i.e. psychology. Therefore, we continued our analyses because our hypothesis testing would be conservative (Stevens 2009). The multivariate test was significant for assessment policy, Pillai’s Trace = 0.194, F(8, 192) = 5.76, p < 0.001, indicating differences on the dependent variables between both assessment policies. Univariate analyses indicated that compared with psychology students, ECS students showed significantly higher minimum grade goals (F(1, 199) = 10.38, p = 0.001, d = 0.52), performance self-efficacy (F(1, 199) = 5.99, p = 0.015, d = 0.40), task value (F(1, 199) = 6.23, p = 0.013, d = 0.40), time and study environment management (F(1, 199) = 11.95, p = 0.001, d = 0.56) and test anxiety (F(1, 199) = 4.76, p = 0.030, d = 0.35); see Table 1 for means and standard deviations under both assessment policies. Aimed grade goal, academic self-efficacy and effort regulation did not differ significantly between the psychology and ECS students. Thus, hypothesis 2a was partly confirmed.

Differences in associations with initial performance (RQ2b)

As shown in Table 2, two of the five steps of the regression analysis showed a statistically significant R²-change: step two, in which the motivational variables were added, R²-change = 0.24, F(5, 194) = 11.99, p < 0.001; and step four, in which the self-regulatory variables were added, R²-change = 0.04, F(3, 187) = 3.30, p = 0.022. The steps in which the interaction variables were added did not show a statistically significant R²-change. This indicates that the association of motivational and self-regulatory factors with initial grades is similar under both assessment policies, which confirms hypothesis 2b. Thus, the assessment policy does not moderate the association of motivation or self-regulation with initial grades. The variables that explained a significant proportion of variance in initial grades were aimed grade goal, performance self-efficacy, academic self-efficacy and effort regulation.

Table 2. Results of the five-step hierarchical multiple regression analyses, with initial grades as dependent variable, and the assessment policy, motivational and self-regulatory variables, as well as the interactions of motivational and self-regulatory factors with assessment policy as independent variables (N = 201).

Conclusion and discussion

The first research question was whether we would observe higher academic performance under the ECS assessment policy (higher stakes, a higher performance standard and a more lenient resit standard) than under the psychology assessment policy. There were no significant performance differences on the initial assessment. However, in line with our hypothesis, final performance was indeed higher under the more difficult ECS assessment policy. Thus, our first hypothesis was partly confirmed.

In our attempt to clarify the relationship between assessment policies and academic performance (RQ2), we first investigated mean differences in motivation and self-regulation between both policies (RQ2a). We found significantly higher minimum grade goals, performance self-efficacy, task value, time and study environment management, and test anxiety in the ECS policy, but no significant differences in aimed grade goals, academic self-efficacy and effort regulation between the assessment policies. Thus, hypothesis 2a was partly confirmed. Concerning the relations of motivation and self-regulation with academic performance (RQ2b), in line with hypothesis 2b we found no significant differences in these relations between both assessment policies.

Academic performance

Although the higher final performance under the ECS assessment policy is in line with the literature (Cole and Osterlind 2008; Elikai and Schuhmann 2010; Kickert et al. 2018), the lack of a significant difference in initial performance is not. It seems that ECS students may have delayed their higher performance until the resit. Since the ECS students had a more lenient resit standard, these students had the guaranteed opportunity to retake the assessment, and thus had the option to postpone their effort until the resit. As ECS students took significantly more resits than psychology students, our results may confirm concerns about the consequences of resits, such as a reliance on second chances (Scott 2012), lower performance on the initial assessment (Grabe 1994), and lower investment of effort for the initial assessment (Nijenkamp et al. 2016). However, an alternative explanation is that ECS students were more incentivized to attempt to improve their grade in the resit, as these students performed under higher stakes and a higher performance standard than psychology students.

Motivational factors

In terms of motivation, we observed higher performance self-efficacy for ECS students compared with psychology students. A possible explanation for this finding is that specific, difficult goals are motivating, as long as these goals are deemed attainable (Locke and Latham 2002). However, there was no significant difference in academic self-efficacy between both assessment policies. Thus, although ECS students expected a higher grade, judgements of relatively general academic capability did not differ between both policies. These findings are therefore an indication that performance self-efficacy and academic self-efficacy are separate constructs. Compared with academic self-efficacy, performance self-efficacy seems more susceptible to differences in assessment policies.

Minimum grade goals were significantly higher under the ECS policy, but there were no differences concerning aimed grade goals. A possible explanation is that the performance standard only determines which grade students consider sufficient, but not which grade students consider good. This needs further exploration, as it has been previously asserted that students dichotomously view grades as either ‘good’ or ‘bad’ (Boatright-Horowitz and Arruda 2013).

Lastly, task value was significantly higher for ECS students. Although this is in line with previous findings (Kickert et al. 2018), it is surprising in the light of the assertion that extrinsic motivators, such as assessments, damage intrinsic motivation (Deci, Koestner, and Ryan 1999; Harlen and Crick 2003). However, we should note that the ECS students did not have more or different assessments, but only different standards. These standards were more difficult and thus perhaps more motivating.

Self-regulatory factors

In terms of self-regulation, we found significantly higher time and study environment management, as well as higher test anxiety, under the ECS assessment policy compared with the psychology policy. Thus, given the higher stakes and higher performance standard in the ECS policy, ECS students may be more inclined to properly manage their time and study environment. However, the higher demands also seem to result in more test anxiety. Lastly, contrary to previous findings (Kickert et al. 2018), there were no significant differences in effort regulation between both policies. Possible explanations for this discrepancy are that the earlier work involved medical students, or that the sample size of the current investigation was insufficient to detect an effect. In sum, more research is needed to draw firm conclusions about effort regulation under different assessment policies.

Differences in associations with performance

Our results showed similar relations of motivation and self-regulation with academic performance under both assessment policies, in line with previous findings (Kickert et al. 2018). Thus, the higher academic performance under the assessment policy with higher stakes, a higher performance standard and a more lenient resit standard seems to result from higher motivation and self-regulation, not from different associations of motivation or self-regulation with performance.

We should note that in our regression analysis the most important predictors of academic performance were performance self-efficacy, aimed grade goals, academic self-efficacy and effort regulation. Although performance self-efficacy, academic self-efficacy, and effort regulation were higher in the ECS policy, only performance self-efficacy was significantly so. Thus, the assessment policy may not affect all the most important predictors of performance. For instance, although the minimum grade goal was related to the assessment policy, the aimed grade goal was not.

Limitations

The current study had several limitations that need to be addressed. Firstly, no causal conclusions can be drawn, as all data were observational. Besides the different assessment policies, there were other differences between both groups, such as age and the attended course programme. However, to strengthen the validity of our conclusions, we performed two checks, as reported in the methods, that affirmed the groups’ comparability in terms of performance and motivation in other courses. Secondly, the sample size for ECS may not have been large enough to obtain sufficient power (Field 2013). Thus, research with larger samples is needed. Thirdly, given the current conjunction of differences in the stakes, performance standards and resit standards, it is not possible to draw conclusions on the separate effects of these three characteristics of assessment policies.

Implications and suggestions for further research

To the best of our knowledge, the current study was the first to include all the most important motivational and self-regulatory predictors of performance in an investigation of assessment policies. However, as the current study was performed in a statistics course in social sciences course programmes, future studies could investigate whether similar conclusions are drawn in other types of courses and/or course programmes. Additionally, it would be interesting to compare assessment policies that only differ in one respect, in order to draw conclusions about the separate elements of the policies.

In order to better explain changes in academic performance due to changes in assessment policies, other measures of student learning could be investigated as well. For instance, it would be interesting to see how the quantity and quality of students’ use of time are affected. Moreover, students’ well-being and stress levels could be taken into account, in order to monitor possible negative impacts of assessment policies. Furthermore, although motivation may be higher in the short-term, this may not be the case in the long-term. Therefore, enduring effects of assessment policies on motivation need to be monitored as well.

Given that performance self-efficacy and aimed grade goal are both one-item measures, it is promising that these two constructs explain significant variance in academic performance. Therefore, it could be worthwhile to further investigate these two motivational measures, for instance by researching what types of students exist in terms of these measures.

Although changes to stakes, performance standards and resit standards seem to be rare, such changes require relatively little effort. Given our findings, they also seem highly effective in terms of gains in motivation, self-regulation and academic performance. However, aimed grade goals, academic self-efficacy and effort regulation did not differ significantly between both assessment policies. Hence, more research is needed on how these predictors of performance can be improved through educational interventions as well.

Conclusions

Students’ academic performance, motivation and self-regulation are sensitive to characteristics of the assessment policy. This makes sense, as all students wish to obtain a diploma, and thus need to perform to the standards of the assessment policy. Therefore, educators should be aware of the influence that their standards and expectations have on students’ academic performance: higher bars may lead to higher jumping.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Notes on contributors

Rob Kickert

Rob Kickert is a PhD student in the Department of Psychology, Education & Child Studies at Erasmus University Rotterdam, The Netherlands. His research interests include motivation, self-regulation, academic performance, and the possible consequences of different assessment policies in higher education.

Marieke Meeuwisse

Marieke Meeuwisse, PhD, is assistant professor of Education at the Erasmus University Rotterdam. Her main research interest is (ethnic) diversity in higher education, from the perspective of the learning environment, interaction, sense of belonging, motivation and academic success.

Karen M. Stegers-Jager

Karen Stegers-Jager, PhD, is assistant professor at the Institute of Medical Education Research Rotterdam, Erasmus MC, University Medical Centre Rotterdam. Her research interests include (ethnic and social) diversity, assessment, and selection and admission of medical students and residents.

Gabriela V. Koppenol-Gonzalez

Gabriela Koppenol-Gonzalez, PhD, was an assistant professor of Methodology and Statistics at the Department of Psychology, Education & Child Studies at Erasmus University Rotterdam at the time of this research. Currently, she works as a senior researcher in Methodology and Statistics at the department Research & Development of War Child Holland. Her main research interest is in education, psychometrics, and the application of latent class models.

Lidia R. Arends

Lidia R. Arends, PhD, is Professor of Methodology and Statistics at the Department of Psychology, Education & Child Studies, Erasmus University Rotterdam, The Netherlands. She is also a biostatistician at the Department of Biostatistics, Erasmus University Medical Center, Rotterdam, The Netherlands. Her areas of interest include research methods, (logistic) regression analysis, multilevel analysis, systematic reviews, and meta-analysis.

Peter Prinzie

Peter Prinzie, PhD, is Professor of Pedagogical Sciences at the Department of Psychology, Education & Child Studies, Erasmus University Rotterdam, The Netherlands. His research spans the field of developmental psychopathology, personality psychology, and developmental psychology.

References

  • Bandura, A. 1982. “Self-Efficacy Mechanism in Human Agency.” American Psychologist 37 (2):122–147. doi:10.1037/0003-066X.37.2.122.
  • Blom, S., and S. Severiens. 2008. “Engagement in Self-Regulated Deep Learning of Successful Immigrant and Non-Immigrant Students in Inner City Schools.” European Journal of Psychology of Education 23 (1):41–58. doi:10.1007/BF03173139.
  • Boatright-Horowitz, S. L., and C. Arruda. 2013. “College Students’ Categorical Perceptions of Grades: It’s Simply ‘Good’ vs. ‘Bad’.” Assessment & Evaluation in Higher Education 38 (3):253–259. doi:10.1080/02602938.2011.618877.
  • Britton, B. K., and A. Tesser. 1991. “Effects of Time-Management Practices on College Grades.” Journal of Educational Psychology 83 (3):405–410. doi:10.1037/0022-0663.83.3.405.
  • Cohen, J. 1992. “A Power Primer.” Psychological Bulletin 112 (1):155–159. doi:10.1037/0033-2909.112.1.155.
  • Cole, J. S., and S. J. Osterlind. 2008. “Investigating Differences between Low- and High-Stakes Test Performance on a General Education Exam.” Journal of General Education 57 (2):119–130.
  • Covington, M. V. 2000. “Goal Theory, Motivation, and School Achievement: An Integrative Review.” Annual Review of Psychology 51 (1):171–200. doi:10.1146/annurev.psych.51.1.171.
  • Deci, E. L., R. Koestner, and R. M. Ryan. 1999. “A Meta-Analytic Review of Experiments Examining the Effects of Extrinsic Rewards on Intrinsic Motivation.” Psychological Bulletin 125 (6):627–668. doi:10.1037/0033-2909.125.6.627.
  • Elikai, F., and P. W. Schuhmann. 2010. “An Examination of the Impact of Grading Policies on Students’ Achievement.” Issues in Accounting Education 25 (4):677–693. doi:10.2308/iace.2010.25.4.677.
  • Field, A. 2013. Discovering Statistics Using IBM SPSS Statistics. London: SAGE.
  • Gaultney, J. F., and A. Cann. 2001. “Grade Expectations.” Teaching of Psychology 28 (2):84–87. doi:10.1207/S15328023TOP2802_01.
  • Grabe, M. 1994. “Motivational Deficiencies When Multiple Examinations Are Allowed.” Contemporary Educational Psychology 19 (1):45–52. doi:10.1006/ceps.1994.1005.
  • Harlen, W., and R. D. Crick. 2003. “Testing and Motivation for Learning.” Assessment in Education: Principles, Policy & Practice 10 (2):169–207. doi:10.1080/0969594032000121270.
  • Johnson, B. G., and H. P. Beck. 1988. “Strict and Lenient Grading Scales: How Do They Affect the Performance of College Students with High and Low SAT Scores?” Teaching of Psychology 15 (3):127–131. doi:10.1207/s15328023top1503_4.
  • Kickert, R., K. M. Stegers-Jager, M. Meeuwisse, P. Prinzie, and L. R. Arends. 2018. “The Role of the Assessment Policy in the Relation between Learning and Performance.” Medical Education 52 (3):324–335. doi:10.1111/medu.13487.
  • Komarraju, M., and D. Nadler. 2013. “Self-Efficacy and Academic Achievement: Why Do Implicit Beliefs, Goals, and Effort Regulation Matter?” Learning and Individual Differences 25:67–72. doi:10.1016/j.lindif.2013.01.005.
  • Locke, E. A., and J. F. Bryan. 1968. “Grade Goals as Determinants of Academic Achievement.” Journal of General Psychology 79 (2):217–228. doi:10.1080/00221309.1968.9710469.
  • Locke, E. A., and G. P. Latham. 2002. “Building a Practically Useful Theory of Goal Setting and Task Motivation: A 35-Year Odyssey.” American Psychologist 57 (9):705–717. doi:10.1037/0003-066X.57.9.705.
  • Maskey, V. 2012. “Grade Expectation and Achievement: Determinants and Influential Relationships in Business Courses.” American Journal of Educational Studies 5 (1):71–88.
  • Nijenkamp, R., M. R. Nieuwenstein, R. de Jong, and M. M. Lorist. 2016. “Do Resit Exams Promote Lower Investments of Study Time? Theory and Data from a Laboratory Study.” PLOS ONE 11 (10):e0161708. doi:10.1371/journal.pone.0161708.
  • Onwuegbuzie, A. J., and V. A. Wilson. 2003. “Statistics Anxiety: Nature, Etiology, Antecedents, Effects, and Treatments – a Comprehensive Review of the Literature.” Teaching in Higher Education 8 (2):195–209. doi:10.1080/1356251032000052447.
  • Pekrun, R., T. Goetz, A. C. Frenzel, P. Barchfeld, and R. P. Perry. 2011. “Measuring Emotions in Students’ Learning and Performance: The Achievement Emotions Questionnaire (AEQ).” Contemporary Educational Psychology 36 (1):36–48. doi:10.1016/j.cedpsych.2010.10.002.
  • Pell, G., K. Boursicot, and T. Roberts. 2009. “The Trouble with Resits ….” Assessment & Evaluation in Higher Education 34 (2):243–251. doi:10.1080/02602930801955994.
  • Pintrich, P. R. 2004. “A Conceptual Framework for Assessing Motivation and Self-Regulated Learning in College Students.” Educational Psychology Review 16 (4):385–407. doi:10.1007/s10648-004-0006-x.
  • Pintrich, P. R., D. A. F. Smith, T. Garcia, and W. J. Mckeachie. 1991. “A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ).” http://eric.ed.gov/?id=ED338122.
  • Poropat, A. E. 2009. “A Meta-Analysis of the Five-Factor Model of Personality and Academic Performance.” Psychological Bulletin 135 (2):322–338. doi:10.1037/a0014996.
  • Proud, S. 2015. “Resits in Higher Education: Merely a Bar to Jump over, or Do They Give a Pedagogical ‘Leg Up’?” Assessment & Evaluation in Higher Education 40 (5):681–697. doi:10.1080/02602938.2014.947241.
  • Richardson, M., C. Abraham, and R. Bond. 2012. “Psychological Correlates of University Students’ Academic Performance: A Systematic Review and Meta-Analysis.” Psychological Bulletin 138 (2):353–387. doi:10.1037/a0026838.
  • Sawyer, R. 2013. “Beyond Correlations: Usefulness of High School GPA and Test Scores in Making College Admissions Decisions.” Applied Measurement in Education 26 (2):89–112. doi:10.1080/08957347.2013.765433.
  • Schneider, M., and F. Preckel. 2017. “Variables Associated with Achievement in Higher Education: A Systematic Review of Meta-Analyses.” Psychological Bulletin 143 (6):565–600. doi:10.1037/bul0000098.
  • Scott, E. P. 2012. “Short-Term Gain at Long-Term Cost? How Resit Policy Can Affect Student Learning.” Assessment in Education: Principles, Policy & Practice 19 (4):431–449. doi:10.1080/0969594X.2012.714741.
  • Sirin, S. R. 2005. “Socioeconomic Status and Academic Achievement: A Meta-Analytic Review of Research.” Review of Educational Research 75 (3):417–453. doi:10.3102/00346543075003417.
  • Stevens, J. P. 2009. Applied Multivariate Statistics for the Social Sciences. New York: Taylor & Francis.
  • Sundre, D. L., and A. Kitsantas. 2004. “An Exploration of the Psychology of the Examinee: Can Examinee Self-Regulation and Test-Taking Motivation Predict Consequential and Non-Consequential Test Performance?” Contemporary Educational Psychology 29 (1):6–26. doi:10.1016/S0361-476X(02)00063-2.
  • Uttl, B., C. A. White, and A. Morin. 2013. “The Numbers Tell It All: Students Don’t Like Numbers!” PLOS ONE 8 (12):e83443. doi:10.1371/journal.pone.0083443.
  • Vancouver, J. B., and L. N. Kendall. 2006. “When Self-Efficacy Negatively Relates to Motivation and Performance in a Learning Context.” Journal of Applied Psychology 91 (5):1146–1153. doi:10.1037/0021-9010.91.5.1146.
  • Wolf, L. F., and J. K. Smith. 1995. “The Consequence of Consequence: Motivation, Anxiety, and Test Performance.” Applied Measurement in Education 8 (3):227–242. doi:10.1207/s15324818ame0803_3.
  • Yocarini, I. E., S. Bouwmeester, G. Smeets, and L. R. Arends. 2018. “Systematic Comparison of Decision Accuracy of Complex Compensatory Decision Rules Combining Multiple Tests in a Higher Education Context.” Educational Measurement: Issues and Practice 37 (3):24–39.
  • Zimmerman, B. J. 1986. “Becoming a Self-Regulated Learner: Which Are the Key Subprocesses?” Contemporary Educational Psychology 11 (4):307–313. doi:10.1016/0361-476X(86)90027-5.