Abstract
The impact of two types of written feedback (process-oriented, grade-oriented) on changes in mathematics achievement, interest and self-evaluation was compared – with a particular focus on the mediating role of feedback’s perceived usefulness. Participants, 146 ninth graders (aged 14 to 17 years), were assigned to either a process-oriented or a grade-oriented experimental feedback condition. They worked on mathematics tests, received feedback on their test results and completed surveys measuring feedback’s perceived usefulness, interest and self-evaluation. Results of path analysis showed that process-oriented feedback was perceived as more useful than grade-oriented feedback and that feedback’s perceived usefulness had a positive effect on changes in achievement and interest. Consistent with this, process-oriented feedback had a greater positive indirect effect than grade-oriented feedback on changes in mathematics achievement and interest via its perceived usefulness. There were no such effects on changes in self-evaluation. Potential explanations for these findings, educational implications and possible directions for future research are discussed.
Acknowledgement
The preparation of this paper was supported by grants from the German Research Foundation (DFG, KL1057/10-2 and BL2751/17-1) in the Priority Program ‘Models of Competencies for Assessment of Individual Learning Outcomes and the Evaluation of Educational Processes’ (SPP 1293).
Notes
1. The content of a feedback message consists of a verification component and an elaboration component (Kulhavy & Stock, Citation1989). The verification component refers to the learning outcome and indicates the performance level achieved. The elaboration component consists of additional information, for example, relating to the task, error or solution, and can vary with regard to length and complexity (see also Shute, Citation2008). Depending on the theoretical approach (cognitive, motivational or metacognitive), different elaborative aspects of feedback are considered important. We discuss the role of elaborated feedback information for each theoretical approach specifically (see following sections).
2. The studies of Dweck and colleagues (Cimpian et al., Citation2007; Kamins & Dweck, Citation1999; Mueller & Dweck, Citation1998) as well as the study of Corpus and Lepper (Citation2007) refer to praise (person praise and process praise). Praise can be defined as ‘positive evaluations made by a person of another’s products, performances, or attributes’ (Kanouse, Gumpert, & Canavan-Gumpert, Citation1981, p. 98); thus, in comparison with feedback, praise is a less neutral form of recognition (Henderlong & Lepper, Citation2002).
3. Corresponding to this, Henderlong and Lepper (Citation2002) describe ‘ability vs. effort praise as merely one subset of the broader category of person (i.e., trait-oriented) vs. process (i.e., strategy- or effort-oriented) praise’ (p. 781).
4. Strijbos et al. (Citation2010) provided all students, including the control group, with evaluation criteria, which may have led to increased self-evaluation and, as a result, improved performance in all conditions, whether or not the feedback was perceived as useful.
5. The Co2CA-project is conducted collaboratively by researchers at the German Institute for International Educational Research, the University of Kassel and the Leuphana University of Lüneburg. The principal investigators of the project are Prof. Eckhard Klieme, Dr. Katrin Rakoczy, Prof. Werner Blum and Prof. Dominik Leiss.
6. Items from the German national standards cover a broad range of mathematical content domains from grade 5 to grade 9.
7. To avoid experimenter effects, the 24 experimenters were rotated across conditions.
8. Our path model had 17 free parameters. As our sample size was N = 146, the requirement for sufficient sample sizes (i.e., a cases/parameter ratio higher than 5:1, Kline, Citation2005) was fulfilled.
9. As prior studies demonstrated that achievement and interest (e.g., Schiefele, Citation1998) and achievement and calibration (e.g., Lucangeli, Tressoldi, & Cendron, Citation1998; Nietfeld, Cao, & Osborne, Citation2006; Winne & Jamieson-Noel, Citation2002) relate significantly to each other, correlations between these variables were included in the model.
10. The indirect effect equals the product of its constituent direct paths. In the present study the significance of the indirect effect was tested directly (e.g., MacKinnon, Citation2008). In a review and Monte Carlo study, MacKinnon, Lockwood, Hoffman, West and Sheets (Citation2002) systematically compared 14 different methods of testing intervening variable effects and found that such a direct test of the indirect effect has several advantages over the better-known stepwise regression procedure suggested by Kenny and colleagues (e.g., Baron & Kenny, Citation1986). One advantage is that it has greater statistical power whilst maintaining reasonable control over the Type I error rate (MacKinnon et al., Citation2002). In contrast to the stepwise approach, the direct test of the indirect effect does not presuppose a total effect (e.g., MacKinnon, Citation2008); indeed, the requirement of a significant total effect in the stepwise regression procedure has been shown to produce the highest rate of Type II errors (MacKinnon et al., Citation2002).
11. The simulation study of MacKinnon et al. (Citation2004) showed that bias-corrected bootstrapping is the method of choice for estimating confidence limits for indirect effects. Bias-corrected bootstrapping yielded Type I error rates close to the nominal level along with greater power than other resampling methods.
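To illustrate the procedure described in notes 10 and 11 — estimating the indirect effect as the product of the a path (predictor → mediator) and the b path (mediator → outcome, controlling for the predictor) and obtaining bias-corrected bootstrap confidence limits for it — the following is a minimal sketch in Python with NumPy. It is an illustration only, not the authors' actual analysis; the function names `indirect_effect` and `bc_bootstrap_ci` are hypothetical.

```python
import numpy as np
from statistics import NormalDist


def indirect_effect(x, m, y):
    """Estimate a*b: a = slope of m on x; b = slope of y on m, controlling for x."""
    a = np.polyfit(x, m, 1)[0]                       # a path
    design = np.column_stack([np.ones_like(x), m, x])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return a * coef[1]                               # coef[1] is the b path


def bc_bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=1):
    """Bias-corrected bootstrap CI for the indirect effect (MacKinnon et al., 2004)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ab_hat = indirect_effect(x, m, y)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample cases with replacement
        boots[i] = indirect_effect(x[idx], m[idx], y[idx])
    nd = NormalDist()
    # Bias-correction factor z0: based on the share of bootstrap
    # estimates below the full-sample estimate (clipped to avoid +/-inf).
    p = float(np.clip(np.mean(boots < ab_hat), 1 / n_boot, 1 - 1 / n_boot))
    z0 = nd.inv_cdf(p)
    lo_p = nd.cdf(2 * z0 + nd.inv_cdf(alpha / 2))    # adjusted percentile bounds
    hi_p = nd.cdf(2 * z0 + nd.inv_cdf(1 - alpha / 2))
    lo, hi = np.quantile(boots, [lo_p, hi_p])
    return ab_hat, (lo, hi)
```

The indirect effect is judged significant at the chosen alpha level when the resulting interval excludes zero; unlike the stepwise approach, no significant total effect is required first.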