Research Article

Does perception mean learning? Insights from an online peer feedback setting

Omid Noroozi, Maryam Alqassab, Nafiseh Taghizadeh Kerman, Seyyed Kazem Banihashem & Ernesto Panadero

Abstract

Many peer feedback studies have examined students’ perceptions. Yet, little is known about how perceived and actual learning are related, and how they are influenced by individual and contextual characteristics including success level, educational level, gender, and academic major. This exploratory study addressed this research gap. Students from a Dutch university (N = 284) attended a three-week online module during which they engaged in peer feedback on argumentative writing. At the end of the module, we measured students’ perceptions of learning. The results showed no significant effect of success level on perceived learning. Master students reported higher perceived learning than bachelor students regardless of success level. We did not find gender or academic major effects on perceived learning across different success levels. The findings suggest that relying solely on students’ perceptions of their learning experiences may not accurately reflect their actual learning. Students may need support to help them align their perceived learning with their actual learning achievements.

Introduction

Learning with and from peers is nowadays an essential instructional practice in many higher education contexts. To be prepared for life, students not only need to develop the skills to acquire knowledge, but they should also be able to monitor and evaluate their learning and that of others (Boud and Falchikov 2006), and to exchange feedback to improve performance. However, these skills can be difficult to acquire if students do not engage in learning activities in which they meaningfully practice them. One way of creating this experience for students is by involving them in peer feedback. We define peer feedback as “a learning activity where individuals or small group constellations exchange, react to and/or act upon information about their performance on a particular learning task with the purpose to accomplish implicit or explicit shared and individual learning goals” (Alqassab, Strijbos, and Ufer 2018, p. 12). Due to its significant benefits for academic performance (e.g. Wang et al. 2017), motivation (e.g. Chien, Hwang, and Jong 2020), anxiety reduction (e.g. Chien, Hwang, and Jong 2020), assessment skills (e.g. Alqassab, Strijbos, and Ufer 2018; Taghizadeh Kerman et al. 2022), and self-regulated learning (e.g. Bellhäuser, Liborius, and Schmitz 2022), peer feedback has been used to support students’ learning in various subject domains.

One area of research that has attracted particular attention in recent years is using online peer feedback to teach argumentative writing (Huisman et al. 2018; Latifi, Noroozi, and Talaee 2021, 2023; Noroozi et al. 2023). Online learning environments designed to teach argumentative essay writing through engaging university students in peer feedback have been shown to successfully improve students’ performance across different academic majors and levels of higher education (Noroozi et al. 2023). Students’ perceptions are commonly used to evaluate the impact of traditional and online peer feedback (see Alqassab et al. 2023). Several theoretical frameworks of feedback suggest that outcome variables such as perceived learning and actual performance are likely to be shaped by individual and contextual characteristics including success level, educational level, gender, and academic major (Narciss 2008; Panadero and Lipnevich 2022; Taghizadeh Kerman et al. 2023a). However, little is known about the relationship between perceived learning and actual performance in online peer feedback activities, and how this relationship might be influenced by individual and contextual characteristics. In this study, we attempt to fill this research gap by exploring these relationships.

Students’ perceptions in peer feedback research

Perceived learning and perceptions of the feedback (e.g. usefulness, fairness, acceptance) are frequently examined in peer feedback studies, either as main outcomes (e.g. Beaver and Beaver 2011) or as complementary outcome measures (e.g. Misiejuk, Wasson, and Egelandsdal 2021). In their scoping review of studies on students’ perceptions of feedback, van der Kleij and Lipnevich (2021) found that these perceptions are hardly ever examined together with performance outcomes. This also appears to be the case in peer feedback research, as only a limited number of studies investigated the relationship between students’ perceptions and learning outcomes. For instance, Kaufman and Schunn (2011) reported that students’ perceptions of fairness were not related to the revisions of their work. Brown, Peterson, and Yao (2016) found perceived helpfulness to be positively related to self-regulation but negatively related to grade point average. In a previous study, Alqassab, Strijbos, and Ufer (2019) showed that perceptions of peer feedback accuracy were positively related to the accuracy of peer feedback but not to task comprehension. Yet, that study was conducted in an experimental setting and students only took on the peer feedback provider role. More recently, Taghizadeh Kerman, Banihashem, and Noroozi (2023b) found no significant relationships between students’ perceptions of peer feedback and peer feedback content and uptake. So far, no study has investigated the relationship between perceived learning and actual performance when peer feedback is used as an instructional activity, especially in the context of argumentative essay writing. This is particularly important for interpreting findings from studies that rely solely on perceived learning to evaluate the impact of online peer feedback activities.

Actual and perceived learning in online peer feedback

Online peer feedback is not new to the peer feedback literature. According to a systematic literature review, the largest cluster of studies is represented by online peer feedback (Alqassab et al. 2023). This finding is unsurprising given the flexibility and adaptability that technology affords in designing and instructionally supporting peer feedback activities (Latifi, Noroozi, and Talaee 2021, 2023). In studies on online learning, including those with peer feedback elements, researchers use two types of measures to assess learning: performance measures and self-reported perceived learning measures. Research on online learning environments often relies on perceived learning measures as an evaluation criterion (Yunusa and Umar 2021). Perceived learning is defined as “a set of beliefs and feelings one has regarding the learning that has occurred. As such, perceived learning is a retrospective evaluation of the learning experience” (Caspi and Blau 2008, p. 327). Unlike performance-based learning, which is evaluated by others, perceived learning is a subjective self-evaluation by the learners themselves (Barzilai and Blau 2014), and can therefore be susceptible to inaccuracies. Yet, this measure is associated with several variables related to the learning environment, including teaching effectiveness (Ryan and Harrison 1995), satisfaction with online learning (Baturay 2011), and intrinsic motivation (Ferreira, Cardoso, and Abrantes 2011). Students’ perceptions are important because they can shape how students approach peer feedback provision and processing (Aben et al. 2019). This is not limited to students’ perceptions of the feedback itself but also includes whether they perceive they have learned something after engaging in a specific learning activity. The latter is assumed to be related to students’ engagement (Caspi and Blau 2008).

Several scholars have stressed that perceived and actual learning can be unrelated (e.g. Caspi and Blau 2008), with repeated findings from technology-enhanced learning research indicating that students overestimate their learning (see Caspi and Blau 2008; Barzilai and Blau 2014; Porat, Blau, and Barak 2018). In a study of online game-based learning, no significant relationship between perceived learning and performance was found (Barzilai and Blau 2014). Porat, Blau, and Barak (2018) found almost no significant associations between students’ perceived digital competencies and their actual performance. Prior studies even suggested that the misalignment between self-judgements and performance is greater when the learning material is presented on screen (e.g. Ackerman and Lauterman 2012). Students’ feedback perceptions were also found to be influenced differently by digital feedback compared to non-digital feedback (Ryan, Henderson, and Phillips 2019). Based on these findings, it is quite evident that the relationship between perceived and actual learning in online peer feedback requires investigation.

Online peer feedback and argumentative essay writing

The positive impact of online peer feedback has been documented by studies in several subject domains such as language learning (Alemdag and Yildirim 2022), computer education (Wang et al. 2017), and argumentative essay writing (Latifi, Noroozi, and Talaee 2021, 2023; Noroozi et al. 2023; Awada and Diab 2023). In studies on argumentative essay writing, Latifi, Noroozi, and Talaee (2021, 2023) showed that scaffolded peer feedback activities significantly improved students’ argumentative writing performance. Awada and Diab (2023) compared the impact of online versus face-to-face peer feedback on argumentative writing and found that students in the online peer feedback condition outperformed those in the face-to-face condition. Using a pre-test and post-test design, Noroozi et al. (2023) showed that online peer feedback improved students’ argumentative writing across five different university courses.

Students’ learning experiences in studies of online peer feedback on argumentative writing have been captured by various outcome measures, including the quality of argumentative essay writing (e.g. Banihashem et al. 2023; Noroozi et al. 2023), the content of the provided peer feedback (e.g. Noroozi et al. 2022; Taghizadeh Kerman et al. 2022), gains in domain-specific knowledge (e.g. Latifi, Noroozi, and Talaee 2021; Valero Haro et al. 2022), and students’ perceptions of the learning experience in terms of learning satisfaction and perceived learning (Banihashem et al. 2023; Noroozi et al. 2023). To better understand the effects of feedback on learning, students’ perceptions should be taken into account because of their possible moderating/mediating effect (Strijbos, Pat-El, and Narciss 2021), which can influence how students make sense of feedback, take up feedback, and provide feedback (Narciss 2008; Aben et al. 2019). Importantly, studies that examined the relationship between these two constructs did not take into account the role of individual and contextual characteristics (e.g. success level, education level, gender, and academic major), even though these factors are likely to influence the relationship between perceived learning and actual performance.

Role of individual and contextual characteristics in online peer feedback

Several recent feedback models (e.g. Aben et al. 2019; Panadero and Lipnevich 2022), but also prior feedback models (e.g. Narciss 2008), stress the central role of students’ individual characteristics and contextual characteristics in shaping feedback processes and outcomes. Here we follow the model by Panadero and Lipnevich (2022). Based on a systematic review of 14 existing models, this model presents five elements to be considered in the promotion of educational feedback: feedback message, implementation, context, agents, and individual characteristics. The last of these is the central element, modulating the interaction of the other elements to generate a learning impact on students. In this study, we focus on the elements of individual characteristics and context, exploring how some of these factors influence the effects of online peer feedback on perceived and actual learning.

In research on online peer feedback, students’ characteristics were proposed as an element of a framework guiding the effective implementation of online peer feedback (see Taghizadeh Kerman et al. 2023a). When it comes to the design of peer feedback activities, these are typically factors that researchers and educators do not manipulate but need to take into account when evaluating the impact of their design, i.e. moderators/mediators (Alqassab et al. 2023). In a systematic review of online peer feedback studies in higher education, Taghizadeh Kerman et al. (2023a) identified three types of students’ characteristics that are likely to influence peer feedback, namely demographic characteristics (e.g. gender, language), academic background (e.g. ability, success level, educational level), and personality and psychological features (e.g. self-efficacy, perceptions, epistemic beliefs). While various individual variables, including success level, education level, gender, ability, language, and epistemic beliefs, can play a role in online peer feedback activities, this study primarily focused on success level, education level, and gender, as these were deemed to have a more direct and significant role in the perception and measurement of learning outcomes (Noroozi et al. 2022; Taghizadeh Kerman et al. 2022; Banihashem et al. 2023). Other variables such as ability, language, and epistemic beliefs are important and could potentially influence the learning process, but they were not included in the current study due to limitations in scope and data availability.

Also, recent research indicates that in online peer feedback, students’ domain knowledge and success level significantly impact the feedback they provide and receive, as well as their learning outcomes (Day, Saab, and Admiraal 2022; Taghizadeh Kerman et al. 2022; Valero Haro et al. 2023). Day, Saab, and Admiraal (2022) found that students with medium and low abilities improved their presentation skills more than high-ability peers, potentially due to a ceiling effect. Although related to domain-knowledge level, students’ level of success in a learning activity can serve as a better indicator of learning gains from the peer feedback activity. Prior studies of online peer feedback on argumentative essay writing showed that successful students tend to produce higher-quality arguments (Valero Haro et al. 2022) and receive feedback that varies according to their success level (Taghizadeh Kerman et al. 2022). In contrast, educational level (bachelor vs. master) showed no significant effect on performance improvements, although master students reported greater learning satisfaction (Noroozi et al. 2023). This suggests that perceived learning, which involves self-evaluation of knowledge gain, may also be influenced by success and educational levels. Regarding peer feedback in academic writing, studies have not established significant relationships between feedback perceptions and performance, or examined the influence of success or educational level on these perceptions (Strijbos, Narciss, and Dünnebier 2010; Huisman et al. 2018). These findings underscore the necessity of exploring how students’ success and educational levels affect both perceived and actual learning in online peer feedback contexts. Since prior studies did not find significant relationships between perceived and actual learning (Caspi and Blau 2008; Barzilai and Blau 2014; Porat, Blau, and Barak 2018), we studied this relationship in light of success level, assuming that those who learn more will be more accurate at judging their perceived learning.

Additionally, gender is increasingly recognized as a crucial factor in online learning environments, including peer feedback (Taghizadeh Kerman et al. 2023a). Research on online peer feedback has shown varying outcomes based on gender, with female students excelling in certain aspects of written argumentation (Noroozi et al. 2022; Banihashem et al. 2023) and male students showing advantages in others (Noroozi et al. 2020). Moreover, the types of peer feedback provided by males and females seem to differ (Noroozi et al. 2020; Ocampo, Panadero, and Díez 2023). Studies evaluating online courses found that female students often report higher learning perceptions compared to males (e.g. Rovai and Baker 2005). These findings suggest that perceived learning in online peer feedback could be gender-dependent, warranting further investigation.

Alqassab et al. (2023) identified students’ academic major as a potential moderator/mediator of peer feedback outcomes; however, very few studies have investigated this factor so far, with more studies focusing on its impact on bias in student ratings (e.g. Aryadoust 2016). Planas-Lladó et al. (2018) observed that students enrolled in technology-related majors were less positive towards peer feedback compared to non-technology majors. This finding, coupled with the limited research on academic majors in peer feedback, underscores the need to explore its impact on perceived versus actual learning.

Aim and research questions

To address the research gaps described above, this study aimed to explore the impact of students’ success level, educational level, gender, and academic major on their perceived learning in an online peer feedback module on argumentative essay writing. The study sought to answer the following research questions:

  • RQ1: To what extent does students’ perceived learning differ between successful and less successful students in online peer feedback on argumentative essay writing?

  • RQ2: To what extent does perceived learning of bachelor and master students differ in relation to their success level in online peer feedback on argumentative essay writing?

  • RQ3: To what extent does perceived learning of female and male students differ in relation to their success level in online peer feedback on argumentative essay writing?

  • RQ4: To what extent does perceived learning of students in different academic majors differ in relation to their success level in online peer feedback on argumentative essay writing?

We took an exploratory approach to our research questions since there is a lack of adequate research on the relationship between perceived and actual learning in relation to these variables; therefore, we did not formulate any specific hypotheses.

Method

This exploratory study was part of a larger study conducted in the academic year 2020-2021 on designing, implementing, and evaluating online-supported peer feedback to teach argumentative writing. The study took place at a medium-sized public university in the Netherlands in the domain of life sciences, especially food and health, sustainability, and the healthy living environment. Findings on peer feedback content in relation to improvement in argumentative writing and other outcomes are published in previous work (Taghizadeh Kerman et al. 2022, 2023b; Noroozi et al. 2023).

Participants and ethical measures

In total, 330 university students participated in this study; however, only 284 students completed all steps of the study. Among the participants, 195 students were female (68%) and 89 were male (32%), with a mean age of 22 years (range 18-33). In addition, 148 students were undergraduates (52%), while 136 were graduates (48%). The selected courses were from different academic majors, including Social Sciences (N = 56, 20%, Female = 27, Male = 29), Plant Sciences (N = 29, 10%, Female = 20, Male = 9), Health & Social Sciences (N = 47, 16%, Female = 31, Male = 16), Environmental Sciences (N = 101, 36%, Female = 70, Male = 31), and Food Sciences (N = 51, 18%, Female = 37, Male = 14). All participants were informed that their data would be collected and used anonymously. Students were given the option to withdraw from the study and request the exclusion of their collected data, and gave their consent for the collection and use of their data for research purposes. Ethical approval for this research was obtained from the Social Sciences Ethics Committee of the university.

Procedure

A module titled ‘Argumentative Essay Writing’ was developed and integrated into an online platform called Brightspace. The module spanned three consecutive weeks. In the first week, students received an introduction to the module, followed by an assignment to write an argumentative essay (task 1). The students were provided with three topics from which to choose one. This approach aimed to minimize potential bias related to students’ domain-specific knowledge of a particular topic, as some students might possess extensive knowledge of a specific subject while others may not. In the second week, the students were tasked with providing feedback on two essays authored by their peers. They received criteria, with prompts associated with each criterion, to guide the peer feedback provision. The criteria for providing feedback were integrated into the online environment (task 2) (see Supplementary Online Material 1). Within the Brightspace platform, we used the FeedbackFruits tool to automatically and randomly assign students to review two of their peers’ argumentative essays. Finally, in the third week, the students were asked to revise their essays based on the feedback they received from their peers during the previous task (task 3). At the end, students’ perceptions of learning were collected. Details of the characteristics of the peer feedback activity are reported in Supplementary Online Material 2.
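
FeedbackFruits handled the double, random reviewer allocation automatically, and its internal algorithm is not described here. Purely as an illustration of one way such an allocation can be achieved (every student reviews two peers, nobody reviews their own essay, and every essay receives exactly two reviews), consider a circular shift of a shuffled roster; the function and names below are hypothetical and not part of FeedbackFruits:

```python
import random

def assign_reviewers(students, k=2, seed=42):
    """Assign each student k peers' essays to review via circular
    shifts of a shuffled roster: no self-review (as long as k < n),
    and every essay is reviewed exactly k times."""
    order = students[:]
    random.Random(seed).shuffle(order)  # random but reproducible order
    n = len(order)
    return {order[i]: [order[(i + s) % n] for s in range(1, k + 1)]
            for i in range(n)}

# Example with four hypothetical students:
print(assign_reviewers(["ann", "ben", "cleo", "dia"], k=2))
```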

Measurement

Measurement of students’ actual learning

We utilized a rubric developed by Noroozi, Biemans, and Mulder (2016) to assess students’ actual learning in argumentative essay writing. This rubric was constructed based on the key elements of high-quality argumentative essay writing (Toulmin 1958; Noroozi, Biemans, and Mulder 2016). It includes eight essential components: (1) introduction to the topic, (2) taking a position on the topic, (3) arguments for the position, (4) justifications for arguments for the position, (5) arguments against the position (counter-arguments), (6) justifications for arguments against the position, (7) response to counter-arguments, and (8) conclusion and implications.

Each element in the rubric was assessed on a scale ranging from zero (the lowest quality level) to three (the highest quality level) (for the rubric, see Noroozi et al. 2023). The total score for the quality of the argumentative essay was obtained by summing the scores across all these elements. The assessment of students’ argumentative essays was conducted in two stages, involving the original essay and the revised version. In this study, the actual learning of the students was measured as their progress from the original argumentative essay to the revised version. In subsequent analyses, these grades were categorized into two groups, successful and less successful, based on the students’ improvement from the pre-test to the post-test. For the distribution of grades of the original and revised essays, refer to Supplementary Online Material 3. The classification of students into successful and less successful groups was based on the median score, with successful students falling at or above the 50th percentile and less successful students falling below it. The scoring of students’ essays was done by five coders, each possessing expertise in the field of education. To determine the inter-rater reliability among coders, we employed Fleiss’ Kappa statistic (Fleiss 1971). The analysis revealed a substantial level of agreement among the coders (Fleiss’ Kappa = 0.75 [95% CI: 0.70-0.81]; z = 26.08; p < 0.001).
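
For readers unfamiliar with the statistic, Fleiss’ kappa compares the observed agreement among a fixed number of raters with the agreement expected by chance. A minimal sketch of the computation follows; the function and the toy counts are illustrative only and do not reproduce the study’s data or tooling:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) matrix of rating
    counts, where each row sums to the number of raters."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()
    # Chance agreement from the overall category proportions.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = (p_j ** 2).sum()
    # Observed agreement per item, averaged over items.
    p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    return (p_i.mean() - p_e) / (1 - p_e)

# Toy example: 4 rubric elements, 5 coders, scores 0-3
# (each cell counts the coders who assigned that score).
toy = [[0, 1, 4, 0],
       [0, 0, 1, 4],
       [2, 3, 0, 0],
       [0, 0, 5, 0]]
print(round(fleiss_kappa(toy), 3))  # about 0.47 for this toy matrix
```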

Measurement of students’ perceived learning

We measured perceived learning using two subscales that are part of a survey on students’ learning satisfaction used in previous studies with acceptable internal consistency (e.g. Noroozi et al. 2023). The two subscales consisted of a total of 11 items measuring domain-specific learning (5 items; e.g. “The module broadened my knowledge on the topic”) and domain-general learning (6 items; e.g. “The module helped me learn how to offer argumentative peer feedback”). The students responded on a five-point Likert scale, ranging from “almost never true = 1”, “rarely true = 2”, “occasionally true = 3”, “often true = 4”, to “almost always true = 5”. The internal consistency of the two subscales was good, McDonald’s ω = 0.88 and 0.86, respectively.
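
McDonald’s ω is a model-based reliability coefficient. The exact estimator used here is not reported, but a common form, assuming a unidimensional (one-factor) model for each subscale, is:

$$\omega = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\theta_i}$$

where $k$ is the number of items in the subscale, $\lambda_i$ is the standardized factor loading of item $i$, and $\theta_i$ is its unique (error) variance. Unlike Cronbach’s α, ω does not assume equal loadings across items.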

Data analysis

To investigate students’ actual versus perceived learning experiences, we first used a percentile-rank measurement to classify students into two distinct groups based on their progress from the original argumentative essay to the revised argumentative essay: less successful students (progress below the 50th percentile; N = 110, 39%) and successful students (progress at or above the 50th percentile; N = 167, 59%). Subsequently, we conducted a one-way ANOVA to compare the perceived learning experiences of the less successful and the successful students. We then conducted two-way ANOVAs to compare perceived learning experiences with regard to education level, gender, and academic major across the two success levels (less successful and successful).
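
As a minimal sketch of this analysis pipeline (the median split, the one-way ANOVA for RQ1, and one of the two-way ANOVAs for RQ2-RQ4), the following code illustrates the steps on synthetic data; all column names and generated values are hypothetical stand-ins for the real data set:

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the real data set (one row per student).
rng = np.random.default_rng(0)
n = 284
df = pd.DataFrame({
    "perceived": rng.normal(3.8, 0.5, n),        # mean Likert score (1-5)
    "improvement": rng.normal(3.0, 2.0, n),      # revised minus original essay score
    "education": rng.choice(["bachelor", "master"], n),
})

# Median split into success groups, as described above.
df["success"] = np.where(df["improvement"] >= df["improvement"].median(),
                         "successful", "less successful")

# RQ1: one-way ANOVA on perceived learning by success level.
groups = [g["perceived"].to_numpy() for _, g in df.groupby("success")]
print(stats.f_oneway(*groups))

# RQ2: two-way ANOVA with education level, success level, and their interaction.
model = smf.ols("perceived ~ C(education) * C(success)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```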

Results

RQ1: differences in perceived learning between successful and less successful students

Although successful students exhibited slightly higher perceptions of their learning experience compared to less successful students, these differences were not statistically significant (F(1, 231) = 0.56, p = 0.45, η2 = 0.01) (see Table 1).

Table 1. Differences in perceived learning between less successful and successful students.

RQ2: differences in perceived learning of bachelor and master students in relation to their success

In general, bachelor students scored lower in terms of their perceived learning experiences compared to master students. These differences were statistically significant (F(1, 229) = 8.49, p < 0.01, η2 = 0.04). However, the interaction between education level and success level was not statistically significant (F(1, 229) = 0.56, p = 0.45, η2 = 0.00) (see Table 2).

Table 2. Differences in perceived learning experiences of less successful and successful bachelor and master students.

RQ3: differences in perceived learning of female and male students in relation to their success level

Female students scored slightly higher in terms of their perceived learning experiences compared to male students. However, these differences were not statistically significant (F(1, 229) = 0.39, p = 0.54, η2 = 0.01). The interaction between gender and success level was not statistically significant (F(1, 229) = 0.02, p = 0.86, η2 = 0.00) (see Table 3).

Table 3. Differences between perceived learning experiences of less successful and successful female and male students.

RQ4: differences in perceived learning of students in different academic majors in relation to their success level

Students in the academic major of food sciences reported slightly higher perceived learning compared to students in other majors. However, these differences were not statistically significant (F(1, 223) = 0.00, p = 0.99, η2 = 0.00). Also, the interaction between academic major and success level was not statistically significant (F(1, 223) = 1.36, p = 0.25, η2 = 0.01) (see Table 4).

Table 4. Differences between perceived learning experiences of less successful and successful students from different majors.

Discussion

This study examined how students’ success level, educational level, gender, and academic major influenced their perceived learning experiences in an online peer feedback module on argumentative essay writing. Regarding the first research question, we did not find statistically significant differences between successful and less successful students in terms of their perceived learning after the peer feedback module. Although the more successful students improved their writing argumentation skills more, we did not find any evidence for the same pattern in perceived learning. These findings are consistent with previous studies on online learning that did not find a significant relationship between perceived learning and actual performance (Barzilai and Blau 2014; Porat, Blau, and Barak 2018). Although we expected that the successful students would be more accurate in judging their perceived learning than the less successful students, their reported perceived learning after the peer feedback module did not seem to align with their actual learning. Indeed, this is not the first peer feedback study to show a mismatch between students’ perceived and actual learning, even for students with greater gains. In a study of peer feedback provision on mathematics argumentation tasks, even though students with higher levels of domain knowledge improved the type of peer feedback they provided, their perceptions of learning significantly decreased regardless of their level of domain knowledge (Alqassab, Strijbos, and Ufer 2018). Caspi and Blau (2008, 2011) asserted that when students evaluate their learning experience, they rely on both cognitive and socio-emotional sources. While the cognitive aspect relates to whether knowledge has been acquired, the socio-emotional aspect has to do with feelings and experiences such as difficulty, enjoyment, or engagement (Caspi and Blau 2011). The socio-emotional aspects, and not the cognitive aspects, were found to be related to perceived learning in online learning (Barzilai and Blau 2014). This is one plausible explanation for our findings. As our study only measured cognitive perceived learning, we do not know whether students relied more or less on socio-emotional aspects to judge their perceived learning, especially since they were involved in peer feedback, which is a social learning activity.

Regarding the second research question, we found that students at the master level reported significantly higher perceived learning than bachelor students, and this difference did not depend on the level of success. Although the previous study by Noroozi et al. (2023) found no impact of education level on the improvement of argumentative writing, the master students in that study also had higher perceptions of learning, which is consistent with the current study. These findings suggest that students at more advanced levels of higher education might possess better writing argumentation skills. Nevertheless, it is important to stress that no impact of the level of success was observed on perceived learning in our study, which might suggest that students in the more advanced stages considered the learning activity as more relevant, as reflected in their higher perceived learning, and ended up being more engaged in the module. Yet, this requires further examination in future studies by measuring socio-emotional aspects of perceived learning (Caspi and Blau 2008) in peer feedback research, as we discussed earlier.

In our third research question, we investigated the impact of gender on perceived learning in relation to students’ success levels in online peer feedback. Our findings showed no significant differences between male and female students in terms of their perceived learning, for both the successful and the less successful students. Unlike earlier studies of online courses, which showed that females had higher perceptions of learning than males (e.g. Rovai and Baker 2005), we did not find statistically significant differences, although descriptively females reported slightly higher perceived learning in our online peer feedback module. Previous studies of online peer feedback on argumentative writing that examined gender differences in argumentation skills overall reported no gender differences on most of the criteria of argumentative writing (Noroozi et al. 2022; Banihashem et al. 2023). Together with the results of the current study, these findings seem to support the results of Hsu et al. (2018), who also found no significant differences between females and males. Our observation is that gender differences do not seem to emerge when collective measures of argumentation skills are used, as is the case in our study. Previous studies only found differences in sub-categories of argumentation skills and not in the overall measure (Hsu et al. 2018; Noroozi et al. 2022; Banihashem et al. 2023).

In our last research question, we explored the impact of academic major on the differences in perceived learning for successful and less successful students. We found no significant differences across academic majors. This result is inconsistent with the study by Planas-Lladó et al. (2018), who reported significant differences in perceptions of peer feedback between students in technology majors and those in non-technology majors. Notwithstanding, in our study the majority of students came from natural science majors (e.g. Environmental Sciences, Plant Sciences) or natural science combined with social science (Health & Social Sciences) and thus might have more disciplinary similarities than technology versus non-technology majors. Moreover, while our study explored perceived learning in relation to academic majors, Planas-Lladó et al. (2018) compared broader perceptions of peer feedback including motivation, fairness, and openness to criticism.

Limitations and future directions

This study has several limitations. First, we examined the impact of students’ level of success only in terms of their argumentation skills, which, based on how the essays were assessed, can be assumed to be domain independent. Students’ level of success can also be measured in terms of gaining domain-specific knowledge, which was found to be related to argumentation behavior (Valero Haro et al. 2022). Prior studies showed that peer feedback skills require domain-specific knowledge (Van Zundert et al. 2012; Alqassab, Strijbos, and Ufer 2018), which might suggest that the alignment between students’ self-evaluation of learning (i.e. perceived learning) and actual learning in online peer feedback on argumentative essay writing is likely to be influenced by domain-specific knowledge as well. Accordingly, the degree to which perceived and actual learning in peer feedback are more or less influenced by domain-specific and domain-general skills is open to further research.

Second, as discussed earlier, the perceived learning instrument used in our study mainly captured the cognitive aspects of learning, and we do not know how much the socio-emotional dimensions (Caspi and Blau 2008, 2011) contributed to students’ evaluation of their learning in the argumentative essay writing course. Future peer feedback studies should consider measures of perceived learning that align with the complexity of this learning activity by tapping multiple dimensions of learning, including cognitive and socio-emotional aspects.

Third, to measure success level, we divided participants into only two groups (successful and less successful), based on whether they fell below or above the 50th percentile. However, even within these groups there may still be a wide range of scores, which can make it difficult to determine the impact of success on perceived learning.

Fourth, we did not assess how perceived and actual performance differentially relate to other important factors such as epistemic beliefs or learning satisfaction (Banihashem et al. 2023). Fifth, it is essential to acknowledge the limitation concerning the generalizability of the findings, given that the research was conducted at a single university. The results should therefore be interpreted within the context of this specific university, and caution should be exercised when attempting to generalize them to a broader population.

Finally, our study was cross-sectional and focused on a skill that is known to be challenging even for higher education students (Ferretti and Graham 2019), using a peer feedback activity that is also known to be cognitively demanding (Van Zundert et al. 2012). Students’ perceived learning might become more aligned with their actual performance as they gain more experience with online peer feedback. Future studies should explore how perceived and actual learning in (online) peer feedback activities evolve, and whether with more experience students become more accurate at assessing their perceived learning.

Research and practical implications

Our findings suggest that relying only on perceived learning to evaluate online peer feedback activities is not sufficient and might indeed be affected by ‘illusions of understanding’ (De Ridder 2022). We believe that both measures can provide valuable insights into students’ experiences in online peer feedback and should be used together to evaluate online learning environments. While performance measures provide more objective indicators of the impact of the design of the online peer feedback activity, they might miss aspects of the learning experience that motivate students to engage in online learning activities (Caspi and Blau 2011). Perceived learning can also contribute to the affective and self-regulation mechanisms of learning, especially when the information is fed back to the students, for instance through learning analytics tools (Banihashem et al. 2022). The differences we observed in perceived learning experiences as a function of educational level suggest that bachelor students might need more instructional scaffolding in online peer feedback on argumentative essay writing. Students can benefit from different types of instructional scaffolds in the form of graphical (e.g. diagrams, tables, visualizations) or text-based (e.g. scripts, prompts, hints) representations (Valero Haro et al. 2019). Our findings also show that all students need support in terms of raising their awareness of their actual performance and perceived learning in peer feedback so that they can adjust their perceptions accordingly. A learning dashboard with students as end-users could be employed in future studies to achieve this goal (Jivet et al. 2017).

Conclusions

This exploratory study shed light on the role of several individual and contextual characteristics (success level, educational level, gender, and academic major) in perceived learning when students engage in online peer feedback on argumentative essays. We found that educational level influenced perceived learning experiences, suggesting that bachelor students might need instructional support to ensure successful peer feedback outcomes. While we did not find a relationship between perceived learning and success level in online peer feedback, this can be a starting point for reconsidering how perceived learning is conceptualized and evaluated in peer feedback studies and, most importantly, whether it is a valid measure of learning in peer feedback studies.

Author contributions

Omid Noroozi: Project administration, Conceptualization, Writing – Review & Editing. Maryam Alqassab: Writing – Original Draft. Nafiseh Taghizadeh Kerman: Formal analysis, Methodology, Writing – Original Draft. Seyyed Kazem Banihashem: Supervision, Conceptualization, Writing – Review & Editing. Ernesto Panadero: Writing – Review & Editing.

Supplemental material

Supplemental material for this article is available online.

Disclosure statement

No potential conflict of interest was reported by the author(s).


Funding

This research was supported by the Ministry of Education, Culture, and Science, the Netherlands, the SURF organization, and Wageningen University and Research [funding number: 2100.9613.00. OCW] to the first author.

Notes on contributors

Omid Noroozi

Omid Noroozi is a faculty member in the field of Technology-Enhanced Transformative Learning at Wageningen University and Research, the Netherlands. His research interests are Educational Technology, Collaborative Learning, Peer Learning, Peer Feedback, Artificial Intelligence, and Computer Supported Collaborative Learning.

Maryam Alqassab

Maryam Alqassab is a Postdoctoral Researcher at the Open University of the Netherlands. Her research interests encompass formative (peer) assessment and feedback, technology-enhanced learning, learning design, and learning analytics.

Nafiseh Taghizadeh Kerman

Nafiseh Taghizadeh Kerman holds a PhD in Educational Administration from Ferdowsi University of Mashhad, Iran. Her research focuses on technology-enhanced learning, feedback, assessment, and argumentation.

Seyyed Kazem Banihashem

Seyyed Kazem Banihashem is an Assistant Professor of Educational Technology at Open University of the Netherlands. His research focuses on technology-enhanced learning, learning analytics, assessment, and feedback.

Ernesto Panadero

Ernesto Panadero is the Director of the Centre for Assessment Research, Policy and Practice in Education (CARPE) at Dublin City University. He is also an Honorary Professor at the Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University (Australia). His research focuses on self-regulated learning and educational assessment, especially self and peer assessment, teachers’ feedback, and the use of rubrics. He has been the PI of the Education, Regulated Learning & Assessment (ERLA) research group since its creation in 2016. He received the 2017 Erik de Corte Award, given by EARLI to a promising early career European researcher in the field of Learning and Instruction.

References

  • Aben, J. E. J., F. Dingyloudi, A. C. Timmermans, and J. W. Strijbos. 2019. “Embracing Errors for Learning: Intrapersonal and Interpersonal Factors in Feedback Provision and Processing in Dyadic Interactions.” In The Impact of Feedback in Higher Education, edited by M. Henderson, R. Ajjawi, D. Boud, and E. Molloy. Cham: Palgrave Macmillan. doi:10.1007/978-3-030-25112-3_7.
  • Ackerman, R., and T. Lauterman. 2012. “Taking Reading Comprehension Exams on Screen or on Paper? A Metacognitive Analysis of Learning Texts under Time Pressure.” Computers in Human Behavior 28 (5): 1816–1828. doi:10.1016/j.chb.2012.04.023.
  • Alemdag, E., and Z. Yildirim. 2022. “Design and Development of an Online Formative Peer Assessment Environment with Instructional Scaffolds.” Educational Technology Research and Development 70 (4): 1359–1389. doi:10.1007/s11423-022-10115-x.
  • Alqassab, M., J. W. Strijbos, E. Panadero, J. F. Ruiz, M. Warrens, and J. To. 2023. “A Systematic Review of Peer Assessment Design Elements.” Educational Psychology Review 35 (1): 1–36. doi:10.1007/s10648-023-09723-7.
  • Alqassab, M., J. W. Strijbos, and S. Ufer. 2018. “Training Peer-Feedback Skills on Geometric Construction Tasks: Role of Domain Knowledge and Peer-Feedback Levels.” European Journal of Psychology of Education 33 (1): 11–30. doi:10.1007/s10212-017-0342-0.
  • Alqassab, M., J.-W. Strijbos, and S. Ufer. 2019. “Preservice Mathematics Teachers’ Beliefs about Peer Feedback, Perceptions of Their Peer Feedback Message, and Emotions as Predictors of Peer Feedback Accuracy and Comprehension of the Learning Task.” Assessment & Evaluation in Higher Education 44 (1): 139–154. doi:10.1080/02602938.2018.1485012.
  • Aryadoust, V. 2016. “Gender and Academic Major Bias in Peer Assessment of Oral Presentations.” Language Assessment Quarterly 13 (1): 1–24. doi:10.1080/15434303.2015.1133626.
  • Awada, G. M., and N. M. Diab. 2023. “Effect of Online Peer Review versus Face-to-Face Peer Review on Argumentative Writing Achievement of EFL Learners.” Computer Assisted Language Learning 36 (1-2): 238–256. doi:10.1080/09588221.2021.1912104.
  • Banihashem, S. K., O. Noroozi, H. J. A. Biemans, and V. C. Tassone. 2023. “The Intersection of Epistemic Beliefs and Gender in Argumentation Performance.” Innovations in Education and Teaching International. doi:10.1080/14703297.2023.2198995.
  • Banihashem, S. K., O. Noroozi, S. van Ginkel, L. P. Macfadyen, and H. J. Biemans. 2022. “A Systematic Review of the Role of Learning Analytics in Enhancing Feedback Practices in Higher Education.” Educational Research Review 37: 100489. doi:10.1016/j.edurev.2022.100489.
  • Barzilai, S., and I. Blau. 2014. “Scaffolding Game-Based Learning: Impact on Learning Achievements, Perceived Learning, and Game Experiences.” Computers & Education 70: 65–79. doi:10.1016/j.compedu.2013.08.003.
  • Baturay, M. H. 2011. “Relationships among Sense of Classroom Community, Perceived Cognitive Learning and Satisfaction of Students at an e-Learning Course.” Interactive Learning Environments 19 (5): 563–575. doi:10.1080/10494821003644029.
  • Beaver, C., and S. Beaver. 2011. “The Effect of Peer Assessment on the Attitudes of Preservice Elementary and Middle School Teachers about Writing and Assessing Mathematics.” Teacher Attributes 5: 1–14.
  • Bellhäuser, H., P. Liborius, and B. Schmitz. 2022. “Fostering Self-Regulated Learning in Online Environments: Positive Effects of a Web-Based Training with Peer Feedback on Learning Behavior.” Frontiers in Psychology 13: 813381. doi:10.3389/fpsyg.2022.813381.
  • Boud, D., and N. Falchikov. 2006. “Aligning Assessment with Long-Term Learning.” Assessment & Evaluation in Higher Education 31 (4): 399–413. doi:10.1080/02602930600679050.
  • Brown, G. T. L., E. R. Peterson, and E. S. Yao. 2016. “Student Conceptions of Feedback: Impact on Self-Regulation, Self-Efficacy, and Academic Achievement.” The British Journal of Educational Psychology 86 (4): 606–629. doi:10.1111/bjep.12126.
  • Caspi, A., and I. Blau. 2008. “Social Presence in Online Discussion Groups: Testing Three Conceptions and Their Relations to Perceived Learning.” Social Psychology of Education 11 (3): 323–346. doi:10.1007/s11218-008-9054-2.
  • Caspi, A., and I. Blau. 2011. “Collaboration and Psychological Ownership: How Does the Tension between the Two Influence Perceived Learning?” Social Psychology of Education 14 (2): 283–298. doi:10.1007/s11218-010-9141-z.
  • Chien, S. Y., G. J. Hwang, and M. S. Y. Jong. 2020. “Effects of Peer Assessment within the Context of Spherical Video-Based Virtual Reality on EFL Students’ English-Speaking Performance and Learning Perceptions.” Computers & Education 146: 103751. doi:10.1016/j.compedu.2019.103751.
  • Day, I. N. Z., N. Saab, and W. Admiraal. 2022. “Online Peer Feedback on Video Presentations: Type of Feedback and Improvement of Presentation Skills.” Assessment & Evaluation in Higher Education 47 (2): 183–197. doi:10.1080/02602938.2021.1904826.
  • De Ridder, J. 2022. “Online Illusions of Understanding.” Social Epistemology: 1–16. doi:10.1080/02691728.2022.2151331.
  • Ferreira, M., A. P. Cardoso, and J. L. Abrantes. 2011. “Motivation and Relationship of the Student with the School as Factors Involved in the Perceived Learning.” Procedia - Social and Behavioral Sciences 29: 1707–1714. doi:10.1016/j.sbspro.2011.11.416.
  • Ferretti, R. P., and S. Graham. 2019. “Argumentative Writing: Theory, Assessment, and Instruction.” Reading and Writing 32 (6): 1345–1357. doi:10.1007/s11145-019-09950-x.
  • Fleiss, J. L. 1971. “Measuring Nominal Scale Agreement among Many Raters.” Psychological Bulletin 76 (5): 378–382. doi:10.1037/h0031619.
  • Hsu, P. S., M. Van Dyke, T. J. Smith, and C. K. Looi. 2018. “Argue like a Scientist with Technology: The Effect of within-Gender versus Cross-Gender Team Argumentation on Science Knowledge and Argumentation Skills among Middle-Level Students.” Educational Technology Research and Development 66 (3): 733–766. doi:10.1007/s11423-018-9574-1.
  • Huisman, B., N. Saab, J. Van Driel, and P. Van Den Broek. 2018. “Peer Feedback on Academic Writing: Undergraduate Students’ Peer Feedback Role, Peer Feedback Perceptions and Essay Performance.” Assessment & Evaluation in Higher Education 43 (6): 955–968. doi:10.1080/02602938.2018.1424318.
  • Jivet, I., M. Scheffel, H. Drachsler, and M. Specht. 2017. “Awareness is Not Enough: Pitfalls of Learning Analytics Dashboards in the Educational Practice.” In Data Driven Approaches in Digital Education: EC-TEL 2017, Lecture Notes in Computer Science, vol. 10474, edited by É. Lavoué, H. Drachsler, K. Verbert, J. Broisin, and M. Pérez-Sanagustín. Cham: Springer. doi:10.1007/978-3-319-66610-5_7.
  • Kaufman, J. H., and C. D. Schunn. 2011. “Students’ Perceptions about Peer Assessment for Writing: Their Origin and Impact on Revision Work.” Instructional Science 39 (3): 387–406. doi:10.1007/s11251-010-9133-6.
  • Latifi, S., O. Noroozi, and E. Talaee. 2021. “Peer Feedback or Peer Feedforward? Enhancing Students’ Argumentative Peer Learning Processes and Outcomes.” British Journal of Educational Technology 52 (2): 768–784. doi:10.1111/bjet.13054.
  • Latifi, S., O. Noroozi, and E. Talaee. 2023. “Worked Example or Scripting? Fostering Students’ Online Argumentative Peer Feedback, Essay Writing and Learning.” Interactive Learning Environments 31 (2): 655–669. doi:10.1080/10494820.2020.1799032.
  • Misiejuk, K., B. Wasson, and K. Egelandsdal. 2021. “Using Learning Analytics to Understand Student Perceptions of Peer Feedback.” Computers in Human Behavior 117: 106658. doi:10.1016/j.chb.2020.106658.
  • Narciss, S. 2008. “Feedback Strategies for Interactive Learning Tasks.” In Handbook of Research on Educational Communications and Technology, edited by J. M. Spector, M. D. Merrill, J. J. G. Van Merriënboer, and M. P. Driscoll, 3rd ed., 125–144. Mahwah, NJ: Lawrence Erlbaum.
  • Noroozi, O., S. K. Banihashem, H. J. A. Biemans, M. Smits, M. T. W. Vervoort, and C.-L. Verbaan. 2023. “Design, Implementation, and Evaluation of an Online Supported Peer Feedback Module to Enhance Students’ Argumentative Essay Quality.” Education and Information Technologies 28 (10): 1–28. doi:10.1007/s10639-023-11683-y.
  • Noroozi, O., S. K. Banihashem, N. Taghizadeh Kerman, M. Parvaneh Akhteh Khaneh, M. Babayi, H. Ashrafi, and H. J. A. Biemans. 2022. “Gender Differences in Students’ Argumentative Essay Writing, Peer Review Performance and Uptake in Online Learning Environments.” Interactive Learning Environments 31 (10): 6302–6316. doi:10.1080/10494820.2022.2034887.
  • Noroozi, O., H. Biemans, and M. Mulder. 2016. “Relations between Scripted Online Peer Feedback Processes and Quality of Written Argumentative Essay.” The Internet and Higher Education 31: 20–31. doi:10.1016/j.iheduc.2016.05.002.
  • Noroozi, O., J. Hatami, A. Bayat, S. van Ginkel, H. J. Biemans, and M. Mulder. 2020. “Students’ Online Argumentative Peer Feedback, Essay Writing, and Content Learning: Does Gender Matter?” Interactive Learning Environments 28 (6): 698–712. doi:10.1080/10494820.2018.1543200.
  • Ocampo, J. C. G., E. Panadero, and F. Díez. 2023. “Are Men and Women Really Different? The Effects of Gender and Training on Peer Scoring and Perceptions of Peer Assessment.” Assessment & Evaluation in Higher Education 48 (6): 760–776. doi:10.1080/02602938.2022.2130167.
  • Panadero, E., and A. A. Lipnevich. 2022. “A Review of Feedback Models and Typologies: Towards an Integrative Model of Feedback Elements.” Educational Research Review 35: 100416. doi:10.1016/j.edurev.2021.100416.
  • Planas-Lladó, A., L. Feliu, F. Castro, R. M. Fraguell, G. Arbat, J. Pujol, J. J. Suñol, and P. Daunis-I-Estadella. 2018. “Using Peer Assessment to Evaluate Teamwork from a Multidisciplinary Perspective.” Assessment & Evaluation in Higher Education 43 (1): 14–30. doi:10.1080/02602938.2016.1274369.
  • Porat, E., I. Blau, and A. Barak. 2018. “Measuring Digital Literacies: Junior High-School Students’ Perceived Competencies versus Actual Performance.” Computers & Education 126: 23–36. doi:10.1016/j.compedu.2018.06.030.
  • Rovai, A. P., and J. D. Baker. 2005. “Gender Differences in Online Learning: Sense of Community, Perceived Learning, and Interpersonal Interactions.” Quarterly Review of Distance Education 6 (1): 31–44.
  • Ryan, J. M., and P. D. Harrison. 1995. “The Relationship between Individual Instructional Characteristics and the Overall Assessment of Teaching Effectiveness across Different Instructional Contexts.” Research in Higher Education 36 (5): 577–594. doi:10.1007/BF02208832.
  • Ryan, T., M. Henderson, and M. Phillips. 2019. “Feedback Modes Matter: Comparing Student Perceptions of Digital and Non-Digital Feedback Modes in Higher Education.” British Journal of Educational Technology 50 (3): 1507–1523. doi:10.1111/bjet.12749.
  • Strijbos, J.-W., S. Narciss, and K. Dünnebier. 2010. “Peer Feedback Content and Sender’s Competence Level in Academic Writing Revision Tasks: Are They Critical for Feedback Perceptions and Efficiency?” Learning and Instruction 20 (4): 291–303. doi:10.1016/j.learninstruc.2009.08.008.
  • Strijbos, J. W., R. Pat-El, and S. Narciss. 2021. “Structural Validity and Invariance of the Feedback Perceptions Questionnaire.” Studies in Educational Evaluation 68: 100980. doi:10.1016/j.stueduc.2021.100980.
  • Taghizadeh Kerman, N., S. K. Banihashem, M. Karami, E. Er, S. van Ginkel, and O. Noroozi. 2023a. “Online Peer Feedback in Higher Education: A Synthesis of the Literature.” Education and Information Technologies 29 (1): 763–813. doi:10.1007/s10639-023-12273-8.
  • Taghizadeh Kerman, N., S. K. Banihashem, and O. Noroozi. 2023b. “The Relationship among Students’ Attitude towards Peer Feedback, Peer Feedback Performance, and Uptake.” In The Power of Peer Learning. Social Interaction in Learning and Development, edited by O. Noroozi and B. De Wever. Cham: Springer. doi:10.1007/978-3-031-29411-2_16.
  • Taghizadeh Kerman, N., O. Noroozi, S. K. Banihashem, M. Karami, and H. J. A. Biemans. 2022. “Online Peer Feedback Patterns of Success and Failure in Argumentative Essay Writing.” Interactive Learning Environments 32 (2): 614–626. doi:10.1080/10494820.2022.2093914.
  • Toulmin, S. 1958. The Uses of Argument. Cambridge: Cambridge University Press.
  • Valero Haro, A., O. Noroozi, H. Biemans, and M. Mulder. 2019. “First- and Second-Order Scaffolding of Argumentation Competence and Domain-Specific Knowledge Acquisition: A Systematic Review.” Technology, Pedagogy and Education 28 (3): 329–345. doi:10.1080/1475939X.2019.1612772.
  • Valero Haro, A., O. Noroozi, H. Biemans, and M. Mulder. 2022. “Argumentation Competence: Students’ Argumentation Knowledge, Behavior and Attitude and Their Relationships with Domain-Specific Knowledge Acquisition.” Journal of Constructivist Psychology 35 (1): 123–145. doi:10.1080/10720537.2020.1734995.
  • Valero Haro, A., O. Noroozi, H. J. Biemans, M. Mulder, and S. K. Banihashem. 2023. “How Does the Type of Online Peer Feedback Influence Feedback Quality, Argumentative Essay Writing Quality, and Domain-Specific Learning?” Interactive Learning Environments: 1–20. doi:10.1080/10494820.2023.2215822.
  • van der Kleij, F. M., and A. A. Lipnevich. 2021. “Student Perceptions of Assessment Feedback: A Critical Scoping Review and Call for Research.” Educational Assessment, Evaluation and Accountability 33 (2): 345–373. doi:10.1007/s11092-020-09331-x.
  • Van Zundert, M. J., K. D. Könings, D. M. A. Sluijsmans, and J. J. G. Van Merriënboer. 2012. “Teaching Domain-Specific Skills before Peer Assessment Skills is Superior to Teaching Them Simultaneously.” Educational Studies 38 (5): 541–557. doi:10.1080/03055698.2012.654920.
  • Wang, X. M., G. J. Hwang, Z. Y. Liang, and H. Y. Wang. 2017. “Enhancing Students’ Computer Programming Performances, Critical Thinking Awareness and Attitudes towards Programming: An Online Peer-Assessment Attempt.” Journal of Educational Technology & Society 20 (4): 58–68.
  • Yunusa, A. A., and I. N. Umar. 2021. “A Scoping Review of Critical Predictive Factors (CPFs) of Satisfaction and Perceived Learning Outcomes in E-Learning Environments.” Education and Information Technologies 26 (1): 1223–1270. doi:10.1007/s10639-020-10286-1.