
Worked example or scripting? Fostering students’ online argumentative peer feedback, essay writing and learning

Pages 655-669 | Received 09 Feb 2020, Accepted 17 Jul 2020, Published online: 01 Aug 2020

ABSTRACT

This study compared the effects of worked examples and scripting on the quality of students’ argumentative peer feedback, essays, and learning. Participants were 80 BSc students who were randomly divided over 40 dyads and assigned to two experimental conditions (worked example and scripting). An online peer feedback environment named EduTech was designed, and students were tasked with writing an argumentative essay, engaging in peer feedback, and revising their essay. The results indicate that students in the scripting condition benefited more than students in the worked example condition in terms of peer feedback quality. Furthermore, the scores of students in both conditions improved from pre-test to post-test in terms of argumentative essay writing and learning; this improvement differed significantly between the two conditions only for argumentative essay writing, favoring the scripting condition. We explain how each of these approaches can be used to improve students’ argumentative peer feedback, essay writing, and learning.

1. Introduction

Teaching students how to write high-quality argumentative essays has been a cause of concern for many teachers and scholars in higher education. Unfortunately, teachers are often not content with the quality of students’ essays (see Noroozi et al., Citation2016). Online peer feedback is one instructional approach for facilitating students’ learning processes (argumentative peer feedback quality) and outcomes (argumentative essay writing and domain-specific learning). Recently, online peer feedback has been used for various purposes such as improving students’ writing quality (e.g. Huisman et al., Citation2018), domain-specific learning (e.g. Latifi et al., Citation2019; Noroozi, Hatami, et al., Citation2018; Valero Haro et al., Citation2019), motivation (e.g. Hsia et al., Citation2016), self-efficacy (e.g. Hsia et al., Citation2016), and argumentative essay writing (Latifi et al., Citation2019; Valero Haro et al., Citation2019). Nevertheless, doubts have been cast on the quality of peer feedback and its effects on various learning processes and outcomes (see Noroozi et al., Citation2016). Therefore, a variety of approaches have been suggested to support students when they engage in peer feedback (see Lin, Citation2018; Noroozi et al., Citation2016; Valero Haro et al., Citation2019). This study focuses on worked example and scripting approaches for enhancing the quality of argumentative peer feedback processes, which may in turn improve the quality of students’ argumentative essays and domain-specific learning.

One promising approach to facilitate students’ learning is to provide them with worked examples. Although the positive effects of worked examples on learning in well-structured domains (such as mathematics and the sciences) have been established, the effectiveness of this approach in ill-structured domains (such as the humanities, the context of this study) remains unclear (Kyun et al., Citation2013; Spiro & DeSchryver, Citation2009). The worked example effect has also been criticized with regard to the use of an inappropriate control group (without additional support) (Koedinger & Aleven, Citation2007; Kyun et al., Citation2013). A supported condition would be a more appropriate control group for worked examples (Sweller et al., Citation2011).

In this study, we use a well-known instructional design approach, scripting, as the supported control condition for the worked example. The scientific literature shows that scripting is a promising approach for facilitating students’ peer feedback processes, which in turn might improve the quality of students’ argumentative essay writing (see Noroozi et al., Citation2016). Although both worked example and scripting approaches have been investigated independently, their effects on various aspects of argumentative peer feedback quality, argumentative essay writing, and domain-specific learning have not been compared with each other. Therefore, in this study, we aim to compare the effects of the worked example and scripting instructional approaches on students’ learning processes and outcomes in the field of educational sciences.

1.1. Argumentative essay writing

Writing argumentative essays is typical for higher education students in various disciplines (see Mei, Citation2006; Noroozi et al., Citation2016, Citation2018). Writing an argumentative essay requires students to generate a clear position on the issue as their main claim, support it with evidence, and follow it with counter-arguments against the claim. The pros and cons then need to be integrated into a general conclusion on the issue (Noroozi et al., Citation2016). This suggests that essay writing requires solid argumentation strategies (Mei, Citation2006; Wingate, Citation2012). Teachers and course coordinators often complain about the lack of structure, sound argumentation, and solid reasoning in students’ argumentative essays (Kellogg & Whiteford, Citation2009). Several reasons contribute to the poor quality of students’ argumentative essays. First, some learners might not know the characteristics of an argumentative essay (Bacha, Citation2010). Second, even if students are aware of these characteristics, they may face difficulties putting them into practice while they are busy writing essays (Noroozi et al., Citation2016). Third, the literature suggests that the features and terminology of argumentative essays can differ from domain to domain (Andrews, Citation1995; Wingate, Citation2012). This may cause students to struggle when writing similar essays for argumentation tasks in other domains and on other topics (see Noroozi, Hatami, et al., Citation2018; Wingate, Citation2012). These problems suggest that students need supplementary support in how to write high-quality argumentative essays. Peer feedback is one of the most promising approaches for facilitating argumentative essay writing.

1.2. Peer feedback

Peer feedback is defined as a reciprocal process whereby learners provide information on one or more aspects of the work or performance of other equal-status students (see Baker, Citation2016; Hattie & Timperley, Citation2007). Peer feedback can be an interesting substitute for feedback from teachers, who often have limited time (Baker, Citation2016). Peer feedback enables students to regulate their learning processes, that is, to take an active role in evaluating, monitoring, and managing their own learning. Peer feedback engages students in the learning activities and provides them with an opportunity to receive faster, more frequent, and more voluminous feedback than teacher feedback (Topping, Citation1998). This is important when class sizes are increasing and resource constraints (e.g. a limited number of teaching staff with limited time available) make it difficult to provide students with detailed feedback (Noroozi et al., Citation2016).

Empirical studies have shown that peer feedback can be effective for increasing students’ writing proficiency in various contexts (e.g. Baker, Citation2016; Novakovich, Citation2016; Xiao & Lucking, Citation2008). Peer feedback enables self-assessment and helps students improve their error detection and revision skills (Liu & Carless, Citation2006). Compared to teacher feedback, peer feedback enhances a sense of audience, raises learners’ awareness of their own strengths and weaknesses, encourages collaborative learning, and fosters ownership of the text (Xiao & Lucking, Citation2008). Peer feedback also enables students to better monitor, evaluate, and regulate their own learning and performance independently of the teacher (van den Boom et al., Citation2007). In addition, peer feedback facilitates students’ reflection and critical thinking (Novakovich, Citation2016) and encourages them to become more sophisticated thinkers and writers (Baker, Citation2016). In other words, peer feedback not only improves students’ writing proficiency but also helps them develop discipline-specific knowledge and enhance their understanding of the topic (Hattie & Timperley, Citation2007; Liu & Carless, Citation2006). Although the scientific literature highlights the importance of peer feedback for various aspects of learning processes and outcomes, there are challenges to achieving high-quality peer feedback (Noroozi et al., Citation2016).

1.3. Challenges for peer feedback

Scientific evidence points to various challenges and issues for peer feedback alongside its benefits for writing and learning. First, students (especially novices in their own disciplines) typically provide surface-level feedback on peers’ writing rather than semantic feedback and advice, since they are not trained in how to give constructive and critical comments on peers’ writing (Cho & Schunn, Citation2007). Second, there is always a concern about the validity of peer feedback compared to teacher feedback (Liu & Carless, Citation2006), because students have less knowledge and expertise than academics, which makes them less likely to carry out reliable and objective assessment (Cho & Schunn, Citation2007; Liu & Carless, Citation2006). This may result in distrust of the quality of a learning peer’s feedback, which not only has consequences for learning but can also create a negative attitude, negative emotional reactions, and further complications during the peer feedback process (Cheng et al., Citation2014; Noroozi et al., Citation2016). Third, although peer feedback (especially in face-to-face settings) saves staff time, it increases time on task for students (due to thinking, analyzing, comparing, and communicating) (see Liu & Carless, Citation2006). Peer feedback can therefore be time-consuming for students (Falchikov, Citation2001). Fourth, providing and receiving critical feedback from peers may have psychological and emotional effects (Noroozi et al., Citation2016), such as fear of losing face and treating critiques as personal attacks (Rourke & Kanuka, Citation2007). In addition, a sense of being graded during peer feedback is emotionally fraught for students and can disrupt their ability to provide useful feedback (Liu & Carless, Citation2006). As a result, feedback may remain at the surface level, lacking the well-founded justifications needed to promote critical thinking and elaborative learning (Noroozi et al., Citation2016). Last but not least, some peer learning tasks, such as peer feedback, require more complex and higher-level cognitive processing (King, Citation2002), which can be challenging for both teachers and students.

In general, “high-level cognitive processing involves making inferences, drawing conclusions, synthesizing ideas, generating hypotheses, comparing and contrasting, finding and articulating problems, analyzing and evaluating alternatives, monitoring thinking, and so on” (King, Citation2002), and its demands may prevent students from providing high-quality feedback. All these challenges imply that such thoughtful processes do not happen spontaneously (King, Citation2002; Kollar & Fischer, Citation2010), and asking students to engage in peer feedback without any support may not achieve the intended learning outcomes. Thus, additional instructional strategies are needed to realize the full potential of peer feedback for argumentative essay writing. Online learning environments provide ample opportunities to support peer feedback processes.

1.4. Online peer feedback

In recent years, a variety of online environments have been designed to support peer feedback processes and their outcomes (Latifi et al., Citation2019; Noroozi, Kirschner, et al., Citation2018). Online peer feedback environments allow students to submit their work, provide feedback on their peers’ work reciprocally and anonymously, and continuously revise their work based on the feedback received from their peers, without restrictions of time and space (Tsai, Citation2009). Such environments also increase the timeliness of feedback, allowing learners to reflect on their own and their peers’ work (Chen & Tsai, Citation2009). Next to these benefits for students, online peer feedback environments have notable benefits for instructors. For example, these environments enable instructors to systematically manage peer feedback processes and monitor students’ progress and the interactivity between students (Chen & Tsai, Citation2009). Instructors can automatically assign students to groups based on various demographic and background features (such as gender, field, achievement, and preference) (Tsai, Citation2009). Online environments can increase the validity and reliability of peer feedback through anonymous online peer feedback and the clarification of criteria embedded in the systems, for example in terms of rubrics (Wen & Tsai, Citation2008). Implementing the peer feedback process in online environments can also reduce the instructor’s workload (Davies, Citation2000) and peer feedback time (McGourty, Citation2000). Using online peer feedback, instructors can automatically collect and record data about students’ activities, such as time spent on task and off task, degree of participation, and communication among learners, and use these data for further learning analytics (Tsai, Citation2009). With the advancement and possibilities of online learning environments in recent years, various approaches, such as worked example and scripting strategies, can be offered in these environments to support students in providing high-quality argumentative peer feedback.

1.5. Scaffolding argumentative peer feedback

Worked examples and scripting are considered promising instructional approaches that can be used to facilitate students’ learning processes and outcomes (Kyun et al., Citation2013; Valero Haro et al., Citation2019). A worked example provides a clear, step-by-step solution to a problem (Kyun et al., Citation2013; Sweller et al., Citation2011). Most empirical studies on worked examples have been conducted in the context of problem solving, where learners are provided with worked examples and then asked to solve an equivalent problem (Sweller et al., Citation2011). In this approach, students can learn key aspects of the problem by studying an expert model and then use those aspects to solve other equivalent problems (Sweller et al., Citation2011).

Research has shown that worked examples can facilitate problem-solving processes (Kirschner et al., Citation2006) and the acquisition of domain-specific knowledge (Kyun et al., Citation2013) by reducing extraneous cognitive load, which in turn leads to better learning outcomes (Sweller et al., Citation2011). Up until now, the bulk of research on worked examples has been conducted in well-structured domains, ranging from mathematics and science to related technical domains (e.g. Carroll, Citation1994; Paas & Van Merriënboer, Citation1994; Renkl, Citation2005). In this study, we use worked examples in the domain of educational sciences.

Despite the positive impacts of worked examples on various aspects of students’ learning processes and outcomes (Kalyuga & Sweller, Citation2004; Kyun et al., Citation2013; Sweller et al., Citation2011), there are two important challenges related to the worked example effect. Some studies have questioned the passive role of learners when they study worked examples (Kyun et al., Citation2013). The worked example effect has also been criticized because of the use of an inappropriate control group (Koedinger & Aleven, Citation2007). To investigate the actual effects of worked examples, one needs to compare them with a supported condition as the control group (see Sweller et al., Citation2011). Hence, in this study we introduce a well-known instructional approach, scripting, as the supported control condition for worked examples.

Scripting is a typical instructional approach that has frequently been used to scaffold various aspects of learning processes and outcomes in a more active form than worked examples (Gan & Hattie, Citation2014; Gielen & De Wever, Citation2012). Scripts are a specific type of scaffold in the form of detailed and explicit guidelines or instructions that help students engage in structured and desired learning processes to achieve the expected learning outcomes (see Kollar et al., Citation2006). Recent studies have revealed that providing structure is essential to support learners in generating high-quality peer feedback (e.g. Gan & Hattie, Citation2014; Gielen & De Wever, Citation2012; Peters et al., Citation2017). Students who receive and/or provide high-quality peer feedback often write high-quality argumentative essays, and vice versa (Noroozi et al., Citation2016). The core elements of scripts are prompts that cue students on how to identify weaknesses and strengths in a learning product and to generate specific suggestions for improvement (Peters et al., Citation2017). Previous research has indicated that scripts can facilitate both the processes and the outcomes of argumentative essay writing and knowledge construction (see Kollar et al., Citation2007; Stegmann et al., Citation2007, Citation2012). Yet studies report mixed effects of scripts on domain-specific knowledge acquisition (Stegmann et al., Citation2012). In other words, scripting has been shown to be beneficial for the acquisition of domain-general skills (e.g. Noroozi et al., Citation2013), while there are contradictory results (e.g. Stegmann et al., Citation2007) regarding the effect of scripts on domain-specific learning outcomes (Kollar et al., Citation2014). This is especially the case when scripts become too strict or too flexible, which can actually obstruct learners’ knowledge attainment (e.g. Fischer et al., Citation2013). Also, students with low prior knowledge of or experience with formative feedback might benefit from more detailed instructions and directive feedback prompts (a form of script), whereas too strict a feedback script structure might interrupt students’ natural problem-solving processes (Peters et al., Citation2017). Therefore, it is essential to determine the precise scripting level that learners need (Dillenbourg et al., Citation2009). That is why, in this study, we compare the effects of scripting with the worked example effect (which allows more degrees of freedom) on various aspects of students’ learning processes and outcomes.

To conclude, both worked examples and scripting have benefits and challenges. Although the effects of these instructional approaches have been studied independently, in relation to different learning processes and outcomes and in various disciplines, their effects on argumentative peer feedback quality and argumentative essay writing have not been compared with each other. Therefore, in this study, we aim to compare the effects of the worked example and scripting on peer feedback processes and their outcomes in the field of educational sciences. The following research questions were formulated to achieve the main goals of this study.

  1. What are the differences between the effects of worked example and scripting on students’ argumentative feedback quality?

  2. What are the differences between the effects of worked example and scripting on students’ argumentative essay quality?

  3. What are the differences between the effects of worked example and scripting on students’ domain-specific knowledge acquisition?

2. Method

2.1. Participants

The study took place at Kharazmi University, Tehran, Iran. The participants were 80 BSc students in the field of Educational Sciences who had enrolled in the course “Applying Computer in Education”. The aim of this course is to help students learn about the functionalities and the pros and cons of using various types of educational technologies in classrooms. The course also makes students aware of the relevant ethical issues in using these technologies and the Internet in classroom settings. The average age of the participants was 20.86 years (SD = 0.95); 92.5% of the students were female and 7.5% were male. The data were collected from 80 students who enrolled in the same course in three consecutive semesters. The experimental sessions (in which the data were collected) were part of the regular course, Applying Computer in Education. Assignments from the experimental sessions accounted for 50% of the students’ final grade for the course; the results of a written exam accounted for the other 50%.

2.2. Materials and learning tasks

The learning process was characterized by three main phases: the draft, feedback, and revision phases. In the draft phase, each student wrote an essay on the statement: “The use of mobile phones and tablets in the classroom should be banned”. To write the essay, students were provided with a research article on mobile learning, hyperlinks to three relevant journal articles, and a set of keywords (bolded in the article) for searching Google and Bing. Students were asked to consider different viewpoints on using or banning “mobile phones and tablets in the classroom”. In the feedback phase, each student was asked to carefully read her/his learning partner’s written argumentative essay (draft) and then provide feedback on it. In the revision phase, each student was asked to revise her/his own argumentative essay (draft) based on the feedback received from her/his learning partner.

2.3. Experimental conditions

Students were randomly divided over 40 dyads and assigned to two conditions: 20 dyads were assigned to the worked example condition and the other 20 dyads to the scripting condition. An argumentative essay model was developed to support students’ argumentative peer feedback processes. This model was based on the literature and was further adjusted by a panel of experts for the specific topic of the course. The scientific literature (see Bacha, Citation2010; Hyland, Citation1990; Leitão, Citation2003; Mei, Citation2006; Schneer, Citation2014; Toulmin, Citation1958; Toulmin et al., Citation1984; Wood, Citation2001) suggests that an argumentative essay model contains an introduction (mostly at the beginning of the essay, serving as an attention grabber that provides background information and the writer’s clear position on the topic), argumentation (the main body of the essay, which supports and provides reasons and evidence for the writer’s position), and a conclusion (the final take-home message reaffirming or restating the writer’s position). Some scholars (e.g. Bacha, Citation2010; Leitão, Citation2003; Reid, Citation1988; Toulmin, Citation1958; Toulmin et al., Citation1984) also suggest including counter-arguments and opposing points of view as essential elements of an argumentative essay. The reason is that arguments and counter-arguments should be weighed against each other to arrive at a valid conclusion. Argumentative essay models are subject-dependent, since various domains have different argumentation structures, disciplinary values, terminology, and epistemology (Andrews, Citation2010; Wingate, Citation2012). Therefore, essay writing can differ somewhat from one domain to another and should be discussed with relevant experts in the field (Wingate, Citation2012). We held a series of meetings with disciplinary experts, namely professors of educational sciences and educational technology, to define the elements of a high-quality argumentative essay for students in the field of Educational Sciences. The outcomes of these meetings provided us with a list of items that need to be included in students’ argumentative essays in this field: a clear position on the topic, an expression of the topic’s context (introduction), arguments and evidence (examples, facts, expert opinion, etc.) for and against the position, a weighing of the benefits and drawbacks of the position that integrates the various pros and cons, and a final conclusion on the position (see Table 1). We then designed our worked example and scripts accordingly and embedded them in the online EduTech platform during the peer feedback phase.

Table 1. Features of a high-quality argumentative essay, based on the literature and adjusted by a panel of experts, with the corresponding scripting support.

The conditions varied only in terms of the peer feedback phase. In the worked example condition, the various elements of the argumentation model were presented to students before the peer feedback phase. Specifically, the worked example included a typical answer model of a high-quality argumentative essay. This answer model provided students with the key aspects of the elements of an argumentative essay (Sweller et al., Citation2011). The worked example in this study was an argumentative essay model on “the implementation and use of e-learning in organizations and its consequences”. In addition, the various components of the argumentative essay were explained and elaborated in the worked example. In the scripting condition, the various elements of the argumentation model were presented to students during the peer feedback phase through a set of question prompts in EduTech (see Table 1).
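For concreteness, the random assignment described at the start of this section (80 students shuffled into 40 dyads, with 20 dyads per condition) can be scripted in a few lines. The sketch below is an illustration under our own assumptions rather than the authors’ actual procedure; the student identifiers are hypothetical placeholders.

```python
import random

# 80 students, as in the study; the IDs are hypothetical placeholders.
students = [f"student_{i:02d}" for i in range(1, 81)]

random.seed(42)  # fixed seed only so this illustration is reproducible
random.shuffle(students)

# Pair consecutive students after shuffling to form 40 dyads.
dyads = [tuple(students[i:i + 2]) for i in range(0, len(students), 2)]

# The first 20 dyads go to the worked example condition, the rest to scripting.
conditions = {
    "worked_example": dyads[:20],
    "scripting": dyads[20:],
}

for name, group in conditions.items():
    print(name, len(group), "dyads")  # -> 20 dyads per condition
```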

2.4. EduTech and procedure

A custom-built online learning environment (EduTech) was designed. All learning processes and activities of students were recorded in the EduTech online learning platform. The platform was anonymous, meaning that in the peer feedback phase students did not know the identity of the feedback providers and receivers. Providing and receiving anonymous feedback is considered to actively engage students in peer feedback processes and activities (Nicol et al., Citation2014), to reduce bias in the feedback process, and to yield more objective feedback (Raes et al., Citation2015).

Overall, the study took about 5 h across five phases, divided over five consecutive weeks. In phase 1, students received introductory explanations in textual and verbal formats in EduTech. Then, they completed a survey covering demographic variables and domain-specific knowledge as the pre-test. In phase 2, students read articles and relevant texts on the topic of mobile learning, searched the Internet (using a set of keywords bolded in the text), and wrote a draft on the following statement: “The use of mobile devices such as phones and tablets in the classroom should be banned”. In phase 3, each student was asked to read the draft of her/his learning partner and provide feedback on it. In phase 4, each student read the comments of her/his learning partner and then revised her/his own draft based on the comments received. Finally, in phase 5, each student was asked to fill in a survey on their domain-specific knowledge as the post-test.

2.5. Measurements

2.5.1. Argumentative feedback and essay quality

A rubric was developed on the basis of Noroozi et al. (Citation2016) to measure the quality of students’ argumentative feedback and of their essays, both the draft and the revised versions. This rubric was built on the argumentation model presented in Table 1. The validity of the rubric was established by a panel of experts, namely three professors in the field of Educational Sciences and the first author of this article. The rubric included a series of elements that reflect the quality of students’ argumentative feedback and their essays (see Table 1). We assigned a single score for each of these elements in the draft, feedback, and revision phases. For each element, students could get a score between zero and two for peer feedback quality: a student received zero points if she/he did not provide any feedback related to that element of the argumentation model, one point if at least one comment was mentioned but not elaborated during peer feedback, and two points if at least one comment was mentioned and elaborated during peer feedback.

The same approach was applied to the quality of the argumentative essays in both the draft and the revision phases. Each student was given zero points if she/he did not mention anything related to a specific element of the argumentation model (not mentioned), one point if she/he provided at least one argument related to that element (non-elaborated), and two points if she/he provided arguments related to that element and also elaborated on them (elaborated). All points assigned to each student were summed to give the final score indicating the quality of their argumentative peer feedback and of their essays, for both the draft and revised versions. Two trained coders (an expert coder in content analysis and the first author of this article) coded 10% of the data in the feedback, draft, and revision phases to evaluate inter-rater agreement. This resulted in identical scores for 84% of the contributions in the feedback phase, 87% in the draft phase, and 90% in the revised versions. Discrepancies were resolved through discussion before the final coding. Once the research team was confident that the main coder could code the data reliably without further problems, the remaining 90% of the data was coded by the main coder individually.
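To make the scoring procedure concrete, the sketch below implements the rubric described above (0 = not mentioned, 1 = mentioned but not elaborated, 2 = mentioned and elaborated, summed over the eight elements for a 0–16 total) together with the simple percent agreement index used for inter-rater reliability. The element labels and all example codes are hypothetical illustrations, not the study’s actual rubric wording or data.

```python
# Hypothetical per-element codes for one student's peer feedback:
# 0 = not mentioned, 1 = mentioned but not elaborated, 2 = elaborated.
feedback_codes = {
    "position": 2, "introduction": 1, "arguments_for": 2, "evidence": 1,
    "counter_arguments": 0, "response_to_counters": 1, "integration": 2,
    "conclusion": 2,
}  # element labels are illustrative, not the exact rubric wording

def total_score(codes: dict) -> int:
    """Sum per-element codes into the 0-16 quality score used in the study."""
    assert all(0 <= v <= 2 for v in codes.values())
    return sum(codes.values())

def percent_agreement(coder_a: list, coder_b: list) -> float:
    """Share of contributions on which two coders assigned identical scores."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

print(total_score(feedback_codes))                    # 11 (out of 16)
print(percent_agreement([2, 1, 0, 2], [2, 1, 1, 2]))  # 0.75 on toy codes
```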

2.5.2. Domain-specific knowledge measurement

Pre-test and post-test knowledge surveys, each consisting of 10 multiple-choice questions, were used to measure students’ domain-specific knowledge acquisition. The questions were related to the topic of the essay, such as the appropriate functionalities of various educational technologies (e.g. computers and mobile devices, smartphones and tablets) and under which conditions and how to use them properly for learning purposes. The multiple-choice questions also covered relevant ethical issues and the pros and cons of using various types of educational technologies in classrooms. The pre-test was completed by students before the study and the draft phase, while the post-test was administered right after the revision phase. Each correct answer was given one point, so each student could receive a maximum of 10 points on both the pre-test and the post-test. The reliability coefficients for both the pre-test (Cronbach’s α = 0.83) and the post-test (Cronbach’s α = 0.79) were sufficiently high.
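For reference, Cronbach’s α for such a test can be computed directly from the item-level 0/1 score matrix, as in the minimal sketch below; the response data are invented purely for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students, n_items) score matrix."""
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total test scores
    return n_items / (n_items - 1) * (1 - item_vars.sum() / total_var)

# Invented 0/1 responses for 20 students on 10 multiple-choice items,
# with a shared "ability" term so the items correlate.
rng = np.random.default_rng(0)
ability = rng.normal(size=(20, 1))
scores = ((rng.normal(size=(20, 10)) + ability) > 0).astype(int)
print(round(cronbach_alpha(scores), 2))
```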

2.5.3. Data analysis

A one-way ANOVA was used to compare the two conditions in terms of students’ quality of peer feedback. A repeated-measures ANOVA was conducted to test whether students’ quality of argumentative essays improved from the draft version to the revised version, and another repeated-measures ANOVA was conducted to compare students’ domain-specific knowledge gain from pre-test to post-test.
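Under stated assumptions, these analyses can be reproduced with standard Python tooling, as sketched below on simulated stand-in data (means roughly matching those reported later). Note that statsmodels’ AnovaRM handles only within-subject factors, so the condition-by-time effect reported in the results would require a mixed ANOVA with a between-subjects term, which is not shown here.

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)

# Simulated per-student data: condition, peer feedback quality, and
# essay quality at draft (t1) and revision (t2). Values are stand-ins.
df = pd.DataFrame({
    "student": range(80),
    "condition": ["worked_example"] * 40 + ["scripting"] * 40,
    "feedback_quality": np.concatenate([rng.normal(9.0, 1.1, 40),
                                        rng.normal(11.6, 2.0, 40)]),
    "essay_t1": rng.normal(5.1, 1.2, 80),
    "essay_t2": rng.normal(7.8, 1.2, 80),
})

# RQ1: one-way ANOVA on peer feedback quality between the two conditions.
groups = [g["feedback_quality"] for _, g in df.groupby("condition")]
print(f_oneway(*groups))

# RQ2: repeated-measures ANOVA on essay quality (draft vs. revision).
long = df.melt(id_vars=["student", "condition"],
               value_vars=["essay_t1", "essay_t2"],
               var_name="time", value_name="essay_quality")
print(AnovaRM(long, "essay_quality", "student", within=["time"]).fit())
```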

3. Results

3.1. Results for research question 1

This section presents the findings for the effects of the worked example and scripting on students’ feedback quality. The results showed a significant difference between the worked example and scripting conditions in terms of argumentative feedback quality, F(1, 78) = 53.70, p < 0.001, η2 = 0.40. Specifically, the mean score of students in the worked example condition (M = 9.02, SD = 1.09) was significantly lower than that of students in the scripting condition (M = 11.62, SD = 1.95). Table 2 shows the students’ mean and standard deviation scores for quality of argumentative peer feedback in both conditions.

Table 2. Students’ mean scores for quality of argumentative peer feedback (max = 16; min = 0).
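As a quick consistency check (not a reanalysis of the data), the reported effect size can be recovered from the F ratio and its degrees of freedom via the standard identity for partial eta squared:

```latex
\eta_p^2 = \frac{F \cdot df_1}{F \cdot df_1 + df_2}
         = \frac{53.70 \times 1}{53.70 \times 1 + 78} \approx 0.41
```

which matches the reported η2 = 0.40 up to rounding; the same check reproduces the effect sizes reported for research questions 2 and 3 below.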

3.2. Results for research question 2

This section presents the findings for the effects of the worked example and scripting on the quality of students’ argumentative essays. The results showed that the quality of the argumentative essays of students in both conditions improved significantly from the draft phase to the revision phase, Wilks λ = 0.24, F(1, 78) = 244.34, p < 0.001, η2 = 0.75. However, there was a significant difference between the two conditions in terms of improvement. Students in the scripting condition (MT1 = 4.67; MT2 = 8.37; SDT1 = 0.88; SDT2 = 1.12) benefited more than students in the worked example condition (MT1 = 5.52; MT2 = 7.30; SDT1 = 1.35; SDT2 = 1.15), F(1, 78) = 30.20, p < 0.001, η2 = 0.27 (Table 3).

Table 3. Students’ draft and revised mean scores for quality of argumentative essay writing (max = 16; min = 0).

3.3. Results for research question 3

This section presents the findings for the effects of the worked example and scripting on students’ domain-specific learning. The results showed that the domain-specific knowledge of students in both conditions improved significantly from pre-test to post-test, Wilks λ = 0.14, F(1, 78) = 472.15, p < 0.001, η2 = 0.85. This knowledge gain did not differ significantly between the two conditions: students in the scripting condition (MT1 = 3.65; MT2 = 6.97; SDT1 = 0.97; SDT2 = 1.22) benefited as much as students in the worked example condition (MT1 = 3.25; MT2 = 6.47; SDT1 = 1.25; SDT2 = 1.06), F(1, 78) = 0.11, p = 0.74, η2 = 0.001.

4. Discussion

4.1. Discussion of results for research question 1

Students in both conditions provided meaningful, high-quality feedback on peers’ argumentative essays. Both the worked example and the scripting approach can therefore be regarded as applicable instructional strategies in online learning environments to facilitate peer feedback processes, which may result in better learning outcomes.

Online peer feedback environments can provide a secure and flexible setting for giving and receiving critical, constructive, and evaluative feedback anonymously (e.g. Wang et al., Citation2017). This capability can reduce peer pressure and its negative effects (Rourke & Kanuka, Citation2007). In this study, students in the scripting condition provided higher quality feedback than students in the worked example condition. This result is in line with previous research (e.g. Noroozi et al., Citation2016, Citation2018; Valero Haro et al., Citation2019) on the effect of scripted online peer feedback on the quality of students’ feedback comments (see Noroozi et al., Citation2016).

In this study, the online interface during the peer feedback process in the scripting condition was designed to actively prompt students to provide feedback on each of the eight components of the argumentative essay model. In this condition, students were required to write their feedback comments on each of these components in a set of text boxes; thus, they could not skip any of the eight components. In the worked example condition, students only learnt how to provide feedback to their peers, prior to the peer feedback phase, by studying a model answer. Since these students were not prompted during peer feedback, they may have neglected or forgotten to provide feedback on some of the components of their peer’s argumentative essay. This could explain why the scripting condition was superior to the worked example condition in terms of peer feedback quality.

4.2. Discussion of results for research question 2

The results indicated that the quality of students’ essays significantly improved over time (from the draft version to the revised version) for all students in both conditions, regardless of which scaffolds they received. In that sense, this study supports previous results (e.g. Latifi et al., Citation2019; Noroozi et al., Citation2016, Citation2018; Valero Haro et al., Citation2019) on the positive effects of online peer feedback on writing performance. Although the worked example and the scripts were originally designed to improve the quality of students’ argumentative peer feedback, these approaches fostered the quality of students’ argumentative essays too. For the worked example condition, Sweller et al. (Citation2011) argue that, by studying a worked example, students are able to learn key aspects of the problem and use those aspects to solve other equivalent problems. For the scripting condition, Noroozi et al. (Citation2016) showed that students who learn the essential components of an argumentative essay during peer feedback processes are also able to improve their own argumentative essays. Both the worked example and the scripting instructional design approach enabled students to engage in high-quality argumentation far beyond their current level of competence (Stegmann et al., Citation2007). Engaging in such high-quality argumentative peer feedback helped students learn how to provide solid argumentative feedback on various aspects of peers’ essays, and they could then transfer this argumentation to their revised argumentative essays. In other words, in both conditions, students learnt the structure and components of an argumentative essay during peer feedback, and this learning was clearly reflected in their own argumentative essays.

The results also indicated that the quality of the argumentative essays of students in the scripting condition improved more than that of students in the worked example condition. Noroozi et al. (Citation2016) showed that students’ learning outcomes depend directly on their learning processes and activities. In other words, students who receive and provide more clarified, elaborated, and justified feedback benefit more from the argumentative peer feedback setting. As explained before, the quality of the argumentative feedback of students in the scripting condition was higher than that of students in the worked example condition. Thus, the difference between the two conditions in the improvement of argumentative essay quality from the draft to the revision phase can be explained by looking at students’ learning processes and activities.

4.3. Discussion of results for research question 3

Overall, the domain-specific knowledge of students in both conditions significantly improved over time (from pre-test to post-test), regardless of which scaffolds they received. In that sense, this study supports previous findings (e.g. Latifi et al., Citation2019; Noroozi et al., Citation2013, Citation2018; Valero Haro et al., Citation2019) on the positive effects of supported online peer feedback on domain-specific learning.

The results also indicated that there was no significant difference between the two conditions in knowledge gain from pre-test to post-test. Both the worked example and the scripting support embedded in the EduTech platform allowed students to engage in high-quality feedback processes. In this regard, various studies have shown that engaging in high-quality feedback involves deeper cognitive processing and elaboration, which in turn fosters knowledge acquisition (e.g. Baker, Citation2003; Stegmann et al., Citation2007, Citation2012). Furthermore, giving feedback to and receiving feedback from peers, followed by solid reasoning and justified arguments, helped students clarify, reflect on, and acquire different perspectives on the issue (Bayerlein, Citation2014). These reflections and clarifications helped learners discover and benefit from the complementary knowledge of their learning partners, which may also have enhanced their domain-specific knowledge.

5. Conclusions, limitations, and suggestions for future research

This study investigated the impacts of two peer feedback scaffolding approaches, worked examples and scripting, on students’ learning processes and outcomes. The results showed that both approaches can improve the quality of students’ argumentative feedback and, consequently, their argumentative essays and domain-specific learning. However, students in the scripting condition outperformed students in the worked example condition in terms of argumentative peer feedback quality and essay writing.

These findings have significant implications. First, both the worked example and the scripting approach can enhance the quality of students’ learning processes and outcomes. This implies that, when the use of a scripting approach is not possible for any reason, worked examples can be an appropriate substitute for facilitating peer feedback processes. One advantage of worked examples is that they are easy to design and easy to embed in the user interface of online peer feedback environments. Also, the literature suggests that scripts, if not designed carefully, may impede learning processes by imposing cognitive load on learners (Dillenbourg, Citation2002), while the worked example strategy may reduce extraneous cognitive load (Sweller et al., Citation2011). Therefore, worked examples can optimize peer feedback processes in settings where we aim to reduce students’ extraneous cognitive load. Furthermore, in this study we found that the worked example effect can also be obtained in ill-structured domains. In other words, the worked example effect is not restricted to well-structured domains and can be extended to ill-structured domains as well (e.g. Spiro & DeSchryver, Citation2009).

There are some limitations to this study that can be addressed in a subsequent research agenda. First, most of the participants were female (92.5%). Scientific evidence has shown differences between females and males in terms of the quality of their argumentative feedback (Noroozi, Hatami, et al., Citation2018) and the quality of their argumentation processes and outcomes (Asterhan et al., Citation2012). This means that the results of this study should be treated with caution for samples with both female and male learners. We therefore propose conducting similar studies with roughly equal numbers of female and male participants to see whether the effects of the worked example and scripting instructional design approaches differ between female and male students. Second, this study was conducted in a real educational setting, which imposed some limits on the sampling and the domain of the study. Future studies could be conducted with more participants and in different domains to see to what extent the outcomes of this study generalize. Third, this study involved only a single practice session. The effects of the worked example and scripting approaches may become much more visible in longer studies over time, because automation takes place slowly and therefore requires substantial acquisition time (Sweller et al., Citation2011). It is therefore recommended that future studies investigate the effects of these approaches over a longer period (more than one practice session). Finally, in the present study, quantitative scoring rubrics were used to measure the quality of students’ feedback messages and their argumentative essays. The use of qualitative methods to analyze the content of students’ feedback messages and essays in depth could reveal more about the actual effects of the worked example and scripting approaches. Future studies can therefore use both quantitative and qualitative measurements to see whether the outcomes are comparable.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Saeed Latifi

Saeed Latifi is a PhD graduate in Educational Technology from Tarbiat Modares University, Iran. He is now working at Kharazmi University, Iran. His research interests include E-Learning and Distance Education, Computer-Supported Collaborative Learning (CSCL), CSCL Scripts, and Transactivity. He is also interested in designing e-learning courses that learners feel comfortable with (i.e. user-friendly).

Omid Noroozi

Omid Noroozi is an Associate Professor at the Education and Learning Sciences Chair Group, Wageningen University and Research, The Netherlands (bode 68, P.O. Box 8130, 6700 EW Wageningen, The Netherlands; Tel.: +31(0)317482710; E-mail: [email protected]). His research interests include Peer Feedback, Collaborative Learning, E-Learning and Distance Education, Computer-Supported Collaborative Learning (CSCL), Argumentative Knowledge Construction in CSCL, Argumentation-Based CSCL, CSCL Scripts and Transactivity.

Ebrahim Talaee

Ebrahim Talaee is a faculty member of Educational Technology at Tarbiat Modares University, Iran. His research interests include Educational Technology, Peer Feedback, E-Learning and Distance Education.

References

  • Andrews, R. (1995). About argument: Teaching and learning argument. Cassell. https://ueaeprints.uea.ac.uk/id/eprint/58902/
  • Andrews, R. (2010). Argumentation in higher education. Improving practice through theory and research. Routledge.
  • Asterhan, C. S. C., Schwarz, B. B., & Gil, J. (2012). Small-group, computer-mediated argumentation in middle-school classrooms: The effects of gender and different types of online teacher guidance. British Journal of Educational Psychology, 82(3), 375–397. https://doi.org/10.1111/j.2044-8279.2011.02030.x
  • Bacha, N. N. (2010). Teaching the academic argument in a university EFL environment. Journal of English for Academic Purposes, 9(3), 229–241. https://doi.org/10.1016/j.jeap.2010.05.001
  • Baker, M. (2003). Computer-mediated argumentative interactions for the co-elaboration of scientific notions. In J. Andriessen, M. Baker, & D. Suthers (Eds.), Arguing to learn: Confronting cognitions in computer-supported collaborative learning environments (pp. 47–78). Kluwer. https://doi.org/10.1007/978-94-017-0781-7_3
  • Baker, M. (2016). Peer review as a strategy for improving students’ writing process. Active Learning in Higher Education. https://doi.org/10.1177/1469787416654794
  • Bayerlein, L. (2014). Students’ feedback preferences: How do students react to timely and automatically generated assessment feedback? Assessment and Evaluation in Higher Education, 39(8), 916–931. https://doi.org/10.1080/02602938.2013.870531
  • van den Boom, G., Paas, F., & van Merriënboer, J. J. G. (2007). Effects of elicited reflections combined with tutor or peer feedback on self-regulated learning and learning outcomes. Learning and Instruction, 17, 532–548. https://doi.org/10.1016/j.learninstruc.2007.09.003
  • Carroll, W. M. (1994). Using worked examples as an instructional support in the algebra classroom. Journal of Educational Psychology, 86(3), 360–367. https://doi.org/10.1037/0022-0663.86.3.360
  • Chen, Y. C., & Tsai, C. C. (2009). An educational research course facilitated by online peer assessment. Innovations in Education and Teaching International, 46(1), 105–117. https://doi.org/10.1080/14703290802646297
  • Cheng, K. H., Hou, H. T., & Wu, S. Y. (2014). Exploring students’ emotional responses and participation in an online peer assessment activity: A case study. Interactive Learning Environments, 22(3), 271–287. https://doi.org/10.1080/10494820.2011.649766
  • Cho, K., & Schunn, C. D. (2007). Scaffolded writing and rewriting in the discipline: A web-based reciprocal peer review system. Computers and Education, 48(3), 409–426. https://doi.org/10.1016/j.compedu.2005.02.004
  • Davies, P. (2000). Computerized peer assessment. Innovations in Education and Teaching International, 37(4), 346–355. https://doi.org/10.1080/135580000750052955
  • Dillenbourg, P. (2002). Over-scripting CSCL: The risks of blending collaborative learning with instructional design. In P. A. Kirschner (Ed.), Three worlds of CSCL: Can we support CSCL? Open Universiteit Nederland.
  • Dillenbourg, P., Jarvela, S., & Fischer, F. (2009). The evolution of research on computer-supported collaborative learning: From design to orchestration. In N. Balacheff, S. Ludvigsen, T. De Jong, A. Lazonder, & S. Barnes (Eds.), Technology enhanced learning: Principles and Products. Springer. https://doi.org/10.1007/978-1-4020-9827-7_1
  • Falchikov, N. (2001). Learning together: Peer tutoring in higher education. RoutledgeFalmer.
  • Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48(1), 56–66. https://doi.org/10.1080/00461520.2012.748005
  • Gan, M. J. S., & Hattie, J. (2014). Prompting secondary students’ use of criteria, feedback specificity and feedback levels during an investigative task. Instructional Science, 42(6), 861–878. https://doi.org/10.1007/s11251-014-9319-4
  • Gielen, M., & De Wever, B. (2012). Peer assessment in a wiki: Product improvement, students’ learning and perception regarding peer feedback. Procedia – Social and Behavioral Sciences, 69, 585–594. https://doi.org/10.1016/j.sbspro.2012.11.450
  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
  • Hsia, L. H., Huang, I., & Hwang, G. J. (2016). Effects of different online peer-feedback approaches on students’ performance skills, motivation and self-efficacy in a dance course. Computers and Education, 96, 55–71. https://doi.org/10.1016/j.compedu.2016.02.004
  • Huisman, B., Saab, N., van Driel, J., & van den Broek, P. (2018). Peer feedback on academic writing: Undergraduate students’ peer feedback role, peer feedback perceptions and essay performance. Assessment and Evaluation in Higher Education, 43(6), 955–968. https://doi.org/10.1080/02602938.2018.1424318
  • Hyland, K. (1990). A genre description of the argumentative essay. RELC Journal, 21(1), 66–78. https://doi.org/10.1177/003368829002100105
  • Kalyuga, S., & Sweller, J. (2004). Measuring knowledge to optimize cognitive load Factors during Instruction. Journal of Educational Psychology, 96(3), 558–568. https://doi.org/10.1037/0022-0663.96.3.558
  • Kellogg, R. T., & Whiteford, A. P. (2009). Training advanced writing skills: The case for deliberate practice. Educational Psychologist, 44(4), 250–266. https://doi.org/10.1080/00461520903213600
  • King, A. (2002). Structuring peer interaction to promote high-level cognitive processing. Theory Into Practice, 41(1), 33–39. https://doi.org/10.1207/s15430421tip4101_6
  • Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86. https://doi.org/10.1207/s15326985ep4102_1
  • Koedinger, K. R., & Aleven, V. (2007). Exploring the assistance dilemma in experiments with cognitive tutors. Educational Psychology Review, 19(3), 239–264. https://doi.org/10.1007/s10648-007-9049-0
  • Kollar, I., & Fischer, F. (2010). Peer assessment as collaborative learning: A cognitive perspective. Learning and Instruction, 20(4), 344–348. https://doi.org/10.1016/j.learninstruc.2009.08.005
  • Kollar, I., Fischer, F., & Hesse, F. W. (2006). Collaboration scripts – A conceptual analysis. Educational Psychology Review, 18(2), 159–185. https://doi.org/10.1007/s10648-006-9007-2
  • Kollar, I., Fischer, F., & Slotta, J. D. (2007). Internal and external scripts in computer-supported collaborative inquiry learning. Learning and Instruction, 17(6), 708–721. https://doi.org/10.1016/j.learninstruc.2007.09.021
  • Kollar, I., Ufer, S., Reichersdorfer, E., Vogel, F., Fischer, F., & Reiss, K. (2014). Effects of collaboration scripts and heuristic worked examples on the acquisition of mathematical argumentation skills of teacher students with different levels of prior achievement. Learning and Instruction, 32, 22–36. https://doi.org/10.1016/j.learninstruc.2014.01.003
  • Kyun, S., Kalyuga, S., & Sweller, J. (2013). The effect of worked examples when learning to write essays in English literature. The Journal of Experimental Education, 81(3), 385–408. https://doi.org/10.1080/00220973.2012.727884
  • Latifi, S., Noroozi, O., Hatami, J., & Biemans, H. J. A. (2019). How does online peer feedback improve argumentative essay writing and learning? Innovations in Education and Teaching International, https://doi.org/10.1080/14703297.2019.1687005
  • Leitão, S. (2003). Evaluating and selecting counterarguments: Studies of children’s rhetorical awareness. Written Communication, 20(3), 269–306. https://doi.org/10.1177/0741088303257507
  • Lin, G. Y. (2018). Anonymous versus identified peer assessment via a Facebook-based learning application: Effects on quality of peer feedback, perceived learning, perceived fairness, and attitude toward the system. Computers and Education, 116, 81–92. https://doi.org/10.1016/j.compedu.2017.08.010
  • Liu, N. F., & Carless, D. (2006). Peer feedback: The learning element of peer assessment. Teaching in Higher Education, 11(3), 279–290. https://doi.org/10.1080/13562510600680582
  • McGourty, J. (2000). Using multisource feedback in the classroom: A computer-based approach. IEEE Transactions on Education, 43(2), 120–124. https://doi.org/10.1109/13.848062
  • Mei, W. S. (2006). Creating a contrastive rhetorical stance: Investigating the strategy of problematization in students’ argumentation. RELC Journal, https://doi.org/10.1177/0033688206071316
  • Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: A peer review perspective. Assessment and Evaluation in Higher Education, 39(1), 102–122. https://doi.org/10.1080/02602938.2013.795518
  • Noroozi, O., Biemans, H., & Mulder, M. (2016). Relations between scripted online peer feedback processes and quality of written argumentative essay. The Internet and Higher Education, 31, 20–31. https://doi.org/10.1016/j.iheduc.2016.05.002
  • Noroozi, O., Hatami, J., Bayat, A., van Ginkel, S., Biemans, H. J. A., & Mulder, M. (2018). Students’ online argumentative peer feedback, essay writing, and content learning: Does gender matter? Interactive Learning Environments, https://doi.org/10.1080/10494820.2018.1543200
  • Noroozi, O., Kirschner, P. A., Biemans, H. J. A., & Mulder, M. (2018). Promoting argumentation competence: Extending from first- to second-order scaffolding through adaptive fading. Educational Psychology Review, 30(1), 153–176. https://doi.org/10.1007/s10648-017-9400-z
  • Noroozi, O., Weinberger, A., Biemans, H. J. A., Mulder, M., & Chizari, M. (2013). Facilitating argumentative knowledge construction through a transactive discussion script in CSCL. Computers and Education, 61(1), 59–76. https://doi.org/10.1016/j.compedu.2012.08.013
  • Novakovich, J. (2016). Fostering critical thinking and reflection through blog-mediated peer feedback. Journal of Computer Assisted Learning, 32(1), 16–30. https://doi.org/10.1111/jcal.12114
  • Paas, F. G. W. C., & Van Merriënboer, J. J. G. (1994). Variability of worked examples and transfer of geometrical problem-solving skills: A cognitive-load approach. Journal of Educational Psychology, 86(1), 122–133. https://doi.org/10.1037/0022-0663.86.1.122
  • Peters, O., Koerndle, H., & Narciss, S. (2017). Effects of a formative assessment script on how vocational students generate formative feedback to a peer’s or their own performance. European Journal of Psychology of Education, 1–27. https://doi.org/10.1007/s10212-017-0344-y
  • Raes, A., Vanderhoven, E., & Schellens, T. (2015). Increasing anonymity in peer assessment by using classroom response technology within face-to-face higher education. Studies in Higher Education, 40(1), 178–193. https://doi.org/10.1080/03075079.2013.823930
  • Reid, J. M. (1988). The process of composition. Prentice Hall.
  • Renkl, A. (2005). The worked-out examples principle in multimedia learning. In The Cambridge handbook of multimedia learning (pp. 229–245). Cambridge University Press. https://doi.org/10.1017/CBO9780511816819.016
  • Rourke, L., & Kanuka, H. (2007). Barriers to online critical discourse. International Journal of Computer-Supported Collaborative Learning, 2(1), 105–126. https://doi.org/10.1007/s11412-007-9007-3
  • Schneer, D. (2014). Rethinking the argumentative essay. TESOL Journal, 5(4), 619–653. https://doi.org/10.1002/tesj.123
  • Spiro, R. J., & DeSchryver, M. (2009). Constructivism: When it’s the wrong idea and when it’s the only idea. In S. Tobias & T. M. Duffy (Eds.), Constructivist instruction: Success or failure? (pp. 106–123). Routledge.
  • Stegmann, K., Wecker, C., Weinberger, A., & Fischer, F. (2012). Collaborative argumentation and cognitive elaboration in a computer-supported collaborative learning environment. Instructional Science, 40(2), 297–323. https://doi.org/10.1007/s11251-011-9174-5
  • Stegmann, K., Weinberger, A., & Fischer, F. (2007). Facilitating argumentative knowledge construction with computer-supported collaboration scripts. International Journal of Computer-Supported Collaborative Learning, 2(4), 421–447. https://doi.org/10.1007/s11412-007-9028-y
  • Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory. Springer.
  • Topping, K. J. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249–276. https://doi.org/10.3102/00346543068003249
  • Toulmin, S. E. (1958). The uses of argument. Cambridge University Press.
  • Toulmin, S. E., Rieke, R. D., & Janik, A. (1984). An introduction to reasoning. Macmillan.
  • Tsai, C. C. (2009). Internet-based peer assessment in high school settings. In Handbook of research on new media literacy at the K-12 level (pp. 743–754). IGI Global.
  • Valero Haro, A., Noroozi, O., Biemans, H. J. A., & Mulder, M. (2019). The effects of an online learning environment with worked examples and peer feedback on students’ argumentative essay writing and domain-specific knowledge acquisition in the field of biotechnology. Journal of Biological Education, 53(4), 390–398. https://doi.org/10.1080/00219266.2018.1472132
  • Wang, X.-M., Hwang, G.-J., Liang, Z.-Y., & Wang, H.-Y. (2017). Enhancing students’ computer programming performances, critical thinking awareness and attitudes towards programming. Journal of Educational Technology & Society, 20(4), 58–68. http://www.jstor.org/stable/26229205
  • Wen, M. L., & Tsai, C. C. (2008). Online peer assessment in an inservice science and mathematics teacher education course. Teaching in Higher Education, 13(1), 55–67. https://doi.org/10.1080/13562510701794050
  • Wingate, U. (2012). ‘Argument!’ helping students understand what essay writing is about. Journal of English for Academic Purposes, 11(2), 145–154. https://doi.org/10.1016/j.jeap.2011.11.001
  • Wood, N. V. (2001). Perspectives on argument. Prentice Hall.
  • Xiao, Y., & Lucking, R. (2008). The impact of two types of peer assessment on students’ performance and satisfaction within a wiki environment. The Internet and Higher Education, 11(3–4), 186–193. https://doi.org/10.1016/j.iheduc.2008.06.005