Research Article

Students’ perceptions of the peer-feedback experience in MOOCs

Julia Kasch, Peter van Rosmalen, Ansje Löhr, Roland Klemke, Alessandra Antonaci & Marco Kalz
Pages 145-163 | Received 26 May 2020, Accepted 23 Dec 2020, Published online: 10 Feb 2021

ABSTRACT

Various studies advocate training students prior to a peer-feedback activity to ensure high-quality feedback. Next to investing in students’ peer-feedback skills, it is important to focus on the underlying perceptions, since perceptions influence learning behavior. We implemented an online peer-feedback training session in a massive open online course and examined students’ perceptions of peer-feedback and training, focusing on their willingness, perceived usefulness, perceived preparedness, and general attitude, as well as students’ peer-feedback experience and its relation to their perceptions. Analysis of a perception survey completed by 259 students revealed that the amount of prior experience results in significant differences in students’ perceptions. Students without prior peer-feedback experience scored higher on willingness, usefulness, preparedness, and general attitude compared to students with some prior experience. Those with a lot of experience showed the strongest positive perception scores. No significant effects of training on perception could be measured with the available data.

Introduction

Due to their open character, open online courses such as massive open online courses (MOOCs) are often taken by large numbers of students from different backgrounds (Kulkarni et al., Citation2013). In MOOCs, teachers can teach hundreds or thousands of students at the same time but are challenged to enable meaningful interactions with and among students (Yousef et al., Citation2015). Peer-feedback stimulates students to interact with each other in a meaningful way, learning from and with each other (Gikandi et al., Citation2011; Keppell et al., Citation2006; Xie, Citation2013). Due to its formative character, peer-feedback promotes engagement with the course material and interaction between peers (McCarthy, Citation2017). In a MOOC, it can provide a safe place for students to exchange ideas and to receive as well as provide critical feedback. In this study, we focused on the implementation of an online peer-feedback training and activity session in a MOOC in which students were asked to provide each other with formative feedback as part of a learning activity. Ideally, formative feedback informs students about strong and weak points in combination with suggestions on how to improve (Neubaum et al., Citation2014). Formative feedback, when elaborated, enables students to reflect on their own learning and provides them with information on how to improve their performance (Gikandi et al., Citation2011; Narciss & Huth, Citation2004; Vonderwell et al., Citation2007). It can be an effective learning activity that significantly improves students’ motivation to engage in online learning (Liu & Carless, Citation2006; Xie, Citation2013). At the same time, however, student motivation is a prerequisite for successful online learning and peer-feedback (Xie, Citation2013). Due to the physical distance in (open) online learning, there is often a lack of interaction (McBrien et al., Citation2009). This transactional distance (Moore, Citation2013), that is, a learner’s sense of distance, can to some extent be addressed by providing opportunities for flexible learning and meaningful dialogue (e.g., peer-feedback).

However, providing good feedback is a skill that not every learner possesses (Carless & Boud, Citation2018). Students and experts exhibit differences in feedback quality, yet the majority of students are able to meet the given evaluation criteria and provide feedback of good quality (Hovardas et al., Citation2014). Various studies have recommended training students prior to using peer-feedback, providing clear instructions, transparency of the criteria (e.g., via rubrics or exemplars), cues, and the possibility to practice (Boud & Molloy, Citation2013; Evans, Citation2013; Hovardas et al., Citation2014; Jönsson & Panadero, Citation2017; Kulkarni et al., Citation2013; Narciss & Huth, Citation2004; Nicol & Macfarlane-Dick, Citation2007; Sluijsmans et al., Citation2002; Suen, Citation2014; Wang, Citation2014; Yousef et al., Citation2015).

Although peer-feedback enables teachers to manage large numbers of students, its quality and effectiveness depends on the design and the way students are prepared (Boud & Molloy, Citation2013). A study in face-to-face education showed significant effects of peer-feedback training on peer-feedback skills (Sluijsmans et al., Citation2002). Hsiao et al. (Citation2015) found that for feedback training, high domain knowledge is not the main prerequisite for providing good feedback on students’ self-regulation (i.e., the highest level of feedback). Training focused on pedagogical and motivational aspects of peer-feedback results in better feedback compared to training focused on domain knowledge. Yet, students’ domain knowledge is perceived as a relevant aspect in peer-feedback and should ideally be taken into account by teachers. Studies in MOOCs in particular (Kasch et al., Citation2017, Citation2020) have shown that MOOC learners did not receive clear information on the educational purpose of participating in peer-feedback prior to peer-feedback activities. Additionally, instructions on how to use rubrics and examples of good practice were scarce.

Despite the educational value of peer-feedback, previous studies revealed contradictory findings regarding student beliefs and perceptions (Luo et al., Citation2014; Zutshi et al., Citation2013). Studies have found low student motivation to provide peer-feedback (Neubaum et al., Citation2014), students’ mistrust of its quality (Suen, Citation2014), and a decrease in perceived usefulness (Alqassab et al., Citation2018; Wang, Citation2014), but also an increase in the perceived usefulness of peer-feedback (Sluijsmans et al., Citation2002). Luo et al. (Citation2014) even found that students recommended the implementation of peer-feedback activities. Students’ perceived usefulness and willingness have been studied in different contexts (Hsiao et al., Citation2015; Neubaum et al., Citation2014; Wang, Citation2014), yet the possible effects of peer-feedback training on the participation of MOOC students remain unclear.

Next to the importance and effects of training, previous research has pointed out the influence of prior peer-feedback experience on students’ responses and beliefs about feedback (Keppell et al., Citation2006; Price et al., Citation2011; Van Gennip et al., Citation2009). Students’ perceptions and beliefs can also be seen as a result of their previous experiences. Again, mixed findings are reported in the feedback literature (Huisman et al., Citation2019). Although, in general, students have positive beliefs about the value of peer-feedback, Mulder et al. (Citation2014) pointed out that beliefs change over time and that the perceived value of peer-feedback decreases after students have participated in a peer-feedback activity.

Research on the pedagogical effectiveness of MOOCs is limited (Jung et al., Citation2019; Keppell et al., Citation2006; Meek et al., Citation2017). Although there is a lot of literature on peer-feedback processes, feedback levels (task, process, and self-regulation), and design, there is still a need for empirical research (Mercader et al., Citation2020), in particular in an open online context. With this study, we aimed to shed light on students’ perceptions regarding peer-feedback and peer-feedback training and the possible effects of prior peer-feedback experiences on those perceptions.

By providing a dedicated training session on how to provide feedback, how to use an online rubric and by explaining the educational value of feedback, we expected to positively influence students’ peer-feedback perceptions. Since MOOC students can decide for themselves whether participating in an activity is useful and valuable to them or not, it is important to stress the value of learning materials (Jung et al., Citation2019). Without any incentive, MOOC students might not see value in investing time to provide and receive peer-feedback.

We developed an online questionnaire to investigate student perceptions of peer-feedback, peer-feedback training and prior peer-feedback experience. Based on the feedback literature (Hsiao et al., Citation2015; Neubaum et al., Citation2014; Phua et al., Citation2012; Wang, Citation2014), we included the following perception variables: willingness, usefulness, preparedness, and general attitude (see the appendix for details of the pre-test and post-test questionnaires).

Using an experimental approach, we investigated the following two research questions (RQs):

  • RQ1: To what extent does online peer-feedback training in a MOOC positively influence students’ perception of peer-feedback and peer-feedback training?

  • RQ2: What is the relationship between students’ prior peer-feedback experience and their peer-feedback perceptions?

Students receiving peer-feedback training are expected to have a more positive attitude regarding peer-feedback and peer-feedback training compared to those who do not receive training. They are also expected to be more willing to participate in peer-feedback and peer-feedback training, to perceive peer-feedback training as more useful, and to feel more prepared (Carless & Boud, Citation2018). Based on the literature, we expected mixed results regarding students’ prior peer-feedback experience (Mulder et al., Citation2014; Price et al., Citation2011). Previous studies reported effects of prior experience on perceptions, which is why we expected to see significant differences between students with little experience and those with much experience.

Method

Research design

The effect of the online peer-feedback training on student perception was studied using an experimental approach (see Figure 1). Student perceptions were investigated by comparing the results of the pre- and post-test questionnaires of both the treatment and control groups. The prior peer-feedback experience of all students (N = 259) was investigated using the results of the pre-test questionnaire only.

Figure 1. Study design showing activities and measurements

Participants

Participants were students of the Marine Litter MOOC, which ran on the Open edX platform from May to August 2019, was taught in Spanish and English at the Bachelor of Science level, and was offered by the United Nations Environment Programme and the Open University of the Netherlands. The mean age of the participants who were included in the analyses (N = 259) was 33 years, with 53% being female, 46% male, and 1% other or not indicated. The study was approved by the ethics commission of the Open University of the Netherlands, and participation in the study was based on informed consent.

Peer-feedback in the Marine Litter MOOC

With a runtime of 4 months, the MOOC on marine litter required students to work on change-oriented solutions; it included topics on managing, modeling, and monitoring marine litter (Löhr et al., Citation2017). The MOOC promotes environmental activism with regard to marine litter (Tabuenca et al., Citation2019).

In the second week of the MOOC, students were randomly assigned to either the experimental or the control group. The student allocation was done automatically by the platform of the MOOC (Open edX).

The experimental group was offered a peer-feedback training session, while students in the control group merely had the option to participate in a peer-feedback activity (see Figure 1). As with any activity in a MOOC, student participation was voluntary. The third week of the MOOC concluded with an authentic assignment in which students were required to submit a visualization of a marine litter problem by means of the driver-pressure-state-impact-response framework (DPSIR; Kristensen, Citation2004). DPSIR is an adaptive management tool for analyzing environmental problems and mapping potential responses. Students did not receive a grade on the assignment but had to submit it in order to receive a MOOC completion certificate. Both the peer-feedback training and activity were linked to the assignment and were therefore relevant for the students (Gikandi et al., Citation2011). Students were given two weeks to submit their draft assignment and provide peer-feedback via the peer-feedback tool of the edX MOOC platform. Taking into account the study load, the training and activity had to fit into the short runtime of the MOOC; therefore, students were asked to provide feedback on only two of the five aspects of the DPSIR framework. To receive peer-feedback, students had to upload their draft assignment to the peer-feedback tool, which then randomly and anonymously assigned students to each other. Students were able to review each other’s draft assignments, assign quality levels, and provide constructive formative feedback and recommendations on how to improve the two DPSIR aspects via the online rubric. Prior to the peer-feedback activity, students of the experimental group received a short list of good practices for providing peer-feedback.

Peer-feedback training and activity

The peer-feedback training was developed based on the premise that the social and cognitive process of providing and receiving peer-feedback is affected by various factors, including the instructional design (Boud & Molloy, Citation2013; Mercader et al., Citation2020; Winstone & Carless, Citation2019). The goal of the peer-feedback training was twofold: to prepare students to interpret the quality criteria of the rubric and get an impression of the upcoming peer-feedback activity; and to make students aware of the educational value of peer-feedback, for example, receiving hints on how to improve, reflecting on their own work, and gaining insight into new perspectives (Kasch et al., Citation2017; Xie, Citation2013).

Without making students aware of the personal benefits of peer-feedback, training will not lead to high voluntary participation (Xie, Citation2013).

To achieve these two goals, the training included the following aspects, which are based on common design recommendations (Boud & Molloy, Citation2013; Jönsson & Svingby, Citation2007; Narciss & Huth, Citation2004; Nicol & Macfarlane-Dick, Citation2007; Winstone & Carless, Citation2019).

Peer-feedback as a learning goal

With the training and instructions, we informed students about peer-feedback and the expected outcomes and fostered a shared understanding of what good formative feedback entails. By following the training, students learned how to use and interpret the online rubric but also came to see the value of practicing and acquiring this skill. The peer-feedback training was part of the MOOC and therefore a learning activity through which students could acquire the required new skills. Students were informed that participating in the training would prepare them for the upcoming peer-feedback activity and that providing and receiving peer-feedback would help them gain an understanding of their own work as well as that of their peers. Moreover, this would help them to improve their DPSIR draft.

Instruction video

In collaboration with one of the MOOC teachers and given the marine litter context, the training included a video (duration 4:45 min) in which the feedback process, the rubric, and two DPSIR elements were explained. The video explained the relevance of peer-feedback in the context of combating marine litter in order to enhance students’ peer-feedback perceptions (general attitude, willingness, preparedness, and perceived usefulness of peer-feedback and training). Students were informed how to interpret the two elements on which they had to provide feedback. Each DPSIR element had to be assigned one of three quality levels (low, average, or high). The three quality levels were explained by means of a fictional DPSIR scheme. Students were supported by prompts, which helped them to formulate feedback and recommendations. To enhance transparency and to prepare students for the peer-feedback activity, the training used the same rubric as the peer-feedback activity.

Practicing with peer-feedback

In addition to the video, students could actively practice with the rubric itself by providing feedback on the fictional DPSIR scheme. Students could check their understanding of the DPSIR model by selecting the most suitable feedback comment and quality level (low, average, or high). During the exercise, students were provided with automated elaborate feedback, which explained why their choice was good or not and whether another feedback comment and/or quality level would have been more suitable. The elaborate feedback made the exercise both meaningful and scalable (Alqassab et al., Citation2018; Boud & Molloy, Citation2013; Jönsson & Panadero, Citation2017; Zutshi et al., Citation2013).

Measures and instruments

Students’ peer-feedback experience and perceptions were measured before the peer-feedback training (pre-test). Directly after the peer-feedback activity (post-test), student perceptions were measured again (see Figure 1). The pre- and post-test questionnaires were developed in the context of this course and contained the following sections (see the Appendix for more details):

  • Demographics

  • Experience with peer-feedback (pre-test only)

  • Experience with DPSIR (pre-test only)

  • Willingness

  • Usefulness

  • Preparedness

  • General attitude

  • Peer-feedback quality (post-test only).

Analyses

We conducted a mixed ANOVA to ascertain the extent to which online peer-feedback training in a MOOC can positively influence students’ perception of peer-feedback and peer-feedback training (RQ1). The independent variables consisted of a between-subjects factor, the group to which students were assigned (control vs. experimental group), and a within-subjects factor, the measurement occasion (pre-test vs. post-test questionnaire). The dependent variables were the four perception scores: willingness, usefulness, preparedness, and general attitude.
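As an illustration, such a mixed ANOVA can be specified in base R (the analyses were conducted in R; see below). This is a minimal sketch and not the authors’ script: the file name and column names are assumed, and the data are expected in long format (one row per student per measurement occasion).

```r
# Minimal sketch of the mixed ANOVA design (assumed data layout and names).
ratings <- read.csv("perception_scores.csv")        # hypothetical file
ratings$id          <- factor(ratings$id)
ratings$group       <- factor(ratings$group)        # control / experimental
ratings$measurement <- factor(ratings$measurement)  # pre-test / post-test

# Between-subjects factor: group; within-subjects factor: measurement.
# The Error() term tells aov() that measurement varies within each student.
fit <- aov(preparedness ~ group * measurement + Error(id / measurement),
           data = ratings)
summary(fit)  # F-tests for group, measurement, and group:measurement
```

The same model would be fitted separately for each perception score (willingness, usefulness, preparedness, and general attitude).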

We carried out an ordinal regression analysis (generalized linear model) to investigate the relationship between prior peer-feedback experience and perception (RQ2). The advantage of an ordinal regression analysis over a categorical regression analysis is that the order of the answer options is taken into account. Furthermore, unlike linear regression, an ordinal regression analysis does not assume equal distances between answer options. For the regression analysis, the pre-test questionnaire responses (N = 259) of both the treatment and control groups were used. All analyses were conducted using R version 3.5.3.
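An ordinal regression of this kind can be fitted in R as a proportional-odds model with polr() from the MASS package. The sketch below is illustrative only; the file name, variable names, and experience levels are assumptions, not the authors’ actual coding.

```r
# Minimal sketch of an ordinal (proportional-odds) regression.
library(MASS)

pre <- read.csv("pretest_responses.csv")  # hypothetical file, N = 259
pre$willingness <- factor(pre$willingness, ordered = TRUE)  # ordered Likert response
pre$experience  <- factor(pre$experience)  # e.g., never / once / several times

fit <- polr(willingness ~ experience, data = pre, Hess = TRUE)
summary(fit)    # coefficients on the cumulative log-odds scale
exp(coef(fit))  # odds ratios relative to the reference category
```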

Results

Of the 5,433 enrolled students, 171 completed the first half of the MOOC (where our experiment took place) with a certificate. A total of 263 students filled in the pre-test questionnaire completely, of whom 4 did not give consent to include their data in this study, resulting in 259 students for the pre-test. A total of 74 students filled in the post-test questionnaire: 43 students in the control group and 31 in the experimental group. A total of 45 students filled in both the pre-test and post-test questionnaires: 31 students in the control group and 14 in the experimental group.

Influence of peer-feedback training on perception of peer-feedback and peer-feedback training (RQ1)

As presented in Table 1, the results from the mixed ANOVA showed a significant effect of measurement (pre-test vs. post-test) on perception scores for preparedness (F(1, 43) = 4.46, p = .041). Perceived preparedness was significantly higher in the pre-test measurement (M = 2.77, SD = 1.08) than in the post-test (M = 2.40, SD = 1.30). No significant effects were found for the condition (treatment vs. control) on the perception values (willingness, usefulness, preparedness, and general attitude), nor for the interaction between condition and measurement (pre-test vs. post-test). The experimental group received peer-feedback training prior to the peer-feedback activity. The answers on the pre-test questionnaire show that 81% of the participants were unfamiliar with peer-feedback training in a MOOC. However, 45% of the participants had experience with using a rubric to provide feedback (26% sometimes, 12% often, and 7% always).
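The p-value follows directly from the reported F-statistic and its degrees of freedom, which can be checked in R:

```r
# Upper-tail probability of an F(1, 43) distribution at the reported F = 4.46:
pf(4.46, df1 = 1, df2 = 43, lower.tail = FALSE)
# approximately 0.041, i.e., significant at the .05 level
```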

Table 1. Mixed ANOVA results for students’ peer-feedback perceptions

Relationship between peer-feedback experience and peer-feedback perceptions (RQ2)

The results of the regression analyses, with perception (willingness, usefulness, preparedness, and general attitude) as dependent and peer-feedback experience as independent variables, are presented in Tables 2 and 3. We tested the constructs of the pre-test (N = 259) for internal consistency using Cronbach’s alpha. The alpha values were between acceptable (0.7 ≤ α < 0.8) and good (0.8 ≤ α < 0.9), indicating that the items of each construct consistently measured the intended concept.
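Cronbach’s alpha per construct can be computed from the item responses, for instance with the psych package; a minimal sketch with assumed item column names (the actual items are listed in the Appendix):

```r
# Minimal sketch: internal consistency of one construct (e.g., willingness).
library(psych)

pre_items <- read.csv("pretest_responses.csv")  # hypothetical file
# Select the items belonging to the willingness scale (assumed names):
willingness_items <- pre_items[, c("will_1", "will_2", "will_3")]
psych::alpha(willingness_items)  # reports raw and standardized alpha
```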

Table 2. Ordinal regression analyses with the peer-feedback perception as dependent and peer-feedback experience as independent variables

Table 3. Ordinal regression analyses with the peer-feedback perception and peer-feedback experience as dependent and independent variables respectively

To explore students’ prior experience with peer-feedback, the answers of the 259 students on the pre-test questionnaire were used. A majority of participants (72%) indicated that they already had experience with peer-feedback prior to enrolling in this MOOC; this experience had mostly been gained in a regular course context (72%), whereas only 36% had experience with peer-feedback in a MOOC. A total of 87% of participants perceived peer-feedback as valuable for their own learning.

The majority of participants (84%) were familiar with the DPSIR framework, which was part of the peer-feedback training and activity; however, only 26% had used the framework. In summary, the majority of students had experience with peer-feedback (in a regular course context), but no experience with peer-feedback training. A substantial group had experience with a rubric, and a majority reported having domain knowledge.

Experience with peer-feedback and its value

Results from the regression analyses indicate that students with no peer-feedback experience had significantly higher scores on willingness, usefulness, and general attitude than those who had experienced it once (see Table 2). However, those who had experienced it several times (2–5 times) scored significantly higher on all perception variables than those with no experience. The same pattern is seen for the experienced value of peer-feedback: those who had never found peer-feedback valuable scored higher than those who found it sometimes valuable. However, those who found peer-feedback often or always valuable scored significantly higher on all perception variables.

Table 4. Relationship between peer-feedback experience, value, and students’ perceptions: Extract of significant results from Table 2

Experience with peer-feedback training and rubric

Moving to Table 3 (significant effects are shown in Table 5), we see that those who had often or always experienced peer-feedback training as valuable reported significantly higher scores on the perception variables compared to those who had never experienced it. Experience with using a rubric made a significant difference only for perceived preparedness: students who had often or always used a rubric scored significantly higher on preparedness compared to those with no rubric experience.

Table 5. Relationship between peer-feedback training, rubric, and students’ perceptions: Extract of significant results from Table 3

Finally, as can be seen in Table 3 (significant effects are shown in Table 6), domain knowledge about the DPSIR framework was linked to significantly lower perceived willingness and preparedness in students.

Table 6. Relationship between domain knowledge and students’ perceptions: Extract of significant results from Table 3

Discussion and conclusion

The purpose of this study was twofold: to investigate to what extent online peer-feedback training in a MOOC positively influences students’ perception of peer-feedback and peer-feedback training; and to examine the relationship between students’ peer-feedback experience and their peer-feedback perceptions.

Influence of peer-feedback training on peer-feedback perceptions (RQ1)

Low completion rates of the online questionnaires did not allow for the desired comparison between those with no prior training (control group, N = 31) and those with prior training (experimental group, N = 14).

With the results at hand, we could not detect significant effects of peer-feedback training on peer-feedback perception (willingness, usefulness, preparedness, and general attitude) and were therefore unable to answer the first research question. Related research on peer-feedback perceptions in MOOCs shows that, in general, students had positive peer-feedback perceptions (Meek et al., Citation2017) and that peer-feedback perceptions can be influenced by the instructional design (Mercader et al., Citation2020). Instructional designs that include prior training and long-term, double-loop peer-feedback processes were perceived as beneficial by students (Mercader et al., Citation2020).

Unexpected results were found regarding students’ preparedness scores, which significantly decreased from the pre-test to the post-test measurement. It is unclear why this happened. A possible explanation might be that students became more aware of the tasks related to providing and receiving peer-feedback: students may have overestimated their preparedness, and by following the training and/or participating in the peer-feedback activity, they realized that their initial perceived preparedness did not match the course expectations. Additionally, despite the fact that training prior to a peer-feedback activity is seen as necessary and important (Patton, Citation2012; Tai et al., Citation2016), we found that the majority of the participants (81%) were unfamiliar with peer-feedback training in MOOCs.

Previous research pointed out that instructional designs with long-term prior training activities are rated higher by students than short-term prior training activities (Mercader et al., Citation2020). We were not able to implement long-term peer-feedback activities and prior training in the current MOOC. However, we expect that long-lasting peer-feedback activities would not only have resulted in higher student participation rates but also in higher perception scores.

Relationship between peer-feedback experience and peer-feedback perception (RQ2)

Whilst the results could not answer the first research question, the ordinal regression analyses revealed that the amount of peer-feedback experience has a significant impact on perception. Students with no experience scored significantly higher on willingness, usefulness, preparedness, and general attitude than those with some experience. Peer-feedback research by Yu and Hu (Citation2017), conducted in a non-MOOC setting, supports the premise that prior experience mediates students’ peer-feedback processes. Prior experience, along with other factors such as beliefs and values, influences peer-feedback practices.

The majority of students (86%) indicated finding peer-feedback valuable. This is consistent with related research, which also reported that 86% of students had a positive experience with peer-feedback (Nicol et al., Citation2014). The results of this study are also in line with a study by McCarthy (Citation2017), who reported that a majority of students were willing to receive peer-feedback again. Our study shows that those who had never experienced peer-feedback as valuable for their learning scored significantly higher on willingness and general attitude than those who had sometimes experienced it as valuable. No significant differences were seen for the perceived usefulness of peer-feedback.

The study partly supports previous research by Price et al. (Citation2011), who found that students need to experience feedback before being able to see its value. Our study shows that those who had never experienced providing or receiving peer-feedback had a significantly higher perceived preparedness than those who had provided feedback once. Yet, more experienced students had significantly higher perceptions than those with no experience. Although we can only speculate about the underlying causes of students’ perceptions, it would be interesting to further investigate if and how perceptions change during one or several peer-feedback loops. A possible explanation for these findings might be that students in general have a positive opinion about peer-feedback until they have one negative or unsuccessful experience. Additionally, it might be the case that only after repeated (positive) peer-feedback experiences can students see its value and be more willing to provide and receive peer-feedback (Mulder et al., Citation2014).

The results show that students experienced in providing and receiving peer-feedback are not necessarily familiar with peer-feedback training, nor do they have higher peer-feedback perceptions. We saw significant differences in perceptions about peer-feedback between students with no, some, and a lot of experience. These results raise the question of whether and to what extent students’ prior peer-feedback experience influenced their participation in the peer-feedback training and peer-feedback activity.

Looking back on this study, it became clear how complicated it is to carry out a long-term experimental intervention study in a MOOC. Since all activities, including our intervention, were voluntary, and students could complete the MOOC and earn a certificate without participating in the intervention, we were dependent on high student activity. Though the response rate to the questionnaires accords with completion rates of MOOCs (Reich & Ruipérez-Valiente, Citation2019) and was, in principle, sufficient for our analysis, in practice it was not, since only 45 students filled in both pre- and post-test questionnaires (31 from the control group and 14 from the experimental group). Future research in MOOCs could consider the possible advantages of mandatory participation in a given set of activities for students who opt for a certificate (Keppell et al., Citation2006). Alternatively, it might be interesting to see if student engagement in MOOCs and peer-feedback (training) can be enhanced using gamification elements that foster engagement, social presence, and a sense of community (Antonaci et al., Citation2019).

Regarding peer-feedback training, we suggest that future research could study the effects of training focused on both student perceptions and peer-feedback skills.

Due to the small sample sizes, the generalizability of the present findings is somewhat limited. The specific context of MOOCs, however, with issues such as language, cultural aspects, and trust, still warrants future research. We therefore recommend repeating the study in a more controlled MOOC environment. Because the self-reported data came from volunteering students, it is possible that these students held stronger views (positive as well as negative) on peer-feedback (training). Therefore, future research might also include log data of student activity in the MOOC.

Although this study could not confirm an effect of a peer-feedback training intervention on student perceptions, it adds to the growing literature that focuses on student perceptions and experiences in the context of peer-feedback, such as Alqassab et al. (Citation2018), Hovardas et al. (Citation2014), Luo et al. (Citation2014), Neubaum et al. (Citation2014), Sluijsmans et al. (Citation2002), and Zutshi et al. (Citation2013). Based on the current findings, we recommend devoting extra attention to students’ prior experience when designing peer-feedback activities.

Declaration of interest

Each of the authors confirms that this manuscript has not been previously published and is not currently under consideration for publication elsewhere. Additionally, we have no conflicts of interest to disclose.

Additional information

Funding

This work was financed via a grant by the Dutch National Initiative for Education Research, The Netherlands Organisation for Scientific Research, and the Dutch Ministry of Education, Culture and Science under the grant number 405-15-705 (SOONER, http://sooner.nu).

Notes on contributors

Julia Kasch

Julia Kasch is a post-doc researcher at Utrecht University, where she studies interdisciplinary collaboration and challenge-based learning in online courses. Julia holds a PhD in the field of technology-enhanced learning, where she studied the educational design of scalable interaction and support in MOOCs.

Peter van Rosmalen

Peter van Rosmalen is an associate professor in the Department of Educational Research and Development of the Faculty of Health, Medicine and Life Sciences at Maastricht University. Peter is chair of the taskforce on instructional design and e-learning and conducts research in educational technology.

Ansje Löhr

Ansje Löhr is an associate professor in the Department of Environmental Sciences at the Open University of the Netherlands. She is involved with UN Environment Programme in developing training programs on marine litter and developed several MOOCs on marine litter. Ansje is a visiting lecturer at the Soegijapranata Catholic University.

Roland Klemke

Roland Klemke is a professor in the Faculty of Educational Science at the Open University of the Netherlands and professor for game informatics at the Cologne Game Lab of TH Cologne. Roland conducts research in artificial intelligence, machine learning, augmented reality, technology-enhanced learning, information systems, software engineering, and human-computer interaction.

Alessandra Antonaci

Alessandra Antonaci is a program manager at EADTU, Europe’s leading institutional association in online, open, and distance higher education. Alessandra holds a PhD in the field of technology-enhanced learning, where she studied the gamification design process in the context of MOOCs.

Marco Kalz

Marco Kalz is a full professor of technology-enhanced learning at the Heidelberg University of Education. Marco is also affiliated to the UNESCO chair of open education at the Open University of the Netherlands. His research interest lies in the use of open education, pervasive technologies, and formative assessment.

References

Appendix

Pre-test and post-test questionnaires