
Supporting Self-Regulated Learning in Online Learning Environments and MOOCs: A Systematic Review


ABSTRACT

Massive Open Online Courses (MOOCs) allow learning to take place anytime and anywhere with little external monitoring by teachers. Characteristically, highly diverse groups of learners enrolled in MOOCs are required to make decisions related to their own learning activities to achieve academic success. Therefore, it is considered important to support self-regulated learning (SRL) strategies and adapt to relevant human factors (e.g., gender, cognitive abilities, prior knowledge). SRL supports have been widely investigated in traditional classroom settings, but little is known about how SRL can be supported in MOOCs. Very few experimental studies have been conducted in MOOCs at present. To fill this gap, this paper presents a systematic review of studies on approaches to support SRL in multiple types of online learning environments and how they address human factors. The 35 studies reviewed show that human factors play an important role in the efficacy of SRL supports. Future studies can use learning analytics to understand learners at a fine-grained level to provide support that best fits individual learners. The objective of the paper is twofold: (a) to inform researchers, designers and teachers about the state of the art of SRL support in online learning environments and MOOCs; (b) to provide suggestions for adaptive self-regulated learning support.

Virtual spaces for learning are becoming increasingly prominent in both business and education (Muñoz Cristóbal et al., Citation2017). Massive Open Online Courses (MOOCs) have made educational opportunities more accessible to the masses. However, the discrepancy between enrollment and completion rates in MOOCs (Breslow et al., Citation2013; Jordan, Citation2014) suggests that learning online presents unique challenges, and learners may require some form of additional support to become successful.

Prior studies showed that learners struggle in online learning environments because they do not use critical self-regulated learning (SRL) strategies (Azevedo, Citation2005). Research also identified SRL processes as enabling learners to successfully learn in online environments (Winters, Greene, & Costich, Citation2008). This is supported by the significant positive relationship between SRL strategies and online academic success found in Broadbent and Poon’s (Citation2015) meta-analysis. In MOOCs, both Kizilcec, Pérez-Sanagustín, and Maldonado (Citation2017) and Littlejohn, Hood, Milligan, and Mustain (Citation2016) found that SRL is related to learner’s engagement and achievement of personal learning goals. Therefore, providing SRL support to learners is likely to lead to greater online academic success.

However, one of the assumptions of the SRL model by Zimmerman (Citation1989, Citation1990) is the influence of biological, developmental, contextual, and individual constraints on learners’ ability to regulate their motivation, cognition, and behavior (Azevedo, Citation2005). Vu, Hanley, Strybel, and Proctor’s (Citation2000) study showed that experts and novices used different strategies to complete tasks of varying levels of complexity. This is further substantiated by Winters et al.’s (Citation2008) review, which revealed that SRL is related to different learner characteristics. In MOOCs, Hood, Littlejohn, and Milligan (Citation2015) found that learners’ SRL is related to their motivation and working experience.

In recent years, there has been a growing number of studies examining SRL supports in online environments (Tsai, Shen, & Fan, Citation2013). Taking into account the role of SRL in online academic success and the influence of human factors, this systematic review aims to report on approaches to support SRL strategies in online learning environments. The ultimate goal of this review is to transform these insights into suggestions for future research in the development of MOOCs. Since MOOCs are fairly recent in the field of online education, research in this area has focused mainly on challenges and trends (Liyanagunawardena, Adams, & Williams, Citation2013). Empirical studies carried out in MOOCs have only recently begun to appear in peer-reviewed publications—some of which examine SRL supports in MOOCs. Given the scarcity of empirical studies in MOOCs (compared to other learning environments and populations), the current review also considers studies in other online learning environments, as insights from these other environments are valuable for understanding how SRL functions in MOOCs.

1.1. Online learning environments and massive open online courses

In 2013, more than a quarter of undergraduates in the United States alone enrolled in at least one online course, and the number of learners studying at a distance is rising yearly (Allen & Seaman, Citation2015). Online learning in primary and secondary education is also growing in popularity worldwide (Barbour, Citation2013). Findings from Picciano, Seaman, Shea, and Swan (Citation2012) showed a 47% increase in online and blended courses in primary and secondary education since 2007.

Emerging technology constantly expands the possibilities for online learning and continues to fuel the evolution of distance education (Johnson & Aragon, Citation2003). Therefore, terminologies such as distance learning, online learning, web-based learning, e-learning, cyberlearning, and computer-based learning emerged in the literature with little consensus on their definitions (Moore, Dickson-Deane, & Galyen, Citation2011). In some cases these definitions are even used interchangeably (Tsai & Machado, Citation2002). Therefore, the term “online learning environment” is used in this review as an umbrella term for all the related concepts used by the included studies to refer to learning taking place on the internet (Moore et al., Citation2011).

Out of the many concepts associated with online learning environments, MOOCs are receiving a great deal of attention from educational researchers, teachers, and learners (Hew & Cheung, Citation2014). From the first use of the term in 2008 (describing an open online course offered by the University of Manitoba in Canada) to 2013, MOOC providers had enrolled over four and a half million learners (Breslow et al., Citation2013). By 2016, over 23 million people had signed up for MOOCs (Shah, Citation2016).

Despite high enrollment rates, the vast majority of learners do not successfully complete MOOCs. Hew and Cheung (Citation2014) found that learners drop out for a variety of reasons, including having no one to ask for help, lack of time to follow through with the course, insufficient prior knowledge, and inability to understand course content. These reasons for dropping out point to the need to support SRL, especially because learning in MOOCs is more open and networked than in other online learning environments.

Studying in an open and networked environment like a MOOC is challenging because the control of learning is shifted from the educational institutions and cultures to the individual—often isolated—learner (Fournier, Kop, & Durand, Citation2014). Tasks that were previously carried out by the educators or peers, such as setting goals and evaluating progress, are now the learners’ responsibilities. These tasks can be overwhelming for the unprepared learner in an autonomous learning environment (Kop, Citation2011). Not all learners have the same ability to cope with the information they are given or know how to learn with minimal guidance. Lee and Ryu (Citation2013) argued that learners prefer to use systems for learning when they are designed to promote learner engagement. Therefore, supporting SRL should theoretically enhance learners’ performance and completion in MOOCs.

1.2. Self-regulated learning (SRL) and online academic success

In conventional learning environments, learners who can effectively self-regulate are regarded as the most effective learners by researchers and educators (Boekaerts, Citation1999). They assume the greatest responsibility for their own learning outcomes by being metacognitively, motivationally, and behaviorally involved in their own learning processes (Zimmerman, Citation1989, Citation1990). In the same manner, SRL appears to be important for learners in online learning environments that afford high levels of learner autonomy and low levels of teacher presence (Lehmann, Hähnlein, & Ifenthaler, Citation2014).

A multitude of SRL models exists in the literature due to the myriad theoretical backgrounds of educational researchers (Boekaerts, Citation1997; for a review, see Puustinen & Pulkkinen, Citation2001). Zimmerman's model stems from a social-cognitive perspective that emphasizes both motivational factors and learning strategies in highly autonomous learning environments. The current review employs Zimmerman's model to provide a more integrated understanding of the SRL strategies referred to in the selected papers.

Zimmerman (Citation1989; Zimmerman & Campillo, Citation2003) described SRL processes as triadic and cyclical. Triadic reciprocity refers to the dynamic mutual influence among self-regulatory (personal) processes, environmental events, and behavioral events. For instance, one's approach to solving a problem is determined not only by one's self-efficacy perceptions, but also by environmental events (e.g., an instructor's feedback) and behavioral events (e.g., accurately solving the last problem). According to the model, three self-regulatory phases operate in a cyclical manner: forethought, performance, and self-reflection.

In the forethought phase, learners engage in task analysis processes (i.e., goal setting and strategic planning) and hold self-motivation beliefs (i.e., self-efficacy, outcome expectations, intrinsic interest or value, and goal orientation). Next, the performance phase takes place. In this phase, learners engage in self-control processes (i.e., imagery, self-instruction, attention focusing, and task strategies) and self-observation (i.e., self-recording and self-experimentation). The third phase is self-reflection. In this phase, self-judgment (i.e., self-evaluation and causal attribution) and self-reaction (i.e., self-satisfaction or affect, and adaptive or defensive responses) occur. These phases repeat in a cyclical manner throughout the learning process.
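To make the model's cyclical structure concrete, the following minimal sketch (our illustration, not drawn from any reviewed study) encodes the three phases and their subprocesses as data and shows how the cycle advances:

```python
from enum import Enum

class Phase(Enum):
    FORETHOUGHT = "forethought"          # task analysis + self-motivation beliefs
    PERFORMANCE = "performance"          # self-control + self-observation
    SELF_REFLECTION = "self-reflection"  # self-judgment + self-reaction

# Subprocesses per phase, as described in Zimmerman's model.
SUBPROCESSES = {
    Phase.FORETHOUGHT: ["goal setting", "strategic planning", "self-efficacy",
                        "outcome expectations", "intrinsic interest",
                        "goal orientation"],
    Phase.PERFORMANCE: ["imagery", "self-instruction", "attention focusing",
                        "task strategies", "self-recording",
                        "self-experimentation"],
    Phase.SELF_REFLECTION: ["self-evaluation", "causal attribution",
                            "self-satisfaction", "adaptive/defensive reaction"],
}

def next_phase(phase: Phase) -> Phase:
    """Advance cyclically: forethought -> performance -> reflection -> forethought."""
    order = [Phase.FORETHOUGHT, Phase.PERFORMANCE, Phase.SELF_REFLECTION]
    return order[(order.index(phase) + 1) % len(order)]
```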

Broadbent and Poon (Citation2015) reviewed studies examining the relationship between nine SRL strategies (i.e., metacognition, time management, effort regulation, critical thinking, elaboration, rehearsal, organization, help seeking, and peer learning) and online academic success. The results from the 12 studies reviewed showed that metacognition, time management, effort regulation, and critical thinking were related to online academic success, while other SRL strategies had nonsignificant effects. Only nine SRL strategies were examined in their study, so it is possible that there are other SRL strategies that contribute to online academic success. Nonetheless, the study supports the notion that SRL is related to online academic success.

The link between SRL and online academic success is further supported by evidence from previous studies which showed that learners studying complex topics online are not proficient in regulating their own learning and do not gain conceptual understanding when they are not given SRL support (Azevedo & Hadwin, Citation2005). This demonstrates the need for SRL supports in online environments to help learners achieve academic success. Hill and Hannafin (Citation2001) identified four functionalities of supports: (i) conceptual support to help learners prioritize information, (ii) metacognitive support to assist learners in gauging their learning, (iii) procedural support to aid use of resources, and (iv) strategic support to provide additional options to complete a task. These types of support can come in the form of tools (e.g., organizers and search functions), additional cues (e.g., questions for learners to reflect and suggestions to use certain resources), feedback (e.g., evaluation of learning), or guidance (e.g., intelligent tutoring system) during learning (Zheng, Citation2016).

Until MOOCs gained mainstream popularity in 2012, SRL research in the context of online learning was done primarily in intelligent tutoring systems (ITS). Dating back to the early 1970s, ITS aimed to “engage each student in sustained reasoning activity and interact with the student based on a deep understanding of the student’s behavior” (Corbett, Koedinger, & Anderson, Citation1997). In other words, ITS cater and adapt their instruction to each individual learner based on automated estimations of that learner’s progress. One common strategy for this estimation is Knowledge Tracing, which models a learner’s acquisition of knowledge based on his or her responses to quiz questions (Corbett & Anderson, Citation1994). Based on this model of the learner’s current knowledge state, the ITS would decide the appropriate next step in the learning process. Ongoing research in ITS is still working towards supporting tutoring at scale, and preliminary attempts have been made to embed ITS in MOOCs (Aleven et al., Citation2015) to support SRL.
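As a concrete illustration, the sketch below implements the standard Bayesian Knowledge Tracing update in simplified form; the parameter values are illustrative placeholders rather than estimates from any particular tutor.

```python
def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One Bayesian Knowledge Tracing step.

    p_known: prior probability that the learner has mastered the skill.
    correct: whether the learner answered the latest item correctly.
    Returns the updated mastery probability after observing the response
    and allowing for learning between practice opportunities.
    """
    if correct:
        # P(known | correct response)
        num = p_known * (1 - p_slip)
        den = num + (1 - p_known) * p_guess
    else:
        # P(known | incorrect response)
        num = p_known * p_slip
        den = num + (1 - p_known) * (1 - p_guess)
    posterior = num / den
    # The learner may acquire the skill before the next opportunity.
    return posterior + (1 - posterior) * p_learn

# Example: mastery estimate over a short response sequence.
p = 0.3  # assumed initial mastery
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
    print(f"estimated mastery: {p:.2f}")
```

An ITS can then use this running estimate to choose the next problem, for instance advancing to a new skill once estimated mastery passes a threshold.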

1.3. MOOCs, SRL, and human factors

In online learning environments where instructor presence is low, learners have to make their own decisions about when to study and how to approach the study materials. Therefore, learners' ability to self-regulate their own learning becomes a crucial factor in their learning success. Applying Zimmerman's SRL model to learning in MOOCs, the first phase consists of learners setting learning goals and devising a schedule for engaging with the learning materials. Learners also have to decide which learning strategies they will use during learning. In the second phase of SRL, learners apply the approaches they planned in the first phase, such as taking notes while viewing videos or self-explaining concepts after reading. In the third phase, learners self-evaluate whether they have understood the concepts and met their learning goals. Based on these self-evaluations, they make new plans to restudy or move on to new concepts. Accordingly, supporting self-regulated learning strategies can help learners become better at regulating their learning, which in turn could enhance their learning performance.

Whereas ITS are deliberate in prescribing the learning sequence for learners based on their behavior and performance, MOOCs present a single path from which learners can—and frequently do (Davis, Chen, Hauff, & Houben, Citation2016; Guo & Reinecke, Citation2014)—deviate should they choose to. Learner autonomy is generally far higher in MOOCs than in traditional ITS.

The key difference between MOOCs and ITS is that MOOCs are at a stage of development where the focus, so far, has been on making content open to the world. Now that this part is well established, with nearly 25 million users worldwide, a next step in the advancement of MOOCs will be to make them more adaptive to the unique needs of the millions of learners who stand to benefit from them. With learners from all around the world converging on these platforms, MOOCs must equip their audience with the necessary SRL support to help them improve and maintain their engagement with the course materials in a more effective and persistent manner.

When supporting SRL, it is important to consider the influence of human factors. Previous studies showed that human factors affect both the way we learn and our learning outcomes (e.g., Kalyuga, Chandler, & Sweller, Citation1998). In a study by Zimmerman and Martinez-Pons (Citation1990), differences in SRL were found to relate to learners’ age, gender, and level of giftedness. For example, the results showed that girls were more proficient than boys in setting goals, planning, monitoring, and structuring the learning environment. Also, gifted learners were more proactive in seeking peer assistance and reviewing materials than other learners.

Similarly, McSporran and Young (Citation2001) found that older learners and women were more motivated and better at managing time compared to younger learners and men in their study. Differences in SRL were also found between graduates and undergraduates in Artino and Stephens’ (Citation2009) study. These studies collectively indicate that certain groups of learners vary in their SRL strategies and will benefit to a different extent given the same SRL support. One size does not fit all.

Although Guo and Reinecke (Citation2014) did not investigate SRL in MOOCs per se, they found that learners who earned a passing grade revisited prior lectures and reattempted past assessments more often than non–certificate earners. Revisiting prior lectures and reattempting past assessments suggests that these learners were engaging in a form of self-regulation. In addition, the authors found that learners' navigation patterns varied by age, gender, and country. Not all learners will benefit equally from the same SRL support; the most effective SRL supports adapt to the needs of the individual learner. To achieve such adaptivity, the first step is to find out for whom the various approaches of SRL support are most effective.

1.4. The current study

This paper systematically reviews the state of the art in SRL support in online learning environments. Bearing in mind that online learners are diverse (e.g., different age groups, different levels of prior knowledge), the review also examines the impact of human factors addressed in the studies. The main research question explored in this paper is: What is the effectiveness of approaches to support SRL strategies in online learning environments, and do these approaches account for the role of human factors?

The ultimate goal is to identify potential ways to support SRL in MOOCs: what has been done so far, and what gaps are yet to be explored. The present review considers SRL supports in all online learning environments to gain a deeper understanding of the efficacy of different SRL supports and to transfer these findings into possible applications in MOOCs.

2. Method

The present review follows the five-step methodology of Khan, Kunz, Kleijnen, and Antes (Citation2003): (1) frame the question for the review, (2) identify relevant studies, (3) assess the quality of the identified work, (4) summarize the evidence, and (5) interpret the findings.

2.1. Identification and assessment of relevant studies

Based on the research question, identification and assessment of relevant studies were conducted in four stages. To search for papers that examined approaches to support SRL strategies in online learning environments, keywords encompassing the three concepts were used. The terms used in the search are shown in Table 1. Subprocesses of SRL (e.g., goal setting, self-recording, self-evaluation) were not included in the search, based on the assumption that the term self-regulat* learning would be sufficient to identify papers encapsulating the constructs of SRL. The search was limited to the years 2006 to 2016.

Table 1. Key terms used for the search that was conducted in April 2016.

2.1.1. Stage one

The aim of the first stage was to identify as many relevant papers as possible from the Scopus, Web of Science, and ERIC databases within the last decade. The database searches were supplemented by searches on Google Scholar to cover literature that may not be indexed in commercial publisher databases (Veletsianos & Shepherdson, Citation2016). However, the Google Scholar search was stopped at the 200th paper, as ensuing results became increasingly out of scope. In addition, the reference lists of meta-analyses on SRL and academic success in online learning environments were manually searched to add to the pool of relevant literature (Broadbent & Poon, Citation2015; Zheng, Citation2016). Browsing the titles of the retrieved papers resulted in an initial selection of 398 articles.

2.1.2. Inclusion criteria

A set of five inclusion criteria was used during the second selection process (a schematic sketch of this screening step follows the list):

  1. The studies had to be empirical, peer-reviewed, and written in English. Dissertations, conference proceedings, reviews, and editorials were excluded.

  2. The SRL strategies supported in the studies had to map onto the phases and subprocesses of self-regulatory learning behavior identified in Zimmerman's SRL model. Studies were excluded if the link with SRL was not made explicit.

  3. The studies should clearly describe the approach to support SRL strategies in online learning environments.

  4. The approach to support SRL in online learning environments should be empirically tested in an experimental or quasi-experimental design. A clear description of the participants, methods, and results should be reported. Studies should have control groups.

  5. The studies were required to report the effects of the approaches to support SRL strategies on either the targeted SRL strategies or learning outcomes.
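Purely as a schematic illustration of how this screening step could be operationalized (the record structure and field names are ours, not the authors'), the criteria reduce to a conjunctive filter over study metadata:

```python
from dataclasses import dataclass

@dataclass
class Study:
    # Hypothetical screening record; field names are illustrative.
    empirical: bool
    peer_reviewed: bool
    in_english: bool
    srl_link_explicit: bool          # criterion 2
    support_described: bool          # criterion 3
    experimental_with_control: bool  # criterion 4
    reports_srl_or_outcomes: bool    # criterion 5

def meets_inclusion_criteria(s: Study) -> bool:
    """Apply all five inclusion criteria conjunctively."""
    return all([
        s.empirical and s.peer_reviewed and s.in_english,  # criterion 1
        s.srl_link_explicit,
        s.support_described,
        s.experimental_with_control,
        s.reports_srl_or_outcomes,
    ])
```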

2.1.3. Stages two and three

The first author scanned the abstracts and introductions of the 398 papers. From this pool, 77 studies met the first two criteria and were selected for the third stage. Five more studies were identified from the reference lists of these papers and added to the set for further assessment. All 82 papers were then read in detail against the inclusion criteria. Fifty-one papers met the first four criteria and were selected for the fourth stage.

2.1.4. Stage four

Three authors (JW, TZ & MB) examined the full papers separately and discussions were held when there were discrepancies. The process resulted in the final set of 35 studies that were included in the review.

3. Results

The search resulted in a wide range of empirical studies. The final set of literature consisted of studies conducted across various educational levels, from seventh graders (e.g., Chen & Huang, Citation2014) to working adults (e.g., Sitzmann & Ely, Citation2010). Of the 35 studies, 23 investigated online learning at the undergraduate level. The studies investigated learning in a range of subject domains, covering educational psychology (e.g., Bannert & Reimann, Citation2012), chemistry (e.g., Biesinger & Crippen, Citation2010), biology (e.g., Duffy & Azevedo, Citation2015), and medical sciences (e.g., Wäschle et al., Citation2014). The results were examined in accordance with the following research questions:

1a) Which approaches to support SRL strategies have been investigated in online learning environments?

1b) To what extent are these approaches to support SRL strategies effective?

2a) Which human factors have been investigated?

2b) What is the impact of human factors on the effectiveness of the approaches to support SRL strategies?

A summary of the approaches and the number of papers reviewed for each approach is shown in Table 2. The approaches (i.e., prompts, feedback, integrated support systems, and other approaches) were identified by the terms used in the papers reviewed. The following sections discuss the efficacy of these approaches and examine the human factors investigated in the studies.

Table 2. Summary of approaches and number of studies reviewed across approaches.

3.1. Prompts

Across the studies included in the review, prompting has been extensively examined as an approach to support SRL in online learning environments. The studies reviewed provided questions (e.g., Do you understand the key points?) and/or suggestions (e.g., take time to read and understand) to encourage SRL activities. The underlying assumption is that learners do not use SRL strategies spontaneously (Bannert & Reimann, Citation2012), so prompting can induce SRL strategies and enhance learning outcomes. In the following sections, we review studies examining the various ways in which prompting has been used to support learning.

3.1.1. Comparing prompts and no prompts

In the first experiment conducted by Bannert and Reimann (Citation2012), cognitive prompts were employed in a hypermedia learning environment. Prompts were provided at different phases of the learning process to support orientation, planning, goal specification, monitoring, searching for information, and evaluation of learning. Each prompt, delivered in a pop-up window, included a question to encourage introspection followed by two to four suggested activities to support the subprocesses of SRL. Analysis of video protocols showed that the prompted group used significantly more SRL strategies than the no-prompt group. Large significant effects were found for orientation (Cohen's d = .79), goal specification (d = .81), evaluation (d = .76), and monitoring (d = .87). The greatest effect was found for planning (d = 1.38). A near-significant effect was found for transfer tasks, while no significant effects were found for knowledge and comprehension tasks.

In the second experiment, Bannert and Reimann (Citation2012) added a training session before the learning episode in which prompts were explained, demonstrated, and practiced, to reduce the disturbance of pop-ups on the learning process. The scoring of the recorded video protocols showed that trained and prompted learners engaged in significantly more SRL activities (planning, d = .57; goal specification, d = 1.00; search and judge, d = .66; evaluation, d = .56) than learners who were not prompted. No significant effects were found for orientation and monitoring activities. A significant effect was found only for the transfer task (d = .58). A further analysis showed that prior knowledge was the only human factor related to compliance in both experiments. In Experiment 1, higher prior knowledge learners scored better when prompted than lower prior knowledge learners. However, in Experiment 2, where training was provided before presenting the prompts, the prompted lower prior knowledge learners achieved better transfer performance than the non-prompted group. Their compliance with the prompts enhanced their performance to a level comparable to that of the higher prior knowledge learners. The findings suggest that the effectiveness of prompting might depend on learners' cognitive abilities. Without training, lower prior knowledge learners might not have the additional cognitive resources to attend to the prompts and use SRL strategies to achieve greater academic success.

Kizilcec et al.’s (Citation2016) study was the only study found to investigate the effect of prompting in a MOOC setting. Learners enrolled in a MOOC were recommended SRL strategies in a pre-course survey. They had to rate the usefulness of each recommended strategy and write a suggestion for other learners to use the strategies. Despite the large number of participating learners (N = 653), the SRL tips had no effect on a wide range of measures, including the number of lectures viewed, assessments passed, and active days in the course.

Moos and Azevedo (Citation2008) investigated the use of conceptual scaffolds in the form of prompts to support SRL strategies. Learners were given five guiding questions to support conceptual understanding during learning. Coding of learners' think-aloud protocols showed that learners who were prompted planned significantly more than non-prompted learners (η² = .22). A comparison of learners' mental models in a pre- and post-test showed that although there were equal numbers of learners with low mental models at pre-test in both groups, more learners in the prompted group developed high mental models at post-test. The study provides evidence supporting the use of prompts to enhance both SRL strategies and learning performance.
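For reference, the two effect size measures reported throughout this review are standard: Cohen's d expresses a standardized mean difference between two groups, and eta squared (η²) expresses the proportion of variance explained by an effect.

```latex
d = \frac{\bar{X}_1 - \bar{X}_2}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}},
\qquad
\eta^2 = \frac{SS_{\text{effect}}}{SS_{\text{total}}}
```

By common convention, d ≈ .2, .5, and .8 (and η² ≈ .01, .06, and .14) denote small, medium, and large effects, respectively.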

Kauffman, Ge, Xie, and Chen (Citation2008) investigated the effects of two types of metacognitive prompts (i.e., problem-solving prompts and reflection prompts). Problem-solving prompts procedurally guide learners through the problem-solving process in the form of questions. Reflection prompts consisted of a confidence rating scale and advice to either return or move on based on the reported confidence level. Results revealed a significant effect of problem-solving prompts (η² = .27) and a significant interaction between the two types of prompts (η² = .25) on problem-solving scores. Similarly, for writing quality, a main effect was found for problem-solving prompts (η² = .35) along with a significant interaction between the two types of prompts (η² = .36). The authors concluded that the problem-solving prompt was an effective approach to support learners in online learning environments, whereas the reflection prompt was only effective when learners had clear goals for the problem-solving process. This suggests that the effectiveness of prompting one SRL phase (i.e., reflection) can be conditional on the other SRL phases (i.e., forethought or performance).

In a similar problem-solving web-based environment, Crippen and Earl (Citation2007) examined the use of self-explanation prompts to support learning with worked examples. The prompts encouraged learners to self-explain the problem-solving strategy after viewing worked examples. The results showed that learners who were prompted to self-explain after viewing worked examples reported higher self-efficacy than learners who only viewed the worked examples. Even though the two groups' scores were not significantly different, learners who were prompted to self-explain consistently scored higher than learners who only viewed the worked examples throughout the whole semester. Due to the lack of behavioral measurements in this study, the effect of the prompts on SRL strategies could not be examined; it was not clear whether prompted learners self-explained or simply spent more time viewing the worked examples. Nevertheless, prompts appear to support learning with worked examples.

Apart from worked examples, prompts were also investigated with different types of note-taking formats. Kauffman, Zhao, and Yang (Citation2011) investigated the effects of self-monitoring prompts when paired with three note-taking formats (i.e., conventional, outline, and matrix). The prompt reminded learners to check whether they had gathered sufficient information and to return to the information if necessary. The results showed that prompted learners collected more notes (η² = .11) and performed better on a factual test (η² = .05) than non-prompted learners. There was a significant interaction between note-taking format and prompting (η² = .06), indicating that prompted conventional note-takers took more notes than other prompted note-takers. Prompted matrix note-takers did better on the application test than other prompted note-takers. All in all, the study supports self-monitoring prompts as an effective approach, as learners performed better when prompted regardless of the note-taking format.

Stahl and Bromme (Citation2009) investigated the effects of prompting used in conjunction with graphic organizers on help-seeking processes. Learners were given instructions on general SRL strategies and advised to use help when needed. The coded think-aloud protocols and knowledge test did not reveal any significant differences in help-seeking behavior or learning performance between the prompted and non-prompted groups. The authors reasoned that thinking aloud could itself be enough to support SRL: by thinking aloud, learners were made aware of their learning processes. Prompting in this case was therefore redundant and did not result in any significant effects on SRL strategies.

3.1.2. Comparing effects of prompt timings

Sitzmann, Bell, Kraiger, and Kanar (Citation2009) provided learners with SRL prompts at different times (i.e., immediate, delayed, and no prompt). Immediate prompts were given after every training session, while delayed prompts were given from the fifth training session onward. A hierarchical model was used to assess the effect of prompting on test scores over time. In the immediate condition, test scores slightly increased in the first four modules and remained above average for the rest of the course. In the delayed condition, test scores in the first five modules were below average, but once the self-regulation prompts were presented, test scores increased dramatically and rose above average. In the no-prompt condition, learners' performance deteriorated over time. The results support the positive effects of prompting SRL on sustaining learning.

Sitzmann et al. (Citation2009) extended their first experiment by taking into account learners' cognitive ability and self-efficacy in a second experiment. The schedule for deploying prompts was similar to the first experiment. The results showed that the basic performance scores of the two prompted groups were higher than those of the non-prompted group (η² = .04), but there were no significant differences in strategic performance scores across the three groups. Aptitude–treatment interaction analysis showed that the effect of prompting on basic performance was moderated by cognitive ability, while the effect of prompting on strategic performance was moderated by self-efficacy. This suggests that prompting was more beneficial for learners with higher levels of cognitive ability and self-efficacy than for those with lower levels.
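Aptitude–treatment interaction analysis of this kind tests whether a learner attribute moderates the treatment effect. A minimal sketch, with hypothetical variable names and invented data rather than the authors' materials, is:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per learner (values are invented).
df = pd.DataFrame({
    "performance": [62, 71, 55, 80, 68, 74, 59, 83],
    "prompted": [0, 1, 0, 1, 0, 1, 0, 1],  # treatment indicator
    "cognitive_ability": [95, 102, 88, 115, 99, 108, 91, 120],
})

# The prompted:cognitive_ability term captures the
# aptitude-treatment interaction (moderation).
model = smf.ols("performance ~ prompted * cognitive_ability", data=df).fit()
print(model.summary())
```

A significant interaction coefficient would indicate that the benefit of prompting grows (or shrinks) with cognitive ability, which is the pattern Sitzmann et al. report.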

In another study, Sitzmann and Ely (Citation2010) investigated the effects of prompts deployed on different schedules (i.e., pre-training, early, delayed, and continuous). The results showed no significant differences in self-reported self-regulatory activities between the prompted groups and the no-prompt group. However, learners in the continuous and early groups spent on average half an hour more reviewing each module than learners in the no-prompt group. Learners who were prompted continuously also performed significantly better than the no-prompt group, but such differences were not found for the pre-training, early, and delayed groups. Furthermore, low performance in the previous module did not reduce self-regulatory activities or predict dropout for learners in the continuous-prompt group, whereas it did in the no-prompt group. The effects of learning on self-regulatory activity and attrition were moderated by prompting. The authors concluded that continuous prompting is the most effective.

3.1.3. Comparing effects of specificity and timing of prompts

Ifenthaler (Citation2012) investigated the specificity of reflection prompts (i.e., generic prompt, directed prompt, and no prompt) to support self-monitoring and self-evaluation. The results showed that the generic prompt group scored better on domain-specific knowledge than the directed prompt and no-prompt groups (η² = .15). The generic prompt group also had a better understanding of the problem-solving task. The findings suggest that learners who already possess certain skill sets, such as undergraduates, benefit more from generic prompts than from directed prompts. One possible explanation is that generic prompts are less restrictive and allow learners to exercise autonomy.

Besides specificity, Lehmann et al. (Citation2014) conducted two experiments investigating the effects of both the specificity and the timing of prompts to support SRL strategies (i.e., generic preflection, directed preflection, and generic reflection prompts). The generic and directed preflection prompts were presented while learners studied the problem scenario, whereas the generic reflection prompt was presented while they wrote their solutions. The results showed a significant effect of prompting on learning gains (η² = .17). The directed preflection prompt group performed better than the generic reflection prompt group, suggesting that preflection prompts are more effective for learning. However, the quality of learners' answers revealed no significant effects on SRL strategies. Hierarchical regression analyses revealed that increases in learners' reported interest were associated with higher quality knowledge maps.

The second experiment by Lehmann et al. (Citation2014) compared the effects of generic preflection prompts, directed preflection prompts, and no prompts. The procedures were the same as in their first experiment. Although there were no significant effects of prompting on learning gains, the results showed that both preflection prompt groups wrote higher quality answers than the no-prompt group (η² = .25). With regard to individual learning preferences, the hierarchical regression analysis did not explain a significant amount of variance. However, directed preflection prompts were found to significantly affect positive activation of learners' performance. Nonetheless, results from both experiments support prompts as an effective approach.

3.1.4. Short- and long-term effects of prompting

Bannert, Sonnenberg, Mengelkamp, and Pieger (Citation2015) investigated the lasting effects of self-directed metacognitive prompts on learning performance. In the study, learners decided when they would like to see a pop-up prompt. In terms of short-term effects, analyses of the log files showed that the prompted group visited more relevant web pages (d = .65) and spent more time on those pages (d = .58) than the no-prompt group. They also performed better than the no-prompt group on transfer tasks (d = .44), but no differences were found for recall and comprehension tasks. In terms of long-term effects, data collected three weeks later showed that the prompted group still spent more time on relevant web pages (d = .50) than the no-prompt group, although no significant differences were found in the frequency of relevant web pages visited. The prompted group continued to perform better on transfer tasks (d = .62) than the no-prompt group. The findings suggest that prompts support SRL strategies in the short term and that these benefits carry over to enhance learning performance in the long term.
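Deriving such log-file measures typically amounts to counting visits to pages tagged as relevant and summing dwell times between successive page events. A minimal sketch under that assumption (column names and data are illustrative, not taken from the study) is:

```python
import pandas as pd

# Hypothetical click-stream: one row per page event, ordered in time.
events = pd.DataFrame({
    "learner": ["a", "a", "a", "b", "b", "b"],
    "page": ["intro", "topic1", "quiz", "intro", "topic1", "topic2"],
    "timestamp": pd.to_datetime([
        "2016-05-01 10:00", "2016-05-01 10:03", "2016-05-01 10:12",
        "2016-05-01 11:00", "2016-05-01 11:02", "2016-05-01 11:20",
    ]),
})
RELEVANT = {"topic1", "topic2"}  # pages coded as relevant to the task

# Dwell time = gap until the learner's next event (last event is unknown).
events = events.sort_values(["learner", "timestamp"])
events["dwell"] = events.groupby("learner")["timestamp"].diff(-1).abs()

relevant = events[events["page"].isin(RELEVANT)]
print(relevant.groupby("learner").agg(visits=("page", "count"),
                                      time_on_relevant=("dwell", "sum")))
```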

3.1.5. Comparing different combinations of prompts

Zhang, Hsu, Wang, and Ho (Citation2015) examined the effects of metacognitive and cognitive prompts in an online inquiry learning module. The study also accounted for learners' prior metacognitive ability, grouping them into high, medium, and low metacognition subgroups. In the class that received only cognitive prompts, the low prior metacognitive ability subgroup performed worse on analysis tasks than both the high and medium subgroups. No significant differences were found among the three subgroups in the class that received both types of prompts. This suggests that metacognitive prompts mediated the effect of prior metacognitive ability on learning performance and leveled the learning opportunities for learners with different levels of metacognitive ability.

3.1.6. Differential impacts of prompts on different levels of prior knowledge

Yeh, Chen, Hung, and Hwang (Citation2010) investigated the differential impact of two types of self-explanation prompts (i.e., reasoning-based and predicting-based) on learners with different levels of prior knowledge in a computer-based environment. Results revealed that lower prior knowledge learners who received reasoning-based prompts scored better than those who received predicting-based prompts, who in turn scored better than those who were not prompted (η² = .53). For higher prior knowledge learners, those who received predicting-based prompts scored better than those who received reasoning-based prompts and those who were not prompted (η² = .18). Both the lower (η² = .16) and higher (η² = .12) prior knowledge groups spent more time learning when they received reasoning-based prompts than when they received predicting-based prompts or no prompts. Taken together, these results suggest that learners with lower prior knowledge benefit more from reasoning-based prompts, whereas higher prior knowledge learners benefit more from predicting-based prompts.

3.1.7. Conclusion and discussion on prompts

Prompting appears to be an effective approach to support SRL strategies and academic success. Studies have provided evidence for effects of prompting on planning, goal specification, and evaluation (Bannert & Reimann, Citation2012), metacognition (Bannert et al., Citation2015; Kauffman et al., Citation2008), self-monitoring (Kauffman et al., Citation2011), and reflection (Ifenthaler, Citation2012). There is also evidence for higher academic success (Crippen & Earl, Citation2007; Moos & Azevedo, Citation2008; Sitzmann et al., Citation2009). However, the effectiveness of prompting cannot be captured by a single effect size, since the studies differed in how the prompting was implemented (e.g., providing a question, advice, or an instruction), its intention (e.g., creating metacognitive awareness, guiding learners procedurally or conceptually), its specificity (e.g., generic vs. directed), and its timing (e.g., pre-learning, early, delayed, or continuous).

In addition to the range of prompts, studies differed in their operationalization and measurement of SRL strategies. Several studies used recorded video protocols (Bannert & Reimann, Citation2012), coded think-aloud procedures (Moos & Azevedo, Citation2008; Stahl & Bromme, Citation2009), and log files (Bannert et al., Citation2015) to examine the underlying processes through which prompting enhances SRL strategies. Other studies lacked the behavioral measurements needed to draw conclusions about the processes linking prompting, learners' SRL strategies, and learning. For example, it is not clear whether learners actually reflected when prompted in Ifenthaler's (Citation2012) study.

3.2. Feedback

Two studies in the literature investigated feedback as an approach to support SRL. Unlike prompts, which provide questions or suggestions to encourage the use of SRL, feedback was defined by these studies as a method of promoting reflective activities by informing learners about their state of learning. Through feedback, learners become more aware of their current learning state and can thereby take steps to enhance their learning. Biesinger and Crippen (Citation2010) investigated the interaction between two forms of feedback (i.e., norm-referenced vs. self-referenced) and learners' perceptions of the learning environment (i.e., mastery approach vs. performance approach). Feedback on learners' quiz performance was provided in the form of bar graphs. Results revealed that irrespective of the form of feedback received by learners with mastery approach or performance approach perceptions, there were no significant changes in goal orientation, SRL activity, self-efficacy, or performance over time. The authors reasoned that learners might not have fully perceived the intentions of the bar graphs, as they were not salient. Hence, a major limitation of the study was that it did not measure learners' awareness of the feedback.

Wäschle et al. (Citation2014) conducted two experiments to examine whether using visual feedback to inform learners of their procrastination behavior would deter them from further procrastination. Learners in the visual feedback condition were shown a colored line chart depicting their weekly reported level of procrastination (i.e., red for high, yellow for medium, and green for low). Results showed that learners in the visual feedback condition had significantly lower levels of self-reported procrastination (η² = .26) and set more specific learning goals (η² = .23). However, there were no significant effects on learning outcomes.

A second experiment was conducted by Wäschle et al. (Citation2014) to examine whether the effect of visualization was due to a signaling effect or an informational feedback effect. The results showed that the reduction in procrastination was most effective with real information from learners’ self-reported procrastination followed by random feedback and no visual feedback. Moreover, the results showed that learners in both real and random visual feedback conditions reported higher levels of SRL strategy use. However, there was no significant effect on either perceived goal achievement or learning outcomes.
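The traffic-light encoding described above reduces to mapping a self-reported procrastination score onto three color bands. A minimal sketch follows; the cut-off thresholds are our assumption, since the paper reports only the green/yellow/red scheme:

```python
def procrastination_color(score, low=0.33, high=0.66):
    """Map a 0-1 procrastination score to a traffic-light color.

    Thresholds are illustrative; Wäschle et al. (2014) describe the
    green/yellow/red encoding but not the exact cut-offs.
    """
    if score < low:
        return "green"   # low procrastination
    if score < high:
        return "yellow"  # medium
    return "red"         # high

weekly_reports = [0.2, 0.5, 0.8, 0.4]  # invented weekly self-reports
print([procrastination_color(s) for s in weekly_reports])
```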

3.2.1. Combining feedback with prompts

SRL processes are dynamic; as mentioned in the introduction, they occur in a cyclical process. Therefore, a number of studies in the pool of literature combined prompts with feedback to support this cyclical process. Van den Boom, Paas, and van Merriënboer (Citation2007) compared three conditions in a distance learning environment: reflection prompts with peer feedback, reflection prompts with tutor feedback, and a control condition without any support. Results from learners' reported SRL strategies on the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich, Smith, Garcia, & McKeachie, Citation1991) showed that learners who received prompts and feedback from either a peer or a tutor scored higher on the MSLQ subscale of value than learners who were not supported (η² = .21). For the MSLQ subscale of test anxiety, learners who received prompts and tutor feedback reported lower levels of test anxiety than learners who received prompts and peer feedback and learners who were not supported (η² = .13). However, no significant effects were found on the other MSLQ subscales. In addition, learners who received prompts and tutor feedback significantly outperformed learners in the other two groups (η² = .12). There were no significant differences in learning outcomes between learners who received prompts and peer feedback and learners who were not supported. The findings suggest that feedback supports reflective activities, strengthening the positive effects of prompts on SRL strategies.

Lee, Lim, and Grabowski (Citation2010) investigated the use of generative learning strategy prompts and metacognitive feedback to enhance SRL in a self-paced computer-based learning environment. Metacognitive feedback informed learners whether their answers were correct, together with advice to restudy if their answers were wrong. The results showed that learners who received both prompts and feedback reported higher SRL strategy use, used more generative learning strategies, and achieved better learning performance. Learners who received prompts without feedback only used more generative learning strategies. The findings suggest that prompting is effective in fostering the use of task strategies, such as highlighting and note-taking, while metacognitive feedback supports the more internal processes of SRL, such as monitoring and evaluating. Providing both prompts and feedback therefore supports the dynamic SRL processes.

Duffy and Azevedo (Citation2015) also investigated the combination of prompts and feedback using an intelligent tutoring system. Learners were prompted to use SRL strategies (e.g., write a summary) and received feedback on how well they used the strategies. Results showed that prompts and feedback had a significant effect on learning processes (η² = .24). Learners spent more time viewing relevant pages and used more SRL strategies when they received prompts and feedback. No significant effects were found for learning outcomes. Learners' achievement goals (i.e., mastery-oriented vs. performance-oriented) interacted with the intervention: performance-oriented learners who were supported scored better than mastery-oriented learners who were supported. The authors concluded that the support given in the study helped performance-oriented learners put in more effort to learn and achieve higher academic success.

In an extensive study, Chen, Wei, Wu, and Uden (Citation2009) explored various permutations of high-level prompts (with vs. without), observing peers' reflection content (low quality vs. high quality vs. none), and observing peers' feedback (negative vs. positive vs. none) on learners' reflection levels. High-level prompts are comprehension and integration questions that get learners to describe the learning content in their own words and to connect concepts. The results showed a significant interaction effect between high-level prompts and observing peers' reflections, but no significant effect of observing peers' feedback on learners' reflection levels. This suggests that high-level prompts support learners' level of reflection, and that reflection is further enhanced by reading high-quality peer reflections. Contrary to the other studies in this section, receiving feedback from peers did not enhance learners' level of reflection.

3.2.2. Conclusion and discussion on feedback

A conclusion about feedback alone as an approach to support SRL activities is hard to draw, as only two studies were identified. Significant results were found for reducing procrastination (Wäschle et al., Citation2014), while no significant effects were found for changes in goal orientation over time (Biesinger & Crippen, Citation2010). Neither study provides evidence for an effect of feedback on learning outcomes. Given the small number of studies found, however, a strong conclusion cannot be drawn.

In contrast, combining feedback with prompts appeared more promising. Significant effects were found for the MSLQ subscales of value and test anxiety (Van den Boom et al., Citation2007) and for greater use of SRL strategies (Duffy & Azevedo, Citation2015; Lee et al., Citation2010). Positive effects were also found for learning outcomes (Lee et al., Citation2010; Van den Boom et al., Citation2007). However, Chen et al. (Citation2009) did not find any effect of peer feedback. Many MOOCs employ peer feedback because individualized feedback is nearly impossible given the large number of students (Suen, Citation2014). The small number of studies in this section suggests that more research is needed on the effect of feedback on SRL and learning outcomes in online learning environments.

3.3. Integrated support systems

Besides combining approaches, research conducted in online learning environments has also explored the use of integrated support systems to enhance SRL strategies and learning performance. An integrated support system has a set of embedded features that support different SRL processes; it can include prompts and feedback as well as other SRL tools that might help learners better self-regulate their learning. The following section describes the different integrated support systems found in the selected literature.

Chen, Wang, and Chen (Citation2014) tested the effectiveness of supporting SRL during web-based language learning. The study examined learners' reading comprehension performance when using a digital reading annotation system packaged with various SRL-enabling tools: (a) a self-monitoring table to set an SRL schedule, (b) a self-regulated radar plot that visualizes five SRL indicators based on learner activity, (c) an annotation ranking table indicating overall annotation performance, and (d) up- and down-voting of others' annotations. The authors reported that, for learners who set goals and monitored their progress, the system increased reading comprehension.

Along the same theme of annotated text, Chen and Huang (Citation2014) conducted a study evaluating the effectiveness of providing English language acquisition learners with pre-annotated texts. The authors found that the experimental group that was supported by the attention-based self-regulating mechanism displayed higher sustained attention and reading comprehension than the control group that did not receive any awareness mechanism. The authors also introduced gender into the analyses as a human factor and found that females benefitted most from the web-based reading annotation system (d = 1.00).

Kim and Pedersen (Citation2011) investigated the effects of metacognitive scaffolds embedded in an interactive web-based program to facilitate ill-structured problem-solving. The scaffolds consisted of reflective prompts that popped up during learning, a checklist to guide self-questioning, and a checklist to monitor learning progress. The results showed that the group that received the scaffolds outperformed the group without the support (η² = .07).

Molenaar, van Boxtel, and Sleegers (Citation2011) also measured the effect of dynamic computerized scaffolds in a collaborative learning environment. The system had three levels: an input level, where data on learners' attention were collected; a reasoning level, where scaffolds were selected based on the data from the input level; and an intervention level, where the selected scaffold was delivered to the learner. Learners received either structuring scaffolds that gave instructions on regulation or problematizing scaffolds that elicited metacognitive activities. The results showed that neither group performance nor individual domain knowledge was positively affected by the scaffolding, but a small effect (r = .16) was found for enhanced metacognitive knowledge. Compared to structuring scaffolds, problematizing scaffolds had a small effect on group performance (r = .28), individual transfer of domain knowledge (r = .13), and metacognitive knowledge (r = .16). The results suggest that although the scaffolds did not increase the quantity of domain knowledge, providing scaffolds could increase its quality.
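This three-level architecture maps naturally onto a small pipeline: collect attention data (input level), apply selection rules (reasoning level), and present the chosen scaffold (intervention level). The sketch below is our schematic reading of that design; the rule thresholds and messages are invented:

```python
from dataclasses import dataclass

@dataclass
class AttentionSnapshot:
    # Input level: hypothetical attention data for one learner.
    seconds_on_task: float
    off_task_switches: int

def select_scaffold(snap: AttentionSnapshot) -> str:
    """Reasoning level: rule-based scaffold selection (rules are invented)."""
    if snap.off_task_switches > 3:
        return "structuring"     # direct instruction on how to regulate
    if snap.seconds_on_task > 300:
        return "problematizing"  # question that elicits metacognition
    return "none"

def deliver(scaffold: str) -> None:
    """Intervention level: present the selected scaffold to the learner."""
    messages = {
        "structuring": "Plan your next step: what will you do first?",
        "problematizing": "How does this relate to what you already know?",
        "none": "",
    }
    if messages[scaffold]:
        print(messages[scaffold])

deliver(select_scaffold(AttentionSnapshot(seconds_on_task=420,
                                          off_task_switches=1)))
```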

Using the same dynamic computerized scaffolding system, Molenaar, Roda, van Boxtel, and Sleegers (Citation2012) compared a control group that learned without scaffolds and an experimental group that received (i) cognitive scaffolds that pointed learners in the direction of re-learning content they had been struggling with and (ii) metacognitive scaffolds that advised learners to properly allocate their time and resources in the learning process. The results align with Molenaar et al.’s (Citation2011) study. The scaffolds had a positive effect on group learning performance (r = .26), but there was no significant effect on domain knowledge.

Delen, Liew, and Wilson (Citation2014) tested the effectiveness of SRL tools in an integrated support system. This system—a newly developed video-watching environment—was designed with tools to support generative note taking, seeking additional resources, and self-evaluation through reflective prompts. The authors reported that this integrated video-watching environment significantly increased the learners’ learning performance. The experimental group scored higher in a recall test (d = .75) and spent more time with the instructional material (d = 1.34) than the control group.

An intelligent tutoring system (ITS) was designed to provide immediate feedback along with other metacognitive scaffolds to its users in a medical discipline. El Saadawi et al.'s (Citation2010) study examined the effect of immediate feedback and whether, once immediate feedback was withdrawn, the other metacognitive scaffolds remained beneficial. The results indicated that metacognitive scaffolds in an ITS need to be paired with immediate feedback to have an impact on learning: when the immediate feedback was removed, learning gains suffered, and the other metacognitive scaffolds in the ITS could not recover the learning gains achieved with immediate feedback.

Kramarski and Gutman (Citation2006) evaluated the effect of an integrated support system in a mathematics course in a lab setting. The system was designed to support three key aspects of SRL: (a) self-metacognitive questioning, (b) providing math explanations, and (c) metacognitive feedback. The study compared two e-learning environments: one with SRL support and one without. The authors reported that the condition with SRL support was far superior: SRL-supported learners exhibited higher performance in solving both procedural (d = .44) and transfer tasks (d = 1.75). Furthermore, the SRL-supported learners were more effective at self-monitoring while completing problem-solving tasks.

Manlove, Lazonder, and de Jong (Citation2007) developed and tested an integrated support system designed primarily to help with goal setting. The experimental condition received the full set of SRL tools in the goal-setting support system, while the control group received only minimal SRL support. The study revealed that subjects who were given the full set of tools to scaffold their goal-setting, monitoring, and evaluation tactics used the SRL tools in the system more often (d = .66), spent more time (d = 1.21), and produced better-structured lab reports (d = 1.49). Surprisingly, learners who did not receive the full SRL support produced models of better quality (d = 1.26) than learners who did.

Wang (Citation2011) evaluated the effect of an integrated SRL support system that enabled five specific types of activities important to SRL: (a) adding answer notes, (b) self-reporting confidence, (c) reading peer work, (d) recommending peer answer notes, and (e) soliciting help from peers. The author indicated that the learners who were exposed to and used the integrated SRL support system were more willing to engage with formative assessments, displayed higher levels of SRL, and scored higher on the summative test than the control group.

3.3.1. Conclusion and discussion on integrated support systems

Learning in online learning environments requires learners to self-regulate their learning, as teachers are not physically present to offer support. Therefore, supporting SRL by embedding various features in the online learning environment seems to be effective in enhancing SRL strategies and learning outcomes. However, Chen et al.’s (Citation2014) study showed that integrated support systems are only effective when learners used the tools or support provided. Generally, when learners use the systems provided, there are positive effects on SRL strategies (Chen & Huang, Citation2014; Delen et al., Citation2014; Kramarski & Gutman, Citation2006; Molenaar et al., Citation2011; Wang, Citation2011) and learning outcomes (Chen et al., Citation2014; Manlove et al., Citation2007; Molenaar et al., Citation2012). However, most of the studies did not examine human factors that might have an impact on the effectiveness of the approach. It is not clear whether learners with lower cognitive abilities will be overwhelmed by a system offering an array of supports, or whether they will be able to make full use of the support given.

3.4. Other approaches to support self-regulated learning

Other than prompts, feedback, and integrated support systems, the studies included in this review also examined a number of other approaches that could not be easily categorized. These approaches are reviewed in this section.

Two studies found promising results for the use of explicit self-monitoring strategies in online language learning courses. Chang (Citation2007) investigated the effects of instructing learners to use a self-monitoring form. The learners also received visual feedback on the history of their study time. Learners in the experimental group showed higher academic performance (d = .73) and stronger motivational beliefs (d = .60). The results also showed that learners’ initial English proficiency did not influence the efficacy of the self-monitoring intervention. The approach therefore seemed effective in supporting SRL strategies and learning performance, although it is not clear whether it was the monitoring or the visual feedback that increased the motivational beliefs and learning performance.

Using a similar design in a similar English reading course, Chang and Lin (Citation2014) investigated a more elaborate self-monitoring tool in the form of an e-journal. Learners in the experimental group were given a brief explanation of how to use the e-journal and were instructed to (i) write a list of important vocabulary and phrases, (ii) use those in sentences, (iii) write down personal experiences related to the course topics, and (iv) write reflective summaries about the course materials. For each of the eight e-journal entries they had to write, they received feedback from the instructor. Learners in the experimental group outperformed learners in the control condition on academic performance (d = .41). However, the use of SRL strategies was not measured in the study.

Chun-Yi and Hsiu-Chuan (Citation2011) examined whether learners who actively practice self-regulated learning skills maintain and improve those skills over time. The web-based SRL training, followed weekly by half of the learners, provided information and exercises for four SRL skills: planning, monitoring, modifying, and self-evaluation. Before and after the course, all learners completed the Metacognitive Skills Evaluation Questionnaire, which measured the same four SRL skills. Although the learners in the experimental group received multiple hours of training, they only scored higher than the control group on the planning measure (d = .68). The training provided in this study was therefore not effective in supporting all the SRL strategies that were trained.

In two experiments, Kostons, Van Gog, and Paas (Citation2012) investigated the effects of watching videos of someone modeling various SRL strategies in an online learning environment. The results showed that watching a video model on self-assessment was related to higher self-assessment accuracy (η² = .10), while watching a video on task-selection was related to higher task-selection accuracy (η² = .14). However, all the groups had the same performance on the problem-solving tasks, showing that while their SRL skills improved, no effect was observed on learning or performance. The design of the second experiment was similar, but another group of learners was given time to practice the SRL strategies. For self-assessment accuracy, learners who watched the video model outperformed the learners in the control group (d = .41), but did not perform significantly better than the practice group. For task-selection accuracy, both the practice and video model groups outperformed the control condition (d = .75 and d = .74, respectively), but did not differ significantly from each other. Results of the study suggest that explicit training of both self-assessment and task-selection supports SRL and increases learners’ efficiency in regulating their learning.

Tangworakitthaworn, Gilbert, and Wills (Citation2015) investigated whether providing learners with highly structured intended learning goals would enhance learning compared to providing unstructured learning goals. The results were mixed, as more structured learning goals appeared to influence learner performance on only some tasks. The mixed results in combination with the small scope (N = 21) limited the informational value of the study, but it warrants follow-up research on the effects of providing different types of learning goals.

3.5. Human factors

In the sections above, we examined the types of approaches used to support SRL strategies and learning outcomes and whether the approaches are effective. Only 12 of the included studies examined the role of human factors, as shown in Table 3. It is important to examine the role of human factors since individuals differ in many ways (e.g., low and high prior knowledge, low and high cognitive abilities, gender, level of expertise, learning preferences). The effectiveness of an approach to support SRL strategies and academic success is dependent on human factors, as illustrated in Figure 1.

Table 3. Systematic review table of human factors addressed in studies (alphabetical order according to the first author).

Figure 1. Impact of human factors on approaches to support SRL, SRL strategies, and learning performance.

3.5.1. Moderating effects of human factors

Several studies in this review showed that human factors moderate the effects of SRL support on SRL strategies and learning outcomes. The arrow labeled A in Figure 1 represents this effect. For instance, Duffy and Azevedo (Citation2015) found a significant achievement-goal by treatment-condition interaction effect on learners’ achievement: performance-approach learners scored better than mastery-approach learners. The results suggest that receiving feedback with prompts had a more positive effect for performance-approach learners, who adjusted their SRL strategies to outperform others upon receiving feedback. In contrast, feedback with prompts had little effect on mastery-approach learners, who were focused on improving their understanding of the learning topic.

Manlove et al. (Citation2007) investigated the moderating effects of achievement levels. The results showed that achievement levels had no significant effect on lab report scores, but there was a significant group by achievement interaction on model quality. Low-achieving learners working in pairs who received the support produced models of lower quality than low-achieving pairs who did not receive the support, whereas model quality did not differ for high-achieving pairs with or without support. The authors reasoned that low-achieving pairs may have required more time to understand and use the SRL supports offered by the integrated support system due to their lower levels of domain knowledge, leaving them insufficient time to produce models of better quality.

Sitzmann et al. (Citation2009) examined the moderating effects of cognitive ability and self-efficacy. The results showed that, compared to the control group, prompting had a stronger positive effect on basic performance over time for learners with higher cognitive ability than for learners with lower cognitive ability. The decline over time in high-ability learners’ basic performance in the control group suggests that basic tasks may become mundane for high-ability learners; prompting thus helps sustain their basic performance over time. Similarly, a stronger positive effect on strategic performance over time was found for learners with higher levels of self-efficacy. The authors reasoned that the prompts enabled learners with higher levels of self-efficacy to identify gaps between their performance and their goals.

In Lehmann et al.’s (Citation2014) second experiment, directed preflection prompts positively influenced novices’ motivation, whereas generic preflection prompts negatively influenced it. Compared with Ifenthaler’s (Citation2012) study, in which generic prompts were more effective for more advanced learners, these results indicate that prompts of different specificity might benefit learners at different levels of expertise.

Yeh et al. (Citation2010) found a significant prior knowledge by prompt type interaction. Higher prior knowledge learners benefited more from predicting-based prompts, whereas lower prior knowledge learners benefited more from reasoning-based prompts. The results suggest that reasoning-based prompts are ineffective for higher prior knowledge learners because they involve explaining concepts that are already mastered. For lower prior knowledge learners, however, the reasoning-based prompts supported their understanding, bringing about better learning outcomes. Learners with different levels of prior knowledge therefore required different types of prompts.

Zhang et al. (Citation2015) found that learners in the high and medium metacognition groups scored better than learners in the low metacognition group on analyzing practice when given only cognitive prompts. When given combined cognitive and metacognitive prompts, there were no significant differences in analyzing-practice scores across the three metacognition groups. The results suggest that metacognitive prompts helped learners with low metacognition identify learning goals and monitor their learning, thereby giving learners of different metacognitive levels equal opportunities to optimize their learning success. This suggests that learners with different levels of metacognition benefit from different combinations of prompts.

3.5.2. Impact on SRL strategies and learning performances

Other than moderating effects, human factors are also related to differences in the use of SRL strategies and in learning performance. The arrow labeled B in Figure 1 represents the differences in SRL strategies (e.g., graduates reported greater use of critical thinking strategies than undergraduates; Artino & Stephens, Citation2009), and the arrow labeled C in Figure 1 represents the differences in learning outcomes (e.g., learners with higher prior knowledge score better than learners with lower prior knowledge; Yeh et al., Citation2010). Although Chang (Citation2007) and Biesinger and Crippen (Citation2010) showed, respectively, that level of English proficiency and perceptions of the learning environment did not significantly alter the effects of the supports on learning outcomes and SRL processes, several studies in this review suggest that approaches to support SRL may have a differential impact because different human factors are related to SRL strategies and learning outcomes.

Chen et al. (Citation2014) and Chen and Huang (Citation2014) investigated gender differences in the use of SRL strategies and learning performance. In Chen et al.’s (Citation2014) study, the integrated support system effectively enhanced the reading annotation abilities of supported male learners, but there were no significant differences in reading comprehension performance between supported and unsupported male learners. The opposite was observed for female learners: although supported female learners had better annotation abilities in only one learning unit, they achieved better comprehension performance than unsupported female learners.

Similar gender effects were found in Chen and Huang’s (Citation2014) study. Although the integrated support system sustained the attention of both male and female learners, only the females in the experimental group outperformed the females in the control group; there were no significant performance differences between the males in the experimental and control groups.

Prior knowledge had a significant effect in both of Bannert and Reimann’s (Citation2012) experiments. In the first experiment, higher prior knowledge learners benefited more from the prompts than lower prior knowledge learners. However, when training was added to the prompts in the second experiment, lower prior knowledge learners were able to comply with the prompts and benefited from using the SRL strategies. The prompts were ineffective when lower prior knowledge learners were unable to act on them.

In Lehmann et al.’s (Citation2014) first experiment, the authors examined how learner characteristics, including learning preferences, predicted SRL outcomes, operationalized as the quality of the written essays and knowledge maps learners produced during learning. Human factors (i.e., domain-specific knowledge gain and metacognitive regulation) significantly predicted the quality of both. Taken together, these results suggest that human factors influence learners’ use of SRL strategies during learning.

Learners’ initial levels of SRL may also affect their learning effectiveness, as demonstrated in Wang’s (Citation2011) study. Wang found that learners with different levels of SRL ability (i.e., high and low) who received SRL support from the integrated support system did not differ in their learning effectiveness. Without support, however, high self-regulated learners performed better than low self-regulated learners.

3.5.3. Conclusion and discussion on human factors

Few of the studies in this review examined human factors at all; most used human factors only to check that learners had been randomly allocated. However, findings from the studies that did examine human factors suggest that additional or differentiated support (e.g., training, additional prompts) should be in place to assist learners with different levels of prior knowledge (Yeh et al., Citation2010), cognitive ability (Sitzmann et al., Citation2009), or metacognitive ability (Zhang et al., Citation2015).

The arrow labeled D in Figure 1 represents adapting approaches to support different learners. In this way, all learners can become better at regulating their own learning and achieve greater academic success. A one-size-fits-all approach to support SRL might not be as effective as approaches adapted to different needs (e.g., deploying generic or directed prompts depending on the learner’s level of expertise). To meet the learning needs of different learners, technology can be harnessed to adapt instructional methods and learning environments (Yukselturk & Top, Citation2013).
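
As a minimal illustration of this kind of adaptation, the sketch below encodes two moderation findings from the reviewed studies as a simple rule table: prompt type chosen by prior knowledge (Yeh et al., Citation2010) and prompt specificity chosen by level of expertise (Ifenthaler, Citation2012; Lehmann et al., Citation2014). The function name and category labels are hypothetical; a deployed system would need validated measures of these human factors and far richer rules.

```python
# Hypothetical rule table adapting prompt design to two human factors,
# based on moderation findings from the review: prior knowledge x prompt type
# (Yeh et al., 2010) and expertise x prompt specificity (Ifenthaler, 2012;
# Lehmann et al., 2014). A sketch of the idea, not a validated adaptive system.

def select_prompt(prior_knowledge: str, expertise: str) -> dict:
    """Return a prompt configuration for one learner.

    prior_knowledge: 'low' or 'high'; expertise: 'novice' or 'advanced'.
    """
    prompt_type = "predicting" if prior_knowledge == "high" else "reasoning"
    specificity = "generic" if expertise == "advanced" else "directed"
    return {"type": prompt_type, "specificity": specificity}

print(select_prompt("low", "novice"))
# -> {'type': 'reasoning', 'specificity': 'directed'}
```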

4. General discussion

In view of the lack of empirical studies in MOOCs on how SRL can be supported, the current review examined empirical studies conducted in online learning environments to gain insights into possible ways to support SRL in MOOCs. The review presents findings from studies that explored the effects of supporting self-regulated learning (SRL) strategies in online learning environments, organized according to the approaches examined. Of the 35 studies included in the review, 14 investigated prompts, 10 investigated integrated support systems, two investigated feedback, four investigated a combination of prompts and feedback, and the remaining five investigated other approaches. Prompts came in the form of questions, suggestions, and short answer problems. Within integrated support systems, studies provided different tools to support SRL strategies: Manlove et al. (Citation2007) employed a system to support goal setting, whereas Wang (Citation2011) employed a system to support five SRL strategies. Therefore, the results section reviewed studies individually and organized them according to similarities in approach type for a better understanding of the different approaches examined in online learning environments (research question 1a) and their effectiveness (research question 1b).

Findings from the studies reviewed provide evidence for enhanced SRL strategies. Bannert and Reimann’s (Citation2012) study demonstrated an increased use of SRL strategies by learners who were prompted, with the largest effect found for planning. In Wäschle et al.’s (Citation2014) study, learners reported less procrastination behavior when shown visual feedback. In line with Zimmerman’s model, in which SRL is triadic and cyclical, approaches are more beneficial when they support multiple SRL processes or carry learners through the cyclical process. For example, Kauffman et al.’s (Citation2008) study showed that reflection prompts are only effective when problem-solving prompts are also given, and Sitzmann and Ely’s (Citation2010) study showed that continuous prompting is the most effective prompt schedule. The cyclical SRL phases (i.e., forethought, performance, and reflection) imply that SRL activities in any phase affect SRL activities in the subsequent phase (Zimmerman, Citation1990). Supporting and enhancing SRL strategies in one phase (e.g., goal setting) might not be enough to change learners’ learning processes if learners lack support for the complementary SRL strategies (e.g., monitoring the goals). Therefore, future research should take into account the connectedness between the different phases of SRL and provide support that ensures learners follow the process accordingly.

Beyond the positive effects on SRL, the results from this review corroborate the findings in Zheng’s (Citation2016) meta-analysis. Most of the studies found positive effects on learner performance. For prompts, significant effects were found for transfer tasks (e.g., Bannert & Reimann, Citation2012), factual test performance (e.g., Kauffman et al., Citation2011), and problem-solving tasks (e.g., Kauffman et al., Citation2008). The evidence indicates that prompting is an effective way to enhance SRL and learning performance. However, the results should be interpreted with caution as publication bias was not examined. As more studies examine prompting as an approach to support SRL strategies, future research should specifically examine the efficacy of various prompting strategies in online learning environments.

For feedback, neither of the two studies found significant effects on learning performance. However, when feedback was combined with prompts, significant effects on learning performance were found (i.e., Lee et al., Citation2010; Van Den Boom et al., Citation2007). Feedback received during learning has the potential to trigger reflective activities and deepen learner understanding (Van Den Boom et al., Citation2007). Especially in online learning environments, where learners may experience a sense of isolation (McInnerney & Roberts, Citation2004), feedback may play an important role in creating a sense of belonging to an online learning community and fostering more reflective learning. Future studies should explore the efficacy of feedback delivered through the social networking capabilities of the Internet.

Learning performance also generally improved when learners were supported by integrated support systems (e.g., Delen et al., Citation2014). However, Chen et al. (Citation2014) noted that learners only performed better when they used the support embedded within the system, indicating that providing support in itself is not sufficient to enhance learning success. Learners should be encouraged and taught how to use the support, and the support should be adapted in alignment with their current behavior.

In general, most of the aforementioned approaches are promising in supporting SRL and learning. However, there is still a gap in understanding the underlying mechanisms that lead to better SRL and learning. For example, it is not clear whether prompting SRL activities by asking questions or providing suggestions leads to better self-monitoring. Future research should examine how the different approaches support SRL. It is also important for future studies to take into account human factors and move towards adaptive support systems. Research has demonstrated the influence of human factors on SRL and learning performances (e.g., Bannert & Reimann, Citation2012; Chen & Huang, Citation2014). Therefore, studies should explore tailored supports to optimize individual learning.

This review synthesizes the role of human factors examined across all studies included within our search criteria (research questions 2a and 2b). The studies reviewed showed that human factors not only have an impact on SRL and learning performances (e.g., Chen et al., Citation2014), they can also moderate the effects of SRL supports (e.g., Sitzmann et al., Citation2009). Different learners benefit from different supports (e.g., Yeh et al., Citation2010; higher prior knowledge learners benefited more from predicting-based prompts while lower prior knowledge learners benefited more from reasoning-based prompts). Hence, future studies should strive to address the human factors when examining approaches to support SRL in online learning environments.

There are fundamental differences between traditional classroom environments and online learning environments: (i) online learning environments, such as MOOCs (Hew & Cheung, Citation2014), have more diverse learner populations; (ii) instructors are not physically present to guide learners; and (iii) learners have more autonomy. This review identifies two major challenges in conducting studies that examine SRL supports. The first is ensuring that learners act on the prompts or use the tools provided. Online learning environments are highly autonomous, and learners decide on their own whether to act upon a prompt or use a tool. Effective approaches to support SRL strategies cannot benefit learners if they are never sufficiently used (Bannert & Reimann, Citation2012; Kizilcec, Pérez-Sanagustín, & Maldonado, Citation2016). The second challenge is providing adaptive support to meet diverse learning needs. Learning is a dynamic process, and learners’ needs change depending on many factors (e.g., interest, preferences, and prior experiences). Therefore, studies should explore the use of technology to meet the diverse learning needs of individuals. Human factors should be treated as an independent variable when examining approaches to support SRL strategies and academic success; this would enable a better understanding of the different effects the various approaches might have on different learners.

4.1. Recommendations

4.1.1. Methodology

Our review of the studies revealed several methodological limitations. Here we present four methodological recommendations for future studies on SRL interventions, which we believe will increase the ability of such studies to inform educational practice and research.

First, some studies did not compare the intervention with an active control group. Some studies used no control group at all, and instead relied on a pre-post design to test the SRL intervention(s) of interest (e.g., Alharbi, Henskens, & Hannaford, Citation2012). However, such a design is not optimal for assessing the impact of an intervention, as differences between the pre- and post-test might be due to any number of reasons, such as natural growth, regression to the mean, testing effects, measurement error, or a range of other plausible causes. Adding a ‘no-intervention’ control group partly controls for these alternative explanations, which is needed to more accurately test the added benefits of the intervention of interest. However, comparing an intervention versus a no-intervention group typically has the structure of ‘A + B > A’. Such a comparison does not directly answer whether intervention ‘B’ works, but whether ‘A + anything’ is better than just ‘A’. In these cases, it is more informative to include not only a no-intervention control group, but also an active control group that receives an intervention of similar scope (see Boot, Simons, Stothart, & Stutts, Citation2013, for a thorough discussion of the proper use of active control groups).

Second, comparing different variants of the same intervention in different “dosages” can inform us about dose-response effects. Supporting SRL strategies in different “doses” might have different effects for different learners. Sitzmann et al. (Citation2009) found a continuous schedule to be the most effective prompting schedule; however, it is not clear whether continuous prompting is more effective for novices than for experts.

Third, comparing short-term and long-term effects of the approaches to supporting SRL can extend our understanding of how supporting SRL enhances learning. Only one study (i.e., Bannert et al., Citation2015) examined both the short-term and long-term effects of SRL support. Approaches to support SRL are stronger if their effects carry over to benefit learning in the long term. The essence of SRL is the act of learning initiated by learners themselves; hence, an effective support is one that enables learners to continue to self-regulate their learning after the support is removed. Future studies should therefore measure both short-term and long-term effects when examining SRL support.

Finally, many of the reviewed studies were underpowered, which diminishes their ability to detect real but small differences between groups. Therefore, studies should strive to examine the approaches using larger sample sizes based on a priori sample size calculations.
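
As a concrete example of such an a priori calculation, the snippet below uses the power-analysis utilities of the Python statsmodels package (our choice of tool, not one used in the reviewed studies) to estimate the per-group sample size needed to detect a small-to-medium effect in a two-group comparison.

```python
# A priori sample size calculation for a two-group SRL intervention study,
# using statsmodels' power analysis for an independent-samples t test.
from statsmodels.stats.power import TTestIndPower

# To detect d = 0.4 with alpha = .05 and 80% power in a two-sided test:
n_per_group = TTestIndPower().solve_power(effect_size=0.4, alpha=0.05,
                                          power=0.80, alternative="two-sided")
print(round(n_per_group))  # roughly 99 learners per group
```

Under these common assumptions, roughly 100 learners per group would be needed; for comparison, Tangworakitthaworn et al.’s (Citation2015) study had N = 21 in total.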

4.1.2. Addressing the role of human factors

The research synthesized in this literature review reveals a substantial shortage of studies on human factor-dependent learning interventions. We therefore outline three recommendations to guide and advance future research. First, harness learning analytics to provide adaptive support. Part of the promise of learning analytics is its potential for adaptive, personalized learning environments that cater instruction precisely to each learner’s unique needs. In the near term, randomized experiments remain valuable because random assignment exposes every type of learner to each condition evenly. As more studies are run and we learn more about how learners with different human factors benefit from particular approaches to learning and SRL strategies, we can begin to strategically target interventions at the populations we can safely predict will benefit most from them. Given the rise in MOOCs’ popularity and research efforts, there will soon be a substantial corpus of MOOC experiments offering deeper insights into how to design and deploy scalable, targeted interventions. Once we have a corpus of research that sheds light on which interventions are best for certain types of learners, we can cater MOOC instruction to best fit individual learner needs.
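
The sketch below illustrates, on simulated data, one way this transition from evenly randomized experiments to targeted support could work: a randomized experiment is analyzed with a simple treatment-by-feature interaction model, and the fitted model then predicts which learners can be expected to benefit. All data, effect sizes, and variable names here are hypothetical.

```python
# Sketch: from a randomized experiment to targeted support. Simulated data;
# the interaction model is a deliberately simple stand-in for learning analytics.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
treated = rng.integers(0, 2, n)            # random assignment to the SRL support
prior_knowledge = rng.normal(0.0, 1.0, n)  # standardized learner feature
# Simulated outcome: the support helps low-prior-knowledge learners most.
score = 0.2 * treated - 0.3 * treated * prior_knowledge + rng.normal(0.0, 1.0, n)

X = np.column_stack([treated, prior_knowledge, treated * prior_knowledge])
b_treat, b_pk, b_interact = LinearRegression().fit(X, score).coef_

def expected_benefit(pk: float) -> float:
    """Predicted effect of the support for a learner with feature value pk."""
    return b_treat + b_interact * pk

# Target the support at learners for whom the predicted benefit is largest:
print(expected_benefit(-1.0), expected_benefit(1.0))
```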

Another shortcoming of the literature reviewed is that no intervention has been studied across populations. For example, many studies report positive SRL and learning outcomes among undergraduates, but there is no evidence or indication that such findings would transfer to a different population or setting. Given that MOOC learner populations are highly heterogeneous, more studies are needed to examine whether the effectiveness of the approaches generalizes across populations and how the approaches can be adapted to meet the learning needs of a massive but diverse population.

In an effort to target interventions that most efficiently encourage SRL, future studies examining the effectiveness of various SRL strategies ought to place a larger focus on human factors in their analyses. By accounting for human factors, we can identify not only the most effective SRL strategies per human factor, but also potentially undesirable ones. By tapping into the potential of technology to address human factors at a fine-grained level, we can better cater to the specific needs of each individual. Learners may differ not only on a single human factor (e.g., low versus high prior knowledge) but on several at once (e.g., gender, prior knowledge, and cognitive ability). Therefore, by using learning analytics to understand learners, we can move towards scaling education while accounting for the vast array of human factors.

4.2. Conclusion

Researchers have identified ways to effectively support online learners’ SRL, accounting for the fact that each learner benefits differently from each type of support (e.g., prompts, feedback, and integrated support systems). Moreover, human factors play an essential role in understanding SRL supports in online learning. Future research must integrate human factors and learning theories into the development of online learning environments in order to enable adaptive support systems that optimize learning at the individual level.

Additional information

Funding

The authors’ (Jacqueline Wong, Dan Davis and Tim van der Zee) research is supported by the Leiden-Delft-Erasmus Centre for Education and Learning.

Notes on contributors

Jacqueline Wong

Jacqueline Wong is a PhD candidate at the Department of Psychology, Education and Child Studies of Erasmus University Rotterdam. Her research focuses on motivation and self-regulated learning in open online learning environments. She examines the influence of student characteristics and the effect of learning supports on student success in Massive Open Online Courses (MOOCs).

Martine Baars

Martine Baars is an assistant professor of Educational Psychology at the Erasmus University Rotterdam. Her research is focused on instructional strategies to improve self-regulated learning. She investigates what cues are used for self-regulated learning. Also, she explores the role of cognitive load, task complexity, and motivation in self-regulated learning.

Dan Davis

Dan Davis is a PhD candidate at the Web Information Systems Group at TU Delft in the Netherlands. His research develops methods to gain a deeper understanding about how the design of online learning environments affects learner success and engagement, often by implementing and testing instructional interventions at scale.

Tim Van Der Zee

Tim van der Zee holds an MSc in Psychology and is currently a PhD student at the Leiden University Graduate School of Teaching (ICLON) in the Netherlands. His research focuses on understanding and improving the educational quality of open online courses such as MOOCs (Massive Open Online Courses).

Geert-Jan Houben

Geert-Jan Houben is a full professor of Web Information Systems at Delft University of Technology. Also, he is scientific director of Delft Data Science, TU Delft’s coordinating initiative in the field of Data Science, holds the KIVI-chair Big Data Science, and leads TU Delft’s research program on Open & Online Education in TU Delft Extension School.

Fred Paas

Fred Paas is a full professor of Educational Psychology at the Department of Psychology, Education and Child Studies of Erasmus University Rotterdam in the Netherlands and at the Early Start Research Institute of the University of Wollongong in Australia. His research focuses on the use of cognitive load theory in the instructional design of complex cognitive tasks.

References

  • Aleven, V., Sewall, J., Popescu, O., Xhakaj, F., Chand, D., Baker, R., … Gasevic, D. (2015). The beginning of a beautiful friendship? Intelligent tutoring systems and MOOCs. In International Conference on Artificial Intelligence in Education, 525–528.
  • Alharbi, A., Henskens, F., & Hannaford, M. (2012). A domain-based learning object search engine to support self-regulated learning. International Journal of Computer and Information Technology, 1(01), 2277–2764.
  • Allen, I. E., & Seaman, J. (2015). Grade level: Tracking online education in the United States. Babson Park, MA: Babson Survey Research Group.
  • Artino, A. R., & Stephens, J. M. (2009). Academic motivation and self-regulation: A comparative analysis of undergraduate and graduate students learning online. The Internet and Higher Education, 12(3–4), 146–151. doi:10.1016/j.iheduc.2009.02.001
  • Azevedo, R. (2005). Using hypermedia as a metacognitive tool for enhancing student learning? The role of self-regulated learning. Educational Psychologist, 40(4), 199–209. doi:10.1207/s15326985ep4004_2
  • Azevedo, R., & Hadwin, A. F. (2005). Scaffolding self-regulated learning and metacognition–Implications for the design of computer-based scaffolds. Instructional Science, 33(5–6), 367–379. doi:10.1007/s11251-005-1272-9
  • *Bannert, M., & Reimann, P. (2012). Supporting self-regulated hypermedia learning through prompts. Instructional Science, 40(1), 193–211. doi:10.1007/s11251-011-9167-4
  • *Bannert, M., Sonnenberg, C., Mengelkamp, C., & Pieger, E. (2015). Short- and long-term effects of students’ self-directed metacognitive prompts on navigation behavior and learning performance. Computers in Human Behavior, 52, 293–306. doi:10.1016/j.chb.2015.05.038
  • Barbour, M. K. (2013). The landscape of K-12 online learning: Examining what is known. Handbook of Distance Education, 3, 574–593.
  • *Biesinger, K., & Crippen, K. (2010). The effects of feedback protocol on self-regulated learning in a web-based worked example learning environment. Computers & Education, 55(4), 1470–1482. doi:10.1016/j.compedu.2010.06.013
  • Boekaerts, M. (1997). Self-regulated learning: A new concept embraced by researchers, policy makers, educators, teachers, and students. Learning and Instruction, 7(2), 161–186. doi:10.1016/S0959-4752(96)00015-1
  • Boekaerts, M. (1999). Self-regulated learning: Where we are today. International Journal of Educational Research, 31(6), 445–457.
  • Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problem with placebos in psychology why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8(4), 445–454. doi:10.1177/1745691613491271
  • Breslow, L., Pritchard, D. E., DeBoer, J., Stump, G. S., Ho, A. D., & Seaton, D. T. (2013). Studying learning in the worldwide classroom: Research into edX’s first MOOC. Research & Practice in Assessment, 8, 13–25.
  • Broadbent, J., & Poon, W. L. (2015). Self-regulated learning strategies & academic achievement in online higher education learning environments: A systematic review. The Internet and Higher Education, 27, 1–13. doi:10.1016/j.iheduc.2015.04.007
  • *Chang, M. M. (2007). Enhancing web-based language learning through self-monitoring. Journal of Computer Assisted Learning, 23(3), 187–196. doi:10.1111/j.1365-2729.2006.00203.x
  • *Chen, N. S., Wei, C. W., Wu, K. T., & Uden, L. (2009). Effects of high level prompts and peer assessment on online learners’ reflection levels. Computers & Education, 52(2), 283–291. doi:10.1016/j.compedu.2008.08.007
  • *Chun-Yi, S., & Hsiu-Chuan, L. (2011). Metacognitive skills development: A web-based approach in higher education. TOJET: the Turkish Online Journal of Educational Technology, 10(2), 140–150.
  • *Chen, C. M., Wang, J. Y., & Chen, Y.-C. (2014). Facilitating English-language reading performance by a digital reading annotation system with self-regulated learning mechanisms. Educational Technology & Society, 17(1), 102–114.
  • *Chang, M. M., & Lin, M. C. (2014). The effect of reflective learning e-journals on reading comprehension and communication in language learning. Computers & Education, 71, 124–132. doi:10.1016/j.compedu.2013.09.023
  • *Chen, C. M., & Huang, S. H. (2014). Web‐based reading annotation system with an attention‐based self‐regulated learning mechanism for promoting reading performance. British Journal of Educational Technology, 45(5), 959–980. doi:10.1111/bjet.12119
  • Corbett, A. T., Koedinger, K. R., & Anderson, J. R. (1997). Intelligent tutoring systems. In M. Helander, T. K. Landauer, & P. Prabhu (Eds.), Handbook of human computer interaction (pp. 849–874). New York, NY: Elsevier.
  • Corbett, A. T., & Anderson, J. R. (1994). Knowledge tracing: Modeling the acquisition of procedural knowledge. User Modeling and User-Adapted Interaction, 4(4), 253–278. doi:10.1007/BF01099821
  • *Crippen, K. J., & Earl, B. L. (2007). The impact of web-based worked examples and self-explanation on performance, problem solving, and self-efficacy. Computers & Education, 49(3), 809–821.
  • Davis, D., Chen, G., Hauff, C., & Houben, G. J. (2016). Gauging MOOC learners’ adherence to the designed learning path. In Proceeding of the 9th International Conference on Educational Data Mining, 54–61.
  • *El Saadawi, G. M., Azevedo, R., Castine, M., Payne, V., Medvedeva, O., Tseytlin, E., … Crowley, R. S. (2010). Factors affecting feeling-of-knowing in a medical intelligent tutoring system: The role of immediate feedback as a metacognitive scaffold. Advances in Health Sciences Education, 15(1), 9–30. doi:10.1007/s10459-009-9162-6
  • *Delen, E., Liew, J., & Willson, V. (2014). Effects of interactivity and instructional scaffolding on learning: Self-regulation in online video-based environments. Computers & Education, 78, 312–320. doi:10.1016/j.compedu.2014.06.018
  • *Duffy, M. C., & Azevedo, R. (2015). Motivation matters: Interactions between achievement goals and agent scaffolding for self-regulated learning within an intelligent tutoring system. Computers in Human Behavior, 52, 338–348. doi:10.1016/j.chb.2015.05.041
  • Fournier, H., Kop, R., & Durand, G. (2014). Challenges to research in MOOCs. Journal of Online Learning and Teaching, 10(1), 1.
  • Guo, P. J., & Reinecke, K. (2014). Demographic differences in how students navigate through MOOCs. In Proceedings of the first ACM conference on Learning@ scale conference, 21–30.
  • Hew, K. F., & Cheung, W. S. (2014). Students’ and instructors’ use of massive open online courses (MOOCs): Motivations and challenges. Educational Research Review, 12, 45–58. doi:10.1016/j.edurev.2014.05.001
  • Hill, J. R., & Hannafin, M. J. (2001). Teaching and learning in digital environments: The resurgence of resource-based learning. Educational Technology Research and Development, 49(3), 37–52. doi:10.1007/BF02504914
  • Hood, N., Littlejohn, A., & Milligan, C. (2015). Context counts: How learners’ contexts influence learning in a MOOC. Computers & Education, 91, 83–91. doi:10.1016/j.compedu.2015.10.019
  • *Ifenthaler, D. (2012). Determining the effectiveness of prompts for self-regulated learning in problem-solving scenarios. Educational Technology & Society, 15(1), 38–52.
  • Johnson, S. D., & Aragon, S. R. (2003). An instructional strategy framework for online learning environments. New Directions for Adult and Continuing Education, 2003, 31–43. doi:10.1002/(ISSN)1536-0717
  • Jordan, K. (2014). Initial trends in enrolment and completion of massive open online courses. The International Review of Research in Open and Distributed Learning, 15(1), 133–160. doi:10.19173/irrodl.v15i1.1651
  • Kalyuga, S., Chandler, P., & Sweller, J. (1998). Levels of expertise and instructional design. Human Factors: the Journal of the Human Factors and Ergonomics Society, 40(1), 1–17. doi:10.1518/001872098779480587
  • *Kauffman, D. F., Ge, X., Xie, K., & Chen, C. H. (2008). Prompting in web-based environments: Supporting self-monitoring and problem solving skills in college students. Journal of Educational Computing Research, 38(2), 115–137. doi:10.2190/EC.38.2.a
  • *Kauffman, D. F., Zhao, R., & Yang, Y.-S. (2011). Effects of online note taking formats and self-monitoring prompts on learning from online text: Using technology to enhance self-regulated learning. Contemporary Educational Psychology, 36(4), 313–322. doi:10.1016/j.cedpsych.2011.04.001
  • Khan, K. S., Kunz, R., Kleijnen, J., & Antes, G. (2003). Five steps to conducting a systematic review. Journal of the Royal Society of Medicine, 96(3), 118–121.
  • *Kim, H. J., & Pedersen, S. (2011). Advancing young adolescents’ hypothesis-development performance in a computer-supported and problem-based learning environment. Computers & Education, 57(2), 1780–1789. doi:10.1016/j.compedu.2011.03.014
  • *Kizilcec, R. F., Pérez-Sanagustín, M., & Maldonado, J. J. (2016). Recommending self-regulated learning strategies does not improve performance in a MOOC. In Proceedings of the Third (2016) ACM Conference on Learning@ Scale, 101–104.
  • Kizilcec, R. F., Pérez-Sanagustín, M., & Maldonado, J. J. (2017). Self-regulated learning strategies predict learner behavior and goal attainment in Massive Open Online Courses. Computers & Education, 104, 18–33. doi:10.1016/j.compedu.2016.10.001
  • Kop, R. (2011). The challenges to connectivist learning on open online networks: Learning experiences during a massive open online course. The International Review of Research in Open and Distributed Learning, 12(3), 19–38. doi:10.19173/irrodl.v12i3.882
  • *Kramarski, B., & Gutman, M. (2006). How can self-regulated learning be supported in mathematical E‐learning environments?. Journal of Computer Assisted Learning, 22(1), 24–33. doi:10.1111/j.1365-2729.2006.00157.x
  • *Kostons, D., Van Gog, T., & Paas, F. (2012). Training self-assessment and task-selection skills: A cognitive approach to improving self-regulated learning. Learning and Instruction, 22(2), 121–132. doi:10.1016/j.learninstruc.2011.08.004
  • Lee, D. Y., & Ryu, H. (2013). Learner acceptance of a multimedia-based learning system. International Journal of Human-Computer Interaction, 29(6), 419–437. doi:10.1080/10447318.2012.715278
  • *Lee, H. W., Lim, K. Y., & Grabowski, B. L. (2010). Improving self-regulation, learning strategy use, and achievement with metacognitive feedback. Educational Technology Research and Development, 58(6), 629–648. doi:10.1007/s11423-010-9153-6
  • *Lehmann, T., Hähnlein, I., & Ifenthaler, D. (2014). Cognitive, metacognitive and motivational perspectives on preflection in self-regulated online learning. Computers in Human Behavior, 32, 313–323. doi:10.1016/j.chb.2013.07.051
  • Littlejohn, A., Hood, N., Milligan, C., & Mustain, P. (2016). Learning in MOOCs: Motivations and self-regulated learning in MOOCs. The Internet and Higher Education, 29, 40–48. doi:10.1016/j.iheduc.2015.12.003
  • Liyanagunawardena, T. R., Adams, A. A., & Williams, S. A. (2013). MOOCs: A systematic study of the published literature 2008–2012. The International Review of Research in Open and Distributed Learning, 14(3), 202–227. doi:10.19173/irrodl.v14i3.1455
  • *Manlove, S., Lazonder, A. W., & de Jong, T. (2007). Software scaffolds to promote regulation during scientific inquiry learning. Metacognition and Learning, 2(2–3), 141–155. doi:10.1007/s11409-007-9012-y
  • McInnerney, J. M., & Roberts, T. S. (2004). Online learning: Social interaction and the creation of a sense of community. Educational Technology & Society, 7(3), 73–81.
  • McSporran, M., & Young, S. (2001). Does gender matter in online learning?. Research in Learning Technology, 9(2), 3–15. doi:10.3402/rlt.v9i2.12024
  • *Molenaar, I., van Boxtel, C. A., & Sleegers, P. J. (2011). Metacognitive scaffolding in an innovative learning arrangement. Instructional Science, 39(6), 785–803. doi:10.1007/s11251-010-9154-1
  • *Molenaar, I., Roda, C., van Boxtel, C., & Sleegers, P. (2012). Dynamic scaffolding of socially regulated learning in a computer-based learning environment. Computers & Education, 59(2), 515–523. doi:10.1016/j.compedu.2011.12.006
  • Moore, J. L., Dickson-Deane, C., & Galyen, K. (2011). E-Learning, online learning, and distance learning environments: Are they the same?. The Internet and Higher Education, 14(2), 129–135. doi:10.1016/j.iheduc.2010.10.001
  • *Moos, D. C., & Azevedo, R. (2008). Exploring the fluctuation of motivation and use of self-regulatory processes during learning with hypermedia. Instructional Science, 36(3), 203–231. doi:10.1007/s11251-007-9028-3
  • Muñoz Cristóbal, J. A., Rodríguez-Triana, M. J., Gallego-Lema, V., Arribas-Cubero, H. F., Asensio-Pérez, J. I., & Martínez-Monés, A. (2017). Monitoring for awareness and reflection in ubiquitous learning environments. International Journal of Human–Computer Interaction. Advance online publication. doi:10.1080/10447318.2017.1331536.
  • Picciano, A. G., Seaman, J., Shea, P., & Swan, K. (2012). Examining the extent and nature of online learning in American K-12 education: The research initiatives of the Alfred P. Sloan Foundation. The Internet and Higher Education, 15(2), 127–135. doi:10.1016/j.iheduc.2011.07.004
  • Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ). Ann Arbor: National Center for Research to Improve Postsecondary Teaching and Learning, The University of Michigan.
  • Puustinen, M., & Pulkkinen, L. (2001). Models of self-regulated learning: A review. Scandinavian Journal of Educational Research, 45(3), 269–286. doi:10.1080/00313830120074206
  • Shah, D. (2016). By the numbers: MOOCs in 2016. Retrieved from https://www.class-central.com/report/mooc-stats-2016/
  • *Sitzmann, T., Bell, B. S., Kraiger, K., & Kanar, A. M. (2009). A multilevel analysis of the effect of prompting self-regulation in technology-delivered instruction. Personnel Psychology, 62(4), 697–734. doi:10.1111/peps.2009.62.issue-4
  • *Stahl, E., & Bromme, R. (2009). Not everybody needs help to seek help: Surprising effects of metacognitive instructions to foster help-seeking in an online-learning environment. Computers & Education, 53(4), 1020–1028. doi:10.1016/j.compedu.2008.10.004
  • *Sitzmann, T., & Ely, K. (2010). Sometimes you need a reminder: The effects of prompting self-regulation on regulatory processes, learning, and attrition. Journal of Applied Psychology, 95(1), 132–144. doi:10.1037/a0018080
  • Suen, H. K. (2014). Peer assessment for massive open online courses (MOOCs). The International Review of Research in Open and Distributed Learning, 15(3), 312–327. doi:10.19173/irrodl.v15i3.1680
  • *Tangworakitthaworn, P., Gilbert, L., & Wills, G. B. (2015). A conceptualization of intended learning outcomes supporting self‐regulated learners in indicating learning paths. Journal of Computer Assisted Learning, 31(5), 393–404. doi:10.1111/jcal.2015.31.issue-5
  • Tsai, C. W., Shen, P. D., & Fan, Y. T. (2013). Research trends in self-regulated learning research in online learning environments: A review of studies published in selected journals from 2003 to 2012. British Journal of Educational Technology, 44(5), 107–110. doi:10.1111/bjet.12017
  • Tsai, S., & Machado, P. (2002). E-learning, online learning, web-based learning, or distance learning: Unveiling the ambiguity in current terminology. Retrieved from http://elearnmag.acm.org/featured.cfm?aid=568597
  • *Van den Boom, G., Paas, F., & van Merriënboer, J. J. (2007). Effects of elicited reflections combined with tutor or peer feedback on self-regulated learning and learning outcomes. Learning and Instruction, 17(5), 532–548. doi:10.1016/j.learninstruc.2007.09.003
  • Veletsianos, G., & Shepherdson, P. (2016). A Systematic Analysis and Synthesis of the Empirical MOOC Literature Published in 2013–2015. The International Review of Research in Open and Distributed Learning, 17(2), 198–221. doi:10.19173/irrodl.v17i2.2448
  • Vu, K. P. L., Hanley, G. L., Strybel, T. Z., & Proctor, R. W. (2000). Metacognitive processes in human-computer interaction: Self-assessments of knowledge as predictors of computer expertise. International Journal of Human-Computer Interaction, 12(1), 43–71. doi:10.1207/S15327590IJHC1201_2
  • *Wang, T. H. (2011). Developing Web-based assessment strategies for facilitating junior high school learners to perform self-regulated learning in an e-Learning environment. Computers & Education, 57(2), 1801–1812. doi:10.1016/j.compedu.2011.01.003
  • *Wäschle, K., Lachner, A., Stucke, B., Rey, S., Frömmel, C., & Nückles, M. (2014). Effects of visual feedback on medical students’ procrastination within web-based planning and reflection protocols. Computers in Human Behavior, 41, 120–136. doi:10.1016/j.chb.2014.09.022
  • Winters, F. I., Greene, J. A., & Costich, C. M. (2008). Self-regulation of learning within computer-based learning environments: A critical analysis. Educational Psychology Review, 20(4), 429–444. doi:10.1007/s10648-008-9080-9
  • *Yeh, Y. F., Chen, M. C., Hung, P. H., & Hwang, G. J. (2010). Optimal self-explanation prompt design in dynamic multi-representational learning environments. Computers & Education, 54(4), 1089–1100. doi:10.1016/j.compedu.2009.10.013
  • Yukselturk, E., & Top, E. (2013). Exploring the link among entry characteristics, participation behaviors and course outcomes of online learners: An examination of learner profile using cluster analysis. British Journal of Educational Technology, 44(5), 716–728. doi:10.1111/bjet.2013.44.issue-5
  • *Zhang, W. X., Hsu, Y. S., Wang, C. Y., & Ho, Y. T. (2015). Exploring the Impacts of Cognitive and Metacognitive Prompting on Students’ Scientific Inquiry Practices Within an E-Learning Environment. International Journal of Science Education, 37(3), 529–553. doi:10.1080/09500693.2014.996796
  • Zheng, L. (2016). The effectiveness of self-regulated learning scaffolds on academic performance in computer-based learning environments: A meta-analysis. Asia Pacific Education Review, 17(2), 187–202. doi:10.1007/s12564-016-9426-9
  • Zimmerman, B. J. (1989). A social cognitive view of self-regulated academic learning. Journal of Educational Psychology, 81(3), 329. doi:10.1037/0022-0663.81.3.329
  • Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational Psychologist, 25(1), 3–17. doi:10.1207/s15326985ep2501_2
  • Zimmerman, B. J., & Campillo, M. (2003). Motivating self-regulated problem solvers. In J. E. Davidson & R. J. Sternberg (Eds.), The psychology of problem solving (pp. 233–262). Cambridge, UK: Cambridge University Press.
  • Zimmerman, B. J., & Martinez-Pons, M. (1990). Student differences in self-regulated learning: Relating grade, sex, and giftedness to self-efficacy and strategy use. Journal of Educational Psychology, 82(1), 51–59. doi:10.1037/0022-0663.82.1.51